The test passed an empty tempDir, which defaulted to /tmp/backupx, a
directory that does not exist on CI runners. Use a t.TempDir()-based path
instead so the test is self-contained.
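A minimal sketch of the corrected pattern (test and file names here are illustrative, not the ones in the repo):

```go
package backup_test

import (
	"os"
	"path/filepath"
	"testing"
)

func TestBackupWritesArtifact(t *testing.T) {
	// t.TempDir() returns a fresh directory that exists on any runner
	// and is removed automatically when the test ends.
	tempDir := t.TempDir()

	artifact := filepath.Join(tempDir, "backup.tar.gz")
	if err := os.WriteFile(artifact, []byte("placeholder"), 0o644); err != nil {
		t.Fatalf("write artifact: %v", err)
	}
	// ... run the code under test against tempDir instead of /tmp/backupx ...
}
```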
The previous approach read the file twice (once for SHA-256, once for
upload), doubling disk I/O. Under concurrent multi-target uploads this
becomes a bottleneck.
New design: hashingReader wraps io.TeeReader with a SHA-256 hash.Hash:
file.Read() → TeeReader → hash.Write() (checksum) + provider (upload)
A single read pass yields both the byte count and the SHA-256 simultaneously.
Each upload goroutine independently opens the file and computes its own
hash. The first target to upload successfully writes the checksum to the record via
sync.Once. Zero extra disk I/O, zero extra memory copies, fully
concurrent-safe.
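A minimal sketch of the single-pass idea (the hashingReader shape and the upload callback are illustrative; the real provider interface may differ):

```go
package storage

import (
	"crypto/sha256"
	"encoding/hex"
	"hash"
	"io"
	"os"
)

// hashingReader feeds every byte it reads into a SHA-256 hash while
// counting bytes, so the upload and the checksum share a single pass.
type hashingReader struct {
	r io.Reader
	h hash.Hash
	n int64
}

func newHashingReader(r io.Reader) *hashingReader {
	h := sha256.New()
	return &hashingReader{r: io.TeeReader(r, h), h: h}
}

func (hr *hashingReader) Read(p []byte) (int, error) {
	n, err := hr.r.Read(p) // TeeReader copies the same bytes into the hash
	hr.n += int64(n)
	return n, err
}

func (hr *hashingReader) Sum() string { return hex.EncodeToString(hr.h.Sum(nil)) }

// uploadWithChecksum streams the file through the hashingReader and returns
// the byte count and SHA-256 computed during the upload itself.
func uploadWithChecksum(path string, upload func(io.Reader) error) (int64, string, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, "", err
	}
	defer f.Close()

	hr := newHashingReader(f)
	if err := upload(hr); err != nil {
		return 0, "", err
	}
	return hr.n, hr.Sum(), nil
}
```

Each upload goroutine would build its own hashingReader from its own *os.File; the first goroutine to finish successfully publishes hr.Sum() into the record, guarded by the sync.Once mentioned above.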
The List()-based size check depends on the storage backend returning accurate
file sizes, which is not guaranteed (some WebDAV/Google Drive implementations
may return 0 or omit the size field).
New approach: wrap the upload io.Reader with a CountingReader that counts
bytes as they flow through during upload. After upload completes, compare
counter.n against the expected fileSize. This is:
- Zero extra network calls (no List, no Download)
- Zero extra CPU/memory overhead (just an int64 increment per Read)
- Storage-backend agnostic (works with any provider)
If bytes transmitted != expected size → mark failed + auto-delete remote.
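A minimal sketch of the counting wrapper and the post-upload comparison (the Provider interface here is a stand-in for the project's real storage API):

```go
package storage

import (
	"context"
	"fmt"
	"io"
)

// Provider is a stand-in for the project's storage interface.
type Provider interface {
	Upload(ctx context.Context, path string, r io.Reader) error
	Delete(ctx context.Context, path string) error
}

// countingReader increments a byte counter as data flows through the upload.
type countingReader struct {
	r io.Reader
	n int64
}

func (cr *countingReader) Read(p []byte) (int, error) {
	n, err := cr.r.Read(p)
	cr.n += int64(n)
	return n, err
}

// uploadAndVerifySize uploads through the counter, then compares the bytes
// actually streamed against the size of the local artifact.
func uploadAndVerifySize(ctx context.Context, p Provider, remotePath string, src io.Reader, fileSize int64) error {
	cr := &countingReader{r: src}
	if err := p.Upload(ctx, remotePath, cr); err != nil {
		return err
	}
	if cr.n != fileSize {
		// Transfer was truncated or padded: mark the target failed and
		// remove the partial remote file.
		_ = p.Delete(ctx, remotePath)
		return fmt.Errorf("size mismatch: streamed %d bytes, expected %d", cr.n, fileSize)
	}
	return nil
}
```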
The previous approach downloaded the entire backup file after upload to
compute a remote SHA-256, which doubles bandwidth cost for every backup.
New approach:
- Local SHA-256 is still computed before upload (stored in record for audit)
- After upload, use provider.List() to check remote file size (single API call)
- If remote size is 0 or mismatches local size → mark failed + auto-delete
- If List() fails, log a warning but don't block (file may have uploaded fine)
This catches 0KB corrupted uploads with zero download overhead.
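A sketch of what this List()-based check could look like (the listDeleter interface, RemoteFile struct, and the handling of a missing listing entry are assumptions, not the real provider API):

```go
package storage

import (
	"context"
	"fmt"
	"log"
	"path"
)

// RemoteFile and listDeleter are placeholders for the real provider API.
type RemoteFile struct {
	Name string
	Size int64
}

type listDeleter interface {
	List(ctx context.Context, dir string) ([]RemoteFile, error)
	Delete(ctx context.Context, path string) error
}

func verifyRemoteSize(ctx context.Context, p listDeleter, remotePath string, localSize int64) error {
	entries, err := p.List(ctx, path.Dir(remotePath))
	if err != nil {
		// A listing failure should not fail the backup; the upload may be fine.
		log.Printf("warn: size verification skipped, List failed: %v", err)
		return nil
	}
	for _, e := range entries {
		if e.Name != path.Base(remotePath) {
			continue
		}
		if e.Size == 0 || e.Size != localSize {
			_ = p.Delete(ctx, remotePath)
			return fmt.Errorf("remote size %d, expected %d: marking failed and deleting", e.Size, localSize)
		}
		return nil
	}
	// Some backends omit entries or sizes; treat a missing entry as a warning only.
	log.Printf("warn: %s not found in listing, skipping size check", remotePath)
	return nil
}
```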
Addresses community feedback about 0KB corrupted backup files going
undetected after upload.
Implementation:
- Compute SHA-256 hash of final artifact (after compress/encrypt) before upload
- After each storage target upload, download the file back and verify
the hash matches the local checksum
- If verification fails: mark that target as failed, auto-delete the
corrupted remote file, and log detailed mismatch info
- Store checksum in BackupRecord model (new `checksum` column)
- Display truncated SHA-256 with copy button in backup records UI
Verification flow per storage target:
local SHA-256 → upload → download → remote SHA-256 → compare
- match: mark success
- mismatch: mark failed + delete corrupted remote file
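A minimal sketch of the download-and-compare step (the downloader interface is a placeholder for the real provider API):

```go
package storage

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
)

// downloader is a placeholder for the real provider API.
type downloader interface {
	Download(ctx context.Context, path string) (io.ReadCloser, error)
	Delete(ctx context.Context, path string) error
}

// verifyRemoteChecksum re-downloads the uploaded artifact, hashes it, and
// compares it against the locally computed SHA-256.
func verifyRemoteChecksum(ctx context.Context, p downloader, remotePath, localSum string) error {
	rc, err := p.Download(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("download for verification: %w", err)
	}
	defer rc.Close()

	h := sha256.New()
	if _, err := io.Copy(h, rc); err != nil {
		return fmt.Errorf("hash remote file: %w", err)
	}
	remoteSum := hex.EncodeToString(h.Sum(nil))
	if remoteSum != localSum {
		// Mismatch: remove the corrupted remote copy and report the failure.
		_ = p.Delete(ctx, remotePath)
		return fmt.Errorf("checksum mismatch: local %s, remote %s", localSum, remoteSum)
	}
	return nil
}
```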
Root cause: ArcoDesign Tree loadMore callback receives NodeInstance where
the key is at node.props.dataRef.key, not node.props.key. The old code
passed node.props directly, which resulted in an undefined key, causing
child directory loading to fail silently.
Fix:
- Access node key via node.props.dataRef?.key ?? node.props._key
- Add showLine + blockNode + folder icons for better visual hierarchy
- Add path display with copy button in selection modal
- Add unmountOnExit to reset state on close
Closes #19
- docker-compose.yml: change from local build to awuqing/backupx:latest
with clear comments for mounting host volumes
- README: Docker quick start now uses `docker run` / `docker compose`
directly without cloning the repo first
- Add Docker Hub badge and link to awuqing/backupx
- Keep source build instructions as a separate option
Three community-requested features:
1. CLI password reset: `backupx reset-password --username admin --password xxx`
Docker users can run via `docker exec`. No full app init needed.
2. Audit logging: async fire-and-forget audit trail for all key operations
(login, CRUD on tasks/targets/records, settings changes).
New UI page at /audit with category filter and pagination.
3. Multi-source path backup: file backup tasks now support multiple source
directories packed into a single tar archive. Backward compatible with
existing single sourcePath field.
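A minimal sketch of packing several source directories into one tar archive with the standard library (simplified, with hypothetical names; not the project's actual implementation):

```go
package backup

import (
	"archive/tar"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// packSources writes every file under each source directory into a single
// tar stream, prefixing entries with the directory's base name so multiple
// sources can coexist in one archive.
func packSources(w io.Writer, sources []string) error {
	tw := tar.NewWriter(w)
	defer tw.Close()

	for _, src := range sources {
		base := filepath.Base(src)
		err := filepath.WalkDir(src, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			info, err := d.Info()
			if err != nil {
				return err
			}
			hdr, err := tar.FileInfoHeader(info, "")
			if err != nil {
				return err
			}
			rel, err := filepath.Rel(src, p)
			if err != nil {
				return err
			}
			hdr.Name = filepath.ToSlash(filepath.Join(base, rel))
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			f, err := os.Open(p)
			if err != nil {
				return err
			}
			defer f.Close()
			_, err = io.Copy(tw, f)
			return err
		})
		if err != nil {
			return err
		}
	}
	return nil
}
```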
Replace the hdbsql SELECT-based schema DDL export with SAP HANA's official
BACKUP DATA USING FILE for proper data-level backup.
Changes:
- Run: issue BACKUP DATA [FOR <tenant>] USING FILE via hdbsql, package
resulting backup files into tar archive as artifact
- Restore: extract tar, locate backup prefix, issue RECOVER DATA
[FOR <tenant>] USING FILE ... CLEAR LOG
- Add helper functions: buildHdbsqlArgs, packageBackupFiles,
extractTarArchive, findBackupPrefix
- Add 7 unit tests covering backup/restore/error paths
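A rough sketch of the statement-building side (the hdbsql flags shown are the common host/user/password options; the repo's actual buildHdbsqlArgs may use a different connection setup, e.g. instance number or a SYSTEMDB user store key):

```go
package hana

import (
	"fmt"
	"os/exec"
)

// buildBackupSQL assembles the BACKUP DATA statement; tenant is optional.
// Quoting and prefix handling are simplified for illustration.
func buildBackupSQL(tenant, prefix string) string {
	if tenant != "" {
		return fmt.Sprintf("BACKUP DATA FOR %s USING FILE ('%s')", tenant, prefix)
	}
	return fmt.Sprintf("BACKUP DATA USING FILE ('%s')", prefix)
}

// runBackup shells out to hdbsql and surfaces its output on failure.
func runBackup(host, user, password, tenant, prefix string) error {
	sql := buildBackupSQL(tenant, prefix)
	cmd := exec.Command("hdbsql", "-n", host, "-u", user, "-p", password, sql)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("hdbsql backup failed: %v: %s", err, out)
	}
	return nil
}
```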
Bumps the go_modules group with 1 update in the /server directory: [golang.org/x/crypto](https://github.com/golang/crypto).
Updates `golang.org/x/crypto` from 0.33.0 to 0.45.0
- [Commits](https://github.com/golang/crypto/compare/v0.33.0...v0.45.0)
---
updated-dependencies:
- dependency-name: golang.org/x/crypto
dependency-version: 0.45.0
dependency-type: direct:production
dependency-group: go_modules
...
Signed-off-by: dependabot[bot] <support@github.com>