Addresses community feedback about corrupted (0 KB) backup files going
undetected after upload.
Implementation:
- Compute SHA-256 hash of final artifact (after compress/encrypt) before upload
- After each upload to a storage target, download the file again and verify
  that its hash matches the local checksum
- If verification fails: mark that target as failed, auto-delete the
corrupted remote file, and log detailed mismatch info
- Store checksum in BackupRecord model (new `checksum` column)
- Display truncated SHA-256 with copy button in backup records UI
Verification flow per storage target:
local SHA-256 → upload → download → remote SHA-256 → compare
- match: mark success
- mismatch: mark failed + delete corrupted remote file
Three community-requested features:
1. CLI password reset: `backupx reset-password --username admin --password xxx`
   Docker users can run it via `docker exec`; no full app initialization is needed.
2. Audit logging: async fire-and-forget audit trail for all key operations
(login, CRUD on tasks/targets/records, settings changes).
New UI page at /audit with category filter and pagination.
3. Multi-source path backup: file backup tasks now support multiple source
directories packed into a single tar archive. Backward compatible with
existing single sourcePath field.
Replace the hdbsql SELECT-based schema DDL export with SAP HANA's official
BACKUP DATA USING FILE for proper data-level backup.
Changes:
- Run: issue BACKUP DATA [FOR <tenant>] USING FILE via hdbsql, package
resulting backup files into tar archive as artifact
- Restore: extract tar, locate backup prefix, issue RECOVER DATA
[FOR <tenant>] USING FILE ... CLEAR LOG
- Add helper functions: buildHdbsqlArgs, packageBackupFiles,
extractTarArchive, findBackupPrefix
- Add 7 unit tests covering backup/restore/error paths