Addresses community feedback about 0KB corrupted backup files going
undetected after upload.
Implementation:
- Compute SHA-256 hash of final artifact (after compress/encrypt) before upload
- After each storage-target upload, download the file back and verify that
  its hash matches the local checksum
- If verification fails: mark that target as failed, auto-delete the
corrupted remote file, and log detailed mismatch info
- Store checksum in BackupRecord model (new `checksum` column)
- Display truncated SHA-256 with copy button in backup records UI
Verification flow per storage target:
local SHA-256 → upload → download → remote SHA-256 → compare
- match: mark success
- mismatch: mark failed + delete corrupted remote file
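The flow above can be sketched as follows. This is a minimal TypeScript illustration using Node's built-in `crypto`; the helper names `sha256Hex` and `verifyRoundTrip` are hypothetical, not the project's actual API.

```typescript
import { createHash } from "node:crypto";

// Compute the SHA-256 hex digest of an artifact held in memory.
// (The real implementation would stream the file instead of buffering it.)
function sha256Hex(data: Buffer | string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Compare the locally computed checksum against the digest of the bytes
// downloaded back from a storage target. A 0-byte corrupted upload hashes
// to a different value than the local artifact, so it fails here.
function verifyRoundTrip(localChecksum: string, downloaded: Buffer): boolean {
  return sha256Hex(downloaded) === localChecksum;
}
```

On a mismatch the caller would mark the target failed, delete the remote file, and log both digests.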
Root cause: the ArcoDesign Tree loadMore callback receives a NodeInstance
whose key lives at node.props.dataRef.key, not node.props.key. The old code
passed node.props directly, which produced an undefined key and caused
child-directory loading to fail silently.
Fix:
- Access the node key via node.props.dataRef?.key ?? node.props._key
- Add showLine + blockNode + folder icons for better visual hierarchy
- Add path display with copy button in selection modal
- Add unmountOnExit to reset state on close
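The key lookup can be sketched as a small pure helper. The `TreeNodeLike` shape and the `resolveNodeKey` name are illustrative, not ArcoDesign exports; only the two field paths come from the fix above.

```typescript
// Minimal shape of what loadMore receives: `dataRef` carries the original
// tree-data object, `_key` is the component's internal key. The real
// NodeInstance type has many more fields.
interface TreeNodeLike {
  props: {
    _key?: string;
    dataRef?: { key?: string };
  };
}

// Prefer the key from the original data, fall back to the internal key.
// Passing node.props.key (always undefined here) was the old bug.
function resolveNodeKey(node: TreeNodeLike): string | undefined {
  return node.props.dataRef?.key ?? node.props._key;
}
```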
Closes #19
Three community-requested features:
1. CLI password reset: `backupx reset-password --username admin --password xxx`
Docker users can run via `docker exec`. No full app init needed.
2. Audit logging: async fire-and-forget audit trail for all key operations
(login, CRUD on tasks/targets/records, settings changes).
New UI page at /audit with category filter and pagination.
3. Multi-source path backup: file backup tasks now support multiple source
directories packed into a single tar archive. Backward compatible with
existing single sourcePath field.
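One plausible way to keep the single-path form working is to normalize both shapes before packing the archive. The `FileBackupTask` fields and `resolveSourcePaths` name here are assumptions for illustration, not the actual schema.

```typescript
// Legacy tasks carry a single `sourcePath` string; newer tasks may carry
// a `sourcePaths` array. Both fields are assumed names for this sketch.
interface FileBackupTask {
  sourcePath?: string;
  sourcePaths?: string[];
}

// Return the list of directories to pack into one tar archive,
// preferring the new array field and falling back to the legacy string.
function resolveSourcePaths(task: FileBackupTask): string[] {
  if (task.sourcePaths && task.sourcePaths.length > 0) {
    return task.sourcePaths;
  }
  return task.sourcePath ? [task.sourcePath] : [];
}
```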