
English | 中文

BackupX

Self-hosted Server Backup Management Platform
One binary, one command — manage all your server backups.

Screenshots: Dashboard · Backup Tasks · Storage Targets · Backup Records

Highlights

Capability Details
Backup Types Files/Directories (multi-source), MySQL, PostgreSQL, SQLite, SAP HANA (full / incremental / differential / log backups + parallel channels + retry)
SAP HANA Backint Agent Built-in SAP HANA Backint protocol agent — HANA's native backup interface can route data directly to any storage backend supported by BackupX
70+ Storage Backends Built-in Alibaba OSS / Tencent COS / Qiniu / S3 / Google Drive / WebDAV / FTP + 70+ backends via rclone (SFTP, Azure Blob, Dropbox, OneDrive, etc.)
Scheduling Cron-based + visual editor + auto-retention policy (by days/count, auto empty directory cleanup)
Multi-Node Master-Agent cluster for managing backups across multiple servers with remote directory browsing and node editing
Security JWT + bcrypt + AES-256-GCM encrypted config + optional backup encryption + comprehensive audit logs
Notifications Email / Webhook / Telegram — push on success or failure
Deployment Single binary + embedded SQLite, Docker one-click, zero external dependencies

Quick Start

1. Install

Docker (recommended, no clone needed):

# Create a docker-compose.yml then start
docker compose up -d

# Or run directly
docker run -d --name backupx -p 8340:8340 -v backupx-data:/app/data awuqing/backupx:latest

Docker Hub: awuqing/backupx — supports linux/amd64 and linux/arm64.

docker-compose.yml reference
services:
  backupx:
    image: awuqing/backupx:latest
    container_name: backupx
    restart: unless-stopped
    ports:
      - "8340:8340"
    volumes:
      - backupx-data:/app/data
      # Mount host directories to back up (add as needed):
      # - /var/www:/mnt/www:ro
      # - /etc/nginx:/mnt/nginx-conf:ro
    environment:
      - TZ=Asia/Shanghai

volumes:
  backupx-data:

Pre-built binaries (bare metal):

Download from Releases:

tar xzf backupx-v*-linux-amd64.tar.gz && cd backupx-*
sudo ./install.sh        # Auto-configures systemd + Nginx

Build from source:

git clone https://github.com/Awuqing/BackupX.git && cd BackupX
make build               # Build frontend + backend
make docker-cn           # Or Docker build with China mirrors (goproxy.cn / npmmirror / Aliyun apk)

2. Open the Console

Visit http://your-server:8340 in your browser. First-time access guides you through admin account creation.

3. Add a Storage Target

Go to Storage Targets → Add, choose a storage type and enter credentials:

Storage Type Required Fields
Alibaba Cloud OSS Region + AccessKey ID/Secret + Bucket
Tencent Cloud COS Region + SecretId/SecretKey + Bucket (name-appid)
Qiniu Cloud Kodo Region + AccessKey/SecretKey + Bucket
S3 Compatible Endpoint + AccessKey + Bucket
Google Drive Client ID/Secret → click Authorize for OAuth
WebDAV Server URL + Username/Password
FTP Host + Port + Username/Password
Local Disk Target directory path
SFTP / Azure / Dropbox / OneDrive etc. Select the type, fill in required fields; advanced options are collapsible

For Chinese cloud providers, just enter Region and AccessKey — the system auto-assembles the Endpoint. Rclone-type configs separate required fields from optional advanced options (collapsed by default).

Click Test Connection to verify.

4. Create a Backup Task

Go to Backup Tasks → Create and complete three steps:

  1. Basic Info — Task name, backup type, Cron expression (leave empty for manual-only)
  2. Source Config — File backup: select source paths (supports multiple); Database: enter connection info
  3. Storage & Policy — Select storage target(s) (supports multiple), compression, retention days, encryption toggle

Save, then click Run Now to test. View real-time logs in Backup Records.

Deleting a backup task automatically cleans up remote storage files while preserving backup records for audit purposes.

5. Set Up Notifications (Optional)

Go to Notifications to configure Email, Webhook, or Telegram alerts for backup success/failure.


Deployment Guide

Docker

docker compose up -d     # Using the docker-compose.yml above

Mount host directories for file backup (add to volumes in docker-compose.yml):

volumes:
  - backupx-data:/app/data
  - /var/www:/mnt/www:ro
  - /etc/nginx:/mnt/nginx-conf:ro

Override config via environment variables:

environment:
  - TZ=Asia/Shanghai
  - BACKUPX_LOG_LEVEL=debug
  - BACKUPX_BACKUP_MAX_CONCURRENT=4

To upgrade: go to System Settings, click "Check for Updates" to see if a new version is available, then run docker compose pull && docker compose up -d.

Bare Metal

# From pre-built package
tar xzf backupx-v*-linux-amd64.tar.gz && cd backupx-*
sudo ./install.sh

# Or from source
make build
sudo ./deploy/install.sh

The install script creates a system user, installs to /opt/backupx/, configures systemd, and sets up Nginx reverse proxy.
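The unit file the installer writes is not reproduced in this README, but a minimal systemd service for BackupX could look like the sketch below. The service user, binary name, and install path are assumptions based on the install location described above; check what install.sh actually generated on your system.

```ini
[Unit]
Description=BackupX backup management platform
After=network-online.target

[Service]
# Assumed service user and paths; the install script may use different values.
User=backupx
WorkingDirectory=/opt/backupx
ExecStart=/opt/backupx/backupx
Restart=on-failure

[Install]
WantedBy=multi-user.target
```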

Nginx Reverse Proxy (bare metal)

server {
    listen 80;
    server_name backup.example.com;

    location / {
        root /opt/backupx/web;
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8340;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Configuration

Config file: ./config.yaml (or override with BACKUPX_ prefixed env vars):

server:
  port: 8340
database:
  path: "./data/backupx.db"
security:
  jwt_secret: ""          # Auto-generated and persisted to DB
  encryption_key: ""      # Auto-generated
backup:
  temp_dir: "/tmp/backupx"
  max_concurrent: 2
log:
  level: "info"           # debug | info | warn | error
  file: "./data/backupx.log"

Password Reset

# Bare metal
./backupx reset-password --username admin --password newpass123

# Docker
docker exec -it backupx /app/bin/backupx reset-password --username admin --password newpass123

SAP HANA Support

BackupX offers two SAP HANA backup modes — pick whichever fits:

Mode 1: hdbsql Runner (Web-console managed)

Create a SAP HANA backup task in the Web console. The backend runs hdbsql to perform backups, suitable for BackupX-scheduled recurring jobs.

Source configuration supports:

Field Options Description
Backup type data / log Data or log backup
Backup level full / incremental / differential Auto-disabled for log backups
Parallel channels 1 ~ 32 BACKUP DATA USING FILE ('c1','c2',...) parallel paths
Retry count 1 ~ 10 Exponential backoff (5s × attempt²)
Instance number Optional Inferred from port or manually specified
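The retry delays grow quadratically. Under the 5s × attempt² policy in the table above, the waits before the first three retries work out as follows:

```shell
# Delay in seconds before retry attempt n, per the 5s × attempt² policy.
for n in 1 2 3; do
  echo "attempt $n: $((5 * n * n))s"
done
# attempt 1: 5s
# attempt 2: 20s
# attempt 3: 45s
```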

Mode 2: Backint Protocol Agent (HANA native)

BackupX ships a built-in Backint Agent. SAP HANA calls it via native BACKUP DATA USING BACKINT syntax, and data is routed automatically to BackupX storage targets (S3 / OSS / COS / WebDAV / 70+ backends).

1. Prepare parameter file /opt/backupx/backint_params.ini:

#STORAGE_TYPE = s3
#STORAGE_CONFIG_JSON = /opt/backupx/storage.json
#PARALLEL_FACTOR = 4
#COMPRESS = true
#KEY_PREFIX = hana-backup
#CATALOG_DB = /opt/backupx/backint_catalog.db
#LOG_FILE = /var/log/backupx/backint.log

2. Prepare storage config /opt/backupx/storage.json (same schema as BackupX storage targets):

{
  "endpoint": "https://s3.amazonaws.com",
  "region": "us-east-1",
  "bucket": "hana-prod",
  "accessKeyId": "AKIA...",
  "secretAccessKey": "..."
}

3. Create the hdbbackint symlink:

ln -s /opt/backupx/backupx /usr/sap/<SID>/SYS/global/hdb/opt/hdbbackint

4. Enable in HANA global.ini:

[backup]
data_backup_using_backint = true
catalog_backup_using_backint = true
log_backup_using_backint = true
data_backup_parameter_file = /opt/backupx/backint_params.ini
log_backup_parameter_file = /opt/backupx/backint_params.ini

5. Manual CLI invocation (for troubleshooting):

backupx backint -f backup  -i input.txt -o output.txt -p backint_params.ini
backupx backint -f restore -i input.txt -o output.txt -p backint_params.ini
backupx backint -f inquire -i input.txt -o output.txt -p backint_params.ini
backupx backint -f delete  -i input.txt -o output.txt -p backint_params.ini

The Backint Agent maintains an EBID ↔ object-key catalog in a local SQLite DB. All operations follow the SAP HANA Backint protocol (#PIPE / #SAVED / #RESTORED / #BACKUP / #NOTFOUND / #DELETED / #ERROR).


Multi-Node Cluster

BackupX supports Master-Agent mode for managing multiple servers. Backup tasks can be routed to specific nodes — the Agent runs the backup locally and uploads straight to storage backends.

Architecture

[Web Console] ←── JWT ──→ [Master (backupx)]
                              ↑  ↓
                              │  │ HTTP long-poll (token auth)
                              │  ↓
                         [Agent (backupx agent)]  ← runs on remote host
                              ↓
                          [70+ Storage Backends]
  • Protocol: HTTP long-polling; the Agent initiates all connections — Master never needs reverse access
  • Heartbeat: Agent reports every 15s; Master marks nodes offline after 45s of silence
  • Dispatch: Master persists run_task commands to a queue; Agent polls and claims them
  • Execution: Agent reuses the same BackupRunner (file / mysql / postgresql / sqlite / saphana) and uploads directly to storage
  • Security: Each node gets its own token; the Agent never holds the Master's JWT secret or encryption key

Walkthrough

1. Create a node on Master and copy the token

Web Console → Node Management → Add Node. The dialog shows a 64-byte hex token once — keep it safe.

2. Deploy the Agent on a remote host

Upload the BackupX binary (same file as Master) to the target host, then start the Agent:

# Option A: CLI flags
backupx agent --master http://master.example.com:8340 --token <token>

# Option B: config file
cat > /etc/backupx/agent.yaml <<EOF
master: http://master.example.com:8340
token: <token>
heartbeatInterval: 15s
pollInterval: 5s
tempDir: /var/lib/backupx-agent
EOF
backupx agent --config /etc/backupx/agent.yaml

# Option C: environment variables (Docker / systemd-friendly)
BACKUPX_AGENT_MASTER=http://master.example.com:8340 \
BACKUPX_AGENT_TOKEN=<token> \
backupx agent

Once connected, the node appears as online in the list.
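For unattended hosts, the Option C environment variables slot naturally into a systemd unit. This is a sketch, not a shipped unit file: the binary path is assumed to be /usr/local/bin/backupx, and `<token>` must be replaced with the node token from step 1.

```ini
[Unit]
Description=BackupX Agent
After=network-online.target

[Service]
# Environment variables as documented in Option C above.
Environment=BACKUPX_AGENT_MASTER=http://master.example.com:8340
Environment=BACKUPX_AGENT_TOKEN=<token>
ExecStart=/usr/local/bin/backupx agent
Restart=always

[Install]
WantedBy=multi-user.target
```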

3. Create a task routed to that node

In the Backup Tasks page, pick the target node when creating the task. When triggered:

  • Local / unassigned (nodeId=0) tasks run in-process on Master
  • Remote-node tasks are enqueued → Agent claims → Agent runs locally → uploads → reports back

Limitations

  • No encrypted backups via Agent: the Agent doesn't hold Master's AES-256 key. Tasks with encrypt: true will fail if routed to an Agent
  • Directory browse timeout: remote dir listing is a synchronous RPC through the queue; default 15s timeout
  • Command timeout: claimed-but-unfinished commands are marked timed out after 10 minutes

CLI Reference

backupx agent --help
  -master string    Master URL
  -token string     Agent auth token
  -config string    YAML config path (takes precedence over env)
  -temp-dir string  Local temp directory (default /tmp/backupx-agent)
  -insecure-tls     Skip TLS verification (testing only)

Development

Requirements: Go >= 1.25 · Node.js >= 20 · npm

# Dev mode
make dev-server          # Terminal 1: backend (:8340)
make dev-web             # Terminal 2: frontend (Vite HMR)

# Test
make test                # Run all tests

# Build
make build               # Build frontend + backend
make docker              # Docker build
make docker-cn           # Docker build with China mirrors

Release

git tag v1.4.3 && git push --tags
# GitHub Actions: compile dual-arch binaries → publish GitHub Release → push Docker Hub image

Or manually trigger the Release workflow from GitHub Actions page.


API Reference

All endpoints are prefixed with /api and authenticated via a JWT Bearer token.

Module Endpoint Description
Auth POST /auth/setup Initialize admin
POST /auth/login Login
PUT /auth/password Change password
Backup Tasks GET|POST /backup/tasks List / Create
GET|PUT|DELETE /backup/tasks/:id Detail / Update / Delete
PUT /backup/tasks/:id/toggle Enable / Disable
POST /backup/tasks/:id/run Manual run
Backup Records GET /backup/records List (with filter)
GET /backup/records/:id/logs/stream Real-time logs (SSE)
GET /backup/records/:id/download Download
POST /backup/records/:id/restore Restore
Storage Targets GET|POST /storage-targets List / Add
POST /storage-targets/test Test connection
GET /storage-targets/rclone/backends Rclone backend list
Nodes GET|POST /nodes List / Add
PUT /nodes/:id Edit node
GET /nodes/:id/fs/list Directory browser
POST /agent/heartbeat Agent heartbeat (Token auth)
Notifications GET|POST /notifications List / Add
Dashboard GET /dashboard/stats Overview stats
Audit Logs GET /audit-logs Operation audit
System GET /system/info System info
GET /system/update-check Check for updates
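As a quick illustration of the Bearer-token flow: log in, pull the token out of the response, and send it on subsequent requests. The login response shape below is a guess for demonstration only — the real field name may differ.

```shell
# Hypothetical login response body; the actual schema is not documented here.
resp='{"token":"abc123"}'

# Extract the token field with sed. In practice it would then be sent as:
#   curl -H "Authorization: Bearer $token" http://localhost:8340/api/dashboard/stats
token=$(printf '%s' "$resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
echo "$token"
```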

Tech Stack

Component Technology
Backend Go · Gin · GORM · SQLite · robfig/cron · rclone
Frontend React 18 · TypeScript · ArcoDesign · Vite · Zustand · ECharts
Storage rclone (70+ backends) · AWS SDK v2 · Google Drive API v3
Security JWT · bcrypt · AES-256-GCM

Contributing

Issues and Pull Requests are welcome!

License

Apache License 2.0