mirror of
https://github.com/Syngnat/GoNavi.git
synced 2026-05-12 12:19:47 +08:00
516 lines
20 KiB
Go
package sync

import (
	"GoNavi-Wails/internal/connection"
	"GoNavi-Wails/internal/db"
	"GoNavi-Wails/internal/logger"
	"fmt"
	"sort"
	"strings"
	"time"
)

// SyncConfig defines the parameters for a synchronization task
type SyncConfig struct {
	SourceConfig        connection.ConnectionConfig `json:"sourceConfig"`
	TargetConfig        connection.ConnectionConfig `json:"targetConfig"`
	Tables              []string                    `json:"tables"`
	Content             string                      `json:"content,omitempty"` // "data", "schema", "both"
	Mode                string                      `json:"mode"`              // "insert_update", "insert_only", "full_overwrite"
	JobID               string                      `json:"jobId,omitempty"`
	AutoAddColumns      bool                        `json:"autoAddColumns,omitempty"` // automatically add missing columns
	TargetTableStrategy string                      `json:"targetTableStrategy,omitempty"`
	CreateIndexes       bool                        `json:"createIndexes,omitempty"`
	MongoCollectionName string                      `json:"mongoCollectionName,omitempty"`
	TableOptions        map[string]TableOptions     `json:"tableOptions,omitempty"`
}

// SyncResult holds the result of the sync operation
type SyncResult struct {
	Success      bool     `json:"success"`
	Message      string   `json:"message"`
	Logs         []string `json:"logs"`
	TablesSynced int      `json:"tablesSynced"`
	RowsInserted int      `json:"rowsInserted"`
	RowsUpdated  int      `json:"rowsUpdated"`
	RowsDeleted  int      `json:"rowsDeleted"`
}

// SyncEngine drives table-by-table synchronization between two connections.
type SyncEngine struct {
	reporter Reporter
}

// NewSyncEngine creates a SyncEngine that reports progress and logs through the given reporter.
func NewSyncEngine(reporter Reporter) *SyncEngine {
	return &SyncEngine{reporter: reporter}
}

// RunSync performs the synchronization.
func (s *SyncEngine) RunSync(config SyncConfig) SyncResult {
	result := SyncResult{Success: true, Logs: []string{}}
	logger.Infof("开始数据同步:源=%s 目标=%s 表数量=%d", formatConnSummaryForSync(config.SourceConfig), formatConnSummaryForSync(config.TargetConfig), len(config.Tables))
	if isRedisToMongoKeyspacePair(config) {
		return s.runRedisToMongoSync(config, result)
	}
	if isMongoToRedisKeyspacePair(config) {
		return s.runMongoToRedisSync(config, result)
	}

	totalTables := len(config.Tables)
	s.progress(config.JobID, 0, totalTables, "", "开始同步")

	contentRaw := strings.ToLower(strings.TrimSpace(config.Content))
	syncSchema := false
	syncData := true
	switch contentRaw {
	case "", "data":
		syncData = true
	case "schema":
		syncSchema = true
		syncData = false
	case "both":
		syncSchema = true
		syncData = true
	default:
		s.appendLog(config.JobID, &result, "warn", fmt.Sprintf("未知同步内容 %q,已自动使用仅同步数据", config.Content))
		syncData = true
	}

	modeRaw := strings.ToLower(strings.TrimSpace(config.Mode))
	if modeRaw != "" && modeRaw != "insert_update" && modeRaw != "insert_only" && modeRaw != "full_overwrite" {
		s.appendLog(config.JobID, &result, "warn", fmt.Sprintf("未知同步模式 %q,已自动使用 insert_update", config.Mode))
	}
	defaultMode := normalizeSyncMode(config.Mode)
	strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)

	contentLabel := "仅同步数据"
	if syncSchema && syncData {
		contentLabel = "同步结构+数据"
	} else if syncSchema {
		contentLabel = "仅同步结构"
	}
	s.appendLog(config.JobID, &result, "info", fmt.Sprintf("同步内容:%s;模式:%s;自动补字段:%v;目标表策略:%s;创建索引:%v", contentLabel, defaultMode, config.AutoAddColumns, strategy, config.CreateIndexes))

	sourceDB, err := newSyncDatabase(config.SourceConfig.Type)
	if err != nil {
		logger.Error(err, "初始化源数据库驱动失败:类型=%s", config.SourceConfig.Type)
		return s.fail(config.JobID, totalTables, result, "初始化源数据库驱动失败: "+err.Error())
	}
	if config.SourceConfig.Type == "custom" {
		// Custom DB setup would go here if needed
	}

	targetDB, err := newSyncDatabase(config.TargetConfig.Type)
	if err != nil {
		logger.Error(err, "初始化目标数据库驱动失败:类型=%s", config.TargetConfig.Type)
		return s.fail(config.JobID, totalTables, result, "初始化目标数据库驱动失败: "+err.Error())
	}

	// Connect to the source database.
	s.appendLog(config.JobID, &result, "info", fmt.Sprintf("正在连接源数据库: %s...", config.SourceConfig.Host))
	s.progress(config.JobID, 0, totalTables, "", "连接源数据库")
	if err := sourceDB.Connect(config.SourceConfig); err != nil {
		logger.Error(err, "源数据库连接失败:%s", formatConnSummaryForSync(config.SourceConfig))
		return s.fail(config.JobID, totalTables, result, "源数据库连接失败: "+err.Error())
	}
	defer sourceDB.Close()

	// Connect to the target database.
	s.appendLog(config.JobID, &result, "info", fmt.Sprintf("正在连接目标数据库: %s...", config.TargetConfig.Host))
	s.progress(config.JobID, 0, totalTables, "", "连接目标数据库")
	if err := targetDB.Connect(config.TargetConfig); err != nil {
		logger.Error(err, "目标数据库连接失败:%s", formatConnSummaryForSync(config.TargetConfig))
		return s.fail(config.JobID, totalTables, result, "目标数据库连接失败: "+err.Error())
	}
	defer targetDB.Close()

	for i, tableName := range config.Tables {
		func() {
			tableMode := defaultMode
			s.appendLog(config.JobID, &result, "info", fmt.Sprintf("正在同步表: %s", tableName))
			s.progress(config.JobID, i, totalTables, tableName, fmt.Sprintf("同步表(%d/%d)", i+1, totalTables))
			defer s.progress(config.JobID, i+1, totalTables, tableName, "表处理完成")

			plan, cols, targetCols, err := buildSchemaMigrationPlan(config, tableName, sourceDB, targetDB)
			if err != nil {
				s.appendLog(config.JobID, &result, "error", fmt.Sprintf("生成迁移计划失败:表=%s 错误=%v", tableName, err))
				return
			}
			for _, warning := range plan.Warnings {
				s.appendLog(config.JobID, &result, "warn", fmt.Sprintf(" -> %s", warning))
			}
			for _, unsupported := range plan.UnsupportedObjects {
				s.appendLog(config.JobID, &result, "warn", fmt.Sprintf(" -> %s", unsupported))
			}
			if strings.TrimSpace(plan.PlannedAction) != "" {
				s.appendLog(config.JobID, &result, "info", fmt.Sprintf(" -> %s", plan.PlannedAction))
			}

			if !plan.TargetTableExists && !plan.AutoCreate {
				s.appendLog(config.JobID, &result, "warn", fmt.Sprintf("表 %s 目标表不存在,当前策略不允许自动建表,已跳过", tableName))
				return
			}

			if !plan.TargetTableExists && plan.AutoCreate {
				s.progress(config.JobID, i, totalTables, tableName, "创建目标表")
				if len(plan.PreDataSQL) > 0 {
					if err := executeSQLStatements(targetDB.Exec, plan.PreDataSQL); err != nil {
						s.appendLog(config.JobID, &result, "error", fmt.Sprintf("预执行建表 SQL 失败:表=%s 错误=%v", tableName, err))
						return
					}
				}
				if strings.TrimSpace(plan.CreateTableSQL) == "" {
					s.appendLog(config.JobID, &result, "error", fmt.Sprintf("表 %s 自动建表失败:建表 SQL 为空", tableName))
					return
				}
				if _, err := targetDB.Exec(plan.CreateTableSQL); err != nil {
					s.appendLog(config.JobID, &result, "error", fmt.Sprintf("创建目标表失败:表=%s 错误=%v", tableName, err))
					return
				}
				s.appendLog(config.JobID, &result, "info", fmt.Sprintf("目标表创建成功:%s", tableName))
				targetCols, err = targetDB.GetColumns(plan.TargetSchema, plan.TargetTable)
				if err != nil {
					s.appendLog(config.JobID, &result, "error", fmt.Sprintf("创建目标表后获取字段失败:表=%s 错误=%v", tableName, err))
					return
				}
			} else if len(plan.PreDataSQL) > 0 {
				s.progress(config.JobID, i, totalTables, tableName, "同步表结构")
				if err := executeSQLStatements(targetDB.Exec, plan.PreDataSQL); err != nil {
					s.appendLog(config.JobID, &result, "error", fmt.Sprintf("同步表结构失败:表=%s 错误=%v", tableName, err))
					return
				}
				targetCols, err = targetDB.GetColumns(plan.TargetSchema, plan.TargetTable)
				if err != nil {
					s.appendLog(config.JobID, &result, "warn", fmt.Sprintf("补字段后刷新目标字段失败:表=%s 错误=%v", tableName, err))
				}
			}

			if !syncData {
				if len(plan.PostDataSQL) > 0 {
					s.progress(config.JobID, i, totalTables, tableName, "创建索引")
					if err := executeSQLStatements(targetDB.Exec, plan.PostDataSQL); err != nil {
						s.appendLog(config.JobID, &result, "error", fmt.Sprintf("创建索引失败:表=%s 错误=%v", tableName, err))
						return
					}
				}
				result.TablesSynced++
				return
			}

			targetType := resolveMigrationDBType(config.TargetConfig)
			sourceType := resolveMigrationDBType(config.SourceConfig)
			targetTable := plan.TargetTable
			sourceQueryTable, targetQueryTable := plan.SourceQueryTable, plan.TargetQueryTable
			applyTableName := targetTable
			switch targetType {
			case "postgres", "kingbase", "highgo", "vastbase", "sqlserver":
				applyTableName = targetQueryTable
			}

			sourceColsByLower := make(map[string]connection.ColumnDefinition, len(cols))
			for _, col := range cols {
				if strings.TrimSpace(col.Name) == "" {
					continue
				}
				sourceColsByLower[strings.ToLower(strings.TrimSpace(col.Name))] = col
			}

			pkCols := make([]string, 0, 2)
			for _, col := range cols {
				if col.Key == "PRI" || col.Key == "PK" {
					pkCols = append(pkCols, col.Name)
				}
			}
			requirePK := tableMode == "insert_update" && plan.TargetTableExists
			pkCol := ""
			if requirePK {
				if len(pkCols) == 0 {
					s.appendLog(config.JobID, &result, "warn", fmt.Sprintf("表 %s 未找到主键,当前模式需要差异对比,已跳过", tableName))
					return
				}
				if len(pkCols) > 1 {
					s.appendLog(config.JobID, &result, "warn", fmt.Sprintf("表 %s 为复合主键(%s),当前暂不支持差异同步", tableName, strings.Join(pkCols, ",")))
					return
				}
				pkCol = pkCols[0]
			}

			opts := TableOptions{Insert: true, Update: true, Delete: false}
			if config.TableOptions != nil {
				if t, ok := config.TableOptions[tableName]; ok {
					opts = t
				}
			}
			if !opts.Insert && !opts.Update && !opts.Delete {
				s.appendLog(config.JobID, &result, "info", fmt.Sprintf("表 %s 未勾选任何操作,已跳过", tableName))
				return
			}

			s.progress(config.JobID, i, totalTables, tableName, "读取源表数据")
			sourceRows, _, err := sourceDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(sourceType, sourceQueryTable)))
			if err != nil {
				logger.Error(err, "读取源表失败:表=%s", tableName)
				s.appendLog(config.JobID, &result, "error", fmt.Sprintf("读取源表 %s 失败: %v", tableName, err))
				return
			}

			var inserts []map[string]interface{}
			var updates []connection.UpdateRow
			var deletes []map[string]interface{}

			if tableMode == "insert_update" && plan.TargetTableExists {
				s.progress(config.JobID, i, totalTables, tableName, "读取目标表数据")
				targetRows, _, err := targetDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(targetType, targetQueryTable)))
				if err != nil {
					logger.Error(err, "读取目标表失败:表=%s", tableName)
					s.appendLog(config.JobID, &result, "error", fmt.Sprintf("读取目标表 %s 失败: %v", tableName, err))
					return
				}

				s.progress(config.JobID, i, totalTables, tableName, "对比差异")
				targetMap := make(map[string]map[string]interface{}, len(targetRows))
				for _, row := range targetRows {
					if row[pkCol] == nil {
						continue
					}
					pkVal := fmt.Sprintf("%v", row[pkCol])
					if strings.TrimSpace(pkVal) == "" || pkVal == "<nil>" {
						continue
					}
					targetMap[pkVal] = row
				}
				sourcePKSet := make(map[string]struct{}, len(sourceRows))
				for _, sRow := range sourceRows {
					if sRow[pkCol] == nil {
						continue
					}
					pkVal := fmt.Sprintf("%v", sRow[pkCol])
					if strings.TrimSpace(pkVal) == "" || pkVal == "<nil>" {
						continue
					}
					sourcePKSet[pkVal] = struct{}{}
					if tRow, exists := targetMap[pkVal]; exists {
						changes := make(map[string]interface{})
						for k, v := range sRow {
							if fmt.Sprintf("%v", v) != fmt.Sprintf("%v", tRow[k]) {
								changes[k] = v
							}
						}
						if len(changes) > 0 {
							updates = append(updates, connection.UpdateRow{Keys: map[string]interface{}{pkCol: sRow[pkCol]}, Values: changes})
						}
					} else {
						inserts = append(inserts, sRow)
					}
				}
				if opts.Delete {
					for pkStr, row := range targetMap {
						if _, ok := sourcePKSet[pkStr]; ok {
							continue
						}
						deletes = append(deletes, map[string]interface{}{pkCol: row[pkCol]})
					}
				}
				inserts = filterRowsByPKSelection(pkCol, inserts, opts.Insert, opts.SelectedInsertPKs)
				updates = filterUpdatesByPKSelection(pkCol, updates, opts.Update, opts.SelectedUpdatePKs)
				deletes = filterRowsByPKSelection(pkCol, deletes, opts.Delete, opts.SelectedDeletePKs)
			} else {
				inserts = sourceRows
				if !opts.Insert {
					inserts = nil
				}
				if tableMode == "full_overwrite" && plan.TargetTableExists {
					s.appendLog(config.JobID, &result, "warn", fmt.Sprintf(" -> 全量覆盖模式:即将清空目标表 %s", tableName))
					s.progress(config.JobID, i, totalTables, tableName, "清空目标表")
					clearSQL := ""
					if targetType == "mysql" {
						clearSQL = fmt.Sprintf("TRUNCATE TABLE %s", quoteQualifiedIdentByType(targetType, targetQueryTable))
					} else {
						clearSQL = fmt.Sprintf("DELETE FROM %s", quoteQualifiedIdentByType(targetType, targetQueryTable))
					}
					if _, err := targetDB.Exec(clearSQL); err != nil {
						s.appendLog(config.JobID, &result, "error", fmt.Sprintf(" -> 清空目标表失败: %v", err))
						return
					}
				}
			}

			changeSet := connection.ChangeSet{Inserts: inserts, Updates: updates, Deletes: deletes}
			s.progress(config.JobID, i, totalTables, tableName, "检查字段一致性")
			targetColsResolved := targetCols
			if len(targetColsResolved) == 0 {
				targetColsResolved, err = targetDB.GetColumns(plan.TargetSchema, plan.TargetTable)
				if err != nil {
					s.appendLog(config.JobID, &result, "warn", fmt.Sprintf(" -> 获取目标表字段失败,已跳过字段一致性检查: %v", err))
				}
			}
			if len(targetColsResolved) > 0 {
				targetColSet := make(map[string]struct{}, len(targetColsResolved))
				for _, c := range targetColsResolved {
					name := strings.ToLower(strings.TrimSpace(c.Name))
					if name == "" {
						continue
					}
					targetColSet[name] = struct{}{}
				}
				requiredCols := collectRequiredColumns(changeSet.Inserts, changeSet.Updates)
				missing := make([]string, 0)
				for lower, original := range requiredCols {
					if _, ok := targetColSet[lower]; !ok {
						missing = append(missing, original)
					}
				}
				sort.Strings(missing)
				if len(missing) > 0 {
					if config.AutoAddColumns && supportsAutoAddColumnsForPair(sourceType, targetType) {
						s.appendLog(config.JobID, &result, "warn", fmt.Sprintf(" -> 目标表缺少字段 %d 个,开始自动补齐: %s", len(missing), strings.Join(missing, ", ")))
						added := 0
						for _, colName := range missing {
							colLower := strings.ToLower(strings.TrimSpace(colName))
							srcCol, ok := sourceColsByLower[colLower]
							if !ok {
								continue
							}
							alterSQL, err := buildAddColumnSQLForPair(sourceType, targetType, targetQueryTable, srcCol)
							if err != nil {
								s.appendLog(config.JobID, &result, "error", fmt.Sprintf(" -> 自动补字段失败:字段=%s 错误=%v", colName, err))
								continue
							}
							if _, err := targetDB.Exec(alterSQL); err != nil {
								s.appendLog(config.JobID, &result, "error", fmt.Sprintf(" -> 自动补字段失败:字段=%s 错误=%v", colName, err))
								continue
							}
							added++
							targetColSet[colLower] = struct{}{}
						}
						s.appendLog(config.JobID, &result, "info", fmt.Sprintf(" -> 自动补字段完成:成功=%d 失败=%d", added, len(missing)-added))
					} else {
						s.appendLog(config.JobID, &result, "warn", fmt.Sprintf(" -> 目标表缺少字段 %d 个(未开启自动补齐),将自动忽略:%s", len(missing), strings.Join(missing, ", ")))
					}
					changeSet.Inserts = filterInsertRows(changeSet.Inserts, targetColSet)
					changeSet.Updates = filterUpdateRows(changeSet.Updates, targetColSet)
				}
			}

			s.progress(config.JobID, i, totalTables, tableName, "应用变更")
			if len(changeSet.Inserts) > 0 || len(changeSet.Updates) > 0 || len(changeSet.Deletes) > 0 {
				s.appendLog(config.JobID, &result, "info", fmt.Sprintf(" -> 需插入: %d 行, 需更新: %d 行, 需删除: %d 行", len(changeSet.Inserts), len(changeSet.Updates), len(changeSet.Deletes)))
				if applier, ok := targetDB.(db.BatchApplier); ok {
					if err := applier.ApplyChanges(applyTableName, changeSet); err != nil {
						s.appendLog(config.JobID, &result, "error", fmt.Sprintf(" -> 应用变更失败: %v", err))
						return
					}
					result.RowsInserted += len(changeSet.Inserts)
					result.RowsUpdated += len(changeSet.Updates)
					result.RowsDeleted += len(changeSet.Deletes)
				} else {
					s.appendLog(config.JobID, &result, "warn", " -> 目标驱动不支持应用数据变更 (ApplyChanges).")
					return
				}
			} else {
				s.appendLog(config.JobID, &result, "info", " -> 数据一致,无需变更.")
			}

			if len(plan.PostDataSQL) > 0 {
				s.progress(config.JobID, i, totalTables, tableName, "创建索引")
				if err := executeSQLStatements(targetDB.Exec, plan.PostDataSQL); err != nil {
					s.appendLog(config.JobID, &result, "error", fmt.Sprintf("创建索引失败:表=%s 错误=%v", tableName, err))
					return
				}
			}

			result.TablesSynced++
		}()
	}

	s.progress(config.JobID, totalTables, totalTables, "", "同步完成")
	return result
}

// formatConnSummaryForSync returns a one-line connection summary for log output.
func formatConnSummaryForSync(config connection.ConnectionConfig) string {
	timeoutSeconds := config.Timeout
	if timeoutSeconds <= 0 {
		timeoutSeconds = 30
	}

	dbName := strings.TrimSpace(config.Database)
	if dbName == "" {
		dbName = "(default)"
	}

	return fmt.Sprintf("类型=%s 地址=%s:%d 数据库=%s 用户=%s 超时=%ds",
		config.Type, config.Host, config.Port, dbName, config.User, timeoutSeconds)
}

// appendLog records a log line in the result and, when a job ID is set, forwards it to the reporter.
func (s *SyncEngine) appendLog(jobID string, res *SyncResult, level string, msg string) {
	if res != nil {
		res.Logs = append(res.Logs, msg)
	}
	if s.reporter.OnLog != nil && strings.TrimSpace(jobID) != "" {
		s.reporter.OnLog(SyncLogEvent{
			JobID:   jobID,
			Level:   level,
			Message: msg,
			Ts:      time.Now().UnixMilli(),
		})
	}
}

// progress emits a progress event with a percentage clamped to [0, 100].
func (s *SyncEngine) progress(jobID string, current, total int, table string, stage string) {
	if s.reporter.OnProgress == nil || strings.TrimSpace(jobID) == "" {
		return
	}
	percent := 0
	if total <= 0 {
		if current > 0 {
			percent = 100
		}
	} else {
		if current < 0 {
			current = 0
		}
		if current > total {
			current = total
		}
		percent = (current * 100) / total
	}
	s.reporter.OnProgress(SyncProgressEvent{
		JobID:   jobID,
		Percent: percent,
		Current: current,
		Total:   total,
		Table:   table,
		Stage:   stage,
	})
}

// fail marks the result as failed and emits a final error log and progress event.
func (s *SyncEngine) fail(jobID string, totalTables int, res SyncResult, msg string) SyncResult {
	res.Success = false
	res.Message = msg
	s.appendLog(jobID, &res, "error", "致命错误: "+msg)
	s.progress(jobID, res.TablesSynced, totalTables, "", "同步失败")
	return res
}

// execDDLStatements runs each non-empty DDL statement against the target database and logs success.
func (s *SyncEngine) execDDLStatements(jobID string, res *SyncResult, database db.Database, tableName string, stage string, statements []string) error {
	for _, statement := range statements {
		sqlText := strings.TrimSpace(statement)
		if sqlText == "" {
			continue
		}
		if _, err := database.Exec(sqlText); err != nil {
			return fmt.Errorf("%s失败: %w", stage, err)
		}
		s.appendLog(jobID, res, "info", fmt.Sprintf("表 %s %s成功:%s", tableName, stage, shortenSyncSQL(sqlText)))
	}
	return nil
}

// shortenSyncSQL collapses whitespace and truncates long SQL text for log output.
func shortenSyncSQL(sqlText string) string {
	text := strings.TrimSpace(strings.ReplaceAll(strings.ReplaceAll(sqlText, "\n", " "), "\t", " "))
	text = strings.Join(strings.Fields(text), " ")
	if len(text) <= 120 {
		return text
	}
	return text[:117] + "..."
}