mirror of
https://github.com/Syngnat/GoNavi.git
synced 2026-05-14 20:08:12 +08:00
* 🐛 fix(data-viewer): fix ClickHouse tail-page pagination and improve DuckDB complex-type compatibility
  - DataViewer adds a reverse-pagination strategy for ClickHouse, fixing query failures on the last and second-to-last pages
  - On DuckDB query failure, generate a type-safe SELECT per column and retry with complex types cast to VARCHAR
  - Backfill pagination state uniformly through currentPage, avoiding mismatches between the page number and derived totals
  - Improve query-error logging and retry paths, reducing stalls and false alarms on large tables
* ✨ feat(frontend-driver): quick search in driver management and improved info display
  - Add a search box to locate drivers quickly by keywords such as DuckDB/ClickHouse
  - Show a "matched x / y" counter and a no-results hint
  - Polish the header layout for better alignment in transparent/dark themes
* 🔧 fix(connection-modal): fix multi-datasource URI import parsing and correct Oracle service-name validation
  - Add single-host URI parsing for postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
  - Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
  - Make the Oracle service name a required field; remove the implicit "fall back to the username when the service name is empty" logic
  - Add an Oracle service-name input and URI examples to the connection dialog
* 🐛 fix(query-export): fix stuck query-result exports and route exports by datasource capability
  - Add a reliable fallback on the query-result page so the loading state is always closed on error instead of spinning forever
  - Branch DataGrid export logic by datasource capability, preferring backend ExportQuery with result-set export as the fallback
  - QueryEditor passes the result-export SQL so the exported scope matches the current result
  - Add ExportData/ExportQuery logging on the backend for better observability of the export path
* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals
  - Decode proxied response data with UseNumber, avoiding the default float64 decoding that swallows precision
  - Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe integer range to strings
  - Fix DataViewer total-count parsing so oversized values are no longer coerced to Number for pagination
  - refs #142
* 🐛 fix(driver-manager): remove duplicate network warnings and strengthen proxy guidance
  - Probe the download-chain domain to distinguish "GitHub reachable but the driver download chain unreachable"
  - Keep only the red strong warning when the network is unreachable; drop the duplicate secondary alert
  - Add an "open global proxy settings" entry to the strong warning, steering users to the GoNavi global proxy first
  - Unify icon sizes in the network-check and directory hints, fixing visual inconsistency during loading
  - refs #141
* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions
  - Rework tab drag sorting into a single configurable drag engine
  - Clarify the boundary between drag and click events for consistent interaction
  - Unify dark/transparent styling strategy across components, reducing hard-coded color values
  - Improve visual consistency of the Redis, table, and connection panels in transparent mode
  - refs #144
* ♻️ refactor(update-state): rework the online-update state flow and unify per-version progress display
  - Rework the update-check and download state synchronization to reduce frontend/backend divergence
  - Bind progress display strictly to latestVersion, avoiding cross-version state mixups
  - Improve silent-check state backfill when the About dialog is opened
  - Unify the close and background-hide behavior of the download dialog
  - Keep the existing install flow and add the ability to open the download directory
* 🎨 style(sidebar-log): turn the SQL execution log entry into a floating pill
  - Remove the full-width log entry container at the bottom of the sidebar
  - Add floating-button shadow, border, and transparent background, adapted to light and dark themes
  - Reserve bottom space in the tree area so the entry does not cover content
* ✨ feat(redis-cluster): support logical multi-database isolation and db 0-15 switching in cluster mode
  - Frontend restores db0-db15 database selection and display for Redis clusters
  - Backend adds a logical-database namespace prefix mapping for unified key/pattern read/write isolation
  - Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
  - Cluster command channel supports logical SELECT database switching and logical FLUSHDB clearing
  - refs #145
* ✨ feat(DataGrid): virtual-scrolling performance for large tables and UI consistency fixes
  - Enable dynamic virtual scrolling (auto-switch at ≥500 rows), fixing lag on tables with tens of thousands of rows
  - Render EditableCell as a div in virtual mode; switch CSS selectors from element-level to class-level for the virtual DOM
  - Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
  - Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
  - Fix poor contrast of column-name tooltips in the light transparent theme
  - Add light-theme global scrollbar styles adapted to transparent mode (App.css)
  - Tune App.tsx theme tokens and component styles
  - refs #147
* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell
  - Remove unused code and redundant state
  - Replace deprecated APIs to silence IDE hints
  - Handle floating Promises explicitly to avoid warnings
  - Keep existing update-check and proxy-settings behavior unchanged
* 🔧 fix(ci): fix the DuckDB driver build chain on Windows AMD64
  - Switch DuckDB toolchain preparation to prefer MSYS2
  - Verify that gcc and g++ exist and check their versions
  - Fall back to installing MinGW via Chocolatey when MSYS2 fails
  - Keep Windows ARM64 skipping the DuckDB build, consistent with platform support
* 🔧 fix(ci): fix the DuckDB driver build toolchain on Windows AMD64
  - Switch the DuckDB build chain from MINGW64 to MSYS2 UCRT64
  - Correct the gcc and g++ detection paths on Windows AMD64
  - Add a DuckDB compiler version verification step
* 📝 docs(contributing): add bilingual contribution guides and unify the README entry points
  - Add an English CONTRIBUTING.md as the formal contribution document
  - Add a Chinese CONTRIBUTING.zh-CN.md
  - Point the contribution entries in README and README.zh-CN at the matching language document
* feat(connection,metadata,kingbase): strengthen multi-datasource connectivity and fix Kingbase/Dameng/Oracle/ClickHouse compatibility (#188) (#190), squashing:
  - feat(http-tunnel): support standalone HTTP tunnel connections across multiple datasources, refs #168
  - fix(kingbase-data-grid): fix lag when opening Kingbase tables and reduce object-rendering overhead, refs #178
  - fix(kingbase-transaction): fix syntax errors from doubled quotes on Kingbase transaction commit, refs #176
  - fix(driver-agent): fix Kingbase driver-agent startup failure after upgrading older Win10 builds, refs #177
  - chore(ci): add a manually triggered macOS test build workflow
  - chore(ci): allow the test workflow to auto-trigger on the current branch
  - fix(query-editor): fix the cursor randomly jumping to the end while editing SQL, refs #185
  - feat(data-sync): add diff SQL preview for easier review, refs #174
  - fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections, refs #181
  - fix(oracle-metadata): fix schema filtering when loading views and functions, refs #155
  - fix(dameng-databases): fix an incomplete database list when showing all databases, refs #154
  - fix(connection,db-list): handle empty list returns uniformly and fix Dameng connection-test errors, refs #157
  Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
* ✨ feat(release-notes): auto-generate Release notes and distinguish config-file naming
* 🔁 chore(sync): backfill main into dev (#192), carrying the #188 series above plus Release/0.5.3 (#191)
  Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
  Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>
* 🐛 fix(branch-sync): fix auto-merge being skipped when backfilling main into dev because mergeable is computed asynchronously
  - Poll the mergeable state so a freshly created sync PR does not immediately return UNKNOWN
  - Emit a warning and an execution summary when the merge state has not yet settled
  - Keep the handling paths for conflicting, pending-computation, and auto-mergeable branches distinct
* 🔁 chore(sync): backfill main into dev (#195), carrying the #188 series and Release/0.5.3 (#191) plus:
  - chore(ci): add a manually triggered all-platform test-package build workflow (#194)
  - fix(kingbase): complete primary-key detection and reduce wide-table lag, refs #176, refs #178
  - fix(query-execution): recognize read-query results when the query is preceded by comments
  Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
  Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>
* ♻️ refactor(frontend-sync): polish desktop interaction details and remove the main-to-dev backfill automation
  - Improve the UI of new connections, theme settings, the sidebar tool area, and the SQL log
  - Adjust pagination, filtering, transparent mode, and dialog styles for a unified interaction hierarchy
  - Consolidate how appearance parameters take effect and adapt multiple components
  - Delete the sync-main-to-dev workflow and document manual backfill for maintainers
* feat: unify the width of filter-condition logic buttons (#201)
* 🐛 fix(oracle-query): fix Oracle table-data pagination SQL compatibility, refs #196 (#202)
* ✨ feat(datasource): support DuckDB Parquet file mode and streamline dialog opening
  - Unify DuckDB file-database and Parquet file access capabilities
  - Add URI handling, file selection, read-only mounting, and connection cache-key handling
  - Drop the synchronous driver query before datasource-card clicks, fixing open lag
  - refs #166
* 🐛 fix(dameng): fix the empty database list after a successful Dameng connection
  - Adjust the Dameng database-list strategy to fall back to querying the current schema and current user first
  - Keep the visible-user and owner aggregation logic for low-privilege accounts
  - Add a frontend empty-list hint and backend unit tests to ease troubleshooting
  - close #203
* ✨ feat(data-sync): extend cross-database migration and improve the data-sync UX
  - Unify the same-database sync and cross-database migration entries, with mode distinction and risk hints
  - Extend bidirectional ClickHouse and PG-like migration, and add PG-like, ClickHouse, and TDengine to MongoDB migration routes
  - Improve TDengine target-side table planning, regression tests, and requirement-tracking docs
  - refs #51
Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: TSS <266256496+Zencok@users.noreply.github.com>
604 lines
23 KiB
Go
package sync

import (
	"GoNavi-Wails/internal/connection"
	"GoNavi-Wails/internal/db"
	"encoding/json"
	"fmt"
	"sort"
	"strings"
	"time"
)

// The wrappers below route every tabular source (MySQL, PG-like, ClickHouse,
// TDengine) into the shared tabular-to-MongoDB planner.

func buildMySQLToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}

func buildPGLikeToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}

func buildClickHouseToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}

func buildTDengineToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}

// buildTabularToMongoPlan plans migrating one tabular source table into a
// MongoDB collection. MongoDB is schemaless, so only the source columns are
// inspected; the target side is checked for collection existence and, per the
// target-table strategy, a create-collection command (plus optional index
// commands) is queued.
func buildTabularToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	plan := SchemaMigrationPlan{}
	sourceType := resolveMigrationDBType(config.SourceConfig)
	targetType := resolveMigrationDBType(config.TargetConfig)
	plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
	plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
	plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
	plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
	plan.PlannedAction = "使用已有目标集合导入"

	sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
	if err != nil {
		return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
	}
	if !sourceExists {
		return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
	}

	targetExists, err := inspectMongoCollection(targetDB, plan.TargetSchema, plan.TargetTable)
	if err != nil {
		return plan, sourceCols, nil, fmt.Errorf("检查目标集合失败: %w", err)
	}
	plan.TargetTableExists = targetExists

	strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
	if targetExists {
		plan.Warnings = append(plan.Warnings, "MongoDB 为弱 schema 目标,字段结构以写入文档为准,不执行目标列校验")
		return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
	}

	switch strategy {
	case "existing_only":
		plan.PlannedAction = "目标集合不存在,需先手工创建"
		plan.Warnings = append(plan.Warnings, "当前策略要求目标集合已存在,执行时不会自动创建")
		return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
	case "smart", "auto_create_if_missing":
		plan.AutoCreate = true
		plan.PlannedAction = "目标集合不存在,将自动创建集合后导入"
		createCmd, err := buildMongoCreateCollectionCommand(plan.TargetTable)
		if err != nil {
			return plan, sourceCols, nil, err
		}
		plan.PreDataSQL = append(plan.PreDataSQL, createCmd)
		if config.CreateIndexes {
			indexCmds, warnings, unsupported, created, skipped, err := buildMongoIndexCommands(sourceDB, plan.SourceSchema, plan.SourceTable, plan.TargetTable)
			if err != nil {
				plan.Warnings = append(plan.Warnings, fmt.Sprintf("读取源表索引失败,已跳过索引迁移:%v", err))
			} else {
				plan.PostDataSQL = append(plan.PostDataSQL, indexCmds...)
				plan.Warnings = append(plan.Warnings, warnings...)
				plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
				plan.IndexesToCreate = created
				plan.IndexesSkipped = skipped
			}
		}
		return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
	default:
		return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
	}
}

// buildMongoToMySQLPlan plans a MongoDB-to-MySQL migration: columns are
// inferred from sampled documents, missing target columns can be added, and
// the target table can be auto-created depending on the strategy.
func buildMongoToMySQLPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	plan := SchemaMigrationPlan{}
	plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
	plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
	plan.SourceQueryTable = qualifiedNameForQuery(config.SourceConfig.Type, plan.SourceSchema, plan.SourceTable, tableName)
	plan.TargetQueryTable = qualifiedNameForQuery(config.TargetConfig.Type, plan.TargetSchema, plan.TargetTable, tableName)
	plan.PlannedAction = "使用已有目标表导入"

	sourceCols, warnings, err := inferMongoCollectionColumns(sourceDB, plan.SourceTable)
	if err != nil {
		return plan, nil, nil, err
	}
	plan.Warnings = append(plan.Warnings, warnings...)
	if len(sourceCols) == 0 {
		return plan, nil, nil, fmt.Errorf("源集合未推断出可迁移字段: %s", tableName)
	}

	targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
	if err != nil {
		return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
	}
	plan.TargetTableExists = targetExists

	strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
	if targetExists {
		missing := diffMissingColumnNames(sourceCols, targetCols)
		if len(missing) > 0 {
			plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
		}
		if config.AutoAddColumns {
			addSQL, addWarnings := buildMongoToMySQLAddColumnSQL(plan.TargetQueryTable, sourceCols, targetCols)
			plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
			plan.Warnings = append(plan.Warnings, addWarnings...)
			if len(addSQL) > 0 {
				plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
			}
		}
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	}

	switch strategy {
	case "existing_only":
		plan.PlannedAction = "目标表不存在,需先手工创建"
		plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	case "smart", "auto_create_if_missing":
		plan.AutoCreate = true
		plan.PlannedAction = "目标表不存在,将自动建表后导入"
		createSQL, postSQL, moreWarnings, unsupported, idxCreate, idxSkip, err := buildMongoToMySQLCreateTablePlan(config, plan.TargetQueryTable, sourceCols, sourceDB, plan.SourceSchema, plan.SourceTable)
		if err != nil {
			return plan, sourceCols, targetCols, err
		}
		plan.CreateTableSQL = createSQL
		plan.PostDataSQL = append(plan.PostDataSQL, postSQL...)
		plan.Warnings = append(plan.Warnings, moreWarnings...)
		plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
		plan.IndexesToCreate = idxCreate
		plan.IndexesSkipped = idxSkip
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	default:
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	}
}

// inspectMongoCollection reports whether the named collection exists in
// dbName, matching case-insensitively.
func inspectMongoCollection(database db.Database, dbName, collection string) (bool, error) {
	items, err := database.GetTables(dbName)
	if err != nil {
		return false, err
	}
	target := strings.TrimSpace(collection)
	for _, item := range items {
		if strings.EqualFold(strings.TrimSpace(item), target) {
			return true, nil
		}
	}
	return false, nil
}

// buildMongoCreateCollectionCommand returns the JSON "create" database
// command for the collection.
func buildMongoCreateCollectionCommand(collection string) (string, error) {
	cmd := map[string]interface{}{"create": strings.TrimSpace(collection)}
	data, err := json.Marshal(cmd)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

// buildMongoIndexCommands converts the source table's secondary indexes into
// MongoDB createIndexes commands. It returns the commands, warnings,
// unsupported-object notes, and the created/skipped counts.
func buildMongoIndexCommands(sourceDB db.Database, dbName, tableName, targetCollection string) ([]string, []string, []string, int, int, error) {
	indexes, err := sourceDB.GetIndexes(dbName, tableName)
	if err != nil {
		return nil, nil, nil, 0, 0, err
	}
	grouped := groupIndexDefinitions(indexes)
	cmds := make([]string, 0, len(grouped))
	warnings := make([]string, 0)
	unsupported := make([]string, 0)
	created := 0
	skipped := 0
	for _, idx := range grouped {
		name := strings.TrimSpace(idx.Name)
		if name == "" || strings.EqualFold(name, "primary") {
			continue
		}
		if len(idx.Columns) == 0 {
			skipped++
			unsupported = append(unsupported, fmt.Sprintf("索引 %s 缺少列定义,已跳过", name))
			continue
		}
		kind := strings.ToLower(strings.TrimSpace(idx.IndexType))
		if idx.SubPart > 0 {
			skipped++
			unsupported = append(unsupported, fmt.Sprintf("索引 %s 使用前缀长度,MongoDB 目标暂不支持等价迁移", name))
			continue
		}
		if kind != "" && kind != "btree" {
			warnings = append(warnings, fmt.Sprintf("索引 %s 类型=%s 将按普通索引迁移到 MongoDB", name, idx.IndexType))
		}
		keySpec := make(map[string]int)
		for _, col := range idx.Columns {
			keySpec[col] = 1
		}
		command := map[string]interface{}{
			"createIndexes": strings.TrimSpace(targetCollection),
			"indexes": []map[string]interface{}{{
				"name":   name,
				"key":    keySpec,
				"unique": idx.Unique,
			}},
		}
		data, err := json.Marshal(command)
		if err != nil {
			skipped++
			unsupported = append(unsupported, fmt.Sprintf("索引 %s 生成 MongoDB createIndexes 命令失败:%v", name, err))
			continue
		}
		cmds = append(cmds, string(data))
		created++
	}
	return cmds, dedupeStrings(warnings), dedupeStrings(unsupported), created, skipped, nil
}

// inferMongoCollectionColumns samples up to 200 documents from the collection
// and infers one tabular column definition per field, forcing `_id` to the
// front as the primary key.
func inferMongoCollectionColumns(sourceDB db.Database, collection string) ([]connection.ColumnDefinition, []string, error) {
	query := fmt.Sprintf(`{"find":"%s","filter":{},"limit":200}`, strings.TrimSpace(collection))
	rows, _, err := sourceDB.Query(query)
	if err != nil {
		return nil, nil, fmt.Errorf("读取源集合样本失败: %w", err)
	}
	if len(rows) == 0 {
		return []connection.ColumnDefinition{{Name: "_id", Type: "varchar(64)", Nullable: "NO", Key: "PRI"}}, []string{"源集合暂无样本数据,仅按 `_id` 生成基础主键列"}, nil
	}
	fieldNames := make(map[string]struct{})
	for _, row := range rows {
		for key := range row {
			fieldNames[key] = struct{}{}
		}
	}
	orderedFields := make([]string, 0, len(fieldNames))
	for key := range fieldNames {
		orderedFields = append(orderedFields, key)
	}
	sort.Strings(orderedFields)
	if containsString(orderedFields, "_id") {
		orderedFields = moveStringToFront(orderedFields, "_id")
	}
	columns := make([]connection.ColumnDefinition, 0, len(orderedFields))
	warnings := make([]string, 0)
	for _, field := range orderedFields {
		typeName, nullable, fieldWarnings := inferMongoFieldType(rows, field)
		warnings = append(warnings, fieldWarnings...)
		col := connection.ColumnDefinition{
			Name:     field,
			Type:     typeName,
			Nullable: ternaryString(nullable, "YES", "NO"),
			Key:      "",
			Extra:    "",
		}
		if field == "_id" {
			col.Key = "PRI"
			col.Nullable = "NO"
		}
		columns = append(columns, col)
	}
	return columns, dedupeStrings(warnings), nil
}

// inferMongoFieldType inspects the sampled values of one field and picks the
// widest compatible SQL type (complex > time > float > int > bool > string),
// warning when multiple BSON value types are mixed.
func inferMongoFieldType(rows []map[string]interface{}, field string) (string, bool, []string) {
	nullable := false
	hasString, hasBool, hasInt, hasFloat, hasTime, hasComplex := false, false, false, false, false, false
	for _, row := range rows {
		value, ok := row[field]
		if !ok || value == nil {
			nullable = true
			continue
		}
		switch value.(type) {
		case bool:
			hasBool = true
		case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
			hasInt = true
		case float32, float64:
			hasFloat = true
		case time.Time:
			hasTime = true
		case map[string]interface{}, []interface{}:
			hasComplex = true
		default:
			hasString = true
		}
	}
	kinds := 0
	for _, flag := range []bool{hasString, hasBool, hasInt, hasFloat, hasTime, hasComplex} {
		if flag {
			kinds++
		}
	}
	warnings := make([]string, 0)
	if kinds > 1 {
		warnings = append(warnings, fmt.Sprintf("字段 %s 存在多种 BSON 值类型,已按兼容类型降级", field))
	}
	if field == "_id" {
		return "varchar(64)", false, warnings
	}
	switch {
	case hasComplex:
		return "json", nullable, warnings
	case hasTime:
		return "datetime", nullable, warnings
	case hasFloat:
		return "double", nullable, warnings
	case hasInt:
		return "bigint", nullable, warnings
	case hasBool:
		return "tinyint(1)", nullable, warnings
	default:
		return "varchar(255)", nullable, warnings
	}
}

// buildMongoToMySQLAddColumnSQL emits ALTER TABLE ... ADD COLUMN statements
// for source columns missing from the target table.
func buildMongoToMySQLAddColumnSQL(targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
	targetSet := make(map[string]struct{}, len(targetCols))
	for _, col := range targetCols {
		key := strings.ToLower(strings.TrimSpace(col.Name))
		if key == "" {
			continue
		}
		targetSet[key] = struct{}{}
	}
	var sqlList []string
	for _, col := range sourceCols {
		key := strings.ToLower(strings.TrimSpace(col.Name))
		if key == "" {
			continue
		}
		if _, ok := targetSet[key]; ok {
			continue
		}
		sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
			quoteQualifiedIdentByType("mysql", targetQueryTable),
			quoteIdentByType("mysql", col.Name),
			strings.TrimSpace(col.Type),
		))
	}
	return sqlList, nil
}

// buildMongoToMySQLCreateTablePlan renders the CREATE TABLE statement for a
// MySQL target and, when CreateIndexes is enabled, the post-load CREATE INDEX
// statements derived from the source collection's indexes.
func buildMongoToMySQLCreateTablePlan(config SyncConfig, targetQueryTable string, sourceCols []connection.ColumnDefinition, sourceDB db.Database, sourceSchema, sourceTable string) (string, []string, []string, []string, int, int, error) {
	columnDefs := make([]string, 0, len(sourceCols)+1)
	warnings := make([]string, 0)
	unsupported := make([]string, 0)
	pkCols := make([]string, 0, 1)
	for _, col := range sourceCols {
		columnDef := fmt.Sprintf("%s %s", quoteIdentByType("mysql", col.Name), strings.TrimSpace(col.Type))
		if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
			columnDef += " NOT NULL"
		}
		columnDefs = append(columnDefs, columnDef)
		if col.Key == "PRI" || col.Key == "PK" {
			pkCols = append(pkCols, quoteIdentByType("mysql", col.Name))
		}
	}
	if len(pkCols) > 0 {
		columnDefs = append(columnDefs, fmt.Sprintf("PRIMARY KEY (%s)", strings.Join(pkCols, ", ")))
	} else {
		warnings = append(warnings, "MongoDB 源集合未推断出稳定主键,目标表将不自动创建主键")
	}
	createSQL := fmt.Sprintf("CREATE TABLE %s (\n  %s\n)", quoteQualifiedIdentByType("mysql", targetQueryTable), strings.Join(columnDefs, ",\n  "))
	if !config.CreateIndexes {
		return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
	}
	indexes, err := sourceDB.GetIndexes(sourceSchema, sourceTable)
	if err != nil {
		warnings = append(warnings, fmt.Sprintf("读取源集合索引失败,已跳过索引迁移:%v", err))
		return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
	}
	grouped := groupIndexDefinitions(indexes)
	postSQL := make([]string, 0, len(grouped))
	created := 0
	skipped := 0
	for _, idx := range grouped {
		name := strings.TrimSpace(idx.Name)
		if name == "" || strings.EqualFold(name, "_id_") || strings.EqualFold(name, "primary") {
			continue
		}
		if len(idx.Columns) == 0 {
			skipped++
			unsupported = append(unsupported, fmt.Sprintf("索引 %s 缺少列定义,已跳过", name))
			continue
		}
		quotedCols := make([]string, 0, len(idx.Columns))
		for _, col := range idx.Columns {
			quotedCols = append(quotedCols, quoteIdentByType("mysql", col))
		}
		prefix := "CREATE INDEX"
		if idx.Unique {
			prefix = "CREATE UNIQUE INDEX"
		}
		postSQL = append(postSQL, fmt.Sprintf("%s %s ON %s (%s)", prefix, quoteIdentByType("mysql", name), quoteQualifiedIdentByType("mysql", targetQueryTable), strings.Join(quotedCols, ", ")))
		created++
	}
	return createSQL, postSQL, dedupeStrings(warnings), dedupeStrings(unsupported), created, skipped, nil
}

// containsString reports whether target occurs in items.
func containsString(items []string, target string) bool {
	for _, item := range items {
		if item == target {
			return true
		}
	}
	return false
}

// moveStringToFront returns items with target moved to index 0.
func moveStringToFront(items []string, target string) []string {
	out := make([]string, 0, len(items))
	for _, item := range items {
		if item == target {
			continue
		}
		out = append(out, item)
	}
	return append([]string{target}, out...)
}

// buildMongoToPGLikePlan mirrors buildMongoToMySQLPlan for PostgreSQL-
// compatible targets, mapping the inferred MySQL-flavored column types to
// their PG-like equivalents.
func buildMongoToPGLikePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	plan := SchemaMigrationPlan{}
	targetType := strings.ToLower(strings.TrimSpace(config.TargetConfig.Type))
	plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
	plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
	plan.SourceQueryTable = qualifiedNameForQuery(config.SourceConfig.Type, plan.SourceSchema, plan.SourceTable, tableName)
	plan.TargetQueryTable = qualifiedNameForQuery(config.TargetConfig.Type, plan.TargetSchema, plan.TargetTable, tableName)
	plan.PlannedAction = "使用已有目标表导入"

	sourceCols, warnings, err := inferMongoCollectionColumns(sourceDB, plan.SourceTable)
	if err != nil {
		return plan, nil, nil, err
	}
	plan.Warnings = append(plan.Warnings, warnings...)
	if len(sourceCols) == 0 {
		return plan, nil, nil, fmt.Errorf("源集合未推断出可迁移字段: %s", tableName)
	}

	targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
	if err != nil {
		return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
	}
	plan.TargetTableExists = targetExists

	strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
	if targetExists {
		missing := diffMissingColumnNames(sourceCols, targetCols)
		if len(missing) > 0 {
			plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
		}
		if config.AutoAddColumns {
			addSQL, addWarnings := buildMongoToPGLikeAddColumnSQL(targetType, plan.TargetQueryTable, sourceCols, targetCols)
			plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
			plan.Warnings = append(plan.Warnings, addWarnings...)
			if len(addSQL) > 0 {
				plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
			}
		}
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	}

	switch strategy {
	case "existing_only":
		plan.PlannedAction = "目标表不存在,需先手工创建"
		plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	case "smart", "auto_create_if_missing":
		plan.AutoCreate = true
		plan.PlannedAction = "目标表不存在,将自动建表后导入"
		createSQL, postSQL, moreWarnings, unsupported, idxCreate, idxSkip, err := buildMongoToPGLikeCreateTablePlan(targetType, config, plan.TargetQueryTable, sourceCols, sourceDB, plan.SourceSchema, plan.SourceTable)
		if err != nil {
			return plan, sourceCols, targetCols, err
		}
		plan.CreateTableSQL = createSQL
		plan.PostDataSQL = append(plan.PostDataSQL, postSQL...)
		plan.Warnings = append(plan.Warnings, moreWarnings...)
		plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
		plan.IndexesToCreate = idxCreate
		plan.IndexesSkipped = idxSkip
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	default:
		return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
	}
}

// buildMongoToPGLikeAddColumnSQL emits ALTER TABLE ... ADD COLUMN statements
// for a PG-like target, translating each inferred type on the way.
func buildMongoToPGLikeAddColumnSQL(targetType string, targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
	targetSet := make(map[string]struct{}, len(targetCols))
	for _, col := range targetCols {
		key := strings.ToLower(strings.TrimSpace(col.Name))
		if key == "" {
			continue
		}
		targetSet[key] = struct{}{}
	}
	var sqlList []string
	var warnings []string
	for _, col := range sourceCols {
		key := strings.ToLower(strings.TrimSpace(col.Name))
		if key == "" {
			continue
		}
		if _, ok := targetSet[key]; ok {
			continue
		}
		colType, mapWarnings := mapMongoInferredColumnToPGLike(col)
		warnings = append(warnings, mapWarnings...)
		sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
			quoteQualifiedIdentByType(targetType, targetQueryTable),
			quoteIdentByType(targetType, col.Name),
			colType,
		))
	}
	return sqlList, dedupeStrings(warnings)
}

// buildMongoToPGLikeCreateTablePlan renders the CREATE TABLE statement for a
// PG-like target and, when CreateIndexes is enabled, the post-load
// CREATE INDEX statements derived from the source collection's indexes.
func buildMongoToPGLikeCreateTablePlan(targetType string, config SyncConfig, targetQueryTable string, sourceCols []connection.ColumnDefinition, sourceDB db.Database, sourceSchema, sourceTable string) (string, []string, []string, []string, int, int, error) {
	columnDefs := make([]string, 0, len(sourceCols)+1)
	warnings := make([]string, 0)
	unsupported := make([]string, 0)
	pkCols := make([]string, 0, 1)
	for _, col := range sourceCols {
		colType, colWarnings := mapMongoInferredColumnToPGLike(col)
		warnings = append(warnings, colWarnings...)
		parts := []string{colType}
		if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
			parts = append(parts, "NOT NULL")
		}
		columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType(targetType, col.Name), strings.Join(parts, " ")))
		if col.Key == "PRI" || col.Key == "PK" {
			pkCols = append(pkCols, quoteIdentByType(targetType, col.Name))
		}
	}
	if len(pkCols) > 0 {
		columnDefs = append(columnDefs, fmt.Sprintf("PRIMARY KEY (%s)", strings.Join(pkCols, ", ")))
	}
	createSQL := fmt.Sprintf("CREATE TABLE %s (\n  %s\n)", quoteQualifiedIdentByType(targetType, targetQueryTable), strings.Join(columnDefs, ",\n  "))
	if !config.CreateIndexes {
		return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
	}
	indexes, err := sourceDB.GetIndexes(sourceSchema, sourceTable)
	if err != nil {
		warnings = append(warnings, fmt.Sprintf("读取源集合索引失败,已跳过索引迁移:%v", err))
		return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
	}
	grouped := groupIndexDefinitions(indexes)
	postSQL := make([]string, 0, len(grouped))
	created := 0
	skipped := 0
	for _, idx := range grouped {
		name := strings.TrimSpace(idx.Name)
		if name == "" || strings.EqualFold(name, "_id_") || strings.EqualFold(name, "primary") {
			continue
		}
		if len(idx.Columns) == 0 {
			skipped++
			unsupported = append(unsupported, fmt.Sprintf("索引 %s 缺少列定义,已跳过", name))
			continue
		}
		quotedCols := make([]string, 0, len(idx.Columns))
		for _, col := range idx.Columns {
			quotedCols = append(quotedCols, quoteIdentByType(targetType, col))
		}
		prefix := "CREATE INDEX"
		if idx.Unique {
			prefix = "CREATE UNIQUE INDEX"
		}
		postSQL = append(postSQL, fmt.Sprintf("%s %s ON %s (%s)", prefix, quoteIdentByType(targetType, name), quoteQualifiedIdentByType(targetType, targetQueryTable), strings.Join(quotedCols, ", ")))
		created++
	}
	return createSQL, postSQL, dedupeStrings(warnings), dedupeStrings(unsupported), created, skipped, nil
}

// mapMongoInferredColumnToPGLike translates an inferred MySQL-flavored column
// type into its PG-like equivalent; unrecognized types pass through unchanged.
func mapMongoInferredColumnToPGLike(col connection.ColumnDefinition) (string, []string) {
	raw := strings.ToLower(strings.TrimSpace(col.Type))
	warnings := make([]string, 0)
	switch {
	case strings.HasPrefix(raw, "varchar"):
		return col.Type, warnings
	case raw == "json":
		return "jsonb", warnings
	case raw == "datetime":
		return "timestamp", warnings
	case raw == "tinyint(1)":
		return "boolean", warnings
	case raw == "double":
		return "double precision", warnings
	case raw == "bigint":
		return "bigint", warnings
	default:
		return col.Type, warnings
	}
}
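The type translation performed by mapMongoInferredColumnToPGLike can be exercised standalone. The sketch below is a simplified copy of that mapping for illustration only; `mapToPGLike` is a hypothetical name, it drops the warnings plumbing, and it takes a bare type string rather than a ColumnDefinition:

```go
package main

import (
	"fmt"
	"strings"
)

// mapToPGLike mirrors the switch in mapMongoInferredColumnToPGLike:
// MySQL-flavored inferred types are rewritten to PG-like equivalents,
// and anything unrecognized (including varchar(n) and bigint) passes
// through unchanged.
func mapToPGLike(mysqlType string) string {
	raw := strings.ToLower(strings.TrimSpace(mysqlType))
	switch {
	case strings.HasPrefix(raw, "varchar"):
		return mysqlType
	case raw == "json":
		return "jsonb"
	case raw == "datetime":
		return "timestamp"
	case raw == "tinyint(1)":
		return "boolean"
	case raw == "double":
		return "double precision"
	default:
		return mysqlType
	}
}

func main() {
	// These inputs are exactly the types inferMongoFieldType can produce.
	for _, t := range []string{"json", "datetime", "tinyint(1)", "double", "bigint", "varchar(64)"} {
		fmt.Printf("%-12s -> %s\n", t, mapToPGLike(t))
	}
}
```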