Compare commits


65 Commits

Author SHA1 Message Date
Syngnat
d1d3fa26f1 🔧 fix(frontend/ci): remove the frontend tests' type dependency on node:assert (#234) 2026-03-13 15:37:16 +08:00
Syngnat
cabf84a041 🔧 fix(frontend/ci): remove the frontend tests' type dependency on node:assert
- Fix the TS2307 error where tsc could not resolve node:assert in the darwin/arm64 build
- Replace node:assert in dataGridLayout.test.ts with a local assertEqual
- Replace node:assert in redisViewerWorkbenchTheme.test.ts with local assertion helpers
- Replace node:assert in overlayWorkbenchTheme.test.ts with local assertion helpers
- Keep the original assertion semantics unchanged, avoiding any new runtime dependency
- Verified locally that npm --prefix frontend run build passes
2026-03-13 15:36:09 +08:00
Syngnat
fc8e62b997 release/0.5.8 (#233) 2026-03-13 15:29:53 +08:00
Syngnat
9b02720169 🔧 fix(data-grid): fix virtual-table scrollbar occlusion and unify the horizontal sync path
- Fix the data view's horizontal scrollbar covering the content of the last row
- Attach an external horizontal scrollbar to the virtual table and remove the duplicate internal horizontal track
- Unify the horizontal sync logic between scrollbar dragging and the mouse wheel, fixing content moving while the scrollbar stays put
- Adjust the horizontal scrollbar's bottom docking margin so it no longer presses on table content
- Increase the vertical scrollbar thumb contrast and add a subtle track background, improving visibility in dark themes
- Add DataGrid layout-calculation helpers and minimal test cases
- refs #220
2026-03-13 15:27:18 +08:00
Syngnat
eb36dcc5a2 🔧 fix(redis/ui): unify the Redis workbench interaction styles and fix abnormal Tree node highlighting
- Rework the Redis page into a workbench layout, unifying the left/right panels, toolbar, and detail-area hierarchy
- Wire in light/dark/transparent theme parameters, fixing the Redis page diverging from the global theme
- Add recursive folder checking, select-all, and per-group select/deselect-all
- Support renaming a Redis key from the context menu, updating the tree node, selection state, and detail panel in sync
- Fix read failures when type=none; expired or deleted keys now prompt automatically and are removed from the list
- Take over rendering of the Redis Tree expand arrows, fixing the misaligned switcher hit area and the hover white line
- Unify the theming of the tools, proxy, theme, about, filter, new group, and new connection popups
- refs #231
2026-03-13 14:51:20 +08:00
Syngnat
1a3f137438 🔧 fix(db/kingbase): unify search_path construction and fix double-quote re-escaping
- Add buildKingbaseSearchPathCommon, unifying search_path normalization and assembly (a sketch of the idea follows below)
- Normalize and dedupe schema names first, so already-quoted values are no longer re-escaped into ""schema""
- getSearchPathStr now collects the raw schemas and goes through the shared build flow
- optional-driver-agent reuses the same build function, eliminating drift between the two implementations
- Case-normalize public so the search_path output stays stable
- Add TestBuildKingbaseSearchPathCommon covering quoted/escaped/dedupe scenarios
2026-03-13 11:22:35 +08:00
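To make the normalize-dedupe-quote flow above concrete, here is a minimal Go sketch of such a search_path builder. This is an illustration under assumptions, not the project's buildKingbaseSearchPathCommon: the helper name, exact normalization rules, and output shape are invented for the example.

package main

import (
	"fmt"
	"strings"
)

// buildSearchPath is a hypothetical sketch: it strips any existing quotes,
// dedupes case-insensitively, case-normalizes "public", and re-quotes each
// schema exactly once, so pre-quoted input never becomes ""schema"".
func buildSearchPath(schemas []string) string {
	seen := make(map[string]bool)
	var parts []string
	for _, s := range schemas {
		// Normalize: trim whitespace and surrounding double quotes first.
		name := strings.Trim(strings.TrimSpace(s), `"`)
		if name == "" {
			continue
		}
		if strings.EqualFold(name, "public") {
			name = "public" // stable lowercase output for public
		}
		key := strings.ToLower(name)
		if seen[key] {
			continue // dedupe after normalization
		}
		seen[key] = true
		// Quote once, escaping embedded quotes per SQL rules.
		parts = append(parts, `"`+strings.ReplaceAll(name, `"`, `""`)+`"`)
	}
	return "SET search_path TO " + strings.Join(parts, ", ")
}

func main() {
	// Already-quoted and duplicate inputs collapse to a single clean entry.
	fmt.Println(buildSearchPath([]string{`"app"`, "app", "Public"}))
	// Output: SET search_path TO "app", "public"
}

Stripping existing quotes before re-quoting is the step that keeps an already-quoted input from degrading into a doubly escaped name.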
杨国锋
5f94cd3911 🔧 fix(tab-manager): fix table edit and filter state loss when switching tabs
- Remove the logic that blanks inactive business-tab content, avoiding DataViewer/DataGrid unmount and rebuild
- Set destroyInactiveTabPane=false so switching tabs does not destroy the page
- Persist snapshots uniformly in DataViewer and add a write-back fallback on unmount
- Keep tab switches from auto-refreshing; only a manual refresh or an explicit state change triggers loading
- refs #218
2026-03-12 23:25:32 +08:00
杨国锋
bb257c35bc feat(data-grid): add cross-row copy/paste of multiple columns within the same table
- Add a copy buffer in cell-edit mode that stores the source row and multiple column values
- Add a "copy selection column values" action, allowed only for multi-column selections within a single row
- Add a "paste into selected rows" action that writes by matching column names and automatically excludes the source row
- Reuse the addedRows/modifiedRows change path, keeping commit-transaction and rollback logic consistent
- Add a "paste copied columns (matching names)" entry to the cell context menu
- Clear the copy buffer automatically when switching connections/databases/tables, preventing cross-context mis-pastes
- refs #217
2026-03-12 23:14:52 +08:00
Syngnat
b0eb93bfa3 Release/0.5.7 (#230)
🔧 fix(ci/release-winget): fix the Node20 deprecation warning and force the Node24 runtime

- Add FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true to the release-winget workflow
- Keep it consistent with the Node24 configuration in the existing release/test workflows
- Prevent actions/checkout, setup-go, and setup-node from triggering Node20 deprecation warnings

🔧 fix(window): fix Windows locking up in fullscreen on startup and add title-bar exit-fullscreen logic
2026-03-12 19:46:40 +08:00
杨国锋
11b8e0f12a Merge branch 'dev' into release/0.5.7 2026-03-12 19:39:42 +08:00
杨国锋
1dabac1a65 🔧 fix(window): fix Windows locking up in fullscreen on startup and add title-bar exit-fullscreen logic 2026-03-12 19:38:54 +08:00
杨国锋
e013288967 🔧 fix(ci/release-winget): fix the Node20 deprecation warning and force the Node24 runtime
- Add FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true to the release-winget workflow
- Keep it consistent with the Node24 configuration in the existing release/test workflows
- Prevent actions/checkout, setup-go, and setup-node from triggering Node20 deprecation warnings
2026-03-12 19:23:46 +08:00
Syngnat
8c5fee1c7a * 🔧 fix(release/macos): remove UPX compression from the macOS packaging pipeline 2026-03-12 19:08:05 +08:00
杨国锋
ec05f518a9 Merge remote-tracking branch 'origin/main' into release/0.5.7
# Conflicts:
#	.github/workflows/release.yml
#	.github/workflows/test-build-all-platforms.yml
#	build-release.sh
2026-03-12 19:06:48 +08:00
杨国锋
2c9aa640fd Merge branch 'dev' into release/0.5.7 2026-03-12 19:04:20 +08:00
杨国锋
d467322ebe 🔧 fix(release/macos): remove UPX compression from the macOS packaging pipeline
- Remove the macOS UPX install and compression steps from the release and manual-test workflows
- build-release.sh no longer runs UPX on the macOS arm64/amd64 main binaries
- Keep the UPX compression strategy for Windows and Linux
2026-03-12 19:00:21 +08:00
Syngnat
9f7cc58fad Release/0.5.7 (#227)
* 🎨 style(DataGrid): clean up redundant code and static-analysis warnings

- Type refactor: fix void-typed chained-call errors by correcting the React Context function signatures
- Code slimming: use nullish coalescing (??) for component option fallbacks and strip meaningless implicit undefined assignments
- Toolchain alignment: satisfy the IDE spell checker and Promise strict rules, keeping the whole file warning-free

* 🔧 fix(db/kingbase_impl): fix SQL syntax errors caused by unconditionally double-quoting identifiers

- Make quoteKingbaseIdent conditional: only identifiers with uppercase letters, reserved words, or special characters get double quotes
- Add kingbaseIdentNeedsQuote to decide whether an identifier needs quoting
- Add isKingbaseReservedWord to detect common SQL reserved words
- Add TestQuoteKingbaseIdent and TestKingbaseIdentNeedsQuote unit tests covering each scenario
- refs #176

* 🔧 fix(release,db/kingbase_impl): fix the Kingbase default schema and build DMGs silently

- Kingbase: when current_schema() is public, probe candidate schemas and reconnect with a DSN search_path, supporting queries without schema qualifiers
- Candidate priority: a schema named after the database/username (existence-checked); otherwise fall back only when a single user schema contains tables
- Avoid connection pollution: reset probe results on every Connect; after a successful reconnect, swap in the new connection and close the old one
- Packaging script: add --sandbox-safe to create-dmg so the mounted volume no longer pops up/opens during builds
- Artifact format: force --format UDZO and convert rw.*.dmg/UDRW intermediates into distributable DMGs
- Verification gate: add hdiutil verify, keep the .app on failure for troubleshooting, fix volume icon detection, and add ad-hoc signing

* 🐛 fix(connection/redis): fix authentication failures caused by Redis URI username handling

- Backfill the user field when parsing Redis URIs, supporting both redis://user:pass@... and redis://:pass@...
- Emit user/password in generated URIs as needed, preventing username loss
- Default the Redis username to empty and scrub the legacy default root when building configs
- Prevent go-redis from issuing ACL AUTH(user, pass) and hitting WRONGPASS
- refs #212

* 🔧 fix(release,ssh): fix SSH falsely reporting successful connections and correct the DMG packaging layout

- Include the auth fingerprint (password/keyPath) in the SSH cache key so changed credentials no longer reuse an old connection/port forward
- MySQL/MariaDB/Doris: return an error immediately when the SSH tunnel fails instead of falling back to a direct connection, which made tests falsely succeed
- Add minimal unit tests covering the SSH cache key and UseSSH error paths
- build-release.sh: use a staging directory as the create-dmg source so the DMG root no longer becomes Contents
- refs #213

* fix: set search_path automatically after KingBase connects, fixing "relation does not exist" errors when querying tables under a custom schema (#215)

* 🔧 fix(driver/kingbase,mongodb): fix external-driver transaction quoting and connection-test pipeline issues

- Normalize table names and changed fields in the Kingbase external-driver path, fixing SQL syntax errors from bad double-quote escaping in ApplyChanges
- Add shared Kingbase identifier utilities reused by kingbase_impl and optional_driver_agent_impl, unifying multi-level unescaping, schema.table splitting, and quoting rules
- Auto-probe and set search_path after the Kingbase agent connects, reducing the need to hand-write schema prefixes in queries
- MongoDB connection parameters now prefer explicit host/hosts so they are not overridden by localhost in the URI; the agent path keeps the target address instead of rewriting it to a local one
- Tighten frontend/backend timeouts and improve logging for connection tests to avoid endless spinners; drop the misleading "SSL" prefix from connection errors when TLS is disabled
- Standardize log levels to INFO/WARN/ERROR, consolidate the default log directory to ~/.GoNavi/Logs, and add the driver build script build-driver-agents.sh

* 🔧 fix(release/sidebar): unify cross-platform UPX compression and fix PG function-list query compatibility

- Add a generic UPX compression function to the build script covering macOS, Linux, and Windows artifacts
- Make local packaging enforce compression: abort if upx is missing, compression fails, or verification fails
- macOS packaging compresses the .app main binary before signing and verifies with upx -t
- Linux packaging compresses the executable before creating the tar.gz and verifies with upx -t
- Add upx install and compression steps for macOS/Linux/Windows to the GitHub Release and test build workflows
- Add multi-path compatibility SQL for PostgreSQL/PG-like function metadata queries, fixing the function list not showing
- refs #221
- refs #222

* 🔧 fix(release/ci): fix cross-platform UPX compatibility and handle Windows ARM64 packaging failures

- Enable the Node24 JavaScript runtime across CI workflows, eliminating Node20 retirement warning noise
- Add --force-macos to UPX during macOS packaging, fixing Mach-O compression failures
- Split Windows packaging by architecture: arm64 skips UPX and keeps the original EXE, amd64 remains force-compressed
- Add explicit $LASTEXITCODE checks to the Windows compression flow so failed commands are not mistaken for successes
- Sync the macOS/Windows UPX compatibility strategy and error handling into the local build-release.sh

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: 凌封 <49424247+fengin@users.noreply.github.com>
2026-03-12 17:58:05 +08:00
Syngnat
97bf891df3 Merge remote-tracking branch 'origin/main' into release/0.5.7
# Conflicts:
#	.github/workflows/release.yml
#	.github/workflows/test-build-all-platforms.yml
#	build-release.sh
2026-03-12 17:55:17 +08:00
Syngnat
72a9692200 Merge branch 'dev' into release/0.5.7 2026-03-12 17:54:26 +08:00
Syngnat
e26a456eae 🔧 fix(release/ci): fix cross-platform UPX compatibility and handle Windows ARM64 packaging failures
- Enable the Node24 JavaScript runtime across CI workflows, eliminating Node20 retirement warning noise
- Add --force-macos to UPX during macOS packaging, fixing Mach-O compression failures
- Split Windows packaging by architecture: arm64 skips UPX and keeps the original EXE, amd64 remains force-compressed
- Add explicit $LASTEXITCODE checks to the Windows compression flow so failed commands are not mistaken for successes
- Sync the macOS/Windows UPX compatibility strategy and error handling into the local build-release.sh
2026-03-12 17:54:09 +08:00
Syngnat
eaa45f17fd Release/0.5.7 (#226)
* 🎨 style(DataGrid): clean up redundant code and static-analysis warnings

- Type refactor: fix void-typed chained-call errors by correcting the React Context function signatures
- Code slimming: use nullish coalescing (??) for component option fallbacks and strip meaningless implicit undefined assignments
- Toolchain alignment: satisfy the IDE spell checker and Promise strict rules, keeping the whole file warning-free

* 🔧 fix(db/kingbase_impl): fix SQL syntax errors caused by unconditionally double-quoting identifiers

- Make quoteKingbaseIdent conditional: only identifiers with uppercase letters, reserved words, or special characters get double quotes
- Add kingbaseIdentNeedsQuote to decide whether an identifier needs quoting
- Add isKingbaseReservedWord to detect common SQL reserved words
- Add TestQuoteKingbaseIdent and TestKingbaseIdentNeedsQuote unit tests covering each scenario
- refs #176

* 🔧 fix(release,db/kingbase_impl): fix the Kingbase default schema and build DMGs silently

- Kingbase: when current_schema() is public, probe candidate schemas and reconnect with a DSN search_path, supporting queries without schema qualifiers
- Candidate priority: a schema named after the database/username (existence-checked); otherwise fall back only when a single user schema contains tables
- Avoid connection pollution: reset probe results on every Connect; after a successful reconnect, swap in the new connection and close the old one
- Packaging script: add --sandbox-safe to create-dmg so the mounted volume no longer pops up/opens during builds
- Artifact format: force --format UDZO and convert rw.*.dmg/UDRW intermediates into distributable DMGs
- Verification gate: add hdiutil verify, keep the .app on failure for troubleshooting, fix volume icon detection, and add ad-hoc signing

* 🐛 fix(connection/redis): fix authentication failures caused by Redis URI username handling

- Backfill the user field when parsing Redis URIs, supporting both redis://user:pass@... and redis://:pass@...
- Emit user/password in generated URIs as needed, preventing username loss
- Default the Redis username to empty and scrub the legacy default root when building configs
- Prevent go-redis from issuing ACL AUTH(user, pass) and hitting WRONGPASS
- refs #212

* 🔧 fix(release,ssh): fix SSH falsely reporting successful connections and correct the DMG packaging layout

- Include the auth fingerprint (password/keyPath) in the SSH cache key so changed credentials no longer reuse an old connection/port forward
- MySQL/MariaDB/Doris: return an error immediately when the SSH tunnel fails instead of falling back to a direct connection, which made tests falsely succeed
- Add minimal unit tests covering the SSH cache key and UseSSH error paths
- build-release.sh: use a staging directory as the create-dmg source so the DMG root no longer becomes Contents
- refs #213

* fix: set search_path automatically after KingBase connects, fixing "relation does not exist" errors when querying tables under a custom schema (#215)

* 🔧 fix(driver/kingbase,mongodb): fix external-driver transaction quoting and connection-test pipeline issues

- Normalize table names and changed fields in the Kingbase external-driver path, fixing SQL syntax errors from bad double-quote escaping in ApplyChanges
- Add shared Kingbase identifier utilities reused by kingbase_impl and optional_driver_agent_impl, unifying multi-level unescaping, schema.table splitting, and quoting rules
- Auto-probe and set search_path after the Kingbase agent connects, reducing the need to hand-write schema prefixes in queries
- MongoDB connection parameters now prefer explicit host/hosts so they are not overridden by localhost in the URI; the agent path keeps the target address instead of rewriting it to a local one
- Tighten frontend/backend timeouts and improve logging for connection tests to avoid endless spinners; drop the misleading "SSL" prefix from connection errors when TLS is disabled
- Standardize log levels to INFO/WARN/ERROR, consolidate the default log directory to ~/.GoNavi/Logs, and add the driver build script build-driver-agents.sh

* 🔧 fix(release/sidebar): unify cross-platform UPX compression and fix PG function-list query compatibility

- Add a generic UPX compression function to the build script covering macOS, Linux, and Windows artifacts
- Make local packaging enforce compression: abort if upx is missing, compression fails, or verification fails
- macOS packaging compresses the .app main binary before signing and verifies with upx -t
- Linux packaging compresses the executable before creating the tar.gz and verifies with upx -t
- Add upx install and compression steps for macOS/Linux/Windows to the GitHub Release and test build workflows
- Add multi-path compatibility SQL for PostgreSQL/PG-like function metadata queries, fixing the function list not showing
- refs #221
- refs #222

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: 凌封 <49424247+fengin@users.noreply.github.com>
2026-03-12 17:40:35 +08:00
Syngnat
f101a59d32 Merge remote-tracking branch 'origin/main' into release/0.5.7
# Conflicts:
#	frontend/src/App.tsx
#	frontend/src/components/ConnectionModal.tsx
#	frontend/src/components/DataGrid.tsx
2026-03-12 17:34:07 +08:00
Syngnat
501ad9e9a3 Merge branch 'fix/ssh-issue-20260310-ygf' into dev
# Conflicts:
#	internal/db/kingbase_impl.go
2026-03-12 17:30:48 +08:00
Syngnat
482a7fce2e 🔧 fix(release/sidebar): unify cross-platform UPX compression and fix PG function-list query compatibility
- Add a generic UPX compression function to the build script covering macOS, Linux, and Windows artifacts
- Make local packaging enforce compression: abort if upx is missing, compression fails, or verification fails
- macOS packaging compresses the .app main binary before signing and verifies with upx -t
- Linux packaging compresses the executable before creating the tar.gz and verifies with upx -t
- Add upx install and compression steps for macOS/Linux/Windows to the GitHub Release and test build workflows
- Add multi-path compatibility SQL for PostgreSQL/PG-like function metadata queries, fixing the function list not showing
- refs #221
- refs #222
2026-03-12 17:30:16 +08:00
Syngnat
e6af5f966b 🔧 fix(driver/kingbase,mongodb): fix external-driver transaction quoting and connection-test pipeline issues
- Normalize table names and changed fields in the Kingbase external-driver path, fixing SQL syntax errors from bad double-quote escaping in ApplyChanges
- Add shared Kingbase identifier utilities reused by kingbase_impl and optional_driver_agent_impl, unifying multi-level unescaping, schema.table splitting, and quoting rules
- Auto-probe and set search_path after the Kingbase agent connects, reducing the need to hand-write schema prefixes in queries
- MongoDB connection parameters now prefer explicit host/hosts so they are not overridden by localhost in the URI; the agent path keeps the target address instead of rewriting it to a local one
- Tighten frontend/backend timeouts and improve logging for connection tests to avoid endless spinners; drop the misleading "SSL" prefix from connection errors when TLS is disabled
- Standardize log levels to INFO/WARN/ERROR, consolidate the default log directory to ~/.GoNavi/Logs, and add the driver build script build-driver-agents.sh
2026-03-12 16:45:46 +08:00
凌封
eef973b7fc fix: set search_path automatically after KingBase connects, fixing "relation does not exist" errors when querying tables under a custom schema (#215) 2026-03-12 10:04:49 +08:00
Syngnat
d8b6b4ef8d 🔧 fix(release,ssh): fix SSH falsely reporting successful connections and correct the DMG packaging layout
- Include the auth fingerprint (password/keyPath) in the SSH cache key so changed credentials no longer reuse an old connection/port forward (sketched below)
- MySQL/MariaDB/Doris: return an error immediately when the SSH tunnel fails instead of falling back to a direct connection, which made tests falsely succeed
- Add minimal unit tests covering the SSH cache key and UseSSH error paths
- build-release.sh: use a staging directory as the create-dmg source so the DMG root no longer becomes Contents
- refs #213
2026-03-11 14:36:36 +08:00
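A rough Go sketch of folding the credential fingerprint into the tunnel cache key follows; the field set, separator, and hash are assumptions for illustration, not the project's actual key format.

package main

import (
	"crypto/sha256"
	"fmt"
)

// sshCacheKey is a hypothetical sketch: by hashing the credentials
// (user/password/keyPath) into the key, editing any credential yields a
// new key, so a stale tunnel opened with old credentials cannot be reused.
func sshCacheKey(host string, port int, user, password, keyPath string) string {
	fingerprint := sha256.Sum256([]byte(user + "\x00" + password + "\x00" + keyPath))
	return fmt.Sprintf("%s:%d|%x", host, port, fingerprint[:8])
}

func main() {
	a := sshCacheKey("db.internal", 22, "ops", "old-pass", "")
	b := sshCacheKey("db.internal", 22, "ops", "new-pass", "")
	fmt.Println(a != b) // true: changing the password invalidates the cached tunnel
}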
Syngnat
4d58cc6e26 🐛 fix(connection/redis): fix authentication failures caused by Redis URI username handling
- Backfill the user field when parsing Redis URIs, supporting both redis://user:pass@... and redis://:pass@... (sketched below)
- Emit user/password in generated URIs as needed, preventing username loss
- Default the Redis username to empty and scrub the legacy default root when building configs
- Prevent go-redis from issuing ACL AUTH(user, pass) and hitting WRONGPASS
- refs #212
2026-03-11 14:04:37 +08:00
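The described user/password handling maps naturally onto Go's net/url parsing; the sketch below is illustrative (the helper name is invented), showing how both URI forms yield a usable user field.

package main

import (
	"fmt"
	"net/url"
)

// parseRedisUserPass is a hypothetical sketch of the described fix: it keeps
// the user field from the URI instead of discarding it, so both
// redis://user:pass@host and redis://:pass@host round-trip correctly.
func parseRedisUserPass(raw string) (user, pass string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", err
	}
	if u.User != nil {
		user = u.User.Username() // empty for redis://:pass@...
		pass, _ = u.User.Password()
	}
	return user, pass, nil
}

func main() {
	for _, uri := range []string{
		"redis://app:secret@10.0.0.5:6379/0",
		"redis://:secret@10.0.0.5:6379/0",
	} {
		user, pass, _ := parseRedisUserPass(uri)
		// With an empty user, a client should send plain AUTH pass rather
		// than ACL-style AUTH user pass, avoiding WRONGPASS on servers
		// without a matching ACL user.
		fmt.Printf("user=%q pass=%q\n", user, pass)
	}
}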
Syngnat
b0bdddad9b 🔧 fix(release,db/kingbase_impl): fix the Kingbase default schema and build DMGs silently
- Kingbase: when current_schema() is public, probe candidate schemas and reconnect with a DSN search_path, supporting queries without schema qualifiers
- Candidate priority: a schema named after the database/username (existence-checked); otherwise fall back only when a single user schema contains tables
- Avoid connection pollution: reset probe results on every Connect; after a successful reconnect, swap in the new connection and close the old one
- Packaging script: add --sandbox-safe to create-dmg so the mounted volume no longer pops up/opens during builds
- Artifact format: force --format UDZO and convert rw.*.dmg/UDRW intermediates into distributable DMGs
- Verification gate: add hdiutil verify, keep the .app on failure for troubleshooting, fix volume icon detection, and add ad-hoc signing
2026-03-11 13:39:41 +08:00
Syngnat
a73ca36a32 🔧 fix(db/kingbase_impl): fix SQL syntax errors caused by unconditionally double-quoting identifiers
- Make quoteKingbaseIdent conditional: only identifiers with uppercase letters, reserved words, or special characters get double quotes (sketched below)
- Add kingbaseIdentNeedsQuote to decide whether an identifier needs quoting
- Add isKingbaseReservedWord to detect common SQL reserved words
- Add TestQuoteKingbaseIdent and TestKingbaseIdentNeedsQuote unit tests covering each scenario
- refs #176
2026-03-11 10:23:41 +08:00
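As a hedged illustration of conditional quoting, a checker along these lines could decide when an identifier needs quotes; the reserved-word list and character rules here are simplified placeholders, not the project's actual tables.

package main

import (
	"fmt"
	"strings"
)

// A hypothetical sketch of conditional identifier quoting as described:
// quote only when the identifier contains uppercase letters or special
// characters, or is a reserved word. The word list is illustrative.
var reservedWords = map[string]bool{
	"select": true, "table": true, "user": true, "order": true,
}

func identNeedsQuote(ident string) bool {
	if reservedWords[strings.ToLower(ident)] {
		return true
	}
	for i, r := range ident {
		switch {
		case r >= 'a' && r <= 'z':
		case r == '_':
		case r >= '0' && r <= '9' && i > 0: // digits allowed after the first rune
		default:
			return true // uppercase or special character
		}
	}
	return false
}

func quoteIdent(ident string) string {
	if !identNeedsQuote(ident) {
		return ident
	}
	return `"` + strings.ReplaceAll(ident, `"`, `""`) + `"`
}

func main() {
	fmt.Println(quoteIdent("my_table")) // my_table (left bare)
	fmt.Println(quoteIdent("MyTable"))  // "MyTable"
	fmt.Println(quoteIdent("order"))    // "order" (reserved word)
}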
Syngnat
92e9381fcc 🎨 style(DataGrid): clean up redundant code and static-analysis warnings
- Type refactor: fix void-typed chained-call errors by correcting the React Context function signatures
- Code slimming: use nullish coalescing (??) for component option fallbacks and strip meaningless implicit undefined assignments
- Toolchain alignment: satisfy the IDE spell checker and Promise strict rules, keeping the whole file warning-free
2026-03-11 09:19:49 +08:00
Syngnat
c4c7e379d1 feat(DataGrid): add dynamic show/hide control for table columns
- Add a column-visibility filter to the field panel, with in-list quick search, per-column checkboxes, and one-click reset
- Add persisted state that remembers each table's customized hidden-column configuration
- Tune the data-commit path so hiding columns only affects view interaction, without disturbing insert/update/delete or copy
2026-03-10 16:45:35 +08:00
Syngnat
695713c779 feat(DataGrid): implement drag-to-reorder for data-view column headers with order memory
- Feature integration: use @dnd-kit for horizontal header dragging, supporting flexible repositioning of multiple columns
- Persistence: add a tableColumnOrders state to the store, remembering custom column order per connection-database-table
- Interaction polish: restructure the header DOM and remove padding, delivering precise pointer feedback ("hover shows a hand, hold to grab")
- Performance: reduce re-renders with React.memo and enable will-change hardware acceleration for smooth 60 FPS
- Stability: harden exception handling around Wails API calls and complete the API stubs for standalone frontend development
2026-03-10 15:49:22 +08:00
Syngnat
6ad690cffc release/0.5.6 (#210)
* 🐛 fix(data-viewer): fix ClickHouse tail-page pagination errors and improve DuckDB complex-type compatibility

- Add a reverse pagination strategy for ClickHouse in DataViewer, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT from the column types and retry with complex types cast to VARCHAR
- Backfill pagination state uniformly from currentPage, avoiding inconsistency between the page number and derived totals
- Improve query error logging and retry paths, reducing lag and false alarms on large tables

* feat(frontend-driver): add quick search to driver management and improve information display

- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" counter and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes

* 🔧 fix(connection-modal): fix multi-source URI import parsing and correct Oracle service-name validation

- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to the username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection dialog

* 🐛 fix(query-export): fix query-result export hangs and route exports by data-source capability

- Add a reliable fallback to result-page export, ensuring the loading state closes on errors instead of spinning forever
- Route DataGrid export by data-source capability, preferring backend ExportQuery while keeping result-set export as a fallback
- Pass the result-export SQL through QueryEditor so the exported scope matches the current result
- Add key ExportData/ExportQuery logs on the backend to improve export observability

* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals

- Decode agent responses with UseNumber, avoiding the default float64 swallowing precision
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix total-count parsing in DataViewer so oversized values are no longer coerced to Number for pagination
- refs #142

* 🐛 fix(driver-manager): deduplicate driver-management network warnings and strengthen proxy guidance

- Probe download-path domains to distinguish "GitHub reachable but the driver download path unreachable"
- Keep only the red strong alert when the network is unreachable, removing the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network check and directory hints, fixing visual inconsistency while loading
- refs #141

* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions

- Refactor tab drag-to-reorder into a single configurable drag engine
- Standardize the boundary between drag and click events for more consistent interaction
- Unify the dark/transparent style strategy across components, reducing hard-coded color values
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144

* ♻️ refactor(update-state): rework the online-update state flow and unify progress display per version

- Rework update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing state leaking across versions
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close/background-hide behavior
- Keep the existing install flow and add the ability to open the download directory

* 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add shadow/border/transparent background to the floating button and adapt it to light/dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): support logical multi-database isolation and db 0-15 switching in cluster mode

- Restore db0-db15 database selection and display for Redis clusters in the frontend
- Add cluster logical-database namespace prefix mapping on the backend, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- Support SELECT logical database switching and FLUSHDB logical database flushing over the cluster command channel
- refs #145

* feat(DataGrid): virtual-scrolling performance optimization for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (auto-switch at ≥500 rows), fixing lag on tables with tens of thousands of rows
- Render EditableCell as a div in virtual mode, switching CSS selectors from element-level to class-level for the virtual DOM
- Fix duplicate horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external scrollbar
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (driven by MutationObserver + marginLeft)
- Fix low contrast of the column-name hover Tooltip in light-theme transparent mode
- Add light-theme global scrollbar styles adapted to transparent mode (App.css)
- Optimize theme tokens and component styles in App.tsx
- refs #147

* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged

* 🔧 fix(ci): fix the DuckDB driver build pipeline on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep skipping the DuckDB build on Windows ARM64, consistent with platform support

* 🔧 fix(ci): fix the DuckDB driver build pipeline on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep skipping the DuckDB build on Windows ARM64, consistent with platform support

* 🔧 fix(ci): fix the DuckDB driver build toolchain on Windows AMD64

- Switch the DuckDB compile chain from MINGW64 to MSYS2 UCRT64
- Correct the gcc and g++ detection paths on Windows AMD64
- Add a DuckDB compiler version verification step

* 📝 docs(contributing): add Chinese and English contribution guides and unify the README entry points

- Add the English CONTRIBUTING.md as the official contribution document
- Add the Chinese CONTRIBUTING.zh-CN.md as the Chinese contribution guide
- Point the contribution links in README and README.zh-CN to the matching-language documents

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>

* feat(release-notes): support auto-generated release notes and distinguish config-file naming

* 🔁 chore(sync): back-merge main into dev (#192)

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* 🐛 fix(branch-sync): fix auto-merge being skipped when back-merging main into dev because mergeable is computed asynchronously

- Poll the mergeable status, avoiding an immediate UNKNOWN right after the sync PR is created
- Emit a Chinese-language warning and an execution summary while the merge status has not stabilized
- Keep the handling paths for conflicted, pending-computation, and auto-merge branches clearly separated

* 🔁 chore(sync): back-merge main into dev (#195)

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

* - chore(ci): add a manual workflow to build test packages for all platforms tianqijiuyun-latiao today 4:26 PM (#194)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): complete primary-key detection and optimize wide-table lag refs #176 refs #178

* fix(query-execution): recognize read-query results for queries with leading comments

* chore(ci): add a manual workflow to build test packages for all platforms

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* ♻️ refactor(frontend-sync): polish desktop interaction details and remove the main-to-dev back-merge automation

- Polish the UI of the new-connection dialog, theme settings, sidebar tool area, and SQL log
- Adjust pagination, filtering, transparent mode, and dialog styles, unifying the interaction hierarchy
- Consolidate where appearance parameters take effect and complete multi-component adaptation
- Delete the sync-main-to-dev workflow and update the maintainer notes on manual back-merging

* feat: unify the width of filter-condition logic buttons (#201)

* 🐛 fix(oracle-query): fix Oracle table-data pagination SQL compatibility refs #196 (#202)

* feat(datasource): support DuckDB Parquet file mode and streamline dialog opening

- Unify DuckDB file-database and Parquet file access
- Add URI handling, file selection, read-only mounting, and connection cache-key handling
- Remove the synchronous driver query before data-source card clicks, fixing open lag

* feat(datasource): support DuckDB Parquet file mode and streamline dialog opening

- Unify DuckDB file-database and Parquet file access
- Add URI handling, file selection, read-only mounting, and connection cache-key handling
- Remove the synchronous driver query before data-source card clicks, fixing open lag
- refs #166

* 🐛 fix(dameng): fix the empty database list after a successful Dameng connection

- Adjust the Dameng database-list strategy, falling back first to querying the current schema and current user
- Keep the visible-user and owner aggregation logic, staying compatible with low-privilege accounts
- Add a frontend empty-list hint and backend unit tests to lower troubleshooting cost
- close #203

* feat(data-sync): extend cross-database migration paths and improve the data-sync interaction

- Unify the entry points for same-database sync and cross-database migration, adding mode distinction and risk warnings
- Extend bidirectional ClickHouse/PG-like migration and add PG-like, ClickHouse, and TDengine to MongoDB migration routes
- Improve TDengine target-side table planning, regression tests, and requirement-tracking docs
- refs #51

* 🐛 fix(connection): fix form-data loss when switching tabs while creating a connection

- When testing a connection from the SSH tab, the basic-info host fell back to the default localhost
- When saving from the basic-info tab, the SSH configuration was lost
- The saved result contained only the fields of the currently selected tab
- refs #208

* 🐛 fix(mongodb): fix the address being replaced with an internal one when connecting to a replica-set instance in single mode

- getURI did not set directConnection=true when topology=single
- After connecting to the target address, the driver followed replica-set member discovery and switched to localhost:27017
- Add directConnection=true in mongodb_impl.go and mongodb_impl_v1.go
- Applies only when topology is not replica, no replicaSet is set, and the URI is not SRV
- refs #205

* 🐛 fix(DataGrid): fix the context menu not working in virtual-scrolling mode

- The enabling conditions for the row-level and cell-level context menus were mutually exclusive, so both were disabled in virtual-scrolling mode
- enableLargeResultOptimizedEditing turned off inline editing but did not fall back to enabling the row-level menu
- Adjust the useContextMenuRow and enableRowContextMenu conditions to enable the row-level menu in virtual mode
- Update the useMemo dependency array of dataContextValue
- refs #209

* 🐛 fix(sqlserver): fix pagination syntax and identifier-quoting errors when viewing SQL Server table data

- quoteIdentPart lacked a sqlserver branch, so identifiers were double-quoted instead of [bracket]-quoted
- Add an mssql alias fallback to buildPaginatedSelectSQL so dbType variants no longer hit the default branch
- After the fix, identifiers use [bracket] quoting and pagination uses the OFFSET FETCH NEXT syntax
- refs #204

* feat(DataGrid): unify the table context-menu experience

- Remove the feature-poor row-level ContextMenuRow entirely in favor of the richer cell context menu
- Improve rendering in virtual-scrolling and read-only modes so the cell context menu can still be triggered
- Make the menu adaptive: editing items such as "Set to NULL" and "Fill into selected rows" are hidden automatically when data is read-only or immutable
- refs #209

* 🔧 fix(DataGrid): enable virtual scrolling by default and fix broken multi-cell selection highlighting

- Remove the row/column-count threshold that decided whether to enable virtual scrolling; enable it across the board by default in table view, eliminating the lag for good
- Fix `updateCellSelection` hard-coding a `td` selector when locating coordinate nodes; match `.ant-table-cell` precisely instead, compatible with the `div` rendering mode used in virtual scrolling
- Fix the transparent-window `transparent !important` rule forcibly overriding the highlight styles by raising the specificity of the multi-select background and border CSS
- Fix the right-side highlight gap caused by nested attributes inside and outside the cell, so the highlight box hugs the cell edges exactly
- Respond to theme colors (dark mode uses a dark yellow highlight, light mode uses the default blue highlight)

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: TSS <266256496+Zencok@users.noreply.github.com>
2026-03-10 11:26:02 +08:00
Syngnat
ca49b37dc7 🔧 fix(DataGrid): enable virtual scrolling by default and fix broken multi-cell selection highlighting
- Remove the row/column-count threshold that decided whether to enable virtual scrolling; enable it across the board by default in table view, eliminating the lag for good
- Fix `updateCellSelection` hard-coding a `td` selector when locating coordinate nodes; match `.ant-table-cell` precisely instead, compatible with the `div` rendering mode used in virtual scrolling
- Fix the transparent-window `transparent !important` rule forcibly overriding the highlight styles by raising the specificity of the multi-select background and border CSS
- Fix the right-side highlight gap caused by nested attributes inside and outside the cell, so the highlight box hugs the cell edges exactly
- Respond to theme colors (dark mode uses a dark yellow highlight, light mode uses the default blue highlight)
2026-03-10 11:17:03 +08:00
Syngnat
c8c0c5f20a feat(DataGrid): unify the table context-menu experience
- Remove the feature-poor row-level ContextMenuRow entirely in favor of the richer cell context menu
- Improve rendering in virtual-scrolling and read-only modes so the cell context menu can still be triggered
- Make the menu adaptive: editing items such as "Set to NULL" and "Fill into selected rows" are hidden automatically when data is read-only or immutable
- refs #209
2026-03-10 10:58:27 +08:00
Syngnat
d61d7ec39b 🐛 fix(sqlserver): fix pagination syntax and identifier-quoting errors when viewing SQL Server table data
- quoteIdentPart lacked a sqlserver branch, so identifiers were double-quoted instead of [bracket]-quoted
- Add an mssql alias fallback to buildPaginatedSelectSQL so dbType variants no longer hit the default branch
- After the fix, identifiers use [bracket] quoting and pagination uses the OFFSET FETCH NEXT syntax (sketched below)
- refs #204
2026-03-10 10:50:16 +08:00
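A minimal sketch of dialect-aware quoting and pagination in this spirit is shown below; the function names mirror the commit's description, but the bodies are illustrative assumptions (real code would also have to handle ORDER BY columns and parameterization).

package main

import "fmt"

// SQL Server (including the "mssql" alias) uses [bracket] identifiers and
// OFFSET ... FETCH NEXT; other dialects fall back to double quotes and
// LIMIT/OFFSET here for brevity.
func quoteIdent(dbType, ident string) string {
	switch dbType {
	case "sqlserver", "mssql": // alias fallback so dbType variants don't hit default
		return "[" + ident + "]"
	default:
		return `"` + ident + `"`
	}
}

func paginatedSelect(dbType, table string, page, pageSize int) string {
	offset := (page - 1) * pageSize
	switch dbType {
	case "sqlserver", "mssql":
		// SQL Server requires an ORDER BY clause before OFFSET/FETCH;
		// ORDER BY (SELECT NULL) is the usual no-op placeholder.
		return fmt.Sprintf("SELECT * FROM %s ORDER BY (SELECT NULL) OFFSET %d ROWS FETCH NEXT %d ROWS ONLY",
			quoteIdent(dbType, table), offset, pageSize)
	default:
		return fmt.Sprintf("SELECT * FROM %s LIMIT %d OFFSET %d",
			quoteIdent(dbType, table), pageSize, offset)
	}
}

func main() {
	fmt.Println(paginatedSelect("mssql", "orders", 3, 100))
	// SELECT * FROM [orders] ORDER BY (SELECT NULL) OFFSET 200 ROWS FETCH NEXT 100 ROWS ONLY
}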
Syngnat
e964c8ecf8 🐛 fix(DataGrid): fix the context menu not working in virtual-scrolling mode
- The enabling conditions for the row-level and cell-level context menus were mutually exclusive, so both were disabled in virtual-scrolling mode
- enableLargeResultOptimizedEditing turned off inline editing but did not fall back to enabling the row-level menu
- Adjust the useContextMenuRow and enableRowContextMenu conditions to enable the row-level menu in virtual mode
- Update the useMemo dependency array of dataContextValue
- refs #209
2026-03-10 10:42:34 +08:00
Syngnat
7644462180 🐛 fix(mongodb): fix the address being replaced with an internal one when connecting to a replica-set instance in single mode
- getURI did not set directConnection=true when topology=single
- After connecting to the target address, the driver followed replica-set member discovery and switched to localhost:27017
- Add directConnection=true in mongodb_impl.go and mongodb_impl_v1.go (sketched below)
- Applies only when topology is not replica, no replicaSet is set, and the URI is not SRV
- refs #205
2026-03-10 10:32:31 +08:00
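directConnection=true is a standard mongodb:// URI option (and is incompatible with mongodb+srv://); the guard sketched below shows one plausible shape for the fix described above, with the function name and surrounding logic invented for illustration.

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// withDirectConnection is a hypothetical sketch: when the user chose a
// single-node topology, append directConnection=true so the driver does
// not follow replica-set member discovery to an internal address.
func withDirectConnection(uri, topology, replicaSet string) string {
	if topology == "replica" || replicaSet != "" || strings.HasPrefix(uri, "mongodb+srv://") {
		return uri // discovery is wanted, or SRV forbids the option
	}
	u, err := url.Parse(uri)
	if err != nil {
		return uri
	}
	q := u.Query()
	q.Set("directConnection", "true")
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(withDirectConnection("mongodb://10.0.0.8:27017/admin", "single", ""))
	// mongodb://10.0.0.8:27017/admin?directConnection=true
}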
Syngnat
3bd02e2e09 🐛 fix(connection): fix form-data loss when switching tabs while creating a connection
- When testing a connection from the SSH tab, the basic-info host fell back to the default localhost
- When saving from the basic-info tab, the SSH configuration was lost
- The saved result contained only the fields of the currently selected tab
- refs #208
2026-03-10 10:27:13 +08:00
Syngnat
22bd1c4c28 Release/0.5.5 (#207)
* 🐛 fix(data-viewer): fix ClickHouse tail-page pagination errors and improve DuckDB complex-type compatibility

- Add a reverse pagination strategy for ClickHouse in DataViewer, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT from the column types and retry with complex types cast to VARCHAR
- Backfill pagination state uniformly from currentPage, avoiding inconsistency between the page number and derived totals
- Improve query error logging and retry paths, reducing lag and false alarms on large tables

* feat(frontend-driver): add quick search to driver management and improve information display

- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" counter and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes

* 🔧 fix(connection-modal): fix multi-source URI import parsing and correct Oracle service-name validation

- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to the username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection dialog

* 🐛 fix(query-export): fix query-result export hangs and route exports by data-source capability

- Add a reliable fallback to result-page export, ensuring the loading state closes on errors instead of spinning forever
- Route DataGrid export by data-source capability, preferring backend ExportQuery while keeping result-set export as a fallback
- Pass the result-export SQL through QueryEditor so the exported scope matches the current result
- Add key ExportData/ExportQuery logs on the backend to improve export observability

* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals

- Decode agent responses with UseNumber, avoiding the default float64 swallowing precision
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix total-count parsing in DataViewer so oversized values are no longer coerced to Number for pagination
- refs #142

* 🐛 fix(driver-manager): deduplicate driver-management network warnings and strengthen proxy guidance

- Probe download-path domains to distinguish "GitHub reachable but the driver download path unreachable"
- Keep only the red strong alert when the network is unreachable, removing the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network check and directory hints, fixing visual inconsistency while loading
- refs #141

* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions

- Refactor tab drag-to-reorder into a single configurable drag engine
- Standardize the boundary between drag and click events for more consistent interaction
- Unify the dark/transparent style strategy across components, reducing hard-coded color values
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144

* ♻️ refactor(update-state): rework the online-update state flow and unify progress display per version

- Rework update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing state leaking across versions
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close/background-hide behavior
- Keep the existing install flow and add the ability to open the download directory

* 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add shadow/border/transparent background to the floating button and adapt it to light/dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): support logical multi-database isolation and db 0-15 switching in cluster mode

- Restore db0-db15 database selection and display for Redis clusters in the frontend
- Add cluster logical-database namespace prefix mapping on the backend, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- Support SELECT logical database switching and FLUSHDB logical database flushing over the cluster command channel
- refs #145

* feat(DataGrid): virtual-scrolling performance optimization for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (auto-switch at ≥500 rows), fixing lag on tables with tens of thousands of rows
- Render EditableCell as a div in virtual mode, switching CSS selectors from element-level to class-level for the virtual DOM
- Fix duplicate horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external scrollbar
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (driven by MutationObserver + marginLeft)
- Fix low contrast of the column-name hover Tooltip in light-theme transparent mode
- Add light-theme global scrollbar styles adapted to transparent mode (App.css)
- Optimize theme tokens and component styles in App.tsx
- refs #147

* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged

* 🔧 fix(ci): fix the DuckDB driver build pipeline on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep skipping the DuckDB build on Windows ARM64, consistent with platform support

* 🔧 fix(ci): fix the DuckDB driver build pipeline on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep skipping the DuckDB build on Windows ARM64, consistent with platform support

* 🔧 fix(ci): fix the DuckDB driver build toolchain on Windows AMD64

- Switch the DuckDB compile chain from MINGW64 to MSYS2 UCRT64
- Correct the gcc and g++ detection paths on Windows AMD64
- Add a DuckDB compiler version verification step

* 📝 docs(contributing): add Chinese and English contribution guides and unify the README entry points

- Add the English CONTRIBUTING.md as the official contribution document
- Add the Chinese CONTRIBUTING.zh-CN.md as the Chinese contribution guide
- Point the contribution links in README and README.zh-CN to the matching-language documents

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>

* feat(release-notes): support auto-generated release notes and distinguish config-file naming

* 🔁 chore(sync): back-merge main into dev (#192)

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* 🐛 fix(branch-sync): fix auto-merge being skipped when back-merging main into dev because mergeable is computed asynchronously

- Poll the mergeable status, avoiding an immediate UNKNOWN right after the sync PR is created
- Emit a Chinese-language warning and an execution summary while the merge status has not stabilized
- Keep the handling paths for conflicted, pending-computation, and auto-merge branches clearly separated

* 🔁 chore(sync): back-merge main into dev (#195)

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

* - chore(ci): add a manual workflow to build test packages for all platforms tianqijiuyun-latiao today 4:26 PM (#194)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): complete primary-key detection and optimize wide-table lag refs #176 refs #178

* fix(query-execution): recognize read-query results for queries with leading comments

* chore(ci): add a manual workflow to build test packages for all platforms

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* ♻️ refactor(frontend-sync): polish desktop interaction details and remove the main-to-dev back-merge automation

- Polish the UI of the new-connection dialog, theme settings, sidebar tool area, and SQL log
- Adjust pagination, filtering, transparent mode, and dialog styles, unifying the interaction hierarchy
- Consolidate where appearance parameters take effect and complete multi-component adaptation
- Delete the sync-main-to-dev workflow and update the maintainer notes on manual back-merging

* feat: unify the width of filter-condition logic buttons (#201)

* 🐛 fix(oracle-query): fix Oracle table-data pagination SQL compatibility refs #196 (#202)

* feat(datasource): support DuckDB Parquet file mode and streamline dialog opening

- Unify DuckDB file-database and Parquet file access
- Add URI handling, file selection, read-only mounting, and connection cache-key handling
- Remove the synchronous driver query before data-source card clicks, fixing open lag

* feat(datasource): support DuckDB Parquet file mode and streamline dialog opening

- Unify DuckDB file-database and Parquet file access
- Add URI handling, file selection, read-only mounting, and connection cache-key handling
- Remove the synchronous driver query before data-source card clicks, fixing open lag
- refs #166

* 🐛 fix(dameng): fix the empty database list after a successful Dameng connection

- Adjust the Dameng database-list strategy, falling back first to querying the current schema and current user
- Keep the visible-user and owner aggregation logic, staying compatible with low-privilege accounts
- Add a frontend empty-list hint and backend unit tests to lower troubleshooting cost
- close #203

* feat(data-sync): extend cross-database migration paths and improve the data-sync interaction

- Unify the entry points for same-database sync and cross-database migration, adding mode distinction and risk warnings
- Extend bidirectional ClickHouse/PG-like migration and add PG-like, ClickHouse, and TDengine to MongoDB migration routes
- Improve TDengine target-side table planning, regression tests, and requirement-tracking docs
- refs #51

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: TSS <266256496+Zencok@users.noreply.github.com>
2026-03-09 17:36:52 +08:00
Syngnat
0daf702d25 feat(data-sync): extend cross-database migration paths and improve the data-sync interaction
- Unify the entry points for same-database sync and cross-database migration, adding mode distinction and risk warnings
- Extend bidirectional ClickHouse/PG-like migration and add PG-like, ClickHouse, and TDengine to MongoDB migration routes
- Improve TDengine target-side table planning, regression tests, and requirement-tracking docs
- refs #51
2026-03-09 17:22:26 +08:00
Syngnat
058c74e49a 🐛 fix(dameng): fix the empty database list after a successful Dameng connection
- Adjust the Dameng database-list strategy, falling back first to querying the current schema and current user
- Keep the visible-user and owner aggregation logic, staying compatible with low-privilege accounts
- Add a frontend empty-list hint and backend unit tests to lower troubleshooting cost
- close #203
2026-03-09 11:02:00 +08:00
杨国锋
b85c7529ec feat(datasource): support DuckDB Parquet file mode and streamline dialog opening
- Unify DuckDB file-database and Parquet file access
- Add URI handling, file selection, read-only mounting, and connection cache-key handling
- Remove the synchronous driver query before data-source card clicks, fixing open lag
- refs #166
2026-03-08 18:42:27 +08:00
杨国锋
e521d2125f feat(datasource): support DuckDB Parquet file mode and streamline dialog opening
- Unify DuckDB file-database and Parquet file access
- Add URI handling, file selection, read-only mounting, and connection cache-key handling
- Remove the synchronous driver query before data-source card clicks, fixing open lag
2026-03-08 18:41:05 +08:00
辣条
450fdfa59e 🐛 fix(oracle-query): fix Oracle table-data pagination SQL compatibility refs #196 (#202) 2026-03-08 00:42:48 +08:00
TSS
c87b15b22a feat: unify the width of filter-condition logic buttons (#201) 2026-03-07 21:45:26 +08:00
Syngnat
89c81823bc Release/0.5.4 (#199)
* 🐛 fix(data-viewer): fix ClickHouse tail-page pagination errors and improve DuckDB complex-type compatibility

- Add a reverse pagination strategy for ClickHouse in DataViewer, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT from the column types and retry with complex types cast to VARCHAR
- Backfill pagination state uniformly from currentPage, avoiding inconsistency between the page number and derived totals
- Improve query error logging and retry paths, reducing lag and false alarms on large tables

* feat(frontend-driver): add quick search to driver management and improve information display

- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" counter and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes

* 🔧 fix(connection-modal): fix multi-source URI import parsing and correct Oracle service-name validation

- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to the username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection dialog

* 🐛 fix(query-export): fix query-result export hangs and route exports by data-source capability

- Add a reliable fallback to result-page export, ensuring the loading state closes on errors instead of spinning forever
- Route DataGrid export by data-source capability, preferring backend ExportQuery while keeping result-set export as a fallback
- Pass the result-export SQL through QueryEditor so the exported scope matches the current result
- Add key ExportData/ExportQuery logs on the backend to improve export observability

* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals

- Decode agent responses with UseNumber, avoiding the default float64 swallowing precision
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix total-count parsing in DataViewer so oversized values are no longer coerced to Number for pagination
- refs #142

* 🐛 fix(driver-manager): deduplicate driver-management network warnings and strengthen proxy guidance

- Probe download-path domains to distinguish "GitHub reachable but the driver download path unreachable"
- Keep only the red strong alert when the network is unreachable, removing the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network check and directory hints, fixing visual inconsistency while loading
- refs #141

* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions

- Refactor tab drag-to-reorder into a single configurable drag engine
- Standardize the boundary between drag and click events for more consistent interaction
- Unify the dark/transparent style strategy across components, reducing hard-coded color values
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144

* ♻️ refactor(update-state): rework the online-update state flow and unify progress display per version

- Rework update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing state leaking across versions
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close/background-hide behavior
- Keep the existing install flow and add the ability to open the download directory

* 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add shadow/border/transparent background to the floating button and adapt it to light/dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): support logical multi-database isolation and db 0-15 switching in cluster mode

- Restore db0-db15 database selection and display for Redis clusters in the frontend
- Add cluster logical-database namespace prefix mapping on the backend, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- Support SELECT logical database switching and FLUSHDB logical database flushing over the cluster command channel
- refs #145

* feat(DataGrid): virtual-scrolling performance optimization for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (auto-switch at ≥500 rows), fixing lag on tables with tens of thousands of rows
- Render EditableCell as a div in virtual mode, switching CSS selectors from element-level to class-level for the virtual DOM
- Fix duplicate horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external scrollbar
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (driven by MutationObserver + marginLeft)
- Fix low contrast of the column-name hover Tooltip in light-theme transparent mode
- Add light-theme global scrollbar styles adapted to transparent mode (App.css)
- Optimize theme tokens and component styles in App.tsx
- refs #147

* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged

* 🔧 fix(ci): fix the DuckDB driver build pipeline on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep skipping the DuckDB build on Windows ARM64, consistent with platform support

* 🔧 fix(ci): fix the DuckDB driver build pipeline on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep skipping the DuckDB build on Windows ARM64, consistent with platform support

* 🔧 fix(ci): fix the DuckDB driver build toolchain on Windows AMD64

- Switch the DuckDB compile chain from MINGW64 to MSYS2 UCRT64
- Correct the gcc and g++ detection paths on Windows AMD64
- Add a DuckDB compiler version verification step

* 📝 docs(contributing): add Chinese and English contribution guides and unify the README entry points

- Add the English CONTRIBUTING.md as the official contribution document
- Add the Chinese CONTRIBUTING.zh-CN.md as the Chinese contribution guide
- Point the contribution links in README and README.zh-CN to the matching-language documents

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>

* feat(release-notes): support auto-generated release notes and distinguish config-file naming

* 🔁 chore(sync): back-merge main into dev (#192)

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* 🐛 fix(branch-sync): fix auto-merge being skipped when back-merging main into dev because mergeable is computed asynchronously

- Poll the mergeable status, avoiding an immediate UNKNOWN right after the sync PR is created
- Emit a Chinese-language warning and an execution summary while the merge status has not stabilized
- Keep the handling paths for conflicted, pending-computation, and auto-merge branches clearly separated

* 🔁 chore(sync): back-merge main into dev (#195)

* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

* - chore(ci): add a manual workflow to build test packages for all platforms tianqijiuyun-latiao today 4:26 PM (#194)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): complete primary-key detection and optimize wide-table lag refs #176 refs #178

* fix(query-execution): recognize read-query results for queries with leading comments

* chore(ci): add a manual workflow to build test packages for all platforms

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* ♻️ refactor(frontend-sync): polish desktop interaction details and remove the main-to-dev back-merge automation

- Polish the UI of the new-connection dialog, theme settings, sidebar tool area, and SQL log
- Adjust pagination, filtering, transparent mode, and dialog styles, unifying the interaction hierarchy
- Consolidate where appearance parameters take effect and complete multi-component adaptation
- Delete the sync-main-to-dev workflow and update the maintainer notes on manual back-merging

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
2026-03-07 17:15:30 +08:00
Syngnat
797ba27d20 Merge remote-tracking branch 'origin/main' into dev
# Conflicts:
#	.github/workflows/test-build-all-platforms.yml
#	frontend/src/components/ConnectionModal.tsx
#	internal/db/query_value.go
#	internal/db/query_value_test.go
2026-03-07 17:10:17 +08:00
Syngnat
ed1f40e04a ♻️ refactor(frontend-sync): polish desktop interaction details and remove the main-to-dev back-merge automation
- Polish the UI of the new-connection dialog, theme settings, sidebar tool area, and SQL log
- Adjust pagination, filtering, transparent mode, and dialog styles, unifying the interaction hierarchy
- Consolidate where appearance parameters take effect and complete multi-component adaptation
- Delete the sync-main-to-dev workflow and update the maintainer notes on manual back-merging
2026-03-07 17:01:49 +08:00
辣条
2b190e564f feat(multi-db,query,ci): improve multi-source compatibility, the query experience, and the all-platform test build process (#197)
* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): complete primary-key detection and optimize wide-table lag refs #176 refs #178

* fix(query-execution): recognize read-query results for queries with leading comments

* chore(ci): add a manual workflow to build test packages for all platforms

* fix(ci): fix artifact naming conflicts in the all-platform test packages

* fix(data-viewer): preserve the table scroll position after switching tabs

* fix(datetime-display): fix zero dates being converted incorrectly for display refs #189

* fix(window-scale): fix fonts scaling up abnormally after switching via the taskbar refs #193

* fix(data-grid-scroll): fix touchpad horizontal scrolling not working in the data area refs #175

* fix(db-query-value): clean up the query_value merge conflicts and keep zero-date handling

* chore(ci): remove the old single-platform macOS test workflow
2026-03-07 13:40:50 +08:00
github-actions[bot]
1c050aefd0 🔁 chore(sync): back-merge main into dev (#195)
* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

* - chore(ci): add a manual workflow to build test packages for all platforms tianqijiuyun-latiao today 4:26 PM (#194)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): complete primary-key detection and optimize wide-table lag refs #176 refs #178

* fix(query-execution): recognize read-query results for queries with leading comments

* chore(ci): add a manual workflow to build test packages for all platforms

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>
2026-03-06 17:36:28 +08:00
辣条
75a5a322e0 - chore(ci): add a manual workflow to build test packages for all platforms tianqijiuyun-latiao today 4:26 PM (#194)
* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): complete primary-key detection and optimize wide-table lag refs #176 refs #178

* fix(query-execution): recognize read-query results for queries with leading comments

* chore(ci): add a manual workflow to build test packages for all platforms
2026-03-06 17:32:14 +08:00
Syngnat
61d6197fe3 Merge branch 'fix/editor-sql-error-20260306-ygf' into dev 2026-03-06 14:57:06 +08:00
Syngnat
6157161293 🐛 fix(branch-sync): fix auto-merge being skipped when back-merging main into dev because mergeable is computed asynchronously
- Poll the mergeable status, avoiding an immediate UNKNOWN right after the sync PR is created (see the sketch below)
- Emit a Chinese-language warning and an execution summary while the merge status has not stabilized
- Keep the handling paths for conflicted, pending-computation, and auto-merge branches clearly separated
2026-03-06 14:56:43 +08:00
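For context, GitHub computes a pull request's mergeability asynchronously, so GET /repos/{owner}/{repo}/pulls/{number} can return "mergeable": null right after creation. A hedged Go sketch of the polling idea (not the workflow's actual implementation, which presumably runs inside GitHub Actions) could look like this:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitMergeable re-fetches the PR until GitHub finishes computing
// mergeability; the attempt count and back-off are illustrative choices.
func waitMergeable(owner, repo string, number int) (bool, error) {
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/pulls/%d", owner, repo, number)
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return false, err
		}
		var pr struct {
			Mergeable *bool `json:"mergeable"` // nil while GitHub is still computing
		}
		err = json.NewDecoder(resp.Body).Decode(&pr)
		resp.Body.Close()
		if err != nil {
			return false, err
		}
		if pr.Mergeable != nil {
			return *pr.Mergeable, nil
		}
		time.Sleep(3 * time.Second) // back off, then re-poll
	}
	return false, fmt.Errorf("mergeable still unknown after polling")
}

func main() {
	// Placeholder owner/repo/number for illustration only.
	ok, err := waitMergeable("owner", "repo", 195)
	fmt.Println(ok, err)
}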
github-actions[bot]
0f843a7dcf 🔁 chore(sync): back-merge main into dev (#192)
* - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>
2026-03-06 14:31:15 +08:00
Syngnat
fb65b553e9 Release/0.5.3 (#191) 2026-03-06 14:30:07 +08:00
Syngnat
1a5bf79dd3 Merge branch 'fix/editor-sql-error-20260306-ygf' into dev 2026-03-06 14:27:39 +08:00
Syngnat
dea096d4c2 feat(release-notes): support auto-generated release notes and distinguish config-file naming 2026-03-06 14:26:08 +08:00
github-actions[bot]
04f8b266d3 - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)
* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
2026-03-06 13:57:11 +08:00
辣条
b53227cb15 - feat(connection,metadata,kingbase): enhance multi-source connection capability and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)
* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix table-open lag on Kingbase and reduce object rendering overhead

refs #178

* fix(kingbase-transaction): fix syntax errors caused by duplicated quotes when committing Kingbase transactions

refs #176

* fix(driver-agent): fix the Kingbase driver agent failing to start after upgrading older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): fix schema filtering issues when loading views and functions refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list responses uniformly and fix Dameng connection-test errors refs #157
2026-03-06 13:55:13 +08:00
Syngnat
0246d7fae5 Merge remote-tracking branch 'origin/main' into dev
# Conflicts:
#	CONTRIBUTING.md
#	CONTRIBUTING.zh-CN.md
2026-03-06 11:17:18 +08:00
Syngnat
4aa177ed37 🔧 chore(branch-sync): add permission preconditions for back-merging main into dev and add failure alerts 2026-03-06 11:05:27 +08:00
Syngnat
4f5a7bd94b feat(branch-sync): add an automatic main-to-dev back-merge workflow and sync the Chinese/English contribution guides 2026-03-06 09:40:49 +08:00
Syngnat
00c6f9871f Release/0.5.2 (#183) 2026-03-05 17:17:03 +08:00
104 changed files with 17441 additions and 2372 deletions

.github/release.yaml (new file)

@@ -0,0 +1,26 @@
changelog:
  categories:
    - title: New features
      labels:
        - feature
        - enhancement
        - feat
    - title: Bug fixes
      labels:
        - bug
        - fix
    - title: Docs & process
      labels:
        - docs
        - documentation
        - ci
        - workflow
        - chore
    - title: Refactoring & optimization
      labels:
        - refactor
        - perf
        - optimization
    - title: Other updates
      labels:
        - '*'


@@ -10,6 +10,9 @@ on:
description: 'Tag of release you want to publish'
type: string
env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
jobs:
publish:
runs-on: windows-latest


@@ -8,6 +8,9 @@ on:
permissions:
contents: write
env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
jobs:
# Phase 1: Build in parallel and output artifacts
build:
@@ -88,6 +91,18 @@ jobs:
        with:
          node-version: '20'
      - name: Install UPX (Windows)
        if: contains(matrix.platform, 'windows')
        shell: pwsh
        run: |
          choco install upx --no-progress -y
          $upxCmd = Get-Command upx -ErrorAction SilentlyContinue
          if ($null -eq $upxCmd) {
            Write-Error "❌ upx not found; cannot guarantee the Windows artifact is compressed"
            exit 1
          }
          & upx --version
      # Linux Dependencies (GTK3, WebKit2GTK required by Wails)
      - name: Install Linux Dependencies
        if: contains(matrix.platform, 'linux')
@@ -102,6 +117,9 @@ jobs:
            sudo apt-get install -y libwebkit2gtk-4.0-dev
          fi
          sudo apt-get install -y upx-ucl || sudo apt-get install -y upx
          upx --version
          # AppImage may need FUSE2 to run/package. Package names differ across distros/releases, so install with fallbacks.
          sudo apt-get install -y libfuse2 || sudo apt-get install -y libfuse2t64 || true
@@ -277,6 +295,13 @@ jobs:
            exit 1
          fi
          APP_NAME=$(basename "$APP_PATH")
          APP_BIN=$(find "$APP_PATH/Contents/MacOS" -maxdepth 1 -type f | head -n 1)
          if [ -z "$APP_BIN" ]; then
            echo "❌ macOS app main binary not found!"
            exit 1
          fi
          echo " macOS artifacts skip UPX compression; keeping the original main binary."
          echo "🔏 Performing ad-hoc signing..."
          # Note: with ad-hoc + hardened runtime (--options runtime) and no entitlements configured,
@@ -301,7 +326,7 @@ jobs:
mv "$DMG_NAME" "../../$FINAL_NAME"
# Windows Packaging
- name: Package Windows Portable Zip
- name: Package Windows EXE
if: contains(matrix.platform, 'windows')
shell: pwsh
run: |
@@ -312,7 +337,6 @@ jobs:
          }
          $target = "${{ matrix.build_name }}"
          $finalExeName = "GoNavi-$version-${{ matrix.os_name }}-${{ matrix.arch_name }}${{ matrix.artifact_suffix }}.exe"
          $finalZipName = "GoNavi-$version-${{ matrix.os_name }}-${{ matrix.arch_name }}${{ matrix.artifact_suffix }}.zip"
          if (Test-Path "$target.exe") {
            $finalExe = "$target.exe"
@@ -324,11 +348,39 @@ jobs:
            exit 1
          }
          Write-Host "📦 Creating Windows executable $finalExeName..."
          Copy-Item -LiteralPath $finalExe -Destination "..\\..\\$finalExeName" -Force
          $isArm64Target = "${{ matrix.arch_name }}".ToLowerInvariant() -eq "arm64"
          if ($isArm64Target) {
            Write-Warning "⚠️ UPX does not currently support win64/arm64; skipping compression and keeping the original EXE."
            $LASTEXITCODE = 0
          } else {
            $upxCmd = Get-Command upx -ErrorAction SilentlyContinue
            if ($null -eq $upxCmd) {
              Write-Error "❌ upx not found; cannot guarantee the Windows artifact is compressed"
              exit 1
            }
            $beforeBytes = (Get-Item -LiteralPath $finalExe).Length
            Write-Host "🗜️ Compressing $finalExe with UPX ..."
            & upx --best --lzma --force $finalExe | Out-Host
            if ($LASTEXITCODE -ne 0) {
              Write-Error "❌ UPX compression failed (exit code $LASTEXITCODE)"
              exit 1
            }
            & upx -t $finalExe | Out-Host
            if ($LASTEXITCODE -ne 0) {
              Write-Error "❌ UPX verification failed (exit code $LASTEXITCODE)"
              exit 1
            }
            $afterBytes = (Get-Item -LiteralPath $finalExe).Length
            if ($afterBytes -lt $beforeBytes) {
              $savedBytes = $beforeBytes - $afterBytes
              Write-Host ("✅ UPX compression done: {0:N2}MB -> {1:N2}MB, saved {2:N2}MB" -f ($beforeBytes / 1MB), ($afterBytes / 1MB), ($savedBytes / 1MB))
            } else {
              Write-Host (" UPX compression done: {0:N2}MB -> {1:N2}MB" -f ($beforeBytes / 1MB), ($afterBytes / 1MB))
            }
          }
          Write-Host "📦 Creating Windows zip archive $finalZipName..."
          Compress-Archive -LiteralPath $finalExe -DestinationPath "..\\..\\$finalZipName" -Force
          Write-Host "📦 Emitting Windows executable $finalExeName..."
          Copy-Item -LiteralPath $finalExe -Destination "..\\..\\$finalExeName" -Force
# Linux Packaging (tar.gz and AppImage)
- name: Package Linux
@@ -347,6 +399,17 @@ jobs:
          fi
          chmod +x "$TARGET"
          BEFORE_BYTES=$(wc -c <"$TARGET" | tr -d '[:space:]')
          echo "🗜️ Compressing Linux executable with UPX: $TARGET ..."
          upx --best --lzma --force "$TARGET"
          upx -t "$TARGET"
          AFTER_BYTES=$(wc -c <"$TARGET" | tr -d '[:space:]')
          if [ "$AFTER_BYTES" -lt "$BEFORE_BYTES" ]; then
            SAVED_BYTES=$((BEFORE_BYTES - AFTER_BYTES))
            awk -v b="$BEFORE_BYTES" -v a="$AFTER_BYTES" -v s="$SAVED_BYTES" 'BEGIN { printf "✅ Linux UPX compression done: %.2fMB -> %.2fMB, saved %.2fMB\n", b/1024/1024, a/1024/1024, s/1024/1024 }'
          else
            awk -v b="$BEFORE_BYTES" -v a="$AFTER_BYTES" 'BEGIN { printf " Linux UPX compression done: %.2fMB -> %.2fMB\n", b/1024/1024, a/1024/1024 }'
          fi
          # 1. Create tar.gz
          echo "📦 Packaging $TAR_NAME..."
@@ -419,7 +482,6 @@ jobs:
          path: |
            GoNavi-*.dmg
            GoNavi-*.exe
            GoNavi-*.zip
            GoNavi-*.tar.gz
            GoNavi-*.AppImage
            drivers/**
@@ -550,5 +612,6 @@ jobs:
          files: release-assets/*
          draft: true
          make_latest: true
          generate_release_notes: true
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -0,0 +1,404 @@
name: Test Build All Platforms (Manual)
on:
workflow_dispatch:
inputs:
build_label:
description: "测试包标识(仅用于文件名)"
required: false
default: "test"
permissions:
contents: read
env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
concurrency:
group: test-build-${{ github.ref }}
cancel-in-progress: false
jobs:
build:
name: Build ${{ matrix.platform }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
include:
- os: macos-latest
platform: darwin/amd64
os_name: MacOS
arch_name: Amd64
build_name: gonavi-test-darwin-amd64
wails_tags: ""
artifact_suffix: ""
build_optional_agents: true
linux_webkit: ""
- os: macos-latest
platform: darwin/arm64
os_name: MacOS
arch_name: Arm64
build_name: gonavi-test-darwin-arm64
wails_tags: ""
artifact_suffix: ""
build_optional_agents: true
linux_webkit: ""
- os: windows-latest
platform: windows/amd64
os_name: Windows
arch_name: Amd64
build_name: gonavi-test-windows-amd64
wails_tags: ""
artifact_suffix: ""
build_optional_agents: true
linux_webkit: ""
- os: windows-latest
platform: windows/arm64
os_name: Windows
arch_name: Arm64
build_name: gonavi-test-windows-arm64
wails_tags: ""
artifact_suffix: ""
build_optional_agents: true
linux_webkit: ""
- os: ubuntu-22.04
platform: linux/amd64
os_name: Linux
arch_name: Amd64
build_name: gonavi-test-linux-amd64
wails_tags: ""
artifact_suffix: ""
build_optional_agents: true
linux_webkit: "4.0"
- os: ubuntu-24.04
platform: linux/amd64
os_name: Linux
arch_name: Amd64
build_name: gonavi-test-linux-amd64-webkit41
wails_tags: "webkit2_41"
artifact_suffix: "-WebKit41"
build_optional_agents: false
linux_webkit: "4.1"
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.24'
check-latest: true
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install UPX (Windows)
if: contains(matrix.platform, 'windows')
shell: pwsh
run: |
choco install upx --no-progress -y
$upxCmd = Get-Command upx -ErrorAction SilentlyContinue
if ($null -eq $upxCmd) {
Write-Error "❌ 未检测到 upx无法保证 Windows 测试产物经过压缩"
exit 1
}
& upx --version
- name: Install Linux Dependencies
if: contains(matrix.platform, 'linux')
run: |
sudo apt-get update
sudo apt-get install -y libgtk-3-dev
if [ "${{ matrix.linux_webkit }}" = "4.1" ]; then
sudo apt-get install -y libwebkit2gtk-4.1-dev libsoup-3.0-dev
else
sudo apt-get install -y libwebkit2gtk-4.0-dev
fi
sudo apt-get install -y upx-ucl || sudo apt-get install -y upx
upx --version
sudo apt-get install -y libfuse2 || sudo apt-get install -y libfuse2t64 || true
LINUXDEPLOY_URL="https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-x86_64.AppImage"
PLUGIN_URL="https://github.com/linuxdeploy/linuxdeploy-plugin-gtk/releases/download/continuous/linuxdeploy-plugin-gtk-x86_64.AppImage"
wget --retry-connrefused --waitretry=1 --read-timeout=20 --timeout=15 --tries=3 -O /tmp/linuxdeploy "$LINUXDEPLOY_URL" || {
echo "skip-appimage=true" >> "$GITHUB_ENV"
}
wget --retry-connrefused --waitretry=1 --read-timeout=20 --timeout=15 --tries=3 -O /tmp/linuxdeploy-plugin-gtk "$PLUGIN_URL" || {
echo "skip-appimage=true" >> "$GITHUB_ENV"
}
if [ "${skip-appimage:-false}" != "true" ]; then
chmod +x /tmp/linuxdeploy /tmp/linuxdeploy-plugin-gtk
fi
- name: Install Wails
run: go install github.com/wailsapp/wails/v2/cmd/wails@v2.11.0
- name: Setup MSYS2 Toolchain For DuckDB (Windows AMD64)
id: msys2_duckdb
if: ${{ matrix.build_optional_agents && matrix.platform == 'windows/amd64' }}
continue-on-error: true
uses: msys2/setup-msys2@v2
with:
msystem: UCRT64
update: true
install: >-
mingw-w64-ucrt-x86_64-gcc
- name: Configure DuckDB CGO Toolchain (Windows AMD64)
if: ${{ matrix.build_optional_agents && matrix.platform == 'windows/amd64' }}
shell: pwsh
run: |
function Find-MingwBin([string[]]$candidates) {
foreach ($bin in $candidates) {
if ([string]::IsNullOrWhiteSpace($bin)) {
continue
}
$gcc = Join-Path $bin 'gcc.exe'
$gxx = Join-Path $bin 'g++.exe'
if ((Test-Path $gcc) -and (Test-Path $gxx)) {
return $bin
}
}
return $null
}
$msys2Location = "${{ steps.msys2_duckdb.outputs['msys2-location'] }}"
$candidateBins = @()
if (-not [string]::IsNullOrWhiteSpace($msys2Location)) {
$candidateBins += Join-Path $msys2Location 'ucrt64\bin'
}
$candidateBins += @(
'C:\msys64\ucrt64\bin',
'D:\a\_temp\msys64\ucrt64\bin'
)
$candidateBins = @($candidateBins | Select-Object -Unique)
$mingwBin = Find-MingwBin $candidateBins
if (-not $mingwBin) {
Write-Error "❌ 未找到可用的 DuckDB UCRT64 编译器。"
exit 1
}
$gcc = Join-Path $mingwBin 'gcc.exe'
$gxx = Join-Path $mingwBin 'g++.exe'
"$mingwBin" | Out-File -FilePath $env:GITHUB_PATH -Append -Encoding utf8
"CC=$gcc" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"CXX=$gxx" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
- name: Build App
shell: bash
run: |
set -euo pipefail
BUILD_LABEL="${{ inputs.build_label }}"
if [ -z "$BUILD_LABEL" ]; then
BUILD_LABEL="test"
fi
APP_VERSION="${BUILD_LABEL}-${GITHUB_RUN_NUMBER}"
if [ -n "${{ matrix.wails_tags }}" ]; then
wails build -platform "${{ matrix.platform }}" -clean -o "${{ matrix.build_name }}" -tags "${{ matrix.wails_tags }}" -ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${APP_VERSION}"
else
wails build -platform "${{ matrix.platform }}" -clean -o "${{ matrix.build_name }}" -ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${APP_VERSION}"
fi
- name: Build Optional Driver Agents
if: ${{ matrix.build_optional_agents }}
shell: bash
run: |
set -euo pipefail
TARGET_PLATFORM="${{ matrix.platform }}"
GOOS="${TARGET_PLATFORM%%/*}"
GOARCH="${TARGET_PLATFORM##*/}"
DRIVERS=(mariadb doris sphinx sqlserver sqlite duckdb dameng kingbase highgo vastbase mongodb tdengine clickhouse)
OUTDIR="drivers/${{ matrix.os_name }}"
mkdir -p "$OUTDIR"
for DRIVER in "${DRIVERS[@]}"; do
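# "doris" is built under its legacy build-tag spelling "diros" (see normalize_driver in build-driver-agents.sh).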
BUILD_DRIVER="$DRIVER"
if [ "$DRIVER" = "doris" ]; then
BUILD_DRIVER="diros"
fi
if [ "$DRIVER" = "duckdb" ] && [ "$GOOS" = "windows" ] && [ "$GOARCH" != "amd64" ]; then
echo "跳过 DuckDB driver: ${GOOS}/${GOARCH}"
continue
fi
TAG="gonavi_${BUILD_DRIVER}_driver"
OUTPUT="${DRIVER}-driver-agent-${GOOS}-${GOARCH}"
if [ "$GOOS" = "windows" ]; then
OUTPUT="${OUTPUT}.exe"
fi
OUTPUT_PATH="${OUTDIR}/${OUTPUT}"
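# Only the DuckDB agent needs cgo; every other driver agent builds as pure Go.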
if [ "$DRIVER" = "duckdb" ]; then
CGO_ENABLED=1 GOOS="$GOOS" GOARCH="$GOARCH" go build -tags "$TAG" -trimpath -ldflags "-s -w" -o "$OUTPUT_PATH" ./cmd/optional-driver-agent
else
CGO_ENABLED=0 GOOS="$GOOS" GOARCH="$GOARCH" go build -tags "$TAG" -trimpath -ldflags "-s -w" -o "$OUTPUT_PATH" ./cmd/optional-driver-agent
fi
done
- name: Package macOS
if: contains(matrix.platform, 'darwin')
shell: bash
run: |
set -euo pipefail
brew install create-dmg
LABEL="${{ inputs.build_label }}"
if [ -z "$LABEL" ]; then
LABEL="test"
fi
cd build/bin
APP_PATH=$(find . -maxdepth 1 -name "*.app" | head -n 1)
if [ -z "$APP_PATH" ]; then
echo "未找到 .app 应用包"
exit 1
fi
APP_NAME=$(basename "$APP_PATH")
APP_BIN=$(find "$APP_PATH/Contents/MacOS" -maxdepth 1 -type f | head -n 1)
if [ -z "$APP_BIN" ]; then
echo "未找到 macOS 应用主程序"
exit 1
fi
echo " macOS 产物不执行 UPX 压缩,保留原始主程序。"
codesign --force --deep --sign - "$APP_NAME"
ZIP_NAME="GoNavi-${LABEL}-${{ matrix.os_name }}-${{ matrix.arch_name }}-run${GITHUB_RUN_NUMBER}.zip"
DMG_NAME="GoNavi-${LABEL}-${{ matrix.os_name }}-${{ matrix.arch_name }}-run${GITHUB_RUN_NUMBER}.dmg"
mkdir -p ../../artifacts
ditto -c -k --sequesterRsrc --keepParent "$APP_NAME" "../../artifacts/$ZIP_NAME"
create-dmg \
--volname "GoNavi Test Installer" \
--window-pos 200 120 \
--window-size 800 400 \
--icon-size 100 \
--icon "$APP_NAME" 200 190 \
--hide-extension "$APP_NAME" \
--app-drop-link 600 185 \
"$DMG_NAME" \
"$APP_NAME"
mv "$DMG_NAME" "../../artifacts/$DMG_NAME"
shasum -a 256 "../../artifacts/$ZIP_NAME" > "../../artifacts/$ZIP_NAME.sha256"
shasum -a 256 "../../artifacts/$DMG_NAME" > "../../artifacts/$DMG_NAME.sha256"
- name: Package Windows
if: contains(matrix.platform, 'windows')
shell: pwsh
run: |
$label = "${{ inputs.build_label }}"
if ([string]::IsNullOrWhiteSpace($label)) { $label = 'test' }
Set-Location build/bin
$target = "${{ matrix.build_name }}"
$finalExeName = "GoNavi-$label-${{ matrix.os_name }}-${{ matrix.arch_name }}-run$env:GITHUB_RUN_NUMBER.exe"
if (Test-Path "$target.exe") {
$finalExe = "$target.exe"
} elseif (Test-Path "$target") {
Rename-Item -Path "$target" -NewName "$target.exe"
$finalExe = "$target.exe"
} else {
Write-Error "未找到构建产物 '$target'"
exit 1
}
$isArm64Target = "${{ matrix.arch_name }}".ToLowerInvariant() -eq "arm64"
if ($isArm64Target) {
Write-Warning "⚠️ UPX 当前不支持 win64/arm64跳过压缩并保留原始 EXE。"
$LASTEXITCODE = 0
} else {
$upxCmd = Get-Command upx -ErrorAction SilentlyContinue
if ($null -eq $upxCmd) {
Write-Error "❌ 未找到 upx无法保证 Windows 测试产物经过压缩"
exit 1
}
$beforeBytes = (Get-Item -LiteralPath $finalExe).Length
Write-Host "🗜️ 使用 UPX 压缩 $finalExe ..."
& upx --best --lzma --force $finalExe | Out-Host
if ($LASTEXITCODE -ne 0) {
Write-Error "❌ UPX 压缩失败($LASTEXITCODE"
exit 1
}
& upx -t $finalExe | Out-Host
if ($LASTEXITCODE -ne 0) {
Write-Error "❌ UPX 校验失败($LASTEXITCODE"
exit 1
}
$afterBytes = (Get-Item -LiteralPath $finalExe).Length
if ($afterBytes -lt $beforeBytes) {
$savedBytes = $beforeBytes - $afterBytes
Write-Host ("✅ UPX 压缩完成:{0:N2}MB -> {1:N2}MB减少 {2:N2}MB" -f ($beforeBytes / 1MB), ($afterBytes / 1MB), ($savedBytes / 1MB))
} else {
Write-Host (" UPX 压缩完成:{0:N2}MB -> {1:N2}MB" -f ($beforeBytes / 1MB), ($afterBytes / 1MB))
}
}
New-Item -ItemType Directory -Force -Path ..\..\artifacts | Out-Null
Copy-Item -LiteralPath $finalExe -Destination "..\..\artifacts\$finalExeName" -Force
Get-FileHash "..\..\artifacts\$finalExeName" -Algorithm SHA256 | ForEach-Object { "{0} *{1}" -f $_.Hash.ToLower(), (Split-Path $_.Path -Leaf) } | Out-File "..\..\artifacts\$finalExeName.sha256" -Encoding ascii
- name: Package Linux
if: contains(matrix.platform, 'linux')
shell: bash
run: |
set -euo pipefail
LABEL="${{ inputs.build_label }}"
if [ -z "$LABEL" ]; then
LABEL="test"
fi
cd build/bin
TARGET="${{ matrix.build_name }}"
TAR_NAME="GoNavi-${LABEL}-${{ matrix.os_name }}-${{ matrix.arch_name }}${{ matrix.artifact_suffix }}-run${GITHUB_RUN_NUMBER}.tar.gz"
APPIMAGE_NAME="GoNavi-${LABEL}-${{ matrix.os_name }}-${{ matrix.arch_name }}${{ matrix.artifact_suffix }}-run${GITHUB_RUN_NUMBER}.AppImage"
mkdir -p ../../artifacts
if [ ! -f "$TARGET" ]; then
echo "未找到构建产物 '$TARGET'"
exit 1
fi
chmod +x "$TARGET"
BEFORE_BYTES=$(wc -c <"$TARGET" | tr -d '[:space:]')
echo "🗜️ 使用 UPX 压缩 Linux 可执行文件: $TARGET ..."
upx --best --lzma --force "$TARGET"
upx -t "$TARGET"
AFTER_BYTES=$(wc -c <"$TARGET" | tr -d '[:space:]')
if [ "$AFTER_BYTES" -lt "$BEFORE_BYTES" ]; then
SAVED_BYTES=$((BEFORE_BYTES - AFTER_BYTES))
awk -v b="$BEFORE_BYTES" -v a="$AFTER_BYTES" -v s="$SAVED_BYTES" 'BEGIN { printf "✅ Linux UPX 压缩完成:%.2fMB -> %.2fMB,减少 %.2fMB\n", b/1024/1024, a/1024/1024, s/1024/1024 }'
else
awk -v b="$BEFORE_BYTES" -v a="$AFTER_BYTES" 'BEGIN { printf " Linux UPX 压缩完成:%.2fMB -> %.2fMB\n", b/1024/1024, a/1024/1024 }'
fi
tar -czvf "../../artifacts/$TAR_NAME" "$TARGET"
sha256sum "../../artifacts/$TAR_NAME" > "../../artifacts/$TAR_NAME.sha256"
if [ "${skip-appimage:-false}" = "true" ]; then
echo "跳过 AppImage 打包"
exit 0
fi
mkdir -p AppDir/usr/bin AppDir/usr/share/applications AppDir/usr/share/icons/hicolor/256x256/apps
cp "$TARGET" AppDir/usr/bin/gonavi
printf '%s\n' '[Desktop Entry]' 'Name=GoNavi' 'Exec=gonavi' 'Icon=gonavi' 'Type=Application' 'Categories=Development;Database;' 'Comment=Database Management Tool' > AppDir/usr/share/applications/gonavi.desktop
cp AppDir/usr/share/applications/gonavi.desktop AppDir/gonavi.desktop
if [ -f "../../build/appicon.png" ]; then
cp "../../build/appicon.png" AppDir/usr/share/icons/hicolor/256x256/apps/gonavi.png
cp "../../build/appicon.png" AppDir/gonavi.png
else
touch AppDir/gonavi.png
cp AppDir/gonavi.png AppDir/usr/share/icons/hicolor/256x256/apps/gonavi.png
fi
export DEPLOY_GTK_VERSION=3
/tmp/linuxdeploy --appdir AppDir --plugin gtk --output appimage || exit 0
mv GoNavi*.AppImage "$APPIMAGE_NAME" 2>/dev/null || exit 0
mv "$APPIMAGE_NAME" "../../artifacts/$APPIMAGE_NAME"
sha256sum "../../artifacts/$APPIMAGE_NAME" > "../../artifacts/$APPIMAGE_NAME.sha256"
- name: Upload Artifact
uses: actions/upload-artifact@v4
with:
name: test-build-${{ matrix.build_name }}-run${{ github.run_number }}
path: |
artifacts/*
drivers/**
if-no-files-found: error
retention-days: 7

.github/workflows/test-macos-build.yml vendored Normal file (+94 lines)
View File

@@ -0,0 +1,94 @@
name: Test Build macOS (Manual)
on:
workflow_dispatch:
inputs:
build_label:
description: "测试包标识(仅用于文件名)"
required: false
default: "test"
push:
branches:
- feature/kingbase_opt
paths:
- ".github/workflows/test-macos-build.yml"
permissions:
contents: read
env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
jobs:
build-macos:
name: Build macOS ${{ matrix.arch }}
runs-on: macos-latest
strategy:
fail-fast: false
matrix:
include:
- platform: darwin/amd64
arch: amd64
- platform: darwin/arm64
arch: arm64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: "1.24.3"
check-latest: true
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: "20"
- name: Install Wails
run: go install github.com/wailsapp/wails/v2/cmd/wails@v2.11.0
- name: Build App
run: |
set -euo pipefail
OUTPUT_NAME="gonavi-test-${{ matrix.arch }}"
BUILD_LABEL="${{ inputs.build_label }}"
if [ -z "$BUILD_LABEL" ]; then
BUILD_LABEL="test"
fi
APP_VERSION="${BUILD_LABEL}-${GITHUB_RUN_NUMBER}"
wails build \
-platform "${{ matrix.platform }}" \
-clean \
-o "$OUTPUT_NAME" \
-ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${APP_VERSION}"
- name: Package Zip
run: |
set -euo pipefail
APP_PATH="build/bin/gonavi-test-${{ matrix.arch }}.app"
if [ ! -d "$APP_PATH" ]; then
APP_PATH=$(find build/bin -maxdepth 1 -name "*.app" | head -n 1 || true)
fi
if [ -z "$APP_PATH" ] || [ ! -d "$APP_PATH" ]; then
echo "未找到 .app 产物"
ls -la build/bin || true
exit 1
fi
LABEL="${{ inputs.build_label }}"
if [ -z "$LABEL" ]; then
LABEL="test"
fi
ZIP_NAME="GoNavi-${LABEL}-macos-${{ matrix.arch }}-run${GITHUB_RUN_NUMBER}.zip"
mkdir -p artifacts
ditto -c -k --sequesterRsrc --keepParent "$APP_PATH" "artifacts/$ZIP_NAME"
shasum -a 256 "artifacts/$ZIP_NAME" > "artifacts/$ZIP_NAME.sha256"
- name: Upload Artifact
uses: actions/upload-artifact@v4
with:
name: gonavi-macos-${{ matrix.arch }}-run${{ github.run_number }}
path: artifacts/*
if-no-files-found: error

.gitignore vendored (+1 line)
View File

@@ -17,6 +17,7 @@ dist/
GoNavi-Wails
GoNavi-Wails.exe
.ace-tool/
.superpowers/
.claude/
tmpclaude-*

View File

@@ -79,7 +79,8 @@ Because external pull requests are merged directly into `main`, maintainers must
### 1. Sync `main` -> `dev` (required)
Every change merged into `main` must be synced into `dev`:
The automatic GitHub Actions sync workflow has been removed.
Maintainers should sync `main` back to `dev` manually when needed:
```bash
git checkout dev
@@ -114,7 +115,7 @@ git push origin v0.6.0
### 4. Sync `main` back to `dev` after release
After the release, sync `main` back into `dev` again:
After the release, the same rule applies: sync `main` back into `dev`. Since the automatic workflow has been removed, run the fallback commands manually:
```bash
git checkout dev

View File

@@ -79,7 +79,8 @@ feature/* / fix/* -> dev -> release/* -> main -> tag(vX.Y.Z)
### 1. Sync main → dev (required)
Every change merged into `main` must be synced into `dev`:
The automatic GitHub Actions back-sync workflow has been removed from the repository.
Syncing `main` back to `dev` is now done manually:
```bash
git checkout dev
@@ -114,7 +115,7 @@ git push origin v0.6.0
### 4. Sync main back to dev (required after every release)
After the release, flow `main` back into `dev` again to keep the development and release lines consistent:
After the release, the same rule applies; since the automatic workflow has been removed, run the fallback commands below to keep the development and release lines consistent:
```bash
git checkout dev

View File

@@ -154,6 +154,7 @@ Artifacts are generated in `build/bin`.
The repository includes a release workflow.
Push a `v*` tag to trigger automated build and release.
Release notes are generated automatically from merged pull requests and categorized by `.github/release.yaml`.
Target artifacts include:
- macOS (AMD64 / ARM64)

View File

@@ -147,6 +147,7 @@ wails build -clean
### Cross-platform releases (GitHub Actions)
The repository ships with a release pipeline; pushing a `v*` tag builds and publishes a Release automatically.
Release notes are generated automatically from merged pull requests and categorized by `.github/release.yaml`.
Supported targets:
- macOS (AMD64 / ARM64)

build-driver-agents.sh Executable file (+228 lines)
View File

@@ -0,0 +1,228 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
DEFAULT_DRIVERS=(mariadb doris sphinx sqlserver sqlite duckdb dameng kingbase highgo vastbase mongodb tdengine clickhouse)
usage() {
cat <<'EOF'
Usage:
./build-driver-agents.sh [options]
Options:
--drivers <list> Comma-separated driver list, e.g. kingbase,mongodb
--platform <GOOS/GOARCH>
Target platform; defaults to the current Go environment (go env GOOS/GOARCH)
--out-dir <dir> Root output directory (default: dist/driver-agents)
--bundle-name <name> File name of the bundled driver zip (default: GoNavi-DriverAgents.zip)
--strict Abort on the first driver build failure (default: continue and summarize at the end)
-h, --help Show this help
Examples:
./build-driver-agents.sh
./build-driver-agents.sh --drivers kingbase
./build-driver-agents.sh --platform windows/amd64 --drivers kingbase,mongodb
EOF
}
normalize_driver() {
local name
name="$(echo "${1:-}" | tr '[:upper:]' '[:lower:]' | xargs)"
case "$name" in
doris|diros) echo "doris" ;;
mariadb|sphinx|sqlserver|sqlite|duckdb|dameng|kingbase|highgo|vastbase|mongodb|tdengine|clickhouse)
echo "$name"
;;
*)
return 1
;;
esac
}
build_driver_name() {
case "$1" in
doris) echo "diros" ;;
*) echo "$1" ;;
esac
}
platform_dir_name() {
case "$1" in
windows) echo "Windows" ;;
darwin) echo "MacOS" ;;
linux) echo "Linux" ;;
*) echo "Unknown" ;;
esac
}
driver_csv=""
target_platform=""
out_root="dist/driver-agents"
bundle_name="GoNavi-DriverAgents.zip"
strict_mode="false"
while [[ $# -gt 0 ]]; do
case "$1" in
--drivers)
driver_csv="${2:-}"
shift 2
;;
--platform)
target_platform="${2:-}"
shift 2
;;
--out-dir)
out_root="${2:-}"
shift 2
;;
--bundle-name)
bundle_name="${2:-}"
shift 2
;;
--strict)
strict_mode="true"
shift
;;
-h|--help)
usage
exit 0
;;
*)
echo "❌ 未知参数:$1"
usage
exit 1
;;
esac
done
if ! command -v go >/dev/null 2>&1; then
echo "❌ 未找到 Go请先安装 Go 并确保 go 在 PATH 中。"
exit 1
fi
if [[ -z "$target_platform" ]]; then
target_platform="$(go env GOOS)/$(go env GOARCH)"
fi
if [[ "$target_platform" != */* ]]; then
echo "❌ --platform 参数格式错误,应为 GOOS/GOARCH例如 darwin/arm64"
exit 1
fi
goos="${target_platform%%/*}"
goarch="${target_platform##*/}"
platform_key="${goos}-${goarch}"
platform_dir="$(platform_dir_name "$goos")"
declare -a drivers=()
if [[ -n "$driver_csv" ]]; then
IFS=',' read -r -a raw_drivers <<<"$driver_csv"
for item in "${raw_drivers[@]}"; do
normalized="$(normalize_driver "$item")" || {
echo "❌ 不支持的驱动:$item"
exit 1
}
drivers+=("$normalized")
done
else
drivers=("${DEFAULT_DRIVERS[@]}")
fi
output_dir="${out_root%/}/${platform_key}"
bundle_stage_dir="$(mktemp -d "${TMPDIR:-/tmp}/gonavi-driver-bundle.XXXXXX")"
bundle_platform_dir="$bundle_stage_dir/$platform_dir"
cleanup() {
rm -rf "$bundle_stage_dir"
}
trap cleanup EXIT
mkdir -p "$output_dir" "$bundle_platform_dir"
output_dir_abs="$(cd "$output_dir" && pwd)"
bundle_zip_path="$output_dir_abs/$bundle_name"
declare -a built_assets=()
declare -a failed_drivers=()
declare -a skipped_drivers=()
echo "🚀 开始构建 optional-driver-agent"
echo " 平台:$goos/$goarch"
echo " 输出目录:$output_dir_abs"
echo " 驱动列表:${drivers[*]}"
for driver in "${drivers[@]}"; do
if [[ "$driver" == "duckdb" && "$goos" == "windows" && "$goarch" != "amd64" ]]; then
echo "⚠️ 跳过 duckdb仅支持 windows/amd64"
skipped_drivers+=("$driver")
continue
fi
build_driver="$(build_driver_name "$driver")"
tag="gonavi_${build_driver}_driver"
asset_name="${driver}-driver-agent-${goos}-${goarch}"
if [[ "$goos" == "windows" ]]; then
asset_name="${asset_name}.exe"
fi
output_path="$output_dir_abs/$asset_name"
cgo_enabled=0
if [[ "$driver" == "duckdb" ]]; then
cgo_enabled=1
fi
echo "🔧 构建 $driver -> $asset_name (tag=$tag, CGO_ENABLED=$cgo_enabled)"
set +e
CGO_ENABLED="$cgo_enabled" GOOS="$goos" GOARCH="$goarch" GOTOOLCHAIN=auto \
go build -tags "$tag" -trimpath -ldflags "-s -w" -o "$output_path" ./cmd/optional-driver-agent
build_exit=$?
set -e
if [[ $build_exit -ne 0 ]]; then
echo "❌ 构建失败:$driver"
failed_drivers+=("$driver")
if [[ "$strict_mode" == "true" ]]; then
exit $build_exit
fi
continue
fi
cp "$output_path" "$bundle_platform_dir/$asset_name"
built_assets+=("$asset_name")
done
if [[ ${#built_assets[@]} -eq 0 ]]; then
echo "❌ 未成功构建任何驱动代理。"
exit 1
fi
rm -f "$bundle_zip_path"
if command -v zip >/dev/null 2>&1; then
(
cd "$bundle_stage_dir"
zip -qry "$bundle_zip_path" "$platform_dir"
)
elif command -v ditto >/dev/null 2>&1; then
(
cd "$bundle_stage_dir"
ditto -c -k --sequesterRsrc --keepParent "$platform_dir" "$bundle_zip_path"
)
else
echo "❌ 未找到 zip/ditto无法生成驱动总包 zip。"
exit 1
fi
echo ""
echo "✅ 构建完成"
echo " 单文件输出目录:$output_dir_abs"
echo " 驱动总包:$bundle_zip_path"
echo " 已构建:${built_assets[*]}"
if [[ ${#skipped_drivers[@]} -gt 0 ]]; then
echo " 已跳过:${skipped_drivers[*]}"
fi
if [[ ${#failed_drivers[@]} -gt 0 ]]; then
echo "⚠️ 构建失败驱动:${failed_drivers[*]}"
exit 2
fi

View File

@@ -20,6 +20,75 @@ RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m'
get_file_size_bytes() {
local target="$1"
if [ ! -f "$target" ]; then
echo 0
return
fi
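# BSD stat uses -f%z, GNU coreutils stat uses -c%s; fall back to wc -c if neither form works.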
if stat -f%z "$target" >/dev/null 2>&1; then
stat -f%z "$target"
return
fi
if stat -c%s "$target" >/dev/null 2>&1; then
stat -c%s "$target"
return
fi
wc -c <"$target" | tr -d '[:space:]'
}
format_size_mb() {
local bytes="${1:-0}"
awk -v b="$bytes" 'BEGIN { printf "%.2fMB", b / 1024 / 1024 }'
}
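# Compress a binary in place with UPX and verify the result; abort the build if upx is missing or verification fails.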
try_compress_binary_with_upx() {
local exe_path="$1"
local label="$2"
if [ ! -f "$exe_path" ]; then
echo -e "${RED} ❌ 未找到 ${label} 文件:$exe_path${NC}"
exit 1
fi
if ! command -v upx >/dev/null 2>&1; then
echo -e "${RED} ❌ 未找到 upx${label} 必须进行压缩后才能继续打包。${NC}"
case "$(uname -s)" in
Darwin)
echo " 安装命令: brew install upx"
;;
Linux)
echo " 安装命令: sudo apt-get install -y upx-ucl (或对应发行版包管理器)"
;;
esac
exit 1
fi
local before_bytes after_bytes
before_bytes=$(get_file_size_bytes "$exe_path")
echo " 🗜️ 正在使用 UPX 压缩 ${label}..."
if upx --best --lzma --force "$exe_path" >/dev/null 2>&1; then
if ! upx -t "$exe_path" >/dev/null 2>&1; then
echo -e "${RED} ❌ UPX 校验失败:${label}${NC}"
exit 1
fi
after_bytes=$(get_file_size_bytes "$exe_path")
if [ "$after_bytes" -lt "$before_bytes" ]; then
local saved_bytes=$((before_bytes - after_bytes))
echo " ✅ UPX 压缩完成: $(format_size_mb "$before_bytes") -> $(format_size_mb "$after_bytes"),减少 $(format_size_mb "$saved_bytes")"
else
echo " UPX 压缩完成: $(format_size_mb "$before_bytes") -> $(format_size_mb "$after_bytes")"
fi
else
echo -e "${RED} ❌ UPX 压缩失败:${label}${NC}"
exit 1
fi
}
MAC_VOLICON_PATH="build/darwin/icon.icns"
if [ ! -f "$MAC_VOLICON_PATH" ]; then
MAC_VOLICON_PATH=""
fi
echo -e "${GREEN}🚀 开始构建 $APP_NAME $VERSION...${NC}"
# 清理并创建输出目录
@@ -36,47 +105,101 @@ if [ $? -eq 0 ]; then
# Move the .app into dist
mv "$APP_SRC" "$DIST_DIR/$APP_DEST_NAME"
APP_BIN_PATH=$(find "$DIST_DIR/$APP_DEST_NAME/Contents/MacOS" -maxdepth 1 -type f -print -quit)
if [ -n "$APP_BIN_PATH" ] && [ -f "$APP_BIN_PATH" ]; then
echo -e "${YELLOW} ⚠️ macOS arm64 no longer runs UPX compression; keeping the original binary.${NC}"
else
echo -e "${RED} ❌ macOS arm64 main binary not found.${NC}"
exit 1
fi
# Create the DMG
if command -v create-dmg &> /dev/null; then
echo " 📦 Packaging DMG (arm64)..."
# Remove any existing DMG (just in case)
rm -f "$DIST_DIR/$DMG_NAME"
create-dmg \
--volname "${APP_NAME} ${VERSION}" \
--volicon "build/appicon.icns" \
--window-pos 200 120 \
--window-size 800 400 \
--icon-size 100 \
--icon "$APP_DEST_NAME" 200 190 \
--hide-extension "$APP_DEST_NAME" \
--app-drop-link 600 185 \
"$DIST_DIR/$DMG_NAME" \
"$DIST_DIR/$APP_DEST_NAME"
# Check for a leftover rw.* temp file and rename it (create-dmg sometimes does this)
if [ ! -f "$DIST_DIR/$DMG_NAME" ]; then
RW_FILE=$(find "$DIST_DIR" -name "rw.*.dmg" -print -quit)
if [ -n "$RW_FILE" ]; then
echo -e "${YELLOW} ⚠️ Temporary file name detected; renaming...${NC}"
mv "$RW_FILE" "$DIST_DIR/$DMG_NAME"
fi
# Ad-hoc code signing (avoids Gatekeeper's "damaged" warning when no Apple Developer account is available)
echo " 🔏 Ad-hoc signing the .app (arm64)..."
codesign --force --deep --sign - "$DIST_DIR/$APP_DEST_NAME"
# Create the DMG
if command -v create-dmg &> /dev/null; then
echo " 📦 Packaging DMG (arm64)..."
# Remove any existing DMG (just in case)
rm -f "$DIST_DIR/$DMG_NAME"
# create-dmg's source must be a directory containing the .app; the .app path itself cannot be passed directly.
STAGE_DIR=$(mktemp -d "$DIST_DIR/.dmg-stage-${APP_NAME}-${VERSION}-arm64.XXXXXX")
if [ -z "$STAGE_DIR" ] || [ ! -d "$STAGE_DIR" ]; then
echo -e "${RED} ❌ Failed to create the DMG staging directory; skipping DMG packaging.${NC}"
else
if command -v ditto &> /dev/null; then
ditto "$DIST_DIR/$APP_DEST_NAME" "$STAGE_DIR/$APP_DEST_NAME"
else
cp -R "$DIST_DIR/$APP_DEST_NAME" "$STAGE_DIR/$APP_DEST_NAME"
fi
# --sandbox-safe skips Finder's AppleScript layout step so no mount window pops up during packaging (friendlier for silent CI/local builds).
CREATE_DMG_ARGS=(--volname "${APP_NAME} ${VERSION}" --format UDZO --sandbox-safe)
if [ -n "$MAC_VOLICON_PATH" ]; then
CREATE_DMG_ARGS+=(--volicon "$MAC_VOLICON_PATH")
else
echo -e "${YELLOW} ⚠️ macOS volume icon (build/darwin/icon.icns) not found; skipping --volicon.${NC}"
fi
# Remove the intermediate .app to keep the directory tidy
rm -rf "$DIST_DIR/$APP_DEST_NAME"
if [ -f "$DIST_DIR/$DMG_NAME" ]; then
echo " ✅ Generated $DMG_NAME"
else
echo -e "${RED} ❌ DMG generation failed; check the create-dmg output.${NC}"
create-dmg "${CREATE_DMG_ARGS[@]}" \
--window-pos 200 120 \
--window-size 800 400 \
--icon-size 100 \
--icon "$APP_DEST_NAME" 200 190 \
--hide-extension "$APP_DEST_NAME" \
--app-drop-link 600 185 \
"$DIST_DIR/$DMG_NAME" \
"$STAGE_DIR"
CREATE_DMG_EXIT_CODE=$?
rm -rf "$STAGE_DIR"
if [ $CREATE_DMG_EXIT_CODE -ne 0 ]; then
echo -e "${RED} ❌ create-dmg 执行失败 (exit=$CREATE_DMG_EXIT_CODE),保留 .app 以便排查。${NC}"
else
# create-dmg 可能会在失败时遗留 rw.*.dmg 中间产物;不要直接当作最终产物使用
if [ ! -f "$DIST_DIR/$DMG_NAME" ]; then
RW_FILE=$(find "$DIST_DIR" -maxdepth 1 -name "rw.*.dmg" -print -quit)
if [ -n "$RW_FILE" ]; then
echo -e "${YELLOW} ⚠️ 检测到 create-dmg 中间产物: $(basename "$RW_FILE"),正在转换为可分发 DMG...${NC}"
hdiutil convert "$RW_FILE" -format UDZO -o "$DIST_DIR/$DMG_NAME" >/dev/null 2>&1
rm -f "$RW_FILE"
fi
fi
# Defensive: even when the target file exists, make sure it is not UDRW (UDRW images can appear "damaged/unopenable" in Finder)
if [ -f "$DIST_DIR/$DMG_NAME" ] && command -v hdiutil &> /dev/null; then
DMG_FORMAT=$(hdiutil imageinfo "$DIST_DIR/$DMG_NAME" 2>/dev/null | awk -F': ' '/^Format:/{print $2; exit}')
if [ "$DMG_FORMAT" = "UDRW" ]; then
echo -e "${YELLOW} ⚠️ 检测到 UDRW可写原始映像正在转换为 UDZO...${NC}"
TMP_UDZO="$DIST_DIR/.tmp.$DMG_NAME"
rm -f "$TMP_UDZO"
hdiutil convert "$DIST_DIR/$DMG_NAME" -format UDZO -o "$TMP_UDZO" >/dev/null 2>&1 && mv "$TMP_UDZO" "$DIST_DIR/$DMG_NAME"
fi
fi
if [ -f "$DIST_DIR/$DMG_NAME" ] && command -v hdiutil &> /dev/null; then
hdiutil verify "$DIST_DIR/$DMG_NAME" >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo -e "${RED} ❌ DMG 校验失败,保留 .app 以便排查。${NC}"
else
# 删除中间的 .app 文件,保持目录整洁
rm -rf "$DIST_DIR/$APP_DEST_NAME"
echo " ✅ 已生成 $DMG_NAME"
fi
fi
fi
else
echo -e "${YELLOW} ⚠️ 未找到 create-dmg 工具,跳过 DMG 打包,仅保留 .app。${NC}"
echo " 安装命令: brew install create-dmg"
fi
else
if [ ! -f "$DIST_DIR/$DMG_NAME" ]; then
echo -e "${RED} ❌ DMG 生成失败,请检查 create-dmg 输出。${NC}"
fi
fi
else
echo -e "${YELLOW} ⚠️ 未找到 create-dmg 工具,跳过 DMG 打包,仅保留 .app。${NC}"
echo " 安装命令: brew install create-dmg"
fi
else
echo -e "${RED} ❌ macOS arm64 构建失败。${NC}"
fi
@@ -89,44 +212,96 @@ if [ $? -eq 0 ]; then
DMG_NAME="${APP_NAME}-${VERSION}-mac-amd64.dmg"
mv "$APP_SRC" "$DIST_DIR/$APP_DEST_NAME"
if command -v create-dmg &> /dev/null; then
echo " 📦 正在打包 DMG (amd64)..."
rm -f "$DIST_DIR/$DMG_NAME"
create-dmg \
--volname "${APP_NAME} ${VERSION}" \
--volicon "build/appicon.icns" \
--window-pos 200 120 \
--window-size 800 400 \
--icon-size 100 \
--icon "$APP_DEST_NAME" 200 190 \
--hide-extension "$APP_DEST_NAME" \
--app-drop-link 600 185 \
"$DIST_DIR/$DMG_NAME" \
"$DIST_DIR/$APP_DEST_NAME"
# Check for a leftover rw.* temp file and rename it
if [ ! -f "$DIST_DIR/$DMG_NAME" ]; then
RW_FILE=$(find "$DIST_DIR" -name "rw.*.dmg" -print -quit)
if [ -n "$RW_FILE" ]; then
echo -e "${YELLOW} ⚠️ 检测到临时文件名,正在重命名...${NC}"
mv "$RW_FILE" "$DIST_DIR/$DMG_NAME"
fi
fi
rm -rf "$DIST_DIR/$APP_DEST_NAME"
if [ -f "$DIST_DIR/$DMG_NAME" ]; then
echo " ✅ 已生成 $DMG_NAME"
else
echo -e "${RED} ❌ DMG 生成失败。${NC}"
fi
APP_BIN_PATH=$(find "$DIST_DIR/$APP_DEST_NAME/Contents/MacOS" -maxdepth 1 -type f -print -quit)
if [ -n "$APP_BIN_PATH" ] && [ -f "$APP_BIN_PATH" ]; then
echo -e "${YELLOW} ⚠️ macOS amd64 不再执行 UPX 压缩,保留原始主程序。${NC}"
else
echo -e "${YELLOW} ⚠️ 未找到 create-dmg 工具${NC}"
echo -e "${RED} 未找到 macOS amd64 主程序文件${NC}"
exit 1
fi
else
echo -e "${RED} ❌ macOS amd64 构建失败。${NC}"
# Ad-hoc 代码签名
echo " 🔏 正在对 .app 进行 ad-hoc 签名 (amd64)..."
codesign --force --deep --sign - "$DIST_DIR/$APP_DEST_NAME"
if command -v create-dmg &> /dev/null; then
echo " 📦 正在打包 DMG (amd64)..."
rm -f "$DIST_DIR/$DMG_NAME"
# create-dmg 的 source 需要是“包含 .app 的目录”,不能直接传 .app 路径。
STAGE_DIR=$(mktemp -d "$DIST_DIR/.dmg-stage-${APP_NAME}-${VERSION}-amd64.XXXXXX")
if [ -z "$STAGE_DIR" ] || [ ! -d "$STAGE_DIR" ]; then
echo -e "${RED} ❌ 创建 DMG 临时目录失败,跳过 DMG 打包。${NC}"
else
if command -v ditto &> /dev/null; then
ditto "$DIST_DIR/$APP_DEST_NAME" "$STAGE_DIR/$APP_DEST_NAME"
else
cp -R "$DIST_DIR/$APP_DEST_NAME" "$STAGE_DIR/$APP_DEST_NAME"
fi
# --sandbox-safe skips Finder's AppleScript layout step so no mount window pops up during packaging (friendlier for silent CI/local builds).
CREATE_DMG_ARGS=(--volname "${APP_NAME} ${VERSION}" --format UDZO --sandbox-safe)
if [ -n "$MAC_VOLICON_PATH" ]; then
CREATE_DMG_ARGS+=(--volicon "$MAC_VOLICON_PATH")
else
echo -e "${YELLOW} ⚠️ macOS volume icon (build/darwin/icon.icns) not found; skipping --volicon.${NC}"
fi
create-dmg "${CREATE_DMG_ARGS[@]}" \
--window-pos 200 120 \
--window-size 800 400 \
--icon-size 100 \
--icon "$APP_DEST_NAME" 200 190 \
--hide-extension "$APP_DEST_NAME" \
--app-drop-link 600 185 \
"$DIST_DIR/$DMG_NAME" \
"$STAGE_DIR"
CREATE_DMG_EXIT_CODE=$?
rm -rf "$STAGE_DIR"
if [ $CREATE_DMG_EXIT_CODE -ne 0 ]; then
echo -e "${RED} ❌ create-dmg 执行失败 (exit=$CREATE_DMG_EXIT_CODE),保留 .app 以便排查。${NC}"
else
if [ ! -f "$DIST_DIR/$DMG_NAME" ]; then
RW_FILE=$(find "$DIST_DIR" -maxdepth 1 -name "rw.*.dmg" -print -quit)
if [ -n "$RW_FILE" ]; then
echo -e "${YELLOW} ⚠️ 检测到 create-dmg 中间产物: $(basename "$RW_FILE"),正在转换为可分发 DMG...${NC}"
hdiutil convert "$RW_FILE" -format UDZO -o "$DIST_DIR/$DMG_NAME" >/dev/null 2>&1
rm -f "$RW_FILE"
fi
fi
if [ -f "$DIST_DIR/$DMG_NAME" ] && command -v hdiutil &> /dev/null; then
DMG_FORMAT=$(hdiutil imageinfo "$DIST_DIR/$DMG_NAME" 2>/dev/null | awk -F': ' '/^Format:/{print $2; exit}')
if [ "$DMG_FORMAT" = "UDRW" ]; then
echo -e "${YELLOW} ⚠️ 检测到 UDRW可写原始映像正在转换为 UDZO...${NC}"
TMP_UDZO="$DIST_DIR/.tmp.$DMG_NAME"
rm -f "$TMP_UDZO"
hdiutil convert "$DIST_DIR/$DMG_NAME" -format UDZO -o "$TMP_UDZO" >/dev/null 2>&1 && mv "$TMP_UDZO" "$DIST_DIR/$DMG_NAME"
fi
fi
if [ -f "$DIST_DIR/$DMG_NAME" ] && command -v hdiutil &> /dev/null; then
hdiutil verify "$DIST_DIR/$DMG_NAME" >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo -e "${RED} ❌ DMG 校验失败,保留 .app 以便排查。${NC}"
else
rm -rf "$DIST_DIR/$APP_DEST_NAME"
echo " ✅ 已生成 $DMG_NAME"
fi
fi
fi
if [ ! -f "$DIST_DIR/$DMG_NAME" ]; then
echo -e "${RED} ❌ DMG 生成失败。${NC}"
fi
fi
else
echo -e "${YELLOW} ⚠️ 未找到 create-dmg 工具。${NC}"
fi
else
echo -e "${RED} ❌ macOS amd64 构建失败。${NC}"
fi
# --- Windows AMD64 build ---
@@ -134,7 +309,9 @@ echo -e "${GREEN}🪟 Building Windows (amd64)...${NC}"
if command -v x86_64-w64-mingw32-gcc &> /dev/null; then
wails build -platform windows/amd64 -clean -ldflags "$LDFLAGS"
if [ $? -eq 0 ]; then
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}.exe" "$DIST_DIR/${APP_NAME}-${VERSION}-windows-amd64.exe"
TARGET_EXE="$DIST_DIR/${APP_NAME}-${VERSION}-windows-amd64.exe"
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}.exe" "$TARGET_EXE"
try_compress_binary_with_upx "$TARGET_EXE" "Windows amd64 可执行文件"
echo " ✅ 已生成 ${APP_NAME}-${VERSION}-windows-amd64.exe"
else
echo -e "${RED} ❌ Windows amd64 构建失败。${NC}"
@@ -148,7 +325,9 @@ echo -e "${GREEN}🪟 正在构建 Windows (arm64)...${NC}"
if command -v aarch64-w64-mingw32-gcc &> /dev/null; then
wails build -platform windows/arm64 -clean -ldflags "$LDFLAGS"
if [ $? -eq 0 ]; then
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}.exe" "$DIST_DIR/${APP_NAME}-${VERSION}-windows-arm64.exe"
TARGET_EXE="$DIST_DIR/${APP_NAME}-${VERSION}-windows-arm64.exe"
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}.exe" "$TARGET_EXE"
echo -e "${YELLOW} ⚠️ 当前 UPX 不支持 win64/arm64跳过 Windows arm64 压缩。${NC}"
echo " ✅ 已生成 ${APP_NAME}-${VERSION}-windows-arm64.exe"
else
echo -e "${RED} ❌ Windows arm64 构建失败。${NC}"
@@ -168,8 +347,10 @@ if [ "$CURRENT_OS" = "Linux" ] && [ "$CURRENT_ARCH" = "x86_64" ]; then
# 本机 Linux amd64直接构建
wails build -platform linux/amd64 -clean -ldflags "$LDFLAGS"
if [ $? -eq 0 ]; then
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$DIST_DIR/${APP_NAME}-${VERSION}-linux-amd64"
chmod +x "$DIST_DIR/${APP_NAME}-${VERSION}-linux-amd64"
TARGET_LINUX_BIN="$DIST_DIR/${APP_NAME}-${VERSION}-linux-amd64"
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$TARGET_LINUX_BIN"
chmod +x "$TARGET_LINUX_BIN"
try_compress_binary_with_upx "$TARGET_LINUX_BIN" "Linux amd64 可执行文件"
# 打包为 tar.gz
cd "$DIST_DIR"
tar -czvf "${APP_NAME}-${VERSION}-linux-amd64.tar.gz" "${APP_NAME}-${VERSION}-linux-amd64"
@@ -186,8 +367,10 @@ elif command -v x86_64-linux-gnu-gcc &> /dev/null; then
export CGO_ENABLED=1
wails build -platform linux/amd64 -clean -ldflags "$LDFLAGS"
if [ $? -eq 0 ]; then
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$DIST_DIR/${APP_NAME}-${VERSION}-linux-amd64"
chmod +x "$DIST_DIR/${APP_NAME}-${VERSION}-linux-amd64"
TARGET_LINUX_BIN="$DIST_DIR/${APP_NAME}-${VERSION}-linux-amd64"
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$TARGET_LINUX_BIN"
chmod +x "$TARGET_LINUX_BIN"
try_compress_binary_with_upx "$TARGET_LINUX_BIN" "Linux amd64 可执行文件"
cd "$DIST_DIR"
tar -czvf "${APP_NAME}-${VERSION}-linux-amd64.tar.gz" "${APP_NAME}-${VERSION}-linux-amd64"
rm "${APP_NAME}-${VERSION}-linux-amd64"
@@ -208,8 +391,10 @@ if [ "$CURRENT_OS" = "Linux" ] && [ "$CURRENT_ARCH" = "aarch64" ]; then
# Native Linux arm64: build directly
wails build -platform linux/arm64 -clean -ldflags "$LDFLAGS"
if [ $? -eq 0 ]; then
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$DIST_DIR/${APP_NAME}-${VERSION}-linux-arm64"
chmod +x "$DIST_DIR/${APP_NAME}-${VERSION}-linux-arm64"
TARGET_LINUX_BIN="$DIST_DIR/${APP_NAME}-${VERSION}-linux-arm64"
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$TARGET_LINUX_BIN"
chmod +x "$TARGET_LINUX_BIN"
try_compress_binary_with_upx "$TARGET_LINUX_BIN" "Linux arm64 可执行文件"
cd "$DIST_DIR"
tar -czvf "${APP_NAME}-${VERSION}-linux-arm64.tar.gz" "${APP_NAME}-${VERSION}-linux-arm64"
rm "${APP_NAME}-${VERSION}-linux-arm64"
@@ -225,8 +410,10 @@ elif command -v aarch64-linux-gnu-gcc &> /dev/null; then
export CGO_ENABLED=1
wails build -platform linux/arm64 -clean -ldflags "$LDFLAGS"
if [ $? -eq 0 ]; then
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$DIST_DIR/${APP_NAME}-${VERSION}-linux-arm64"
chmod +x "$DIST_DIR/${APP_NAME}-${VERSION}-linux-arm64"
TARGET_LINUX_BIN="$DIST_DIR/${APP_NAME}-${VERSION}-linux-arm64"
mv "$BUILD_BIN_DIR/${DEFAULT_BINARY_NAME}" "$TARGET_LINUX_BIN"
chmod +x "$TARGET_LINUX_BIN"
try_compress_binary_with_upx "$TARGET_LINUX_BIN" "Linux arm64 可执行文件"
cd "$DIST_DIR"
tar -czvf "${APP_NAME}-${VERSION}-linux-arm64.tar.gz" "${APP_NAME}-${VERSION}-linux-arm64"
rm "${APP_NAME}-${VERSION}-linux-arm64"

View File

@@ -5,6 +5,23 @@
<link rel="icon" type="image/svg+xml" href="/logo.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>GoNavi</title>
<script>
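// Dev fallback: when this page runs outside the Wails runtime (e.g. a plain
// browser), stub window.go / window.runtime so binding calls resolve instead of throwing.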
if (typeof window !== 'undefined' && !window.go) {
window.go = {
app: {
App: new Proxy({}, { get: () => async () => ({ success: false }) })
}
};
}
if (typeof window !== 'undefined' && !window.runtime) {
window.runtime = new Proxy({}, {
get: (target, prop) => {
if (prop === 'Environment') return async () => ({ platform: 'darwin' });
return typeof prop === 'string' && prop.startsWith('WindowIs') ? () => false : () => {};
}
});
}
</script>
</head>
<body>
<div id="root"></div>

View File

@@ -37,6 +37,91 @@ body, #root {
padding-right: 8px;
}
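/* Redis workbench tree: flatten antd's per-node backgrounds so the full-width treenode row owns hover/selection highlighting */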
.redis-viewer-workbench .ant-tree {
background: transparent;
}
.redis-viewer-workbench .ant-tree .ant-tree-list-holder-inner,
.redis-viewer-workbench .ant-tree .ant-tree-list-holder-inner .ant-tree-treenode {
width: 100% !important;
}
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper {
min-height: 36px;
border-radius: 14px;
transition: background-color 0.2s ease, border-color 0.2s ease, color 0.2s ease;
background: transparent !important;
border: none !important;
box-shadow: none !important;
outline: none !important;
flex: 1 1 auto;
min-width: 0;
width: auto !important;
}
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper:hover,
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper:active,
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper:focus,
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper:focus-visible,
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper.ant-tree-node-selected,
.redis-viewer-workbench .ant-tree .ant-tree-node-content-wrapper.ant-tree-node-selected:hover {
background: transparent !important;
border-color: transparent !important;
box-shadow: none !important;
outline: none !important;
}
.redis-viewer-workbench .ant-tree .ant-tree-treenode {
padding: 2px 0;
width: 100%;
border-radius: 14px;
transition: background-color 0.2s ease, border-color 0.2s ease, color 0.2s ease;
border: none;
align-items: center;
position: relative;
z-index: 0;
display: flex !important;
box-sizing: border-box;
}
.redis-viewer-workbench .ant-tree .ant-tree-switcher {
width: 0 !important;
min-width: 0 !important;
margin-inline-end: 0 !important;
padding: 0 !important;
overflow: hidden !important;
background: transparent !important;
}
.redis-viewer-workbench .ant-tree .ant-tree-switcher:hover,
.redis-viewer-workbench .ant-tree .ant-tree-switcher:active,
.redis-viewer-workbench .ant-tree .ant-tree-switcher:focus {
background: transparent !important;
}
.redis-viewer-workbench .redis-tree-expander-button:hover,
.redis-viewer-workbench .redis-tree-expander-button:focus-visible {
background: transparent !important;
outline: none;
}
.redis-viewer-workbench .ant-radio-group .ant-radio-button-wrapper {
border-radius: 10px;
margin-inline-end: 6px;
}
.redis-viewer-workbench .ant-radio-group .ant-radio-button-wrapper:last-child {
margin-inline-end: 0;
}
.redis-viewer-workbench .ant-table {
background: transparent;
}
.redis-viewer-workbench .ant-table-wrapper .ant-table-thead > tr > th {
font-weight: 700;
}
/* Scrollbar styling for dark mode */
body[data-theme='dark'] ::-webkit-scrollbar {
width: 10px;
@@ -97,6 +182,16 @@ body[data-theme='dark'] .ant-tree .ant-tree-node-content-wrapper.ant-tree-node-s
color: rgba(255, 236, 179, 0.98) !important;
}
body[data-theme='dark'] .redis-viewer-workbench .ant-tree .ant-tree-treenode:hover {
background: rgba(255, 255, 255, 0.05) !important;
}
body[data-theme='dark'] .redis-viewer-workbench .ant-tree .ant-tree-treenode.ant-tree-treenode-selected,
body[data-theme='dark'] .redis-viewer-workbench .ant-tree .ant-tree-treenode.ant-tree-treenode-selected:hover {
background: linear-gradient(90deg, rgba(246, 196, 83, 0.22), rgba(246, 196, 83, 0.08)) !important;
border: 1px solid rgba(246, 196, 83, 0.24) !important;
}
body[data-theme='dark'] .ant-checkbox-checked .ant-checkbox-inner {
background-color: #f6c453 !important;
border-color: #f6c453 !important;
@@ -135,6 +230,41 @@ body[data-theme='dark'] .ant-table-tbody .ant-table-row.ant-table-row-selected:h
background: rgba(246, 196, 83, 0.26) !important;
}
body[data-theme='dark'] .redis-viewer-workbench .ant-radio-button-wrapper {
background: rgba(255, 255, 255, 0.04);
border-color: rgba(255, 255, 255, 0.08);
color: rgba(230, 234, 242, 0.9);
}
body[data-theme='dark'] .redis-viewer-workbench .ant-radio-button-wrapper-checked:not(.ant-radio-button-wrapper-disabled) {
background: rgba(246, 196, 83, 0.16);
border-color: rgba(246, 196, 83, 0.3);
color: #f6c453;
}
body[data-theme='light'] .redis-viewer-workbench .ant-tree .ant-tree-treenode:hover {
background: rgba(15, 23, 42, 0.04) !important;
}
body[data-theme='light'] .redis-viewer-workbench .ant-tree .ant-tree-treenode.ant-tree-treenode-selected,
body[data-theme='light'] .redis-viewer-workbench .ant-tree .ant-tree-treenode.ant-tree-treenode-selected:hover {
color: rgba(15, 23, 42, 0.92) !important;
background: linear-gradient(90deg, rgba(22, 119, 255, 0.12), rgba(22, 119, 255, 0.04)) !important;
border: 1px solid rgba(22, 119, 255, 0.18) !important;
}
body[data-theme='light'] .redis-viewer-workbench .ant-radio-button-wrapper {
background: rgba(255, 255, 255, 0.72);
border-color: rgba(15, 23, 42, 0.08);
color: rgba(51, 65, 85, 0.88);
}
body[data-theme='light'] .redis-viewer-workbench .ant-radio-button-wrapper-checked:not(.ant-radio-button-wrapper-disabled) {
background: rgba(22, 119, 255, 0.1);
border-color: rgba(22, 119, 255, 0.22);
color: #1677ff;
}
/* Connection config modal: scroll only inside the modal body, not via the outer wrap scrollbar */
.connection-modal-wrap {
overflow: hidden !important;

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -1,9 +1,11 @@
import React, { useState, useEffect, useRef } from 'react';
import { Modal, Form, Select, Button, message, Steps, Transfer, Card, Alert, Divider, Typography, Progress, Checkbox, Table, Drawer, Tabs } from 'antd';
import React, { useState, useEffect, useMemo, useRef } from 'react';
import { Modal, Form, Select, Input, Button, message, Steps, Transfer, Card, Alert, Divider, Typography, Progress, Checkbox, Table, Drawer, Tabs, theme as antdTheme } from 'antd';
import { DatabaseOutlined, RocketOutlined, SwapOutlined, TableOutlined } from '@ant-design/icons';
import { useStore } from '../store';
import { DBGetDatabases, DBGetTables, DataSync, DataSyncAnalyze, DataSyncPreview } from '../../wailsjs/go/app/App';
import { SavedConnection } from '../types';
import { EventsOn } from '../../wailsjs/runtime/runtime';
import { normalizeOpacityForPlatform, resolveAppearanceValues } from '../utils/appearance';
const { Title, Text } = Typography;
const { Step } = Steps;
@@ -21,6 +23,12 @@ type TableDiffSummary = {
deletes?: number;
same?: number;
message?: string;
targetTableExists?: boolean;
plannedAction?: string;
warnings?: string[];
unsupportedObjects?: string[];
indexesToCreate?: number;
indexesSkipped?: number;
};
type TableOps = {
insert: boolean;
@@ -31,10 +39,135 @@ type TableOps = {
selectedDeletePks?: string[];
};
type WorkflowType = 'sync' | 'migration';
const quoteSqlIdent = (dbType: string, ident: string): string => {
const raw = String(ident || '').trim();
if (!raw) return raw;
const t = String(dbType || '').toLowerCase();
if (t === 'mysql' || t === 'mariadb' || t === 'diros' || t === 'sphinx' || t === 'clickhouse' || t === 'tdengine') {
return `\`${raw.replace(/`/g, '``')}\``;
}
if (t === 'sqlserver') {
return `[${raw.replace(/]/g, ']]')}]`;
}
return `"${raw.replace(/"/g, '""')}"`;
};
const quoteSqlTable = (dbType: string, tableName: string): string => {
const raw = String(tableName || '').trim();
if (!raw) return raw;
if (!raw.includes('.')) return quoteSqlIdent(dbType, raw);
return raw
.split('.')
.map((part) => quoteSqlIdent(dbType, part))
.join('.');
};
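// Quoting behavior at a glance (illustrative values, not taken from the codebase):
// quoteSqlIdent('mysql', 'order')       -> `order`
// quoteSqlIdent('sqlserver', 'order')   -> [order]
// quoteSqlIdent('kingbase', 'or"der')   -> "or""der"   (double-quote dialects escape by doubling)
// quoteSqlTable('mysql', 'shop.orders') -> `shop`.`orders` (each dot-separated part quoted)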
const toSqlLiteral = (value: any, dbType: string): string => {
if (value === null || value === undefined) return 'NULL';
if (typeof value === 'number') return Number.isFinite(value) ? String(value) : 'NULL';
if (typeof value === 'bigint') return value.toString();
if (typeof value === 'boolean') {
const t = String(dbType || '').toLowerCase();
if (t === 'sqlserver') return value ? '1' : '0';
return value ? 'TRUE' : 'FALSE';
}
if (value instanceof Date) {
return `'${value.toISOString().replace(/'/g, "''")}'`;
}
if (typeof value === 'object') {
try {
return `'${JSON.stringify(value).replace(/'/g, "''")}'`;
} catch {
return `'${String(value).replace(/'/g, "''")}'`;
}
}
return `'${String(value).replace(/'/g, "''")}'`;
};
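// toSqlLiteral maps null/undefined to NULL, booleans to 1/0 on sqlserver and
// TRUE/FALSE elsewhere, and escapes single quotes in strings, e.g.
// toSqlLiteral("it's", 'mysql') -> 'it''s'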
const resolveRedisDbIndex = (raw?: string): number => {
const value = Number(String(raw || '').trim());
return Number.isInteger(value) && value >= 0 && value <= 15 ? value : 0;
};
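// Redis exposes logical databases 0-15 by default, hence the clamp above.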
const buildSqlPreview = (
previewData: any,
tableName: string,
dbType: string,
ops?: TableOps,
): { sqlText: string; statementCount: number } => {
if (!previewData || !tableName) return { sqlText: '', statementCount: 0 };
const tableExpr = quoteSqlTable(dbType, tableName);
const pkCol = String(previewData.pkColumn || 'id');
const statements: string[] = [];
const insertRows = Array.isArray(previewData.inserts) ? previewData.inserts : [];
const updateRows = Array.isArray(previewData.updates) ? previewData.updates : [];
const deleteRows = Array.isArray(previewData.deletes) ? previewData.deletes : [];
const selectedInsert = new Set((ops?.selectedInsertPks || []).map((v) => String(v)));
const selectedUpdate = new Set((ops?.selectedUpdatePks || []).map((v) => String(v)));
const selectedDelete = new Set((ops?.selectedDeletePks || []).map((v) => String(v)));
if (ops?.insert !== false) {
insertRows.forEach((rowWrap: any) => {
const pk = String(rowWrap?.pk ?? '');
if (selectedInsert.size > 0 && !selectedInsert.has(pk)) return;
const row = rowWrap?.row || {};
const columns = Object.keys(row);
if (columns.length === 0) return;
const colExpr = columns.map((c) => quoteSqlIdent(dbType, c)).join(', ');
const valExpr = columns.map((c) => toSqlLiteral(row[c], dbType)).join(', ');
statements.push(`INSERT INTO ${tableExpr} (${colExpr}) VALUES (${valExpr});`);
});
}
if (ops?.update !== false) {
updateRows.forEach((rowWrap: any) => {
const pk = String(rowWrap?.pk ?? '');
if (selectedUpdate.size > 0 && !selectedUpdate.has(pk)) return;
const source = rowWrap?.source || {};
const changedColumns = Array.isArray(rowWrap?.changedColumns)
? rowWrap.changedColumns
: Object.keys(source).filter((k) => k !== pkCol);
const setCols = changedColumns.filter((c: string) => String(c) !== pkCol);
if (setCols.length === 0) return;
const setExpr = setCols
.map((c: string) => `${quoteSqlIdent(dbType, c)} = ${toSqlLiteral(source[c], dbType)}`)
.join(', ');
statements.push(
`UPDATE ${tableExpr} SET ${setExpr} WHERE ${quoteSqlIdent(dbType, pkCol)} = ${toSqlLiteral(pk, dbType)};`,
);
});
}
if (ops?.delete) {
deleteRows.forEach((rowWrap: any) => {
const pk = String(rowWrap?.pk ?? '');
if (selectedDelete.size > 0 && !selectedDelete.has(pk)) return;
statements.push(
`DELETE FROM ${tableExpr} WHERE ${quoteSqlIdent(dbType, pkCol)} = ${toSqlLiteral(pk, dbType)};`,
);
});
}
return {
sqlText: statements.join('\n'),
statementCount: statements.length,
};
};
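// Minimal usage sketch for buildSqlPreview. The previewData shape below mirrors
// what the modal reads from DataSyncPreview (pkColumn plus inserts/updates/deletes,
// each row wrapped as { pk, row | source, changedColumns }); the concrete values
// are hypothetical.
//
// const demo = buildSqlPreview(
//   {
//     pkColumn: 'id',
//     inserts: [{ pk: '1', row: { id: 1, name: 'a' } }],
//     updates: [{ pk: '2', source: { id: 2, name: 'b' }, changedColumns: ['name'] }],
//     deletes: [],
//   },
//   'users',
//   'mysql',
//   { insert: true, update: true, delete: false },
// );
// // demo.statementCount === 2, demo.sqlText:
// // INSERT INTO `users` (`id`, `name`) VALUES (1, 'a');
// // UPDATE `users` SET `name` = 'b' WHERE `id` = '2';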
const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open, onClose }) => {
const connections = useStore((state) => state.connections);
const themeMode = useStore((state) => state.theme);
const appearance = useStore((state) => state.appearance);
const [currentStep, setCurrentStep] = useState(0);
const [loading, setLoading] = useState(false);
const { token } = antdTheme.useToken();
const darkMode = themeMode === 'dark';
const resolvedAppearance = resolveAppearanceValues(appearance);
const effectiveOpacity = normalizeOpacityForPlatform(resolvedAppearance.opacity);
// Step 1: Config
const [sourceConnId, setSourceConnId] = useState<string>('');
@@ -50,9 +183,13 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
const [selectedTables, setSelectedTables] = useState<string[]>([]);
// Options
const [workflowType, setWorkflowType] = useState<WorkflowType>('sync');
const [syncContent, setSyncContent] = useState<'data' | 'schema' | 'both'>('data');
const [syncMode, setSyncMode] = useState<string>('insert_update');
const [autoAddColumns, setAutoAddColumns] = useState<boolean>(true);
const [targetTableStrategy, setTargetTableStrategy] = useState<'existing_only' | 'auto_create_if_missing' | 'smart'>('existing_only');
const [createIndexes, setCreateIndexes] = useState<boolean>(false);
const [mongoCollectionName, setMongoCollectionName] = useState<string>('');
const [showSameTables, setShowSameTables] = useState<boolean>(false);
const [analyzing, setAnalyzing] = useState<boolean>(false);
const [diffTables, setDiffTables] = useState<TableDiffSummary[]>([]);
@@ -128,9 +265,12 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
setSourceDb('');
setTargetDb('');
setSelectedTables([]);
setWorkflowType('sync');
setSyncContent('data');
setSyncMode('insert_update');
setAutoAddColumns(true);
setTargetTableStrategy('existing_only');
setCreateIndexes(false);
setShowSameTables(false);
setAnalyzing(false);
setDiffTables([]);
@@ -148,36 +288,66 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
}
}, [open]);
useEffect(() => {
if (workflowType === 'migration') {
if (syncMode === 'insert_update') {
setSyncMode('insert_only');
}
if (syncContent === 'schema') {
setSyncContent('both');
}
if (targetTableStrategy === 'existing_only') {
setTargetTableStrategy('smart');
}
if (!createIndexes) {
setCreateIndexes(true);
}
} else {
if (targetTableStrategy !== 'existing_only') {
setTargetTableStrategy('existing_only');
}
if (createIndexes) {
setCreateIndexes(false);
}
}
}, [workflowType]);
const handleSourceConnChange = async (connId: string) => {
setSourceConnId(connId);
setSourceDb('');
const conn = connections.find(c => c.id === connId);
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
setSourceDbs((res.data as any[]).map((r: any) => r.Database || r.database || r.username));
}
} catch(e) { message.error("Failed to fetch source databases"); }
setLoading(false);
}
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
const dbRows = Array.isArray(res.data) ? res.data : [];
setSourceDbs(dbRows
.map((r: any) => r?.Database || r?.database || r?.username)
.filter((name: any) => typeof name === 'string' && name.trim() !== ''));
}
} catch(e) { message.error("Failed to fetch source databases"); }
setLoading(false);
}
};
const handleTargetConnChange = async (connId: string) => {
setTargetConnId(connId);
setTargetDb('');
const conn = connections.find(c => c.id === connId);
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
setTargetDbs((res.data as any[]).map((r: any) => r.Database || r.database || r.username));
}
} catch(e) { message.error("Failed to fetch target databases"); }
setLoading(false);
}
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
const dbRows = Array.isArray(res.data) ? res.data : [];
setTargetDbs(dbRows
.map((r: any) => r?.Database || r?.database || r?.username)
.filter((name: any) => typeof name === 'string' && name.trim() !== ''));
}
} catch(e) { message.error("Failed to fetch target databases"); }
setLoading(false);
}
};
const nextToTables = async () => {
@@ -189,14 +359,17 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
try {
const conn = connections.find(c => c.id === sourceConnId);
if (conn) {
const config = normalizeConnConfig(conn, sourceDb);
const res = await DBGetTables(config as any, sourceDb);
if (res.success) {
// DBGetTables returns [{Table: "name"}, ...]
const tables = (res.data as any[]).map((row: any) => row.Table || row.table || row.TABLE_NAME || Object.values(row)[0]);
setAllTables(tables as string[]);
setCurrentStep(1);
} else {
const config = normalizeConnConfig(conn, sourceDb);
const res = await DBGetTables(config as any, sourceDb);
if (res.success) {
// DBGetTables returns [{Table: "name"}, ...]
const tableRows = Array.isArray(res.data) ? res.data : [];
const tables = tableRows
.map((row: any) => row?.Table || row?.table || row?.TABLE_NAME || Object.values(row || {})[0])
.filter((name: any) => typeof name === 'string' && name.trim() !== '');
setAllTables(tables as string[]);
setCurrentStep(1);
} else {
message.error(res.message);
}
}
@@ -236,6 +409,9 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
content: syncContent,
mode: "insert_update",
autoAddColumns,
targetTableStrategy,
createIndexes,
mongoCollectionName: mongoCollectionName.trim(),
jobId,
};
@@ -286,6 +462,9 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
content: "data",
mode: "insert_update",
autoAddColumns,
targetTableStrategy,
createIndexes,
mongoCollectionName: mongoCollectionName.trim(),
};
try {
@@ -362,6 +541,9 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
content: syncContent,
mode: syncMode,
autoAddColumns,
targetTableStrategy,
createIndexes,
mongoCollectionName: mongoCollectionName.trim(),
tableOptions,
jobId,
};
@@ -402,10 +584,139 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
);
};
const previewSql = useMemo(() => {
if (!previewData || !previewTable) return { sqlText: '', statementCount: 0 };
const targetType = String(connections.find(c => c.id === targetConnId)?.config?.type || '');
const ops = tableOptions[previewTable] || { insert: true, update: true, delete: false };
return buildSqlPreview(previewData, previewTable, targetType, ops);
}, [previewData, previewTable, targetConnId, connections, tableOptions]);
const analysisWarnings = useMemo(() => {
const items: string[] = [];
diffTables.forEach((table) => {
(table.warnings || []).forEach((warning) => items.push(`${table.table}: ${warning}`));
(table.unsupportedObjects || []).forEach((warning) => items.push(`${table.table}: ${warning}`));
});
return Array.from(new Set(items));
}, [diffTables]);
const isMigrationWorkflow = workflowType === 'migration';
const sourceConn = useMemo(() => connections.find(c => c.id === sourceConnId), [connections, sourceConnId]);
const targetConn = useMemo(() => connections.find(c => c.id === targetConnId), [connections, targetConnId]);
const sourceType = String(sourceConn?.config?.type || '').toLowerCase();
const targetType = String(targetConn?.config?.type || '').toLowerCase();
const isRedisMongoKeyspaceMigration = isMigrationWorkflow && (
(sourceType === 'redis' && targetType === 'mongodb') ||
(sourceType === 'mongodb' && targetType === 'redis')
);
const defaultMongoCollectionName = useMemo(() => {
if (sourceType === 'redis' && targetType === 'mongodb') {
return `redis_db_${resolveRedisDbIndex(sourceDb || sourceConn?.config?.database)}_keys`;
}
if (sourceType === 'mongodb' && targetType === 'redis') {
return selectedTables[0] || `redis_db_${resolveRedisDbIndex(targetDb || targetConn?.config?.database)}_keys`;
}
return '';
}, [sourceType, targetType, sourceDb, targetDb, sourceConn, targetConn, selectedTables]);
const modalPanelStyle = useMemo(() => ({
background: darkMode
? 'linear-gradient(180deg, rgba(16,22,34,0.96) 0%, rgba(10,14,24,0.98) 100%)'
: 'linear-gradient(180deg, rgba(255,255,255,0.98) 0%, rgba(246,248,252,0.98) 100%)',
border: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(16,24,40,0.08)',
boxShadow: darkMode ? '0 24px 56px rgba(0,0,0,0.36)' : '0 18px 44px rgba(15,23,42,0.14)',
backdropFilter: darkMode ? 'blur(18px)' : 'none',
}), [darkMode]);
const shellCardStyle = useMemo<React.CSSProperties>(() => ({
borderRadius: 18,
border: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(15,23,42,0.08)',
background: darkMode ? 'rgba(255,255,255,0.03)' : `rgba(255,255,255,${Math.max(effectiveOpacity, 0.88)})`,
boxShadow: darkMode ? '0 12px 32px rgba(0,0,0,0.22)' : '0 10px 24px rgba(15,23,42,0.08)',
overflow: 'hidden',
}), [darkMode, effectiveOpacity]);
const heroPanelStyle = useMemo<React.CSSProperties>(() => ({
padding: 18,
borderRadius: 18,
border: darkMode ? '1px solid rgba(255,214,102,0.12)' : '1px solid rgba(24,144,255,0.12)',
background: darkMode
? 'linear-gradient(135deg, rgba(255,214,102,0.10) 0%, rgba(255,255,255,0.03) 100%)'
: 'linear-gradient(135deg, rgba(24,144,255,0.10) 0%, rgba(255,255,255,0.95) 100%)',
marginBottom: 18,
}), [darkMode]);
const badgeStyle = useMemo<React.CSSProperties>(() => ({
display: 'inline-flex',
alignItems: 'center',
gap: 6,
padding: '6px 10px',
borderRadius: 999,
border: darkMode ? '1px solid rgba(255,255,255,0.10)' : '1px solid rgba(15,23,42,0.08)',
background: darkMode ? 'rgba(255,255,255,0.04)' : 'rgba(255,255,255,0.86)',
color: darkMode ? 'rgba(255,255,255,0.88)' : '#334155',
fontSize: 12,
fontWeight: 600,
}), [darkMode]);
const quietPanelStyle = useMemo<React.CSSProperties>(() => ({
padding: 14,
borderRadius: 16,
border: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(15,23,42,0.08)',
background: darkMode ? 'rgba(255,255,255,0.025)' : 'rgba(248,250,252,0.92)',
}), [darkMode]);
const modalWorkspaceStyle = useMemo<React.CSSProperties>(() => ({
display: 'flex',
flexDirection: 'column',
height: '100%',
minHeight: 0,
}), []);
const modalScrollableContentStyle = useMemo<React.CSSProperties>(() => ({
flex: 1,
minHeight: 0,
overflowY: 'auto',
overflowX: 'hidden',
paddingRight: 4,
overscrollBehavior: 'contain',
}), []);
const modalFooterBarStyle = useMemo<React.CSSProperties>(() => ({
marginTop: 18,
display: 'flex',
justifyContent: 'flex-end',
gap: 8,
paddingTop: 12,
borderTop: darkMode ? '1px solid rgba(255,255,255,0.06)' : '1px solid rgba(15,23,42,0.06)',
flex: '0 0 auto',
}), [darkMode]);
const renderModalTitle = (title: string, description: string) => (
<div style={{ display: 'flex', alignItems: 'flex-start', gap: 12 }}>
<div style={{
width: 38,
height: 38,
borderRadius: 14,
display: 'grid',
placeItems: 'center',
background: darkMode ? 'rgba(255,214,102,0.12)' : 'rgba(24,144,255,0.10)',
color: darkMode ? '#ffd666' : token.colorPrimary,
flexShrink: 0,
}}>
{isMigrationWorkflow ? <RocketOutlined /> : <SwapOutlined />}
</div>
<div style={{ minWidth: 0 }}>
<div style={{ fontSize: 16, fontWeight: 700, color: darkMode ? '#f8fafc' : '#0f172a' }}>{title}</div>
<div style={{ marginTop: 4, fontSize: 12, lineHeight: 1.6, color: darkMode ? 'rgba(255,255,255,0.56)' : 'rgba(15,23,42,0.58)' }}>{description}</div>
</div>
</div>
);
return (
<>
<Modal
title="数据同步"
title={renderModalTitle(isMigrationWorkflow ? '跨库迁移工作台' : '数据同步工作台', isMigrationWorkflow ? '按源库 → 目标库完成建表、导入与风险预检。' : '按已有目标表完成差异对比、同步执行与结果确认。')}
open={open}
onCancel={() => {
if (syncing) {
@@ -414,23 +725,61 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
}
onClose();
}}
width={800}
width={920}
footer={null}
destroyOnHidden
closable={!syncing}
maskClosable={!syncing}
styles={{
content: modalPanelStyle,
header: { background: 'transparent', borderBottom: 'none', paddingBottom: 10 },
body: {
paddingTop: 8,
height: 760,
maxHeight: 'calc(100vh - 120px)',
overflow: 'hidden',
display: 'flex',
flexDirection: 'column',
},
footer: { background: 'transparent', borderTop: 'none', paddingTop: 12 },
}}
>
<div style={modalWorkspaceStyle}>
<div style={{ flex: '0 0 auto' }}>
<div style={heroPanelStyle}>
<div style={{ display: 'flex', justifyContent: 'space-between', gap: 12, alignItems: 'flex-start', flexWrap: 'wrap' }}>
<div style={{ minWidth: 0 }}>
<div style={{ fontSize: 18, fontWeight: 700, color: darkMode ? '#f8fafc' : '#0f172a' }}>{isMigrationWorkflow ? 'Cross-datasource migration' : 'Data sync'}</div>
<div style={{ marginTop: 6, fontSize: 13, lineHeight: 1.7, color: darkMode ? 'rgba(255,255,255,0.62)' : 'rgba(15,23,42,0.62)' }}>
{isMigrationWorkflow
? 'For migrating source tables to another database: tables can be auto-created per strategy, data imported, and compatible indexes rebuilt.'
: 'For targets whose tables already exist: run a diff analysis first, then apply the checked inserts, updates, and deletes.'}
</div>
</div>
<div style={{ display: 'flex', flexWrap: 'wrap', gap: 8 }}>
<span style={badgeStyle}>{isMigrationWorkflow ? <RocketOutlined /> : <SwapOutlined />} {isMigrationWorkflow ? 'Migration mode' : 'Sync mode'}</span>
<span style={badgeStyle}><DatabaseOutlined /> {sourceConnId ? 'Source connection selected' : 'No source connection yet'}</span>
<span style={badgeStyle}><TableOutlined /> {selectedTables.length || 0} tables</span>
</div>
</div>
</div>
<Steps current={currentStep} style={{ marginBottom: 24 }}>
<Step title="配置源与目标" />
<Step title="选择表" />
<Step title="执行结果" />
</Steps>
</div>
<div style={modalScrollableContentStyle}>
{/* STEP 1: CONFIG */}
{currentStep === 0 && (
<div>
<div style={{ display: 'flex', gap: 24, justifyContent: 'center' }}>
<Card title="源数据库" style={{ width: 350 }}>
<div style={{ display: 'grid', gridTemplateColumns: 'minmax(0, 1fr) 44px minmax(0, 1fr)', gap: 18, alignItems: 'stretch' }}>
<Card
title="源数据库"
style={shellCardStyle}
styles={{ header: { borderBottom: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(15,23,42,0.06)', fontWeight: 700 }, body: { padding: 18 } }}
>
<Form layout="vertical">
<Form.Item label="连接">
<Select value={sourceConnId} onChange={handleSourceConnChange}>
@@ -444,8 +793,16 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
</Form.Item>
</Form>
</Card>
<div style={{ display: 'flex', alignItems: 'center' }}></div>
<Card title="目标数据库" style={{ width: 350 }}>
<div style={{ display: 'grid', placeItems: 'center' }}>
<div style={{ ...badgeStyle, width: 44, height: 44, borderRadius: 14, justifyContent: 'center', padding: 0 }}>
<SwapOutlined />
</div>
</div>
<Card
title="目标数据库"
style={shellCardStyle}
styles={{ header: { borderBottom: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(15,23,42,0.06)', fontWeight: 700 }, body: { padding: 18 } }}
>
<Form layout="vertical">
<Form.Item label="连接">
<Select value={targetConnId} onChange={handleTargetConnChange}>
@@ -461,27 +818,94 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
</Card>
</div>
<Card title="同步选项" style={{ marginTop: 16 }}>
<Card
title={isMigrationWorkflow ? '迁移选项' : '同步选项'}
style={{ ...shellCardStyle, marginTop: 18 }}
styles={{ header: { borderBottom: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(15,23,42,0.06)', fontWeight: 700 }, body: { padding: 18 } }}
>
<div style={{ ...quietPanelStyle, marginBottom: 14 }}>
<Text style={{ color: darkMode ? 'rgba(255,255,255,0.72)' : 'rgba(15,23,42,0.68)', lineHeight: 1.7 }}>
</Text>
</div>
<Form layout="vertical">
<Form.Item label="同步内容">
<Form.Item label="功能类型">
<Select value={workflowType} onChange={setWorkflowType}>
<Option value="sync"></Option>
<Option value="migration"></Option>
</Select>
</Form.Item>
<Alert
type={isMigrationWorkflow ? 'info' : 'success'}
showIcon
style={{ marginBottom: 12 }}
message={isMigrationWorkflow
? '当前为“跨库迁移”模式:适合将表迁移到另一数据源,可自动建表并导入数据。'
: '当前为“数据同步”模式:适合目标表已存在时做增量同步或覆盖导入。'}
/>
<Form.Item label={isMigrationWorkflow ? '迁移内容' : '同步内容'}>
<Select value={syncContent} onChange={setSyncContent}>
<Option value="data"></Option>
<Option value="schema"></Option>
<Option value="both"> + </Option>
</Select>
</Form.Item>
<Form.Item label="同步模式">
<Form.Item label={isMigrationWorkflow ? '迁移模式' : '同步模式'}>
<Select value={syncMode} onChange={setSyncMode} disabled={syncContent === 'schema'}>
<Option value="insert_update">//</Option>
<Option value="insert_only"></Option>
<Option value="full_overwrite"></Option>
</Select>
</Form.Item>
<Form.Item label={isMigrationWorkflow ? '目标表处理策略' : '目标表要求'}>
<Select value={targetTableStrategy} onChange={setTargetTableStrategy} disabled={!isMigrationWorkflow}>
<Option value="existing_only">使</Option>
<Option value="auto_create_if_missing"></Option>
<Option value="smart"></Option>
</Select>
</Form.Item>
{isRedisMongoKeyspaceMigration && (
<Form.Item
label="Mongo 集合名(可选)"
extra={sourceType === 'redis'
? '为空时沿用默认集合名;填写后本次 Redis 键空间会统一写入该 Mongo 集合。'
: 'MongoDB → Redis 场景下通常直接选择源集合;这里留空即可,未显式选集合时才会回退使用该名称。'}
>
<Input
value={mongoCollectionName}
onChange={(e) => setMongoCollectionName(e.target.value)}
placeholder={defaultMongoCollectionName || '请输入 Mongo 集合名'}
allowClear
maxLength={128}
/>
</Form.Item>
)}
<Form.Item>
<Checkbox checked={autoAddColumns} onChange={(e) => setAutoAddColumns(e.target.checked)}>
自动添加目标表缺失列(仅 MySQL)
自动添加目标表缺失列(支持 MySQL → MySQL 与 MySQL → Kingbase)
</Checkbox>
</Form.Item>
<Form.Item>
<Checkbox checked={createIndexes} onChange={(e) => setCreateIndexes(e.target.checked)} disabled={!isMigrationWorkflow || targetTableStrategy === 'existing_only'}>
自动补建可兼容索引(普通/唯一)
</Checkbox>
</Form.Item>
{isMigrationWorkflow && targetTableStrategy !== 'existing_only' && (
<Alert
type="info"
showIcon
message="自动建表模式首期仅支持 MySQL → Kingbase将迁移字段、主键、普通/唯一/联合索引,并显式跳过全文、空间、前缀、函数类索引。"
style={{ marginBottom: 12 }}
/>
)}
{!isMigrationWorkflow && (
<Alert
type="info"
showIcon
message="数据同步模式默认基于已有目标表执行;如需跨数据源建表导入,请切换到“跨库迁移”。"
style={{ marginBottom: 12 }}
/>
)}
{syncContent !== 'schema' && syncMode === 'full_overwrite' && (
<Alert
type="warning"
@@ -496,26 +920,42 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
{/* STEP 2: TABLES */}
{currentStep === 1 && (
<div style={{ display: 'flex', flexDirection: 'column', gap: 12 }}>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
<Text type="secondary">:</Text>
<div style={{ display: 'flex', flexDirection: 'column', gap: 14 }}>
<div style={quietPanelStyle}>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 10 }}>
<Text type="secondary"></Text>
<Checkbox checked={showSameTables} onChange={(e) => setShowSameTables(e.target.checked)}>
</Checkbox>
</div>
<Transfer
</div>
<Transfer
dataSource={allTables.map(t => ({ key: t, title: t }))}
titles={['源表', '已选表']}
targetKeys={selectedTables}
onChange={(keys) => setSelectedTables(keys as string[])}
render={item => item.title}
listStyle={{ width: 350, height: 280, marginTop: 0 }}
locale={{ itemUnit: '项', itemsUnit: '项', searchPlaceholder: '搜索表', notFoundContent: '暂无数据' }}
listStyle={{ width: 390, height: 320, marginTop: 0, borderRadius: 14, overflow: 'hidden' }}
locale={{ itemUnit: '项', itemsUnit: '项', searchPlaceholder: '搜索表', notFoundContent: '暂无数据' }}
/>
</div>
{diffTables.length > 0 && (
<div>
<Divider orientation="left"></Divider>
<div style={quietPanelStyle}>
<Divider orientation="left" style={{ marginTop: 0 }}></Divider>
{analysisWarnings.length > 0 && (
<Alert
type="warning"
showIcon
message="预检发现风险或降级项,请在执行前确认"
description={
<ul style={{ margin: 0, paddingLeft: 18 }}>
{analysisWarnings.slice(0, 8).map((item) => <li key={item}>{item}</li>)}
{analysisWarnings.length > 8 && <li>还有 {analysisWarnings.length - 8} 条未展示</li>}
</ul>
}
style={{ marginBottom: 12 }}
/>
)}
<Table
size="small"
pagination={false}
@@ -527,13 +967,29 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
const same = Number(t.same || 0);
const msg = String(t.message || '').trim();
const can = !!t.canSync;
const warns = Array.isArray(t.warnings) ? t.warnings.length : 0;
const unsupported = Array.isArray(t.unsupportedObjects) ? t.unsupportedObjects.length : 0;
if (showSameTables) return true;
if (!can) return true;
if (msg) return true;
if (msg || warns > 0 || unsupported > 0) return true;
return ins > 0 || upd > 0 || del > 0 || same === 0;
})}
columns={[
{ title: '表名', dataIndex: 'table', key: 'table', ellipsis: true },
{
title: '目标表',
key: 'targetTableExists',
width: 90,
render: (_: any, r: any) => r.targetTableExists ? '已存在' : '不存在'
},
{
title: '计划',
dataIndex: 'plannedAction',
key: 'plannedAction',
width: 220,
ellipsis: true,
render: (v: any) => String(v || '')
},
{
title: '插入',
key: 'inserts',
@@ -542,11 +998,7 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
const ops = tableOptions[r.table] || { insert: true, update: true, delete: false };
const disabled = !r.canSync || analyzing || Number(r.inserts || 0) === 0;
return (
<Checkbox
checked={!!ops.insert}
disabled={disabled}
onChange={(e) => updateTableOption(r.table, 'insert', e.target.checked)}
>
<Checkbox checked={!!ops.insert} disabled={disabled} onChange={(e) => updateTableOption(r.table, 'insert', e.target.checked)}>
{Number(r.inserts || 0)}
</Checkbox>
);
@@ -560,11 +1012,7 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
const ops = tableOptions[r.table] || { insert: true, update: true, delete: false };
const disabled = !r.canSync || analyzing || Number(r.updates || 0) === 0;
return (
<Checkbox
checked={!!ops.update}
disabled={disabled}
onChange={(e) => updateTableOption(r.table, 'update', e.target.checked)}
>
<Checkbox checked={!!ops.update} disabled={disabled} onChange={(e) => updateTableOption(r.table, 'update', e.target.checked)}>
{Number(r.updates || 0)}
</Checkbox>
);
@@ -578,18 +1026,28 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
const ops = tableOptions[r.table] || { insert: true, update: true, delete: false };
const disabled = !r.canSync || analyzing || Number(r.deletes || 0) === 0;
return (
<Checkbox
checked={!!ops.delete}
disabled={disabled}
onChange={(e) => updateTableOption(r.table, 'delete', e.target.checked)}
>
<Checkbox checked={!!ops.delete} disabled={disabled} onChange={(e) => updateTableOption(r.table, 'delete', e.target.checked)}>
{Number(r.deletes || 0)}
</Checkbox>
);
}
},
{ title: '相同', dataIndex: 'same', key: 'same', width: 70, render: (v: any) => Number(v || 0) },
{ title: '消息', dataIndex: 'message', key: 'message', ellipsis: true, render: (v: any) => (v ? String(v) : '') },
{
title: '风险',
key: 'warnings',
width: 220,
render: (_: any, r: any) => {
const warns = [...(Array.isArray(r.warnings) ? r.warnings : []), ...(Array.isArray(r.unsupportedObjects) ? r.unsupportedObjects : [])];
if (warns.length === 0) return '-';
return (
<div style={{ color: '#d48806', fontSize: 12, lineHeight: 1.5 }}>
{warns.slice(0, 2).map((item: string) => <div key={item}>{item}</div>)}
{warns.length > 2 && <div>还有 {warns.length - 2} 条</div>}
</div>
);
}
},
{
title: '预览',
key: 'preview',
@@ -613,7 +1071,8 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
{/* STEP 3: RESULT */}
{currentStep === 2 && (
<div>
<div style={{ display: 'flex', flexDirection: 'column', gap: 14 }}>
<div style={quietPanelStyle}>
<Alert
message={syncing ? "正在同步" : (syncResult?.success ? "同步完成" : "同步失败")}
description={
@@ -625,7 +1084,7 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
showIcon
/>
<div style={{ marginTop: 12 }}>
<div style={{ marginTop: 14 }}>
<Progress
percent={syncProgress.percent}
status={syncing ? "active" : (syncResult?.success ? "success" : "exception")}
@@ -633,7 +1092,9 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
/>
</div>
<Divider orientation="left"></Divider>
</div>
<div style={quietPanelStyle}>
<Divider orientation="left" style={{ marginTop: 0 }}></Divider>
<div
ref={logBoxRef}
onScroll={() => {
@@ -642,14 +1103,25 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
const nearBottom = el.scrollHeight - el.scrollTop - el.clientHeight < 40;
autoScrollRef.current = nearBottom;
}}
style={{ background: '#f5f5f5', padding: 12, height: 300, overflowY: 'auto', fontFamily: 'monospace' }}
style={{
background: darkMode ? 'rgba(255,255,255,0.03)' : 'rgba(248,250,252,0.92)',
border: darkMode ? '1px solid rgba(255,255,255,0.08)' : '1px solid rgba(15,23,42,0.06)',
borderRadius: 14,
padding: 12,
height: 300,
overflowY: 'auto',
fontFamily: 'SFMono-Regular, ui-monospace, Menlo, Consolas, monospace'
}}
>
{syncLogs.map((item, i: number) => <div key={i}>{renderSyncLogItem(item)}</div>)}
</div>
</div>
</div>
)}
<div style={{ marginTop: 24, textAlign: 'right' }}>
</div>
<div style={modalFooterBarStyle}>
{currentStep === 0 && (
<Button type="primary" onClick={nextToTables} loading={loading}></Button>
)}
@@ -676,14 +1148,16 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
</>
)}
</div>
</div>
</Modal>
<Drawer
title={`差异预览:${previewTable}`}
styles={{ body: { background: darkMode ? 'rgba(9,13,20,0.98)' : '#f8fafc' } }}
open={previewOpen}
onClose={() => { setPreviewOpen(false); setPreviewTable(''); setPreviewData(null); }}
width={900}
>
{previewLoading && <Alert type="info" showIcon message="正在加载差异预览..." />}
{previewLoading && <Alert type="info" showIcon message="正在加载差异预览…" />}
{!previewLoading && previewData && (
<div>
<Alert
@@ -794,6 +1268,51 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
/>
</div>
)
},
{
key: 'sql',
label: `SQL(${previewSql.statementCount})`,
children: (
<div>
<Alert
type="info"
showIcon
message="SQL 预览会按当前勾选的插入/更新/删除与行选择范围生成,用于审核确认。"
/>
<div style={{ marginTop: 8, marginBottom: 8, display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
<Text type="secondary"> {previewSql.statementCount} 200 /</Text>
<Button
size="small"
disabled={!previewSql.sqlText}
onClick={async () => {
try {
await navigator.clipboard.writeText(previewSql.sqlText || '');
message.success('SQL 已复制');
} catch {
message.error('复制失败,请手动复制');
}
}}
>
复制 SQL
</Button>
</div>
<pre
style={{
margin: 0,
padding: 10,
border: '1px solid #f0f0f0',
borderRadius: 6,
background: '#fafafa',
maxHeight: 420,
overflow: 'auto',
whiteSpace: 'pre-wrap',
wordBreak: 'break-word'
}}
>
{previewSql.sqlText || '-- 当前勾选范围下无 SQL 可预览'}
</pre>
</div>
)
}
]}
/>
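
Every per-table checkbox above falls back to `tableOptions[r.table] || { insert: true, update: true, delete: false }`, so a table defaults to insert+update and never deletes unless the user opts in. A minimal sketch of how `updateTableOption` could maintain that map — the hook shape and setter name are assumptions, since the actual state code sits outside this hunk:

import { useCallback, useState } from 'react';

type TableOps = { insert: boolean; update: boolean; delete: boolean };

// Default per-table plan: apply inserts and updates, never delete unless opted in.
const DEFAULT_OPS: TableOps = { insert: true, update: true, delete: false };

export const useTableOptions = () => {
  const [tableOptions, setTableOptions] = useState<Record<string, TableOps>>({});
  // Merge a single flag into one table's entry, seeding missing tables with the defaults.
  const updateTableOption = useCallback((table: string, key: keyof TableOps, checked: boolean) => {
    setTableOptions(prev => ({
      ...prev,
      [table]: { ...(prev[table] ?? DEFAULT_OPS), [key]: checked },
    }));
  }, []);
  return { tableOptions, updateTableOption };
};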

View File

@@ -4,7 +4,7 @@ import { TabData, ColumnDefinition } from '../types';
import { useStore } from '../store';
import { DBQuery, DBGetColumns } from '../../wailsjs/go/app/App';
import DataGrid, { GONAVI_ROW_KEY } from './DataGrid';
import { buildOrderBySQL, buildWhereSQL, quoteIdentPart, quoteQualifiedIdent, withSortBufferTuningSQL, type FilterCondition } from '../utils/sql';
import { buildOrderBySQL, buildPaginatedSelectSQL, buildWhereSQL, quoteIdentPart, quoteQualifiedIdent, withSortBufferTuningSQL, type FilterCondition } from '../utils/sql';
import { buildMongoCountCommand, buildMongoFilter, buildMongoFindCommand, buildMongoSort } from '../utils/mongodb';
import { getDataSourceCapabilities } from '../utils/dataSourceCapabilities';
@@ -155,6 +155,16 @@ const reverseOrderBySQL = (orderBySQL: string): string => {
type ViewerFilterSnapshot = {
showFilter: boolean;
conditions: FilterCondition[];
currentPage: number;
pageSize: number;
sortInfo: { columnKey: string, order: string } | null;
scrollTop: number;
scrollLeft: number;
};
type ViewerScrollSnapshot = {
top: number;
left: number;
};
const viewerFilterSnapshotsByTab = new Map<string, ViewerFilterSnapshot>();
@@ -175,15 +185,23 @@ const normalizeViewerFilterConditions = (conditions: FilterCondition[] | undefin
const getViewerFilterSnapshot = (tabId: string): ViewerFilterSnapshot => {
const cached = viewerFilterSnapshotsByTab.get(String(tabId || '').trim());
if (!cached) {
return { showFilter: false, conditions: [] };
return { showFilter: false, conditions: [], currentPage: 1, pageSize: 100, sortInfo: null, scrollTop: 0, scrollLeft: 0 };
}
return {
showFilter: cached.showFilter === true,
conditions: normalizeViewerFilterConditions(cached.conditions),
currentPage: Number.isFinite(Number(cached.currentPage)) && Number(cached.currentPage) > 0 ? Number(cached.currentPage) : 1,
pageSize: Number.isFinite(Number(cached.pageSize)) && Number(cached.pageSize) > 0 ? Number(cached.pageSize) : 100,
sortInfo: cached.sortInfo && cached.sortInfo.columnKey && (cached.sortInfo.order === 'ascend' || cached.sortInfo.order === 'descend')
? { columnKey: String(cached.sortInfo.columnKey), order: cached.sortInfo.order }
: null,
scrollTop: Number.isFinite(Number(cached.scrollTop)) ? Number(cached.scrollTop) : 0,
scrollLeft: Number.isFinite(Number(cached.scrollLeft)) ? Number(cached.scrollLeft) : 0,
};
};
const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
const initialViewerSnapshot = useMemo(() => getViewerFilterSnapshot(tab.id), [tab.id]);
const [data, setData] = useState<any[]>([]);
const [columnNames, setColumnNames] = useState<string[]>([]);
const [pkColumns, setPkColumns] = useState<string[]>([]);
@@ -204,10 +222,15 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
const latestDbNameRef = useRef<string>('');
const latestCountSqlRef = useRef<string>('');
const latestCountKeyRef = useRef<string>('');
const scrollSnapshotRef = useRef<ViewerScrollSnapshot>({
top: initialViewerSnapshot.scrollTop,
left: initialViewerSnapshot.scrollLeft,
});
const initialLoadRef = useRef(false);
const [pagination, setPagination] = useState<ViewerPaginationState>({
current: 1,
pageSize: 100,
current: initialViewerSnapshot.currentPage,
pageSize: initialViewerSnapshot.pageSize,
total: 0,
totalKnown: false,
totalApprox: false,
@@ -215,30 +238,51 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
totalCountCancelled: false,
});
const [sortInfo, setSortInfo] = useState<{ columnKey: string, order: string } | null>(null);
const [sortInfo, setSortInfo] = useState<{ columnKey: string, order: string } | null>(initialViewerSnapshot.sortInfo);
const [showFilter, setShowFilter] = useState<boolean>(() => getViewerFilterSnapshot(tab.id).showFilter);
const [filterConditions, setFilterConditions] = useState<FilterCondition[]>(() => getViewerFilterSnapshot(tab.id).conditions);
const [showFilter, setShowFilter] = useState<boolean>(initialViewerSnapshot.showFilter);
const [filterConditions, setFilterConditions] = useState<FilterCondition[]>(initialViewerSnapshot.conditions);
const duckdbSafeSelectCacheRef = useRef<Record<string, string>>({});
const currentConnConfig = connections.find(c => c.id === tab.connectionId)?.config;
const currentConnCaps = getDataSourceCapabilities(currentConnConfig);
const currentConnType = currentConnCaps.type;
const forceReadOnly = currentConnCaps.forceReadOnlyQueryResult;
const persistViewerSnapshot = useCallback((tabId: string, overrides?: Partial<ViewerFilterSnapshot>) => {
const normalizedTabId = String(tabId || '').trim();
if (!normalizedTabId) return;
viewerFilterSnapshotsByTab.set(normalizedTabId, {
showFilter,
conditions: normalizeViewerFilterConditions(filterConditions),
currentPage: pagination.current,
pageSize: pagination.pageSize,
sortInfo,
scrollTop: scrollSnapshotRef.current.top,
scrollLeft: scrollSnapshotRef.current.left,
...overrides,
});
}, [showFilter, filterConditions, pagination.current, pagination.pageSize, sortInfo]);
useEffect(() => {
const snapshot = getViewerFilterSnapshot(tab.id);
setShowFilter(snapshot.showFilter);
setFilterConditions(snapshot.conditions);
setSortInfo(snapshot.sortInfo);
scrollSnapshotRef.current = { top: snapshot.scrollTop, left: snapshot.scrollLeft };
initialLoadRef.current = false;
}, [tab.id]);
useEffect(() => {
viewerFilterSnapshotsByTab.set(tab.id, {
showFilter,
conditions: normalizeViewerFilterConditions(filterConditions),
});
}, [tab.id, showFilter, filterConditions]);
persistViewerSnapshot(tab.id);
}, [tab.id, persistViewerSnapshot]);
useEffect(() => {
return () => {
persistViewerSnapshot(tab.id);
};
}, [tab.id, persistViewerSnapshot]);
useEffect(() => {
const snapshot = getViewerFilterSnapshot(tab.id);
setPkColumns([]);
pkKeyRef.current = '';
countKeyRef.current = '';
@@ -250,16 +294,27 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
latestDbNameRef.current = '';
latestCountSqlRef.current = '';
latestCountKeyRef.current = '';
scrollSnapshotRef.current = { top: snapshot.scrollTop, left: snapshot.scrollLeft };
initialLoadRef.current = false;
setPagination(prev => ({
...prev,
current: 1,
current: snapshot.currentPage,
pageSize: snapshot.pageSize,
total: 0,
totalKnown: false,
totalApprox: false,
totalCountLoading: false,
totalCountCancelled: false,
}));
}, [tab.connectionId, tab.dbName, tab.tableName]);
}, [tab.id, tab.connectionId, tab.dbName, tab.tableName]);
const handleTableScrollSnapshotChange = useCallback((snapshot: ViewerScrollSnapshot) => {
scrollSnapshotRef.current = snapshot;
persistViewerSnapshot(tab.id, {
scrollTop: snapshot.top,
scrollLeft: snapshot.left,
});
}, [tab.id, persistViewerSnapshot]);
const handleDuckDBManualCount = useCallback(async () => {
if (latestDbTypeRef.current !== 'duckdb') {
@@ -410,7 +465,7 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
if (pageRowCount > 0) {
const tailOffset = Math.max(0, totalRows - (offset + pageRowCount));
if (tailOffset < offset) {
sql = `${baseSql}${reverseOrderSQL} LIMIT ${pageRowCount} OFFSET ${tailOffset}`;
sql = buildPaginatedSelectSQL(dbType, baseSql, reverseOrderSQL, pageRowCount, tailOffset);
useClickHouseReversePagination = true;
clickHouseReverseLimit = pageRowCount;
clickHouseReverseHasMore = currentPage < totalPages;
@@ -419,7 +474,7 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
}
if (!useClickHouseReversePagination) {
// 大表性能:打开表不阻塞在 COUNT(*),先通过多取 1 条判断是否还有下一页;总数在后台统计并异步回填。
sql += ` LIMIT ${size + 1} OFFSET ${offset}`;
sql = buildPaginatedSelectSQL(dbType, baseSql, orderBySQL, size + 1, offset);
}
}
@@ -489,8 +544,7 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
if (safeSelect) {
let fallbackSql = `SELECT ${safeSelect} FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
fallbackSql += buildOrderBySQL(dbType, sortInfo, pkColumns);
fallbackSql += ` LIMIT ${size + 1} OFFSET ${offset}`;
fallbackSql = buildPaginatedSelectSQL(dbType, fallbackSql, buildOrderBySQL(dbType, sortInfo, pkColumns), size + 1, offset);
executedSql = fallbackSql;
resData = await executeDataQuery(fallbackSql, '复杂类型降级重试');
}
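
`buildPaginatedSelectSQL` replaces the hand-rolled `LIMIT ... OFFSET ...` concatenation, which only worked for MySQL-style dialects. Its body lives in `../utils/sql` and is not part of this diff; a plausible sketch matching the call sites above (`dbType, baseSql, orderBySQL, limit, offset`) — the dialect branches are standard SQL syntax, not confirmed project behavior:

// Sketch only: dialect-aware row limiting matching the call sites above.
export const buildPaginatedSelectSQL = (
  dbType: string,
  baseSql: string,
  orderBySQL: string,
  limit: number,
  offset: number,
): string => {
  const type = String(dbType || '').trim().toLowerCase();
  if (type === 'sqlserver') {
    // OFFSET ... FETCH requires an ORDER BY; fall back to a constant sort when none is given.
    const orderPart = orderBySQL.trim() ? orderBySQL : ' ORDER BY (SELECT NULL)';
    return `${baseSql}${orderPart} OFFSET ${offset} ROWS FETCH NEXT ${limit} ROWS ONLY`;
  }
  if (type === 'oracle' || type === 'dm') {
    // Oracle 12c+ and DM accept the ANSI row-limiting clause.
    return `${baseSql}${orderBySQL} OFFSET ${offset} ROWS FETCH NEXT ${limit} ROWS ONLY`;
  }
  // MySQL, PostgreSQL, SQLite, DuckDB, ClickHouse and friends.
  return `${baseSql}${orderBySQL} LIMIT ${limit} OFFSET ${offset}`;
};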
@@ -765,8 +819,13 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
}, [tab.tableName, currentConnConfig?.type, filterConditions, sortInfo, pkColumns]);
useEffect(() => {
fetchData(1, pagination.pageSize);
}, [tab, sortInfo, filterConditions]); // Initial load and re-load on sort/filter
if (!initialLoadRef.current) {
initialLoadRef.current = true;
fetchData(pagination.current, pagination.pageSize);
return;
}
fetchData(1, pagination.pageSize);
}, [tab.id, tab.connectionId, tab.dbName, tab.tableName, sortInfo, filterConditions]); // Initial load and re-load on sort/filter
return (
<div style={{ flex: '1 1 auto', minHeight: 0, minWidth: 0, height: '100%', width: '100%', overflow: 'hidden', display: 'flex', flexDirection: 'column' }}>
@@ -792,6 +851,8 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
readOnly={forceReadOnly}
sortInfoExternal={sortInfo}
exportSqlWithFilter={exportSqlWithFilter || undefined}
scrollSnapshot={scrollSnapshotRef.current}
onScrollSnapshotChange={handleTableScrollSnapshotChange}
/>
</div>
);
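
The receiving side of `scrollSnapshot` / `onScrollSnapshotChange` lives in DataGrid, whose diff is suppressed further below as too large. A hypothetical minimal wiring consistent with the props passed above — the ref names and the restore-once guard are assumptions:

import React, { useEffect, useRef } from 'react';

type ScrollSnapshot = { top: number; left: number };

const GridBody: React.FC<{
  scrollSnapshot?: ScrollSnapshot;
  onScrollSnapshotChange?: (s: ScrollSnapshot) => void;
}> = ({ scrollSnapshot, onScrollSnapshotChange }) => {
  const bodyRef = useRef<HTMLDivElement | null>(null);
  const restoredRef = useRef(false);

  // Restore the persisted position once after mount, so re-activated tabs land where they left off.
  useEffect(() => {
    const el = bodyRef.current;
    if (!el || restoredRef.current) return;
    restoredRef.current = true;
    el.scrollTop = scrollSnapshot?.top ?? 0;
    el.scrollLeft = scrollSnapshot?.left ?? 0;
  }, [scrollSnapshot]);

  return (
    <div
      ref={bodyRef}
      style={{ overflow: 'auto', height: '100%' }}
      // Report every scroll so DataViewer can write it into the per-tab snapshot map.
      onScroll={(e) => {
        const el = e.currentTarget;
        onScrollSnapshotChange?.({ top: el.scrollTop, left: el.scrollLeft });
      }}
    />
  );
};

export default GridBody;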

View File

@@ -3,7 +3,7 @@ import { Alert, Button, Collapse, Input, Modal, Progress, Select, Space, Switch,
import { DeleteOutlined, DownloadOutlined, FileSearchOutlined, FolderOpenOutlined, InfoCircleFilled, ReloadOutlined } from '@ant-design/icons';
import { EventsOn } from '../../wailsjs/runtime/runtime';
import { useStore } from '../store';
import { normalizeOpacityForPlatform } from '../utils/appearance';
import { normalizeOpacityForPlatform, resolveAppearanceValues } from '../utils/appearance';
import {
CheckDriverNetworkStatus,
DownloadDriverPackage,
@@ -166,7 +166,8 @@ const DriverManagerModal: React.FC<{ open: boolean; onClose: () => void; onOpenG
const theme = useStore((state) => state.theme);
const appearance = useStore((state) => state.appearance);
const darkMode = theme === 'dark';
const opacity = normalizeOpacityForPlatform(appearance.opacity);
const resolvedAppearance = resolveAppearanceValues(appearance);
const opacity = normalizeOpacityForPlatform(resolvedAppearance.opacity);
const modalContentRef = useRef<HTMLDivElement | null>(null);
const tableContainerRef = useRef<HTMLDivElement | null>(null);
const tableScrollTargetsRef = useRef<HTMLElement[]>([]);
@@ -1223,7 +1224,7 @@ const DriverManagerModal: React.FC<{ open: boolean; onClose: () => void; onOpenG
paddingRight: 18,
},
}}
destroyOnClose
destroyOnHidden
footer={(
<div className="driver-manager-footer">
<div
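
`resolveAppearanceValues` replaces direct reads of `appearance.opacity` in several components of this change set. Its implementation is not shown anywhere in the diff; a hypothetical shape, assuming it flattens the persisted appearance config into effective values (field names and the clamp range are invented for illustration):

// Hypothetical sketch: resolve a persisted appearance config into effective values.
type AppearanceConfig = {
  opacity?: number;
  // ...other persisted fields omitted here.
};

type ResolvedAppearance = { opacity: number };

export const resolveAppearanceValues = (appearance: AppearanceConfig): ResolvedAppearance => ({
  // Clamp to a readable range and fall back to fully opaque when unset.
  opacity: Math.min(1, Math.max(0.3, Number(appearance?.opacity ?? 1))),
});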

View File

@@ -1,8 +1,8 @@
import React, { useRef, useEffect } from 'react';
import { Table, Tag, Button, Tooltip } from 'antd';
import { ClearOutlined, CloseOutlined, CaretRightOutlined, BugOutlined } from '@ant-design/icons';
import { Table, Tag, Button, Tooltip, Empty } from 'antd';
import { ClearOutlined, CloseOutlined, BugOutlined, ClockCircleOutlined } from '@ant-design/icons';
import { useStore } from '../store';
import { normalizeOpacityForPlatform } from '../utils/appearance';
import { normalizeOpacityForPlatform, resolveAppearanceValues } from '../utils/appearance';
interface LogPanelProps {
height: number;
@@ -16,7 +16,8 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
const theme = useStore(state => state.theme);
const appearance = useStore(state => state.appearance);
const darkMode = theme === 'dark';
const opacity = normalizeOpacityForPlatform(appearance.opacity);
const resolvedAppearance = resolveAppearanceValues(appearance);
const opacity = normalizeOpacityForPlatform(resolvedAppearance.opacity);
// Background Helper
const getBg = (darkHex: string) => {
@@ -28,10 +29,25 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
return `rgba(${r}, ${g}, ${b}, ${opacity})`;
};
const bgMain = getBg('#1d1d1d');
const panelDividerColor = darkMode ? 'rgba(255,255,255,0.08)' : 'rgba(0,0,0,0.08)';
const shellOpacity = darkMode ? Math.max(0.18, opacity * 0.82) : Math.max(0.28, opacity * 0.92);
const shellOpacityStrong = darkMode ? Math.max(0.22, opacity * 0.9) : Math.max(0.34, opacity * 0.96);
const panelDividerColor = darkMode
? `rgba(255,255,255,${Math.max(0.04, opacity * 0.10)})`
: `rgba(0,0,0,${Math.max(0.04, opacity * 0.08)})`;
const panelMutedTextColor = darkMode ? 'rgba(255,255,255,0.62)' : 'rgba(0,0,0,0.58)';
const logScrollbarThumb = darkMode ? 'rgba(255, 255, 255, 0.34)' : 'rgba(0, 0, 0, 0.26)';
const logScrollbarThumbHover = darkMode ? 'rgba(255, 255, 255, 0.5)' : 'rgba(0, 0, 0, 0.36)';
const panelShellBg = darkMode
? `linear-gradient(180deg, rgba(15,20,30,${shellOpacity}) 0%, rgba(9,13,22,${shellOpacityStrong}) 100%)`
: `linear-gradient(180deg, rgba(255,255,255,${shellOpacityStrong}) 0%, rgba(246,248,252,${shellOpacity}) 100%)`;
const panelAccentColor = darkMode ? '#ffd666' : '#1677ff';
const panelShadow = darkMode
? `0 12px 28px rgba(0,0,0,${Math.max(0.05, opacity * 0.18)})`
: `0 12px 24px rgba(15,23,42,${Math.max(0.02, opacity * 0.08)})`;
const logScrollbarThumb = darkMode
? `rgba(255, 255, 255, ${Math.max(0.18, opacity * 0.34)})`
: `rgba(0, 0, 0, ${Math.max(0.12, opacity * 0.26)})`;
const logScrollbarThumbHover = darkMode
? `rgba(255, 255, 255, ${Math.max(0.28, opacity * 0.48)})`
: `rgba(0, 0, 0, ${Math.max(0.18, opacity * 0.36)})`;
const columns = [
{
@@ -45,7 +61,7 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
dataIndex: 'status',
width: 70,
render: (status: string) => (
<Tag color={status === 'success' ? 'success' : 'error'} style={{ marginRight: 0 }}>
<Tag color={status === 'success' ? 'success' : 'error'} style={{ marginRight: 0, borderRadius: 999, paddingInline: 8, fontSize: 11, fontWeight: 700 }}>
{status === 'success' ? 'OK' : 'ERR'}
</Tag>
)
@@ -60,7 +76,7 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
title: 'SQL / Message',
dataIndex: 'sql',
render: (text: string, record: any) => (
<div style={{ fontFamily: 'monospace', wordBreak: 'break-all', fontSize: '12px', lineHeight: '1.2' }}>
<div style={{ fontFamily: 'monospace', wordBreak: 'break-all', fontSize: '12px', lineHeight: '1.45' }}>
<div style={{ color: darkMode ? '#a6e22e' : '#005cc5' }}>{text}</div>
{record.message && <div style={{ color: '#ff4d4f', marginTop: 2 }}>{record.message}</div>}
{record.affectedRows !== undefined && <div style={{ color: panelMutedTextColor, marginTop: 1 }}>Affected: {record.affectedRows}</div>}
@@ -72,12 +88,18 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
return (
<div style={{
height,
borderTop: `1px solid ${panelDividerColor}`,
background: bgMain,
margin: 0,
border: `1px solid ${panelDividerColor}`,
borderRadius: 14,
background: panelShellBg,
WebkitBackdropFilter: opacity < 0.999 ? 'blur(14px)' : 'none',
boxShadow: panelShadow,
backdropFilter: darkMode && opacity < 0.999 ? 'blur(18px)' : 'none',
display: 'flex',
flexDirection: 'column',
position: 'relative',
zIndex: 100 // Ensure above other content
overflow: 'hidden',
zIndex: 100
}}>
{/* Resize Handle */}
<div
@@ -95,38 +117,53 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
{/* Toolbar */}
<div style={{
padding: '4px 8px',
padding: '10px 14px',
borderBottom: `1px solid ${panelDividerColor}`,
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
height: 32
gap: 12,
minHeight: 48
}}>
<div style={{ display: 'flex', alignItems: 'center', gap: 8, fontWeight: 'bold', fontSize: '12px' }}>
<BugOutlined /> SQL 日志
<div style={{ display: 'flex', alignItems: 'center', gap: 10, minWidth: 0 }}>
<div style={{ width: 30, height: 30, borderRadius: 10, display: 'grid', placeItems: 'center', background: darkMode ? `rgba(255,214,102,${Math.max(0.10, Math.min(0.18, opacity * 0.18))})` : `rgba(24,144,255,${Math.max(0.08, Math.min(0.16, opacity * 0.16))})`, color: panelAccentColor, flexShrink: 0 }}>
<BugOutlined />
</div>
<div style={{ minWidth: 0 }}>
<div style={{ fontWeight: 700, fontSize: 13, color: darkMode ? '#f5f7ff' : '#162033' }}>SQL 日志</div>
<div style={{ fontSize: 12, color: panelMutedTextColor }}>记录最近执行的语句与结果,便于排查问题</div>
</div>
</div>
<div>
<div style={{ display: 'flex', alignItems: 'center', gap: 6 }}>
<Tooltip title="清空日志">
<Button type="text" size="small" icon={<ClearOutlined />} onClick={clearSqlLogs} />
<Button type="text" size="small" icon={<ClearOutlined />} onClick={clearSqlLogs} style={{ color: panelMutedTextColor }} />
</Tooltip>
<Tooltip title="关闭面板">
<Button type="text" size="small" icon={<CloseOutlined />} onClick={onClose} />
<Button type="text" size="small" icon={<CloseOutlined />} onClick={onClose} style={{ color: panelMutedTextColor }} />
</Tooltip>
</div>
</div>
{/* List */}
<div className="log-panel-scroll" style={{ flex: 1, overflow: 'auto' }}>
<Table
className="log-panel-table"
dataSource={sqlLogs}
columns={columns}
size="small"
pagination={false}
rowKey="id"
showHeader={false}
// scroll={{ y: height - 32 }} // Let flex handle it
/>
<div className="log-panel-scroll" style={{ flex: 1, overflow: 'auto', padding: '8px 10px 10px' }}>
{sqlLogs.length === 0 ? (
<div style={{ height: '100%', minHeight: 160, display: 'grid', placeItems: 'center' }}>
<Empty
image={Empty.PRESENTED_IMAGE_SIMPLE}
description={<span style={{ color: panelMutedTextColor }}>暂无 SQL 日志</span>}
/>
</div>
) : (
<Table
className="log-panel-table"
dataSource={sqlLogs}
columns={columns}
size="small"
pagination={false}
rowKey="id"
showHeader={false}
/>
)}
</div>
<style>{`
.log-panel-scroll {
@@ -156,6 +193,16 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
.log-panel-table .ant-table-tbody > tr > td {
background: transparent !important;
}
.log-panel-table .ant-table-tbody > tr > td {
padding: 8px 10px !important;
border-bottom: 1px solid ${panelDividerColor} !important;
}
.log-panel-table .ant-table-tbody > tr:last-child > td {
border-bottom: none !important;
}
.log-panel-table .ant-table-row:hover > td {
background: ${darkMode ? 'rgba(255,255,255,0.03)' : 'rgba(16,24,40,0.03)'} !important;
}
`}</style>
</div>
);
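
The repeated `Math.max(floor, opacity * factor)` expressions above scale each rgba alpha with the window opacity while keeping a readable lower bound, so dividers and scrollbars never vanish at high transparency. The same idea as a reusable helper — a distilled sketch, not code from this repo:

// Derive an rgba() color whose alpha follows the window opacity but never drops below `floor`.
export const scaledRgba = (rgb: string, opacity: number, factor: number, floor: number): string =>
  `rgba(${rgb}, ${Math.max(floor, opacity * factor)})`;

// Example: the dark-mode scrollbar thumb above, at 80% window opacity.
const thumb = scaledRgba('255, 255, 255', 0.8, 0.34, 0.18); // "rgba(255, 255, 255, 0.272)"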

View File

@@ -48,6 +48,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const [editorHeight, setEditorHeight] = useState(300);
const editorRef = useRef<any>(null);
const monacoRef = useRef<any>(null);
const lastExternalQueryRef = useRef<string>(tab.query || '');
const dragRef = useRef<{ startY: number, startHeight: number } | null>(null);
const tablesRef = useRef<{dbName: string, tableName: string}[]>([]); // Store tables for autocomplete (cross-db)
const allColumnsRef = useRef<{dbName: string, tableName: string, name: string, type: string}[]>([]); // Store all columns (cross-db)
@@ -95,10 +96,30 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
connectionsRef.current = connections;
}, [connections]);
const getCurrentQuery = () => {
const val = editorRef.current?.getValue?.();
if (typeof val === 'string') return val;
return query || '';
};
const syncQueryToEditor = (sql: string) => {
const next = sql || '';
setQuery(next);
const editor = editorRef.current;
if (editor && editor.getValue?.() !== next) {
editor.setValue(next);
}
};
// If opening a saved query, load its SQL
useEffect(() => {
if (tab.query) setQuery(tab.query);
}, [tab.query]);
const incoming = tab.query || '';
if (incoming === lastExternalQueryRef.current) {
return;
}
lastExternalQueryRef.current = incoming;
syncQueryToEditor(incoming || 'SELECT * FROM ');
}, [tab.id, tab.query]);
// Fetch Database List
useEffect(() => {
@@ -557,8 +578,8 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const handleFormat = () => {
try {
const formatted = format(query, { language: 'mysql', keywordCase: sqlFormatOptions.keywordCase });
setQuery(formatted);
const formatted = format(getCurrentQuery(), { language: 'mysql', keywordCase: sqlFormatOptions.keywordCase });
syncQueryToEditor(formatted);
} catch (e) {
message.error("格式化失败: SQL 语法可能有误");
}
@@ -1045,7 +1066,8 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
};
const handleRun = async () => {
if (!query.trim()) return;
const currentQuery = getCurrentQuery();
if (!currentQuery.trim()) return;
if (!currentDb) {
message.error("请先选择数据库");
return;
@@ -1086,7 +1108,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
};
try {
const rawSQL = getSelectedSQL() || query;
const rawSQL = getSelectedSQL() || currentQuery;
const dbType = String((config as any).type || 'mysql');
const normalizedDbType = dbType.trim().toLowerCase();
const normalizedRawSQL = String(rawSQL || '').replace(/；/g, ';');
@@ -1367,7 +1389,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
saveQuery({
id: tab.id.startsWith('saved-') ? tab.id : `saved-${Date.now()}`,
name: values.name,
sql: query,
sql: getCurrentQuery(),
connectionId: currentConnectionId,
dbName: currentDb || tab.dbName || '',
createdAt: Date.now()
@@ -1512,7 +1534,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
height="100%"
defaultLanguage="sql"
theme={darkMode ? "transparent-dark" : "transparent-light"}
value={query}
defaultValue={query}
onChange={(val) => setQuery(val || '')}
onMount={handleEditorDidMount}
options={{
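
Switching the editor from `value` to `defaultValue` makes Monaco uncontrolled: keystrokes no longer round-trip through React state, which avoids cursor jumps and undo-stack resets on re-render. External updates must then be pushed imperatively, which is what `syncQueryToEditor` does above. A condensed sketch of the pattern, assuming `@monaco-editor/react`; only the sync mechanics are shown:

import React, { useEffect, useRef } from 'react';
import Editor from '@monaco-editor/react';

const SqlPad: React.FC<{ externalSql: string }> = ({ externalSql }) => {
  const editorRef = useRef<any>(null);
  const lastExternalRef = useRef(externalSql);

  // Push external changes into the editor imperatively; local typing never
  // passes through React state, so the cursor and undo stack survive renders.
  useEffect(() => {
    if (externalSql === lastExternalRef.current) return;
    lastExternalRef.current = externalSql;
    editorRef.current?.setValue(externalSql);
  }, [externalSql]);

  return (
    <Editor
      height="100%"
      defaultLanguage="sql"
      defaultValue={externalSql} // uncontrolled: Monaco owns the buffer after mount
      onMount={(editor) => { editorRef.current = editor; }}
    />
  );
};

export default SqlPad;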

File diff suppressed because it is too large

View File

@@ -27,12 +27,16 @@ import { Tree, message, Dropdown, MenuProps, Input, Button, Modal, Form, Badge,
DisconnectOutlined,
CloudOutlined,
CheckSquareOutlined,
CodeOutlined
CodeOutlined,
TagOutlined,
CheckOutlined,
FilterOutlined
} from '@ant-design/icons';
import { useStore } from '../store';
import { buildOverlayWorkbenchTheme } from '../utils/overlayWorkbenchTheme';
import { SavedConnection } from '../types';
import { DBGetDatabases, DBGetTables, DBQuery, DBShowCreateTable, ExportTable, OpenSQLFile, CreateDatabase, RenameDatabase, DropDatabase, RenameTable, DropTable, DropView, DropFunction, RenameView } from '../../wailsjs/go/app/App';
import { normalizeOpacityForPlatform } from '../utils/appearance';
import { normalizeOpacityForPlatform, resolveAppearanceValues } from '../utils/appearance';
const { Search } = Input;
@@ -73,6 +77,15 @@ const SEARCH_SCOPE_LABEL_MAP: Record<SearchScope, string> = SEARCH_SCOPE_OPTIONS
return acc;
}, {} as Record<SearchScope, string>);
const SEARCH_SCOPE_ICON_MAP: Record<SearchScope, React.ReactNode> = {
smart: <ThunderboltOutlined />,
object: <TableOutlined />,
database: <DatabaseOutlined />,
host: <CloudOutlined />,
tag: <TagOutlined />,
};
const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }> = ({ onEditConnection }) => {
const connections = useStore(state => state.connections);
const savedQueries = useStore(state => state.savedQueries);
@@ -95,7 +108,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const recordTableAccess = useStore(state => state.recordTableAccess);
const setTableSortPreference = useStore(state => state.setTableSortPreference);
const darkMode = theme === 'dark';
const opacity = normalizeOpacityForPlatform(appearance.opacity);
const resolvedAppearance = resolveAppearanceValues(appearance);
const opacity = normalizeOpacityForPlatform(resolvedAppearance.opacity);
const [treeData, setTreeData] = useState<TreeNode[]>([]);
// Background Helper (Duplicate logic for now, ideally shared)
@@ -108,6 +122,43 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return `rgba(${r}, ${g}, ${b}, ${opacity})`;
};
const bgMain = getBg('#141414');
const overlayTheme = useMemo(() => buildOverlayWorkbenchTheme(darkMode), [darkMode]);
const modalPanelStyle = useMemo(() => ({
background: overlayTheme.shellBg,
border: overlayTheme.shellBorder,
boxShadow: overlayTheme.shellShadow,
backdropFilter: overlayTheme.shellBackdropFilter,
}), [overlayTheme]);
const modalSectionStyle = useMemo(() => ({
padding: 14,
borderRadius: 14,
border: overlayTheme.sectionBorder,
background: overlayTheme.sectionBg,
}), [overlayTheme]);
const modalScrollSectionStyle = useMemo(() => ({
maxHeight: 400,
overflow: 'auto' as const,
border: overlayTheme.sectionBorder,
borderRadius: 14,
padding: 12,
background: overlayTheme.sectionBg,
}), [overlayTheme]);
const modalHintTextStyle = useMemo(() => ({
color: overlayTheme.mutedText,
fontSize: 12,
lineHeight: 1.6,
}), [overlayTheme]);
const renderSidebarModalTitle = (icon: React.ReactNode, title: string, description: string) => (
<div style={{ display: 'flex', alignItems: 'flex-start', gap: 12 }}>
<div style={{ width: 34, height: 34, borderRadius: 12, display: 'grid', placeItems: 'center', background: overlayTheme.iconBg, color: overlayTheme.iconColor, flexShrink: 0 }}>
{icon}
</div>
<div style={{ minWidth: 0 }}>
<div style={{ fontSize: 16, fontWeight: 700, color: overlayTheme.titleText }}>{title}</div>
<div style={{ marginTop: 4, color: overlayTheme.mutedText, fontSize: 12, lineHeight: 1.6 }}>{description}</div>
</div>
</div>
);
const [searchValue, setSearchValue] = useState('');
const [searchScopes, setSearchScopes] = useState<SearchScope[]>(['smart']);
const [isSearchScopePopoverOpen, setIsSearchScopePopoverOpen] = useState(false);
@@ -382,6 +433,16 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
password: readString(rawProxy.password, rawProxy.Password, cloned.proxyPassword, cloned.ProxyPassword),
};
const hasProxyDetail = Boolean(normalizedProxy.host || normalizedProxy.user || normalizedProxy.password);
const rawHttpTunnel = (cloned.httpTunnel ?? cloned.HTTPTunnel ?? {}) as Record<string, unknown>;
const normalizedHttpTunnel = {
host: readString(rawHttpTunnel.host, rawHttpTunnel.Host, cloned.httpTunnelHost, cloned.HttpTunnelHost),
port: readNumber(8080, rawHttpTunnel.port, rawHttpTunnel.Port, cloned.httpTunnelPort, cloned.HttpTunnelPort),
user: readString(rawHttpTunnel.user, rawHttpTunnel.User, cloned.httpTunnelUser, cloned.HttpTunnelUser),
password: readString(rawHttpTunnel.password, rawHttpTunnel.Password, cloned.httpTunnelPassword, cloned.HttpTunnelPassword),
};
const hasHttpTunnelDetail = Boolean(normalizedHttpTunnel.host || normalizedHttpTunnel.user || normalizedHttpTunnel.password);
const normalizedUseHttpTunnel = readBool(hasHttpTunnelDetail, cloned.useHttpTunnel, cloned.UseHTTPTunnel);
const normalizedUseProxy = !normalizedUseHttpTunnel && readBool(hasProxyDetail, cloned.useProxy, cloned.UseProxy);
const rawHosts = Array.isArray(cloned.hosts)
? cloned.hosts
@@ -394,8 +455,10 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
...(cloned as SavedConnection['config']),
useSSH: readBool(hasSSHDetail, cloned.useSSH, cloned.UseSSH),
ssh: normalizedSSH,
useProxy: readBool(hasProxyDetail, cloned.useProxy, cloned.UseProxy),
useProxy: normalizedUseProxy,
proxy: normalizedProxy,
useHttpTunnel: normalizedUseHttpTunnel,
httpTunnel: normalizedHttpTunnel,
hosts: normalizedHosts,
timeout: readNumber(30, cloned.timeout, cloned.Timeout),
};
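
`readString`, `readNumber` and `readBool` are used above to tolerate both camelCase and PascalCase persisted configs. They are defined elsewhere in this file; a plausible reading consistent with the call sites — first usable candidate wins, otherwise the fallback:

// First non-empty string among the candidates, else ''.
const readString = (...candidates: unknown[]): string => {
  for (const c of candidates) {
    if (typeof c === 'string' && c.trim()) return c;
  }
  return '';
};

// First finite number among the candidates, else the fallback.
const readNumber = (fallback: number, ...candidates: unknown[]): number => {
  for (const c of candidates) {
    const n = Number(c);
    if (c !== undefined && c !== null && c !== '' && Number.isFinite(n)) return n;
  }
  return fallback;
};

// First explicit boolean among the candidates, else the fallback.
const readBool = (fallback: boolean, ...candidates: unknown[]): boolean => {
  for (const c of candidates) {
    if (typeof c === 'boolean') return c;
  }
  return fallback;
};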
@@ -645,10 +708,15 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
}
case 'oracle':
case 'dm':
if (!safeDbName) {
return [{ sql: `SELECT VIEW_NAME AS view_name FROM USER_VIEWS ORDER BY VIEW_NAME` }];
}
return [{ sql: `SELECT OWNER AS schema_name, VIEW_NAME AS view_name FROM ALL_VIEWS WHERE OWNER = '${safeDbName.toUpperCase()}' ORDER BY VIEW_NAME` }];
return normalizeMetadataQuerySpecs([
{ sql: `SELECT VIEW_NAME AS view_name FROM USER_VIEWS ORDER BY VIEW_NAME` },
{ sql: `SELECT OWNER AS schema_name, VIEW_NAME AS view_name FROM ALL_VIEWS WHERE OWNER = USER ORDER BY VIEW_NAME` },
{
sql: safeDbName
? `SELECT OWNER AS schema_name, VIEW_NAME AS view_name FROM ALL_VIEWS WHERE OWNER = '${safeDbName.toUpperCase()}' ORDER BY VIEW_NAME`
: '',
},
]);
case 'sqlite':
return [{ sql: `SELECT name AS view_name FROM sqlite_master WHERE type = 'view' ORDER BY name` }];
case 'duckdb':
@@ -724,17 +792,35 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
case 'kingbase':
case 'highgo':
case 'vastbase':
return [{ sql: `SELECT n.nspname AS schema_name, p.proname AS routine_name, CASE WHEN p.prokind = 'p' THEN 'PROCEDURE' ELSE 'FUNCTION' END AS routine_type FROM pg_proc p JOIN pg_namespace n ON p.pronamespace = n.oid WHERE n.nspname NOT IN ('pg_catalog', 'information_schema') AND n.nspname NOT LIKE 'pg_%' ORDER BY n.nspname, routine_type, p.proname` }];
return normalizeMetadataQuerySpecs([
{
// PostgreSQL 11+ / 部分 PG-like:通过 prokind 区分 FUNCTION/PROCEDURE
sql: `SELECT n.nspname AS schema_name, p.proname AS routine_name, CASE WHEN p.prokind = 'p' THEN 'PROCEDURE' ELSE 'FUNCTION' END AS routine_type FROM pg_proc p JOIN pg_namespace n ON p.pronamespace = n.oid WHERE n.nspname NOT IN ('pg_catalog', 'information_schema') AND n.nspname NOT LIKE 'pg_%' ORDER BY n.nspname, routine_type, p.proname`,
},
{
// PostgreSQL 10 / 不支持 prokind 的兼容路径
sql: `SELECT r.routine_schema AS schema_name, r.routine_name AS routine_name, COALESCE(NULLIF(UPPER(r.routine_type), ''), 'FUNCTION') AS routine_type FROM information_schema.routines r WHERE r.routine_schema NOT IN ('pg_catalog', 'information_schema') AND r.routine_schema NOT LIKE 'pg_%' ORDER BY r.routine_schema, routine_type, r.routine_name`,
},
{
// 最后兜底:仅函数列表,确保 prokind/routines 视图异常时仍可展示
sql: `SELECT n.nspname AS schema_name, p.proname AS routine_name, 'FUNCTION' AS routine_type FROM pg_proc p JOIN pg_namespace n ON p.pronamespace = n.oid WHERE n.nspname NOT IN ('pg_catalog', 'information_schema') AND n.nspname NOT LIKE 'pg_%' ORDER BY n.nspname, p.proname`,
},
]);
case 'sqlserver': {
const safeDb = quoteSqlServerIdentifier(dbName || 'master');
return [{ sql: `SELECT s.name AS schema_name, o.name AS routine_name, CASE o.type WHEN 'P' THEN 'PROCEDURE' WHEN 'FN' THEN 'FUNCTION' WHEN 'IF' THEN 'FUNCTION' WHEN 'TF' THEN 'FUNCTION' END AS routine_type FROM ${safeDb}.sys.objects o JOIN ${safeDb}.sys.schemas s ON o.schema_id = s.schema_id WHERE o.type IN ('P','FN','IF','TF') ORDER BY o.type, s.name, o.name` }];
}
case 'oracle':
case 'dm':
if (!safeDbName) {
return [{ sql: `SELECT OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM USER_OBJECTS WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` }];
}
return [{ sql: `SELECT OWNER AS schema_name, OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM ALL_OBJECTS WHERE OWNER = '${safeDbName.toUpperCase()}' AND OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` }];
return normalizeMetadataQuerySpecs([
{ sql: `SELECT OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM USER_OBJECTS WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` },
{ sql: `SELECT OWNER AS schema_name, OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM ALL_OBJECTS WHERE OWNER = USER AND OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` },
{
sql: safeDbName
? `SELECT OWNER AS schema_name, OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM ALL_OBJECTS WHERE OWNER = '${safeDbName.toUpperCase()}' AND OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME`
: '',
},
]);
case 'duckdb':
return [{
sql: `SELECT schema_name, function_name AS routine_name, 'FUNCTION' AS routine_type FROM duckdb_functions() WHERE internal = false AND lower(function_type) = 'macro' AND COALESCE(macro_definition, '') <> '' ORDER BY schema_name, function_name`,
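
The view and routine lookups now return ordered fallback chains through `normalizeMetadataQuerySpecs`, whose definition is outside these hunks. Judging from the call sites — some entries carry an empty `sql` when no schema is selected — it plausibly just drops blanks and duplicates before the caller tries each spec in order. A sketch under that assumption:

type MetadataQuerySpec = { sql: string };

// Drop empty/duplicate statements so callers can list optional fallbacks unconditionally.
const normalizeMetadataQuerySpecs = (specs: MetadataQuerySpec[]): MetadataQuerySpec[] => {
  const seen = new Set<string>();
  const result: MetadataQuerySpec[] = [];
  for (const spec of specs) {
    const sql = String(spec?.sql || '').trim();
    if (!sql || seen.has(sql)) continue;
    seen.add(sql);
    result.push({ sql });
  }
  return result;
};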
@@ -2449,32 +2535,98 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const searchScopePopoverContent = useMemo(() => {
const smartSelected = searchScopes.includes('smart');
const scopedOptions = SEARCH_SCOPE_OPTIONS.filter((option) => option.value !== 'smart');
const borderColor = overlayTheme.sectionBorder.replace('1px solid ', '');
const mutedTextColor = overlayTheme.mutedText;
const titleColor = overlayTheme.titleText;
const panelBg = overlayTheme.shellBg;
const smartBg = smartSelected
? (darkMode ? 'linear-gradient(135deg, rgba(255,214,102,0.22) 0%, rgba(255,179,71,0.16) 100%)' : 'linear-gradient(135deg, rgba(255,214,102,0.26) 0%, rgba(255,244,204,0.92) 100%)')
: (darkMode ? 'rgba(255,255,255,0.03)' : 'rgba(255,255,255,0.72)');
const smartBorder = smartSelected
? (darkMode ? 'rgba(255,214,102,0.42)' : 'rgba(245,176,65,0.34)')
: borderColor;
const getOptionCardStyle = (checked: boolean) => ({
display: 'flex',
alignItems: 'center' as const,
justifyContent: 'space-between' as const,
gap: 12,
padding: '10px 12px',
borderRadius: 12,
border: `1px solid ${checked ? (darkMode ? 'rgba(118,169,250,0.44)' : 'rgba(24,144,255,0.32)') : borderColor}`,
background: checked
? (darkMode ? 'rgba(64,124,255,0.18)' : 'rgba(24,144,255,0.08)')
: (darkMode ? 'rgba(255,255,255,0.03)' : 'rgba(255,255,255,0.76)'),
transition: 'all 120ms ease',
});
return (
<div style={{ minWidth: 220, display: 'flex', flexDirection: 'column', gap: 8 }}>
<div style={{ fontSize: 12, color: '#8c8c8c' }}>搜索范围</div>
<Checkbox
checked={smartSelected}
onChange={(e) => setSearchScopeChecked('smart', e.target.checked)}
>
智能匹配
</Checkbox>
<div style={{ paddingLeft: 12, display: 'grid', gap: 6 }}>
{scopedOptions.map((option) => (
<Checkbox
key={option.value}
checked={searchScopes.includes(option.value)}
onChange={(e) => setSearchScopeChecked(option.value, e.target.checked)}
>
{option.label}
</Checkbox>
))}
<div style={{ minWidth: 280, display: 'flex', flexDirection: 'column', background: panelBg, padding: 14, gap: 12 }}>
<div style={{ display: 'flex', alignItems: 'flex-start', justifyContent: 'space-between', gap: 12 }}>
<div>
<div style={{ fontSize: 12, fontWeight: 700, letterSpacing: 0.4, color: mutedTextColor, textTransform: 'uppercase' }}>搜索范围</div>
<div style={{ marginTop: 4, fontSize: 13, lineHeight: 1.5, color: mutedTextColor }}>控制侧边栏搜索命中的维度</div>
</div>
<div style={{ width: 32, height: 32, borderRadius: 10, display: 'grid', placeItems: 'center', background: darkMode ? 'rgba(255,255,255,0.05)' : 'rgba(17,24,39,0.06)', color: darkMode ? '#ffd666' : '#1677ff', flexShrink: 0 }}>
<FilterOutlined />
</div>
</div>
<div style={{ fontSize: 12, color: '#8c8c8c' }}>
<label style={{ display: 'block', cursor: 'pointer' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: 12, padding: '12px 14px', borderRadius: 14, border: `1px solid ${smartBorder}`, background: smartBg, boxShadow: smartSelected ? (darkMode ? '0 10px 24px rgba(0,0,0,0.24)' : '0 10px 24px rgba(245,176,65,0.14)') : 'none' }}>
<Checkbox
checked={smartSelected}
onChange={(e) => setSearchScopeChecked('smart', e.target.checked)}
/>
<div style={{ width: 30, height: 30, borderRadius: 10, display: 'grid', placeItems: 'center', background: darkMode ? 'rgba(255,214,102,0.16)' : 'rgba(255,214,102,0.3)', color: darkMode ? '#ffd666' : '#ad6800', flexShrink: 0 }}>
{SEARCH_SCOPE_ICON_MAP.smart}
</div>
<div style={{ flex: 1, minWidth: 0 }}>
<div style={{ display: 'flex', alignItems: 'center', gap: 8, flexWrap: 'wrap' }}>
<span style={{ fontSize: 14, fontWeight: 700, color: titleColor }}>智能匹配</span>
<span style={{ padding: '2px 8px', borderRadius: 999, fontSize: 11, fontWeight: 700, color: darkMode ? '#ffe58f' : '#ad6800', background: darkMode ? 'rgba(255,214,102,0.16)' : 'rgba(255,214,102,0.35)' }}>推荐</span>
</div>
<div style={{ marginTop: 3, fontSize: 12, lineHeight: 1.5, color: mutedTextColor }}>自动覆盖对象、数据库与 Host 等维度</div>
</div>
</div>
</label>
<div style={{ height: 1, background: overlayTheme.divider, opacity: 0.9 }} />
<div style={{ display: 'flex', alignItems: 'center', justifyContent: 'space-between', gap: 12 }}>
<div style={{ fontSize: 12, fontWeight: 700, letterSpacing: 0.3, color: mutedTextColor, textTransform: 'uppercase' }}>指定范围</div>
<div style={{ fontSize: 12, color: mutedTextColor }}>可多选</div>
</div>
<div style={{ display: 'grid', gap: 8 }}>
{scopedOptions.map((option) => {
const checked = searchScopes.includes(option.value);
return (
<label key={option.value} style={{ display: 'block', cursor: 'pointer' }}>
<div style={getOptionCardStyle(checked)}>
<div style={{ display: 'flex', alignItems: 'center', gap: 12, minWidth: 0 }}>
<Checkbox
checked={checked}
onChange={(e) => setSearchScopeChecked(option.value, e.target.checked)}
/>
<div style={{ width: 28, height: 28, borderRadius: 9, display: 'grid', placeItems: 'center', background: checked ? (darkMode ? 'rgba(118,169,250,0.2)' : 'rgba(24,144,255,0.12)') : (darkMode ? 'rgba(255,255,255,0.05)' : 'rgba(17,24,39,0.06)'), color: checked ? (darkMode ? '#91caff' : '#1677ff') : mutedTextColor, flexShrink: 0 }}>
{SEARCH_SCOPE_ICON_MAP[option.value]}
</div>
<span style={{ fontSize: 14, fontWeight: 600, color: titleColor, whiteSpace: 'nowrap' }}>{option.label}</span>
</div>
<div style={{ width: 18, display: 'flex', justifyContent: 'center', color: checked ? (darkMode ? '#91caff' : '#1677ff') : 'transparent', flexShrink: 0 }}>
<CheckOutlined />
</div>
</div>
</label>
);
})}
</div>
<div style={{ padding: '10px 12px', borderRadius: 12, background: darkMode ? 'rgba(255,255,255,0.03)' : 'rgba(17,24,39,0.04)', color: mutedTextColor, fontSize: 12, lineHeight: 1.6 }}>
提示:智能匹配会自动覆盖对象、数据库与 Host;需要精确控制时可在上方单独勾选。
</div>
</div>
);
}, [searchScopes]);
}, [darkMode, overlayTheme, searchScopes]);
const parseHostOnlyToken = (value: unknown): string[] => {
const raw = String(value || '').trim();
@@ -3279,14 +3431,14 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return (
<div style={{ display: 'flex', flexDirection: 'column', height: '100%' }}>
<div style={{ padding: '4px 8px' }}>
<Space.Compact block size="small">
<div style={{ padding: '4px 10px' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
<Search
ref={searchInputRef}
placeholder="搜索..."
onChange={onSearch}
size="small"
style={{ width: '100%' }}
style={{ flex: 1, minWidth: 0 }}
/>
<Popover
content={searchScopePopoverContent}
@@ -3294,18 +3446,66 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
placement="bottomRight"
open={isSearchScopePopoverOpen}
onOpenChange={setIsSearchScopePopoverOpen}
styles={{ body: { padding: 0, borderRadius: 18, overflow: 'hidden' } }}
>
<Tooltip title={`搜索范围:${searchScopeSummary}`}>
<Button size="small" icon={<DownOutlined />} style={{ width: 86 }}>
{searchScopes.includes('smart') ? '(智)' : `(${searchScopes.length})`}
<Button
size="small"
style={{
minWidth: 86,
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
gap: 6,
paddingInline: 10,
borderRadius: 10,
borderColor: darkMode ? 'rgba(255,255,255,0.12)' : 'rgba(16,24,40,0.12)',
background: darkMode ? bgMain : 'rgba(255,255,255,0.92)',
color: darkMode ? 'rgba(255,255,255,0.88)' : '#162033',
boxShadow: isSearchScopePopoverOpen
? (darkMode ? '0 0 0 1px rgba(255,214,102,0.22) inset' : '0 0 0 1px rgba(24,144,255,0.24) inset')
: 'none',
backdropFilter: darkMode ? 'blur(10px)' : 'none',
flexShrink: 0,
}}
>
<span style={{ display: 'inline-flex', alignItems: 'center', color: searchScopes.includes('smart') ? '#ffd666' : (darkMode ? 'rgba(255,255,255,0.72)' : 'rgba(22,32,51,0.72)') }}>
<FilterOutlined />
</span>
<span style={{ fontWeight: 700, color: darkMode ? 'rgba(255,255,255,0.88)' : '#162033' }}>范围</span>
<span
style={{
minWidth: 18,
height: 18,
padding: '0 5px',
borderRadius: 999,
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
fontSize: 11,
fontWeight: 700,
lineHeight: 1,
background: searchScopes.includes('smart')
? (darkMode ? 'rgba(255,214,102,0.16)' : 'rgba(24,144,255,0.12)')
: (darkMode ? 'rgba(118,169,250,0.18)' : 'rgba(24,144,255,0.12)'),
color: searchScopes.includes('smart')
? (darkMode ? '#ffd666' : '#1677ff')
: (darkMode ? '#91caff' : '#1677ff'),
}}
>
{searchScopes.includes('smart') ? '智' : searchScopes.length}
</span>
<span style={{ display: 'inline-flex', alignItems: 'center', color: darkMode ? 'rgba(255,255,255,0.48)' : 'rgba(22,32,51,0.4)', fontSize: 12 }}>
<DownOutlined />
</span>
</Button>
</Tooltip>
</Popover>
</Space.Compact>
</div>
</div>
{/* Toolbar */}
<div style={{ padding: '4px 8px', borderBottom: 'none', display: 'flex', flexWrap: 'wrap', gap: 4 }}>
<div style={{ padding: '4px 10px', borderBottom: 'none', display: 'flex', flexWrap: 'wrap', gap: 4 }}>
<Button
size="small"
icon={<FolderOpenOutlined />}
@@ -3373,8 +3573,14 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
)}
<Modal
title={renameViewTarget?.type === 'tag' ? "编辑标签" : "新建组"}
title={renderSidebarModalTitle(
<FolderOpenOutlined />,
renameViewTarget?.type === 'tag' ? "编辑标签" : "新建组",
renameViewTarget?.type === 'tag' ? "调整分组名称和包含的连接。" : "为连接树创建一个更清晰的分组视图。"
)}
open={isCreateTagModalOpen}
centered
styles={{ content: modalPanelStyle, header: { background: 'transparent', borderBottom: 'none', paddingBottom: 10 }, body: { paddingTop: 8 }, footer: { background: 'transparent', borderTop: 'none', paddingTop: 12 } }}
onOk={() => {
createTagForm.validateFields().then(values => {
if (renameViewTarget?.type === 'tag') {
@@ -3409,20 +3615,24 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
onCancel={() => setIsCreateTagModalOpen(false)}
>
<Form form={createTagForm} layout="vertical">
<Form.Item name="name" label="标签名称" rules={[{ required: true, message: '请输入标签名称' }]}>
<Input />
</Form.Item>
<Form.Item name="connectionIds" label="选择连接">
<Checkbox.Group style={{ width: '100%' }}>
<Space direction="vertical" style={{ width: '100%', maxHeight: '400px', overflowY: 'auto' }}>
{connections.map(conn => (
<Checkbox key={conn.id} value={conn.id}>
{conn.name} {conn.config.host ? `(${conn.config.host})` : ''}
</Checkbox>
))}
</Space>
</Checkbox.Group>
</Form.Item>
<div style={modalSectionStyle}>
<Form.Item name="name" label="标签名称" rules={[{ required: true, message: '请输入标签名称' }]}>
<Input placeholder="例如:线上环境 / 核心业务 / 临时调试" />
</Form.Item>
<Form.Item name="connectionIds" label="选择连接" style={{ marginBottom: 0 }}>
<Checkbox.Group style={{ width: '100%' }}>
<div style={modalScrollSectionStyle}>
<Space direction="vertical" style={{ width: '100%' }}>
{connections.map(conn => (
<Checkbox key={conn.id} value={conn.id}>
{conn.name} {conn.config.host ? `(${conn.config.host})` : ''}
</Checkbox>
))}
</Space>
</div>
</Checkbox.Group>
</Form.Item>
</div>
</Form>
</Modal>
@@ -3492,10 +3702,12 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</Modal>
<Modal
title="批量操作表"
title={renderSidebarModalTitle(<TableOutlined />, "批量操作表", "按对象批量导出结构、数据或完整备份。")}
open={isBatchModalOpen}
onCancel={() => setIsBatchModalOpen(false)}
width={680}
width={720}
centered
styles={{ content: modalPanelStyle, header: { background: 'transparent', borderBottom: 'none', paddingBottom: 10 }, body: { paddingTop: 8 }, footer: { background: 'transparent', borderTop: 'none', paddingTop: 12 } }}
footer={
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', gap: 8, flexWrap: 'wrap' }}>
<Button key="cancel" onClick={() => setIsBatchModalOpen(false)}>
@@ -3531,7 +3743,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</div>
}
>
<div style={{ marginBottom: 16 }}>
<div style={{ ...modalSectionStyle, marginBottom: 16 }}>
<div style={{ marginBottom: 8 }}>
<label style={{ display: 'block', marginBottom: 4, fontWeight: 500 }}></label>
<Select
@@ -3563,10 +3775,11 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
))}
</Select>
</div>
<div style={modalHintTextStyle}></div>
</div>
{batchTables.length > 0 && (
<div style={{ marginBottom: 16 }}>
<div style={{ ...modalSectionStyle, marginBottom: 16 }}>
<Space wrap size={8} style={{ width: '100%' }}>
<Input
allowClear
@@ -3604,7 +3817,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
{batchTables.length > 0 && (
<>
<div style={{ marginBottom: 16 }}>
<div style={{ ...modalSectionStyle, marginBottom: 16 }}>
<Space>
<Button
size="small"
@@ -3632,7 +3845,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</span>
</Space>
</div>
<div style={{ maxHeight: 400, overflow: 'auto', border: darkMode ? '1px solid #303030' : '1px solid #f0f0f0', borderRadius: 4, padding: 8 }}>
<div style={modalScrollSectionStyle}>
<Checkbox.Group
value={checkedTableKeys}
onChange={(values) => setCheckedTableKeys(values as string[])}
@@ -3682,10 +3895,12 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</Modal>
<Modal
title="批量操作库"
title={renderSidebarModalTitle(<DatabaseOutlined />, "批量操作库", "按数据库批量导出结构,或生成结构加数据的备份。")}
open={isBatchDbModalOpen}
onCancel={() => setIsBatchDbModalOpen(false)}
width={600}
width={640}
centered
styles={{ content: modalPanelStyle, header: { background: 'transparent', borderBottom: 'none', paddingBottom: 10 }, body: { paddingTop: 8 }, footer: { background: 'transparent', borderTop: 'none', paddingTop: 12 } }}
footer={[
<Button key="cancel" onClick={() => setIsBatchDbModalOpen(false)}>
@@ -3709,8 +3924,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</Button>
]}
>
<div style={{ marginBottom: 16 }}>
<label style={{ display: 'block', marginBottom: 4, fontWeight: 500 }}></label>
<div style={{ ...modalSectionStyle, marginBottom: 16 }}>
<label style={{ display: 'block', marginBottom: 4, fontWeight: 600, color: darkMode ? '#f5f7ff' : '#162033' }}></label>
<Select
value={selectedDbConnection}
onChange={handleDbConnectionChange}
@@ -3723,11 +3938,12 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</Select.Option>
))}
</Select>
<div style={{ ...modalHintTextStyle, marginTop: 10 }}></div>
</div>
{batchDatabases.length > 0 && (
<>
<div style={{ marginBottom: 16 }}>
<div style={{ ...modalSectionStyle, marginBottom: 16 }}>
<Space>
<Button
size="small"
@@ -3752,7 +3968,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</span>
</Space>
</div>
<div style={{ maxHeight: 400, overflow: 'auto', border: darkMode ? '1px solid #303030' : '1px solid #f0f0f0', borderRadius: 4, padding: 8 }}>
<div style={modalScrollSectionStyle}>
<Checkbox.Group
value={checkedDbKeys}
onChange={(values) => setCheckedDbKeys(values as string[])}

View File

@@ -144,12 +144,8 @@ const TabManager: React.FC = () => {
const items = useMemo(() => tabs.map((tab, index) => {
const connectionName = connections.find((conn) => conn.id === tab.connectionId)?.name;
const displayTitle = buildTabDisplayTitle(tab, connectionName);
const keepMountedWhenInactive = tab.type === 'query' || tab.type === 'redis-command';
const shouldRenderContent = activeTabId === tab.id || keepMountedWhenInactive;
let content;
if (!shouldRenderContent) {
content = null;
} else if (tab.type === 'query') {
if (tab.type === 'query') {
content = <QueryEditor tab={tab} />;
} else if (tab.type === 'table') {
content = <DataViewer tab={tab} />;
@@ -203,7 +199,7 @@ const TabManager: React.FC = () => {
key: tab.id,
children: content,
};
}), [tabs, connections, activeTabId, closeOtherTabs, closeTabsToLeft, closeTabsToRight, closeAllTabs]);
}), [tabs, connections, closeOtherTabs, closeTabsToLeft, closeTabsToRight, closeAllTabs]);
return (
<>
@@ -297,6 +293,7 @@ const TabManager: React.FC = () => {
<Tabs
className="main-tabs"
type="editable-card"
destroyInactiveTabPane={false}
onChange={(newActiveKey) => {
if (Date.now() < suppressClickUntilRef.current) return;
onChange(newActiveKey);

View File

@@ -2491,7 +2491,7 @@ END;`;
okText="应用"
cancelText="取消"
width={640}
destroyOnClose
destroyOnHidden
>
<Input.TextArea
value={commentEditorValue}

View File

@@ -0,0 +1,39 @@
import { calculateTableBodyBottomPadding } from './dataGridLayout';
const assertEqual = (actual: unknown, expected: unknown, message: string) => {
if (actual !== expected) {
throw new Error(`${message}\nactual: ${String(actual)}\nexpected: ${String(expected)}`);
}
};
assertEqual(
calculateTableBodyBottomPadding({
hasHorizontalOverflow: false,
floatingScrollbarHeight: 10,
floatingScrollbarGap: 6,
}),
0,
'无横向滚动条时不应增加底部间距'
);
assertEqual(
calculateTableBodyBottomPadding({
hasHorizontalOverflow: true,
floatingScrollbarHeight: 10,
floatingScrollbarGap: 6,
}),
28,
'默认悬浮滚动条应预留滚动条高度、间距和额外安全区'
);
assertEqual(
calculateTableBodyBottomPadding({
hasHorizontalOverflow: true,
floatingScrollbarHeight: 14,
floatingScrollbarGap: 4,
}),
30,
'较粗滚动条场景下应同步放大底部安全区'
);
console.log('dataGridLayout tests passed');

View File

@@ -0,0 +1,23 @@
export interface TableBodyBottomPaddingOptions {
hasHorizontalOverflow: boolean;
floatingScrollbarHeight: number;
floatingScrollbarGap: number;
}
const MIN_SCROLLBAR_CLEARANCE = 8;
const FLOATING_SCROLLBAR_VISUAL_EXTRA = 4;
export const calculateTableBodyBottomPadding = ({
hasHorizontalOverflow,
floatingScrollbarHeight,
floatingScrollbarGap,
}: TableBodyBottomPaddingOptions): number => {
if (!hasHorizontalOverflow) {
return 0;
}
const safeScrollbarHeight = Math.max(0, Math.ceil(floatingScrollbarHeight));
const safeScrollbarGap = Math.max(0, Math.ceil(floatingScrollbarGap));
return safeScrollbarHeight + FLOATING_SCROLLBAR_VISUAL_EXTRA + safeScrollbarGap + MIN_SCROLLBAR_CLEARANCE;
};
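
This helper is a pure function, so the padding can be derived directly inside a style computation. A minimal usage sketch (the function name, measured pixel values, and import path are assumptions, not part of this changeset):

import type { CSSProperties } from 'react';
import { calculateTableBodyBottomPadding } from './dataGridLayout';

const virtualTableBodyStyle = (hasHorizontalOverflow: boolean): CSSProperties => ({
  // Reserve space under the last row so the floating horizontal scrollbar
  // cannot cover it; collapses to 0 when nothing overflows horizontally.
  paddingBottom: calculateTableBodyBottomPadding({
    hasHorizontalOverflow,
    floatingScrollbarHeight: 10, // hypothetical measured scrollbar height (px)
    floatingScrollbarGap: 6, // hypothetical dock gap below the table (px)
  }),
});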

View File

@@ -0,0 +1,105 @@
import type { RedisKeyInfo } from '../types';
import {
applyRenamedRedisKeyState,
applyTreeNodeCheck,
buildCheckedTreeNodeState,
buildRedisKeyTree,
isGroupFullyChecked,
} from './redisViewerTree';
const assert = (condition: unknown, message: string) => {
if (!condition) {
throw new Error(message);
}
};
const assertEqual = (actual: unknown, expected: unknown, message: string) => {
const actualText = JSON.stringify(actual);
const expectedText = JSON.stringify(expected);
if (actualText !== expectedText) {
throw new Error(`${message}\nactual: ${actualText}\nexpected: ${expectedText}`);
}
};
const sampleKeys: RedisKeyInfo[] = [
{ key: 'app:user:1', type: 'string', ttl: -1 },
{ key: 'app:user:2', type: 'string', ttl: -1 },
{ key: 'app:order:1', type: 'hash', ttl: 120 },
{ key: 'misc', type: 'set', ttl: -1 },
];
const tree = buildRedisKeyTree(sampleKeys, true);
const appGroup = tree.treeData.find((node) => node.key === 'group:app');
const userGroup = appGroup?.children?.find((node) => node.key === 'group:app:user');
assert(appGroup, '应生成 group:app 节点');
assert(userGroup, '应生成 group:app:user 节点');
assertEqual(
appGroup?.descendantRawKeys,
['app:order:1', 'app:user:1', 'app:user:2'],
'app 分组应收集全部后代 key'
);
const selectedAfterGroupCheck = applyTreeNodeCheck([], appGroup!, true);
assertEqual(
selectedAfterGroupCheck,
['app:order:1', 'app:user:1', 'app:user:2'],
'勾选分组应递归选中全部后代 key'
);
const checkedState = buildCheckedTreeNodeState(selectedAfterGroupCheck, tree);
assertEqual(
checkedState.checked,
['key:app:order:1', 'group:app:order', 'key:app:user:1', 'key:app:user:2', 'group:app:user', 'group:app'],
'全部后代已选中时,父分组和叶子都应进入 checked'
);
assertEqual(checkedState.halfChecked, [], '全部后代已选中时不应有 halfChecked');
assertEqual(isGroupFullyChecked(appGroup!, selectedAfterGroupCheck), true, '全部后代已选中时,分组应视为 fully checked');
const selectedAfterGroupUncheck = applyTreeNodeCheck(selectedAfterGroupCheck, appGroup!, false);
assertEqual(selectedAfterGroupUncheck, [], '取消勾选分组应移除全部后代 key');
assertEqual(isGroupFullyChecked(appGroup!, selectedAfterGroupUncheck), false, '取消后分组不应再是 fully checked');
const partialState = buildCheckedTreeNodeState(['app:user:1'], tree);
assertEqual(
partialState.halfChecked,
['group:app:user', 'group:app'],
'仅部分后代选中时,相关分组应进入 halfChecked'
);
assertEqual(isGroupFullyChecked(appGroup!, ['app:user:1']), false, '部分选中时分组不应是 fully checked');
const renamedState = applyRenamedRedisKeyState(
{
keys: sampleKeys,
selectedKey: 'app:user:2',
selectedKeys: ['app:user:1', 'app:user:2', 'misc'],
},
'app:user:2',
'app:user:200'
);
assertEqual(
renamedState.keys.map((item) => item.key),
['app:user:1', 'app:user:200', 'app:order:1', 'misc'],
'重命名后 keys 列表应替换旧 key'
);
assertEqual(renamedState.selectedKey, 'app:user:200', '当前详情选中的 key 应切换为新 key');
assertEqual(
renamedState.selectedKeys,
['app:user:1', 'app:user:200', 'misc'],
'批量选中集合中的旧 key 应映射为新 key'
);
const unrelatedRenameState = applyRenamedRedisKeyState(
{
keys: sampleKeys,
selectedKey: 'misc',
selectedKeys: ['app:user:1'],
},
'app:order:1',
'app:order:9'
);
assertEqual(unrelatedRenameState.selectedKey, 'misc', '非当前详情 key 的重命名不应影响 selectedKey');
assertEqual(unrelatedRenameState.selectedKeys, ['app:user:1'], '非已勾选 key 的重命名不应污染选中集合');
console.log('redisViewerTree tests passed');

View File

@@ -0,0 +1,260 @@
import type { DataNode } from 'antd/es/tree';
import type { Key } from 'react';
import type { RedisKeyInfo } from '../types';
const KEY_GROUP_DELIMITER = ':';
const EMPTY_SEGMENT_LABEL = '(empty)';
type RedisKeyTreeLeaf = {
keyInfo: RedisKeyInfo;
label: string;
};
type RedisKeyTreeGroup = {
name: string;
path: string;
children: Map<string, RedisKeyTreeGroup>;
leaves: RedisKeyTreeLeaf[];
leafCount: number;
};
export type RedisTreeDataNode = DataNode & {
nodeType: 'group' | 'leaf';
groupName?: string;
groupLeafCount?: number;
leafLabel?: string;
rawKey?: string;
keyType?: string;
ttl?: number;
descendantRawKeys?: string[];
};
export type RedisKeyTreeResult = {
treeData: RedisTreeDataNode[];
groupKeys: string[];
};
export type RedisTreeCheckedState = {
checked: string[];
halfChecked: string[];
};
export type RenamedRedisKeyStateInput = {
keys: RedisKeyInfo[];
selectedKey: string | null;
selectedKeys: string[];
};
export type RenamedRedisKeyStateResult = {
keys: RedisKeyInfo[];
selectedKey: string | null;
selectedKeys: string[];
};
const normalizeKeySegment = (segment: string): string => {
return segment === '' ? EMPTY_SEGMENT_LABEL : segment;
};
const createTreeGroup = (name: string, path: string): RedisKeyTreeGroup => {
return { name, path, children: new Map(), leaves: [], leafCount: 0 };
};
const calculateGroupLeafCount = (group: RedisKeyTreeGroup): number => {
let count = group.leaves.length;
group.children.forEach((child) => {
count += calculateGroupLeafCount(child);
});
group.leafCount = count;
return count;
};
export const buildLeafNodeKey = (rawKey: string): string => `key:${rawKey}`;
export const parseRawKeyFromNodeKey = (nodeKey: Key): string | null => {
const keyText = String(nodeKey);
if (!keyText.startsWith('key:')) {
return null;
}
return keyText.slice(4);
};
export const buildRedisKeyTree = (
keys: RedisKeyInfo[],
sortLeafNodes: boolean
): RedisKeyTreeResult => {
const root = createTreeGroup('__root__', '__root__');
keys.forEach((keyInfo) => {
const segments = keyInfo.key.split(KEY_GROUP_DELIMITER);
if (segments.length <= 1) {
root.leaves.push({ keyInfo, label: keyInfo.key });
return;
}
const groupSegments = segments.slice(0, -1);
const leafLabel = normalizeKeySegment(segments[segments.length - 1]);
let current = root;
const pathParts: string[] = [];
groupSegments.forEach((segment) => {
const normalized = normalizeKeySegment(segment);
pathParts.push(normalized);
const groupPath = pathParts.join(KEY_GROUP_DELIMITER);
let child = current.children.get(normalized);
if (!child) {
child = createTreeGroup(normalized, groupPath);
current.children.set(normalized, child);
}
current = child;
});
current.leaves.push({ keyInfo, label: leafLabel });
});
calculateGroupLeafCount(root);
const groupKeys: string[] = [];
const toTreeNodes = (group: RedisKeyTreeGroup): RedisTreeDataNode[] => {
const childGroups = Array.from(group.children.values()).sort((a, b) => a.name.localeCompare(b.name));
const childLeaves = sortLeafNodes
? [...group.leaves].sort((a, b) => a.keyInfo.key.localeCompare(b.keyInfo.key))
: group.leaves;
const groupNodes: RedisTreeDataNode[] = childGroups.map((child) => {
const children = toTreeNodes(child);
const descendantRawKeys = children.flatMap((node) => {
if (node.nodeType === 'leaf') {
return node.rawKey ? [node.rawKey] : [];
}
return node.descendantRawKeys || [];
});
const groupNodeKey = `group:${child.path}`;
groupKeys.push(groupNodeKey);
return {
key: groupNodeKey,
title: child.name,
nodeType: 'group',
groupName: child.name,
groupLeafCount: child.leafCount,
selectable: false,
descendantRawKeys,
children,
};
});
const leafNodes: RedisTreeDataNode[] = childLeaves.map((leaf) => {
return {
key: buildLeafNodeKey(leaf.keyInfo.key),
isLeaf: true,
title: leaf.label,
nodeType: 'leaf',
leafLabel: leaf.label,
rawKey: leaf.keyInfo.key,
keyType: leaf.keyInfo.type,
ttl: leaf.keyInfo.ttl,
};
});
return [...groupNodes, ...leafNodes];
};
return {
treeData: toTreeNodes(root),
groupKeys,
};
};
export const applyTreeNodeCheck = (
selectedKeys: string[],
node: RedisTreeDataNode,
checked: boolean
): string[] => {
if (node.nodeType === 'leaf') {
if (!node.rawKey) {
return selectedKeys;
}
if (checked) {
return Array.from(new Set([...selectedKeys, node.rawKey]));
}
return selectedKeys.filter((item) => item !== node.rawKey);
}
const descendantRawKeys = node.descendantRawKeys || [];
if (descendantRawKeys.length === 0) {
return selectedKeys;
}
if (checked) {
return Array.from(new Set([...selectedKeys, ...descendantRawKeys]));
}
const removeSet = new Set(descendantRawKeys);
return selectedKeys.filter((item) => !removeSet.has(item));
};
const walkGroupStates = (
nodes: RedisTreeDataNode[],
selectedKeySet: Set<string>,
checked: string[],
halfChecked: string[]
) => {
nodes.forEach((node) => {
if (node.nodeType === 'leaf') {
if (node.rawKey && selectedKeySet.has(node.rawKey)) {
checked.push(String(node.key));
}
return;
}
walkGroupStates((node.children || []) as RedisTreeDataNode[], selectedKeySet, checked, halfChecked);
const descendantRawKeys = node.descendantRawKeys || [];
if (descendantRawKeys.length === 0) {
return;
}
const selectedCount = descendantRawKeys.filter((rawKey) => selectedKeySet.has(rawKey)).length;
if (selectedCount === descendantRawKeys.length) {
checked.push(String(node.key));
return;
}
if (selectedCount > 0) {
halfChecked.push(String(node.key));
}
});
};
export const buildCheckedTreeNodeState = (
selectedKeys: string[],
keyTree: RedisKeyTreeResult
): RedisTreeCheckedState => {
const selectedKeySet = new Set(selectedKeys);
const checked: string[] = [];
const halfChecked: string[] = [];
walkGroupStates(keyTree.treeData, selectedKeySet, checked, halfChecked);
return { checked, halfChecked };
};
export const isGroupFullyChecked = (
node: RedisTreeDataNode,
selectedKeys: string[]
): boolean => {
if (node.nodeType !== 'group') {
return false;
}
const descendantRawKeys = node.descendantRawKeys || [];
if (descendantRawKeys.length === 0) {
return false;
}
const selectedKeySet = new Set(selectedKeys);
return descendantRawKeys.every((rawKey) => selectedKeySet.has(rawKey));
};
export const applyRenamedRedisKeyState = (
state: RenamedRedisKeyStateInput,
oldKey: string,
newKey: string
): RenamedRedisKeyStateResult => {
return {
keys: state.keys.map((item) => (item.key === oldKey ? { ...item, key: newKey } : item)),
selectedKey: state.selectedKey === oldKey ? newKey : state.selectedKey,
selectedKeys: state.selectedKeys.map((item) => (item === oldKey ? newKey : item)),
};
};
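
Taken together these helpers form a small controlled-check loop: build the tree once from the flat key list, derive checked/halfChecked from the flat selection, and fold node toggles back into it. A minimal wiring sketch (the hook and handler names are assumptions):

import { useMemo, useState } from 'react';
import type { RedisKeyInfo } from '../types';
import {
  applyTreeNodeCheck,
  buildCheckedTreeNodeState,
  buildRedisKeyTree,
  type RedisTreeDataNode,
} from './redisViewerTree';

const useRedisKeyChecks = (keys: RedisKeyInfo[]) => {
  const [selectedKeys, setSelectedKeys] = useState<string[]>([]);
  const tree = useMemo(() => buildRedisKeyTree(keys, true), [keys]);
  // Shape expected by antd's <Tree checkable checkStrictly checkedKeys={...}>.
  const checkedState = useMemo(
    () => buildCheckedTreeNodeState(selectedKeys, tree),
    [selectedKeys, tree]
  );
  const onNodeCheck = (node: RedisTreeDataNode, checked: boolean) =>
    setSelectedKeys((prev) => applyTreeNodeCheck(prev, node, checked));
  return { tree, checkedState, onNodeCheck };
};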

View File

@@ -0,0 +1,50 @@
import { buildRedisWorkbenchTheme } from './redisViewerWorkbenchTheme';
const assertEqual = (actual: unknown, expected: unknown, message: string) => {
if (actual !== expected) {
throw new Error(`${message}\nactual: ${String(actual)}\nexpected: ${String(expected)}`);
}
};
const assertNotEqual = (actual: unknown, expected: unknown, message: string) => {
if (actual === expected) {
throw new Error(`${message}\nactual: ${String(actual)}\nnotExpected: ${String(expected)}`);
}
};
const assertMatch = (value: string, pattern: RegExp, message: string) => {
if (!pattern.test(value)) {
throw new Error(`${message}\nactual: ${value}\npattern: ${String(pattern)}`);
}
};
const darkTheme = buildRedisWorkbenchTheme({
darkMode: true,
opacity: 0.72,
blur: 14,
});
assertEqual(darkTheme.isDark, true, 'dark 主题标记应为 true');
assertMatch(darkTheme.panelBg, /^rgba\(/, 'dark 主题面板背景应为 rgba');
assertMatch(darkTheme.toolbarPrimaryBg, /^linear-gradient\(/, '工具栏主按钮应使用渐变背景');
assertNotEqual(darkTheme.actionDangerBg, darkTheme.actionSecondaryBg, '危险态按钮背景不应与普通按钮相同');
assertNotEqual(darkTheme.treeSelectedBg, darkTheme.treeHoverBg, '树节点选中态与悬浮态不应相同');
assertMatch(darkTheme.appBg, /rgba\(15, 15, 17,/, 'dark 背景应保持中性黑基底');
assertMatch(darkTheme.panelBg, /rgba\(24, 24, 28,/, 'dark 面板背景应保持中性黑灰');
assertMatch(darkTheme.panelBgStrong, /rgba\(31, 31, 36,/, 'dark 强面板背景应保持中性黑灰');
assertEqual(darkTheme.backdropFilter, 'blur(14px)', 'blur 参数应映射为 backdropFilter');
const lightTheme = buildRedisWorkbenchTheme({
darkMode: false,
opacity: 1,
blur: 0,
});
assertEqual(lightTheme.isDark, false, 'light 主题标记应为 false');
assertMatch(lightTheme.panelBg, /^rgba\(/, 'light 主题面板背景应为 rgba');
assertMatch(lightTheme.contentEmptyBg, /^linear-gradient\(/, 'light 空状态背景应为渐变');
assertNotEqual(lightTheme.textPrimary, lightTheme.textSecondary, '主次文本颜色应区分');
assertNotEqual(lightTheme.statusTagBg, lightTheme.statusTagMutedBg, '状态 tag 应区分普通与弱化样式');
assertEqual(lightTheme.backdropFilter, 'none', 'blur=0 时 backdropFilter 应为 none');
console.log('redisViewerWorkbenchTheme tests passed');

View File

@@ -0,0 +1,129 @@
type RedisWorkbenchThemeInput = {
darkMode: boolean;
opacity: number;
blur: number;
};
type RedisWorkbenchTheme = {
isDark: boolean;
appBg: string;
panelBg: string;
panelBgStrong: string;
panelBgSubtle: string;
panelBorder: string;
panelInset: string;
toolbarPrimaryBg: string;
contentEmptyBg: string;
textPrimary: string;
textSecondary: string;
textMuted: string;
accent: string;
accentSoft: string;
accentBorder: string;
actionSecondaryBg: string;
actionSecondaryBorder: string;
actionDangerBg: string;
actionDangerBorder: string;
actionDangerText: string;
statusTagBg: string;
statusTagBorder: string;
statusTagMutedBg: string;
statusTagMutedBorder: string;
treeHoverBg: string;
treeSelectedBg: string;
treeSelectedBorder: string;
divider: string;
shadow: string;
backdropFilter: string;
};
const clamp = (value: number, min: number, max: number) => Math.min(max, Math.max(min, value));
export const buildRedisWorkbenchTheme = ({
darkMode,
opacity,
blur,
}: RedisWorkbenchThemeInput): RedisWorkbenchTheme => {
const normalizedOpacity = clamp(opacity, 0.1, 1);
const normalizedBlur = Math.max(0, Math.round(blur));
const isTranslucent = normalizedOpacity < 0.999 || normalizedBlur > 0;
if (darkMode) {
const appTopAlpha = isTranslucent ? Math.max(0.08, Math.min(0.22, normalizedOpacity * 0.16)) : 0.92;
const appBottomAlpha = isTranslucent ? Math.max(0.12, Math.min(0.28, normalizedOpacity * 0.22)) : 0.96;
const panelAlpha = isTranslucent ? Math.max(0.06, Math.min(0.16, normalizedOpacity * 0.1)) : 0.34;
const strongAlpha = isTranslucent ? Math.max(0.1, Math.min(0.22, normalizedOpacity * 0.16)) : 0.42;
const subtleAlpha = isTranslucent ? Math.max(0.03, Math.min(0.08, normalizedOpacity * 0.05)) : 0.08;
return {
isDark: true,
appBg: `linear-gradient(180deg, rgba(15, 15, 17, ${appTopAlpha}) 0%, rgba(11, 11, 13, ${appBottomAlpha}) 100%)`,
panelBg: `rgba(24, 24, 28, ${panelAlpha})`,
panelBgStrong: `rgba(31, 31, 36, ${strongAlpha})`,
panelBgSubtle: `rgba(255, 255, 255, ${subtleAlpha})`,
panelBorder: `1px solid rgba(255, 255, 255, ${isTranslucent ? Math.max(0.12, Math.min(0.24, normalizedOpacity * 0.2)) : 0.08})`,
panelInset: `inset 0 1px 0 rgba(255,255,255,${isTranslucent ? Math.max(0.05, Math.min(0.12, normalizedOpacity * 0.1)) : 0.04})`,
toolbarPrimaryBg: `linear-gradient(135deg, rgba(246,196,83,0.22) 0%, rgba(246,196,83,0.12) 100%)`,
contentEmptyBg: `linear-gradient(180deg, rgba(255,255,255,0.03) 0%, rgba(255,255,255,0.015) 100%)`,
textPrimary: 'rgba(245, 247, 251, 0.96)',
textSecondary: 'rgba(218, 224, 235, 0.82)',
textMuted: 'rgba(168, 177, 194, 0.72)',
accent: '#f6c453',
accentSoft: 'rgba(246, 196, 83, 0.18)',
accentBorder: 'rgba(246, 196, 83, 0.3)',
actionSecondaryBg: 'rgba(255, 255, 255, 0.04)',
actionSecondaryBorder: 'rgba(255, 255, 255, 0.09)',
actionDangerBg: 'rgba(255, 95, 95, 0.12)',
actionDangerBorder: 'rgba(255, 95, 95, 0.28)',
actionDangerText: '#ff8f8f',
statusTagBg: 'rgba(25, 106, 255, 0.16)',
statusTagBorder: 'rgba(25, 106, 255, 0.28)',
statusTagMutedBg: 'rgba(255, 255, 255, 0.04)',
statusTagMutedBorder: 'rgba(255, 255, 255, 0.08)',
treeHoverBg: 'rgba(255, 255, 255, 0.045)',
treeSelectedBg: 'linear-gradient(90deg, rgba(246,196,83,0.2) 0%, rgba(246,196,83,0.08) 100%)',
treeSelectedBorder: 'rgba(246, 196, 83, 0.24)',
divider: 'rgba(255, 255, 255, 0.07)',
shadow: '0 20px 48px rgba(0, 0, 0, 0.26)',
backdropFilter: normalizedBlur > 0 ? `blur(${normalizedBlur}px)` : 'none',
};
}
const appTopAlpha = isTranslucent ? Math.max(0.16, Math.min(0.36, normalizedOpacity * 0.24)) : 0.98;
const appBottomAlpha = isTranslucent ? Math.max(0.22, Math.min(0.44, normalizedOpacity * 0.32)) : 0.96;
const panelAlpha = isTranslucent ? Math.max(0.18, Math.min(0.4, normalizedOpacity * 0.26)) : 0.94;
const strongAlpha = isTranslucent ? Math.max(0.26, Math.min(0.52, normalizedOpacity * 0.34)) : 0.98;
return {
isDark: false,
appBg: `linear-gradient(180deg, rgba(248, 250, 252, ${appTopAlpha}) 0%, rgba(242, 245, 248, ${appBottomAlpha}) 100%)`,
panelBg: `rgba(255, 255, 255, ${panelAlpha})`,
panelBgStrong: `rgba(255, 255, 255, ${strongAlpha})`,
panelBgSubtle: 'rgba(15, 23, 42, 0.03)',
panelBorder: `1px solid rgba(15, 23, 42, ${isTranslucent ? Math.max(0.1, Math.min(0.18, normalizedOpacity * 0.12)) : 0.08})`,
panelInset: `inset 0 1px 0 rgba(255,255,255,${isTranslucent ? 0.38 : 0.72})`,
toolbarPrimaryBg: 'linear-gradient(135deg, rgba(22,119,255,0.12) 0%, rgba(22,119,255,0.06) 100%)',
contentEmptyBg: 'linear-gradient(180deg, rgba(15,23,42,0.02) 0%, rgba(15,23,42,0.01) 100%)',
textPrimary: 'rgba(15, 23, 42, 0.92)',
textSecondary: 'rgba(51, 65, 85, 0.82)',
textMuted: 'rgba(100, 116, 139, 0.76)',
accent: '#1677ff',
accentSoft: 'rgba(22, 119, 255, 0.12)',
accentBorder: 'rgba(22, 119, 255, 0.22)',
actionSecondaryBg: 'rgba(255, 255, 255, 0.72)',
actionSecondaryBorder: 'rgba(15, 23, 42, 0.08)',
actionDangerBg: 'rgba(255, 77, 79, 0.08)',
actionDangerBorder: 'rgba(255, 77, 79, 0.24)',
actionDangerText: '#cf1322',
statusTagBg: 'rgba(22, 119, 255, 0.1)',
statusTagBorder: 'rgba(22, 119, 255, 0.16)',
statusTagMutedBg: 'rgba(15, 23, 42, 0.04)',
statusTagMutedBorder: 'rgba(15, 23, 42, 0.08)',
treeHoverBg: 'rgba(15, 23, 42, 0.035)',
treeSelectedBg: 'linear-gradient(90deg, rgba(22,119,255,0.12) 0%, rgba(22,119,255,0.05) 100%)',
treeSelectedBorder: 'rgba(22, 119, 255, 0.18)',
divider: 'rgba(15, 23, 42, 0.08)',
shadow: '0 22px 52px rgba(15, 23, 42, 0.08)',
backdropFilter: normalizedBlur > 0 ? `blur(${normalizedBlur}px)` : 'none',
};
};
export type { RedisWorkbenchTheme, RedisWorkbenchThemeInput };
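
On the consumer side the returned tokens map straight onto inline styles. A minimal sketch (the panel shape and the settings values are assumptions; in the app they come from the store):

import { buildRedisWorkbenchTheme } from './redisViewerWorkbenchTheme';

const theme = buildRedisWorkbenchTheme({ darkMode: true, opacity: 0.72, blur: 14 });

const panelStyle = {
  background: theme.panelBg,
  border: theme.panelBorder,
  boxShadow: `${theme.panelInset}, ${theme.shadow}`, // inset highlight plus drop shadow
  backdropFilter: theme.backdropFilter, // 'blur(14px)' here; 'none' when blur is 0
  color: theme.textPrimary,
};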

View File

@@ -9,6 +9,36 @@ import { loader } from '@monaco-editor/react'
import * as monaco from 'monaco-editor'
loader.config({ monaco })
if (typeof window !== 'undefined' && !(window as any).go) {
(window as any).go = {
app: {
App: {
CheckUpdate: async () => ({ success: false }),
DownloadUpdate: async () => ({ success: false }),
GetSavedConnections: async () => [],
SaveConnection: async () => null,
DeleteConnection: async () => null,
OpenConnection: async () => null,
CloseConnection: async () => null,
GetDatabases: async () => [],
GetTables: async () => [],
GetTableData: async () => ({ columns: [], rows: [], total: 0 }),
GetTableColumns: async () => [],
ExecuteQuery: async () => ({ columns: [], rows: [], time: 0 }),
GetSavedQueries: async () => [],
SaveQuery: async () => null,
DeleteQuery: async () => null,
GetAppInfo: async () => ({}),
CheckForUpdates: async () => ({ success: false }),
OpenDownloadedUpdateDirectory: async () => ({ success: false }),
InstallUpdateAndRestart: async () => ({ success: false }),
ImportConfigFile: async () => ({ success: false }),
ExportData: async () => ({ success: false }),
}
}
};
}
// Register the transparent theme globally to avoid redefining it in every Editor component's beforeMount
monaco.editor.defineTheme('transparent-dark', {
base: 'vs-dark', inherit: true, rules: [],

View File

@@ -10,7 +10,7 @@ import {
sanitizeShortcutOptions,
} from './utils/shortcuts';
const DEFAULT_APPEARANCE = { opacity: 1.0, blur: 0 };
const DEFAULT_APPEARANCE = { enabled: true, opacity: 1.0, blur: 0 };
const DEFAULT_UI_SCALE = 1.0;
const MIN_UI_SCALE = 0.8;
const MAX_UI_SCALE = 1.25;
@@ -25,7 +25,7 @@ const MAX_HOST_ENTRY_LENGTH = 512;
const MAX_HOST_ENTRIES = 64;
const DEFAULT_TIMEOUT_SECONDS = 30;
const MAX_TIMEOUT_SECONDS = 3600;
const PERSIST_VERSION = 5;
const PERSIST_VERSION = 6;
const DEFAULT_CONNECTION_TYPE = 'mysql';
const DEFAULT_GLOBAL_PROXY: GlobalProxyConfig = {
enabled: false,
@@ -231,6 +231,18 @@ const sanitizeConnectionConfig = (value: unknown): ConnectionConfig => {
user: toTrimmedString(proxyRaw.user),
password: toTrimmedString(proxyRaw.password),
};
const httpTunnelRaw = (raw.httpTunnel && typeof raw.httpTunnel === 'object')
? raw.httpTunnel as Record<string, unknown>
: ((raw.HTTPTunnel && typeof raw.HTTPTunnel === 'object') ? raw.HTTPTunnel as Record<string, unknown> : {});
const httpTunnel = {
host: toTrimmedString(httpTunnelRaw.host ?? raw.httpTunnelHost),
port: normalizePort(httpTunnelRaw.port ?? raw.httpTunnelPort, 8080),
user: toTrimmedString(httpTunnelRaw.user ?? raw.httpTunnelUser),
password: toTrimmedString(httpTunnelRaw.password ?? raw.httpTunnelPassword),
};
const supportsNetworkTunnel = type !== 'sqlite' && type !== 'duckdb';
const useHttpTunnel = supportsNetworkTunnel && (raw.useHttpTunnel === true || raw.UseHTTPTunnel === true);
const useProxy = supportsNetworkTunnel && !!raw.useProxy && !useHttpTunnel;
const safeConfig: ConnectionConfig & Record<string, unknown> = {
...raw,
@@ -247,8 +259,10 @@ const sanitizeConnectionConfig = (value: unknown): ConnectionConfig => {
sslKeyPath: sslCapable ? toTrimmedString(raw.sslKeyPath) : '',
useSSH: !!raw.useSSH,
ssh,
useProxy: !!raw.useProxy,
useProxy,
proxy,
useHttpTunnel,
httpTunnel,
uri: toTrimmedString(raw.uri).slice(0, MAX_URI_LENGTH),
hosts: sanitizeAddressList(raw.hosts),
topology: raw.topology === 'replica' ? 'replica' : (raw.topology === 'cluster' ? 'cluster' : 'single'),
@@ -391,7 +405,7 @@ interface AppState {
activeContext: { connectionId: string; dbName: string } | null;
savedQueries: SavedQuery[];
theme: 'light' | 'dark';
appearance: { opacity: number; blur: number };
appearance: { enabled: boolean; opacity: number; blur: number };
uiScale: number;
fontSize: number;
startupFullscreen: boolean;
@@ -402,6 +416,10 @@ interface AppState {
sqlLogs: SqlLog[];
tableAccessCount: Record<string, number>;
tableSortPreference: Record<string, 'name' | 'frequency'>;
tableColumnOrders: Record<string, string[]>;
enableColumnOrderMemory: boolean;
tableHiddenColumns: Record<string, string[]>;
enableHiddenColumnMemory: boolean;
addConnection: (conn: SavedConnection) => void;
updateConnection: (conn: SavedConnection) => void;
@@ -429,7 +447,7 @@ interface AppState {
deleteQuery: (id: string) => void;
setTheme: (theme: 'light' | 'dark') => void;
setAppearance: (appearance: Partial<{ opacity: number; blur: number }>) => void;
setAppearance: (appearance: Partial<{ enabled: boolean; opacity: number; blur: number }>) => void;
setUiScale: (scale: number) => void;
setFontSize: (size: number) => void;
setStartupFullscreen: (enabled: boolean) => void;
@@ -444,6 +462,13 @@ interface AppState {
recordTableAccess: (connectionId: string, dbName: string, tableName: string) => void;
setTableSortPreference: (connectionId: string, dbName: string, sortBy: 'name' | 'frequency') => void;
setTableColumnOrder: (connectionId: string, dbName: string, tableName: string, order: string[]) => void;
setEnableColumnOrderMemory: (enabled: boolean) => void;
clearTableColumnOrder: (connectionId: string, dbName: string, tableName: string) => void;
setTableHiddenColumns: (connectionId: string, dbName: string, tableName: string, hiddenColumns: string[]) => void;
setEnableHiddenColumnMemory: (enabled: boolean) => void;
clearTableHiddenColumns: (connectionId: string, dbName: string, tableName: string) => void;
}
const sanitizeSavedQueries = (value: unknown): SavedQuery[] => {
@@ -507,14 +532,37 @@ const sanitizeTableSortPreference = (value: unknown): Record<string, 'name' | 'f
return result;
};
const sanitizeTableColumnOrders = (value: unknown): Record<string, string[]> => {
const raw = (value && typeof value === 'object') ? value as Record<string, unknown> : {};
const result: Record<string, string[]> = {};
Object.entries(raw).forEach(([key, orderArray]) => {
if (Array.isArray(orderArray)) {
result[key] = orderArray.map(col => String(col));
}
});
return result;
};
const sanitizeTableHiddenColumns = (value: unknown): Record<string, string[]> => {
const raw = (value && typeof value === 'object') ? value as Record<string, unknown> : {};
const result: Record<string, string[]> = {};
Object.entries(raw).forEach(([key, hiddenArray]) => {
if (Array.isArray(hiddenArray)) {
result[key] = hiddenArray.map(col => String(col));
}
});
return result;
};
const sanitizeAppearance = (
appearance: Partial<{ opacity: number; blur: number }> | undefined,
appearance: Partial<{ enabled: boolean; opacity: number; blur: number }> | undefined,
version: number
): { opacity: number; blur: number } => {
): { enabled: boolean; opacity: number; blur: number } => {
if (!appearance || typeof appearance !== 'object') {
return { ...DEFAULT_APPEARANCE };
}
const nextAppearance = {
enabled: typeof appearance.enabled === 'boolean' ? appearance.enabled : DEFAULT_APPEARANCE.enabled,
opacity: typeof appearance.opacity === 'number' ? appearance.opacity : DEFAULT_APPEARANCE.opacity,
blur: typeof appearance.blur === 'number' ? appearance.blur : DEFAULT_APPEARANCE.blur,
};
@@ -583,6 +631,10 @@ export const useStore = create<AppState>()(
sqlLogs: [],
tableAccessCount: {},
tableSortPreference: {},
tableColumnOrders: {},
enableColumnOrderMemory: true,
tableHiddenColumns: {},
enableHiddenColumnMemory: true,
addConnection: (conn) => set((state) => ({ connections: [...state.connections, conn] })),
updateConnection: (conn) => set((state) => ({
@@ -785,6 +837,44 @@ export const useStore = create<AppState>()(
}
};
}),
setTableColumnOrder: (connectionId, dbName, tableName, order) => set((state) => {
const key = `${connectionId}-${dbName}-${tableName}`;
return {
tableColumnOrders: {
...state.tableColumnOrders,
[key]: order
}
};
}),
clearTableColumnOrder: (connectionId, dbName, tableName) => set((state) => {
const key = `${connectionId}-${dbName}-${tableName}`;
const newOrders = { ...state.tableColumnOrders };
delete newOrders[key];
return { tableColumnOrders: newOrders };
}),
setEnableColumnOrderMemory: (enabled) => set({ enableColumnOrderMemory: !!enabled }),
setTableHiddenColumns: (connectionId, dbName, tableName, hiddenColumns) => set((state) => {
const key = `${connectionId}-${dbName}-${tableName}`;
return {
tableHiddenColumns: {
...state.tableHiddenColumns,
[key]: hiddenColumns
}
};
}),
clearTableHiddenColumns: (connectionId, dbName, tableName) => set((state) => {
const key = `${connectionId}-${dbName}-${tableName}`;
const newHidden = { ...state.tableHiddenColumns };
delete newHidden[key];
return { tableHiddenColumns: newHidden };
}),
setEnableHiddenColumnMemory: (enabled) => set({ enableHiddenColumnMemory: !!enabled }),
}),
{
name: 'lite-db-storage', // name of the item in the storage (must be unique)
@@ -810,6 +900,13 @@ export const useStore = create<AppState>()(
nextState.shortcutOptions = sanitizeShortcutOptions(state.shortcutOptions);
nextState.tableAccessCount = sanitizeTableAccessCount(state.tableAccessCount);
nextState.tableSortPreference = sanitizeTableSortPreference(state.tableSortPreference);
// The newly added column-order memory state needs no version-specific migration; basic type guarding is enough
const safeOrders = sanitizeTableColumnOrders(state.tableColumnOrders);
nextState.tableColumnOrders = safeOrders;
nextState.enableColumnOrderMemory = state.enableColumnOrderMemory !== false;
const safeHidden = sanitizeTableHiddenColumns(state.tableHiddenColumns);
nextState.tableHiddenColumns = safeHidden;
nextState.enableHiddenColumnMemory = state.enableHiddenColumnMemory !== false;
return nextState as AppState;
},
merge: (persistedState, currentState) => {
@@ -826,11 +923,16 @@ export const useStore = create<AppState>()(
fontSize: sanitizeFontSize(state.fontSize),
startupFullscreen: sanitizeStartupFullscreen(state.startupFullscreen),
globalProxy: sanitizeGlobalProxy(state.globalProxy),
tableSortPreference: sanitizeTableSortPreference(state.tableSortPreference),
tableColumnOrders: sanitizeTableColumnOrders(state.tableColumnOrders),
enableColumnOrderMemory: state.enableColumnOrderMemory !== false,
tableHiddenColumns: sanitizeTableHiddenColumns(state.tableHiddenColumns),
enableHiddenColumnMemory: state.enableHiddenColumnMemory !== false,
sqlFormatOptions: sanitizeSqlFormatOptions(state.sqlFormatOptions),
queryOptions: sanitizeQueryOptions(state.queryOptions),
shortcutOptions: sanitizeShortcutOptions(state.shortcutOptions),
tableAccessCount: sanitizeTableAccessCount(state.tableAccessCount),
tableSortPreference: sanitizeTableSortPreference(state.tableSortPreference),
};
},
partialize: (state) => ({
@@ -847,7 +949,11 @@ export const useStore = create<AppState>()(
queryOptions: state.queryOptions,
shortcutOptions: state.shortcutOptions,
tableAccessCount: state.tableAccessCount,
tableSortPreference: state.tableSortPreference
tableSortPreference: state.tableSortPreference,
tableColumnOrders: state.tableColumnOrders,
enableColumnOrderMemory: state.enableColumnOrderMemory,
tableHiddenColumns: state.tableHiddenColumns,
enableHiddenColumnMemory: state.enableHiddenColumnMemory
}), // Don't persist logs
}
)

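Each per-table memory entry is keyed by the flat string connectionId-dbName-tableName, matching the key built in setTableColumnOrder above. A read/write sketch (the module path and concrete ids are assumptions):

import { useStore } from './store'; // assumed module path

const savedOrder = useStore.getState().tableColumnOrders['conn-1-appdb-users'] ?? [];
useStore.getState().setTableColumnOrder('conn-1', 'appdb', 'users', ['id', 'name', 'email']);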
View File

@@ -14,6 +14,13 @@ export interface ProxyConfig {
password?: string;
}
export interface HTTPTunnelConfig {
host: string;
port: number;
user?: string;
password?: string;
}
export interface ConnectionConfig {
type: string;
host: string;
@@ -30,6 +37,8 @@ export interface ConnectionConfig {
ssh?: SSHConfig;
useProxy?: boolean;
proxy?: ProxyConfig;
useHttpTunnel?: boolean;
httpTunnel?: HTTPTunnelConfig;
driver?: string;
dsn?: string;
timeout?: number;

View File

@@ -10,6 +10,22 @@ const WINDOWS_BLUR_FACTOR = 1.00;
const clamp = (value: number, min: number, max: number) => Math.min(max, Math.max(min, value));
export interface AppearanceSettingsLike {
enabled?: boolean;
opacity?: number;
blur?: number;
}
export const resolveAppearanceValues = (appearance: AppearanceSettingsLike | undefined): { opacity: number; blur: number } => {
if (!appearance || appearance.enabled !== false) {
return {
opacity: appearance?.opacity ?? DEFAULT_OPACITY,
blur: appearance?.blur ?? 0,
};
}
return { opacity: DEFAULT_OPACITY, blur: 0 };
};
export const isMacLikePlatform = (): boolean => {
if (typeof navigator === 'undefined') {
return false;

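The effect of the new enabled flag: when it is explicitly false, stored opacity and blur are discarded in favor of defaults; otherwise each field falls back individually. A few hand-derived calls (the module path and DEFAULT_OPACITY = 1 are assumptions):

import { resolveAppearanceValues } from './appearance'; // assumed module path

// enabled === false overrides whatever was stored:
console.assert(resolveAppearanceValues({ enabled: false, opacity: 0.7, blur: 12 }).opacity === 1);
// a missing enabled flag is treated as enabled:
console.assert(resolveAppearanceValues({ opacity: 0.7 }).opacity === 0.7);
// undefined settings fall back to defaults entirely:
console.assert(resolveAppearanceValues(undefined).blur === 0);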
View File

@@ -0,0 +1,27 @@
import { buildOverlayWorkbenchTheme } from './overlayWorkbenchTheme';
const assertEqual = (actual: unknown, expected: unknown, message: string) => {
if (actual !== expected) {
throw new Error(`${message}\nactual: ${String(actual)}\nexpected: ${String(expected)}`);
}
};
const assertMatch = (value: string, pattern: RegExp, message: string) => {
if (!pattern.test(value)) {
throw new Error(`${message}\nactual: ${value}\npattern: ${String(pattern)}`);
}
};
const darkTheme = buildOverlayWorkbenchTheme(true);
assertEqual(darkTheme.isDark, true, 'dark 主题标记应为 true');
assertMatch(darkTheme.shellBg, /rgba\(15, 15, 17,/, 'dark 弹层背景应保持中性黑');
assertMatch(darkTheme.sectionBg, /rgba\(255,?\s*255,?\s*255,?\s*0\.03\)/, 'dark section 背景透明度应匹配');
assertEqual(darkTheme.iconColor, '#ffd666', 'dark 图标色应为金色强调');
const lightTheme = buildOverlayWorkbenchTheme(false);
assertEqual(lightTheme.isDark, false, 'light 主题标记应为 false');
assertMatch(lightTheme.shellBg, /rgba\(255,255,255,0\.98\)/, 'light 弹层背景透明度应匹配');
assertMatch(lightTheme.sectionBg, /rgba\(255,?\s*255,?\s*255,?\s*0\.84\)/, 'light section 背景透明度应匹配');
assertEqual(lightTheme.iconColor, '#1677ff', 'light 图标色应为蓝色强调');
console.log('overlayWorkbenchTheme tests passed');

View File

@@ -0,0 +1,59 @@
type OverlayWorkbenchTheme = {
isDark: boolean;
shellBg: string;
shellBorder: string;
shellShadow: string;
shellBackdropFilter: string;
sectionBg: string;
sectionBorder: string;
mutedText: string;
titleText: string;
iconBg: string;
iconColor: string;
hoverBg: string;
selectedBg: string;
selectedText: string;
divider: string;
};
export const buildOverlayWorkbenchTheme = (darkMode: boolean): OverlayWorkbenchTheme => {
if (darkMode) {
return {
isDark: true,
shellBg: 'linear-gradient(180deg, rgba(15, 15, 17, 0.96) 0%, rgba(11, 11, 13, 0.98) 100%)',
shellBorder: '1px solid rgba(255,255,255,0.08)',
shellShadow: '0 24px 56px rgba(0,0,0,0.34)',
shellBackdropFilter: 'blur(18px)',
sectionBg: 'rgba(255,255,255,0.03)',
sectionBorder: '1px solid rgba(255,255,255,0.08)',
mutedText: 'rgba(255,255,255,0.5)',
titleText: '#f5f7ff',
iconBg: 'rgba(255,214,102,0.12)',
iconColor: '#ffd666',
hoverBg: 'rgba(255,214,102,0.10)',
selectedBg: 'rgba(255,214,102,0.14)',
selectedText: '#ffd666',
divider: 'rgba(255,255,255,0.08)',
};
}
return {
isDark: false,
shellBg: 'linear-gradient(180deg, rgba(255,255,255,0.98) 0%, rgba(246,248,252,0.98) 100%)',
shellBorder: '1px solid rgba(16,24,40,0.08)',
shellShadow: '0 18px 42px rgba(15,23,42,0.12)',
shellBackdropFilter: 'none',
sectionBg: 'rgba(255,255,255,0.84)',
sectionBorder: '1px solid rgba(16,24,40,0.08)',
mutedText: 'rgba(16,24,40,0.55)',
titleText: '#162033',
iconBg: 'rgba(24,144,255,0.1)',
iconColor: '#1677ff',
hoverBg: 'rgba(24,144,255,0.08)',
selectedBg: 'rgba(24,144,255,0.12)',
selectedText: '#1677ff',
divider: 'rgba(16,24,40,0.08)',
};
};
export type { OverlayWorkbenchTheme };
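
Because the sidebar modals above already feed a panel style into antd Modal's styles prop, a theme like this plugs into the same slot. A minimal sketch (the assembled style object is an assumption):

import { buildOverlayWorkbenchTheme } from './overlayWorkbenchTheme';

const theme = buildOverlayWorkbenchTheme(true);
const overlayPanelStyle = {
  background: theme.shellBg,
  border: theme.shellBorder,
  boxShadow: theme.shellShadow,
  backdropFilter: theme.shellBackdropFilter, // 'blur(18px)' in dark mode, 'none' in light
};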

View File

@@ -50,6 +50,11 @@ export const quoteIdentPart = (dbType: string, ident: string) => {
return raw;
}
// SQL Server uses [bracket] identifiers
if (dbTypeLower === 'sqlserver' || dbTypeLower === 'mssql') {
return `[${raw.replace(/]/g, ']]')}]`;
}
// Other databases default to double-quoted identifiers
return `"${raw.replace(/"/g, '""')}"`;
};
@@ -134,6 +139,42 @@ export const buildOrderBySQL = (
return '';
};
export const buildPaginatedSelectSQL = (
dbType: string,
baseSql: string,
orderBySQL: string,
limit: number,
offset: number,
) => {
const normalizedType = String(dbType || '').trim().toLowerCase();
const safeLimit = Math.max(0, Math.floor(Number(limit) || 0));
const safeOffset = Math.max(0, Math.floor(Number(offset) || 0));
const base = String(baseSql || '').trim();
const orderBy = String(orderBySQL || '');
if (!base || safeLimit <= 0) {
return `${base}${orderBy}`;
}
switch (normalizedType) {
case 'oracle': {
const orderedSql = `${base}${orderBy}`;
const upperBound = safeOffset + safeLimit;
if (safeOffset <= 0) {
return `SELECT * FROM (${orderedSql}) WHERE ROWNUM <= ${upperBound}`;
}
return `SELECT * FROM (SELECT "__gonavi_page__".*, ROWNUM "__gonavi_rn__" FROM (${orderedSql}) "__gonavi_page__" WHERE ROWNUM <= ${upperBound}) WHERE "__gonavi_rn__" > ${safeOffset}`;
}
case 'sqlserver':
case 'mssql': {
const effectiveOrderBy = orderBy.trim() ? orderBy : ' ORDER BY (SELECT NULL)';
return `${base}${effectiveOrderBy} OFFSET ${safeOffset} ROWS FETCH NEXT ${safeLimit} ROWS ONLY`;
}
default:
return `${base}${orderBy} LIMIT ${safeLimit} OFFSET ${safeOffset}`;
}
};
export const parseListValues = (val: string) => {
const raw = (val || '').trim();
if (!raw) return [];

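Concretely, buildPaginatedSelectSQL emits per-dialect pagination for the same base query; the outputs below are derived by hand from the branches above (only the import path is an assumption):

import { buildPaginatedSelectSQL } from './sqlUtils'; // assumed module path

const base = 'SELECT * FROM "users"';
const order = ' ORDER BY "id" ASC';

// Default branch (MySQL, PostgreSQL, ...):
// SELECT * FROM "users" ORDER BY "id" ASC LIMIT 50 OFFSET 100
buildPaginatedSelectSQL('mysql', base, order, 50, 100);

// SQL Server: OFFSET/FETCH requires an ORDER BY, so one is synthesized when missing:
// SELECT * FROM "users" ORDER BY (SELECT NULL) OFFSET 100 ROWS FETCH NEXT 50 ROWS ONLY
buildPaginatedSelectSQL('sqlserver', base, '', 50, 100);

// Oracle: nested ROWNUM window with upper bound offset + limit = 150:
// SELECT * FROM (SELECT "__gonavi_page__".*, ROWNUM "__gonavi_rn__" FROM (SELECT * FROM "users" ORDER BY "id" ASC) "__gonavi_page__" WHERE ROWNUM <= 150) WHERE "__gonavi_rn__" > 100
buildPaginatedSelectSQL('oracle', base, order, 50, 100);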
View File

@@ -131,6 +131,8 @@ export function RedisGetServerInfo(arg1:connection.ConnectionConfig):Promise<con
export function RedisGetValue(arg1:connection.ConnectionConfig,arg2:string):Promise<connection.QueryResult>;
export function RedisKeyExists(arg1:connection.ConnectionConfig,arg2:string):Promise<connection.QueryResult>;
export function RedisListPush(arg1:connection.ConnectionConfig,arg2:string,arg3:Array<string>):Promise<connection.QueryResult>;
export function RedisListSet(arg1:connection.ConnectionConfig,arg2:string,arg3:number,arg4:string):Promise<connection.QueryResult>;

View File

@@ -254,6 +254,10 @@ export function RedisGetValue(arg1, arg2) {
return window['go']['app']['App']['RedisGetValue'](arg1, arg2);
}
export function RedisKeyExists(arg1, arg2) {
return window['go']['app']['App']['RedisKeyExists'](arg1, arg2);
}
export function RedisListPush(arg1, arg2, arg3) {
return window['go']['app']['App']['RedisListPush'](arg1, arg2, arg3);
}

View File

@@ -48,6 +48,24 @@ export namespace connection {
return a;
}
}
export class HTTPTunnelConfig {
host: string;
port: number;
user?: string;
password?: string;
static createFrom(source: any = {}) {
return new HTTPTunnelConfig(source);
}
constructor(source: any = {}) {
if ('string' === typeof source) source = JSON.parse(source);
this.host = source["host"];
this.port = source["port"];
this.user = source["user"];
this.password = source["password"];
}
}
export class ProxyConfig {
type: string;
host: string;
@@ -104,6 +122,8 @@ export namespace connection {
ssh: SSHConfig;
useProxy?: boolean;
proxy?: ProxyConfig;
useHttpTunnel?: boolean;
httpTunnel?: HTTPTunnelConfig;
driver?: string;
dsn?: string;
timeout?: number;
@@ -142,6 +162,8 @@ export namespace connection {
this.ssh = this.convertValues(source["ssh"], SSHConfig);
this.useProxy = source["useProxy"];
this.proxy = this.convertValues(source["proxy"], ProxyConfig);
this.useHttpTunnel = source["useHttpTunnel"];
this.httpTunnel = this.convertValues(source["httpTunnel"], HTTPTunnelConfig);
this.driver = source["driver"];
this.dsn = source["dsn"];
this.timeout = source["timeout"];
@@ -179,6 +201,7 @@ export namespace connection {
}
}
export class QueryResult {
success: boolean;
message: string;
@@ -254,6 +277,9 @@ export namespace sync {
mode: string;
jobId?: string;
autoAddColumns?: boolean;
targetTableStrategy?: string;
createIndexes?: boolean;
mongoCollectionName?: string;
tableOptions?: Record<string, TableOptions>;
static createFrom(source: any = {}) {
@@ -269,6 +295,9 @@ export namespace sync {
this.mode = source["mode"];
this.jobId = source["jobId"];
this.autoAddColumns = source["autoAddColumns"];
this.targetTableStrategy = source["targetTableStrategy"];
this.createIndexes = source["createIndexes"];
this.mongoCollectionName = source["mongoCollectionName"];
this.tableOptions = this.convertValues(source["tableOptions"], TableOptions, true);
}

View File

@@ -8,6 +8,8 @@ import (
"errors"
"fmt"
"net"
"net/url"
"os"
"strings"
"sync"
"time"
@@ -96,6 +98,9 @@ func normalizeCacheKeyConfig(config connection.ConnectionConfig) connection.Conn
if !normalized.UseProxy {
normalized.Proxy = connection.ProxyConfig{}
}
if !normalized.UseHTTPTunnel {
normalized.HTTPTunnel = connection.HTTPTunnelConfig{}
}
if isFileDatabaseType(normalized.Type) {
dsn := strings.TrimSpace(normalized.Host)
@@ -124,6 +129,8 @@ func normalizeCacheKeyConfig(config connection.ConnectionConfig) connection.Conn
normalized.MongoAuthMechanism = ""
normalized.MongoReplicaUser = ""
normalized.MongoReplicaPassword = ""
normalized.UseHTTPTunnel = false
normalized.HTTPTunnel = connection.HTTPTunnelConfig{}
}
return normalized
@@ -213,6 +220,7 @@ func wrapConnectError(config connection.ConnectionConfig, err error) error {
if err == nil {
return nil
}
err = sanitizeMongoConnectErrorLabel(config, err)
var netErr net.Error
if errors.Is(err, context.DeadlineExceeded) || (errors.As(err, &netErr) && netErr.Timeout()) {
@@ -226,6 +234,73 @@ func wrapConnectError(config connection.ConnectionConfig, err error) error {
return withLogHint{err: err, logPath: logger.Path()}
}
type errorMessageOverride struct {
message string
cause error
}
func (e errorMessageOverride) Error() string {
return e.message
}
func (e errorMessageOverride) Unwrap() error {
return e.cause
}
func sanitizeMongoConnectErrorLabel(config connection.ConnectionConfig, err error) error {
if err == nil {
return nil
}
if strings.ToLower(strings.TrimSpace(config.Type)) != "mongodb" {
return err
}
if mongoConnectUsesTLS(config) {
return err
}
original := err.Error()
rewritten := strings.ReplaceAll(original, "SSL 主库凭据", "主库凭据")
rewritten = strings.ReplaceAll(rewritten, "SSL 从库凭据", "从库凭据")
if rewritten == original {
return err
}
return errorMessageOverride{
message: rewritten,
cause: err,
}
}
func mongoConnectUsesTLS(config connection.ConnectionConfig) bool {
if config.UseSSL {
return true
}
uriText := strings.TrimSpace(config.URI)
if uriText == "" {
return false
}
parsed, err := url.Parse(uriText)
if err != nil {
return false
}
for _, key := range []string{"tls", "ssl"} {
if enabled, known := parseMongoBool(parsed.Query().Get(key)); known {
return enabled
}
}
return strings.EqualFold(strings.TrimSpace(parsed.Scheme), "mongodb+srv")
}
func parseMongoBool(raw string) (enabled bool, known bool) {
value := strings.ToLower(strings.TrimSpace(raw))
switch value {
case "1", "true", "t", "yes", "y", "on", "required":
return true, true
case "0", "false", "f", "no", "n", "off", "disable", "disabled":
return false, true
default:
return false, false
}
}
type withLogHint struct {
err error
logPath string
@@ -233,10 +308,15 @@ type withLogHint struct {
func (e withLogHint) Error() string {
message := normalizeErrorMessage(e.err)
if strings.TrimSpace(e.logPath) == "" {
path := strings.TrimSpace(e.logPath)
if path == "" {
return message
}
return fmt.Sprintf("%s,详细日志:%s", message, e.logPath)
info, statErr := os.Stat(path)
if statErr != nil || info.IsDir() || info.Size() <= 0 {
return message
}
return fmt.Sprintf("%s,详细日志:%s", message, path)
}
func (e withLogHint) Unwrap() error {
@@ -303,6 +383,12 @@ func formatConnSummary(config connection.ConnectionConfig) string {
b.WriteString(" 代理认证=已配置")
}
}
if config.UseHTTPTunnel {
b.WriteString(fmt.Sprintf(" HTTP隧道=%s:%d", strings.TrimSpace(config.HTTPTunnel.Host), config.HTTPTunnel.Port))
if strings.TrimSpace(config.HTTPTunnel.User) != "" {
b.WriteString(" HTTP隧道认证=已配置")
}
}
if config.Type == "custom" {
driver := strings.TrimSpace(config.Driver)

View File

@@ -0,0 +1,84 @@
package app
import (
"errors"
"os"
"path/filepath"
"strings"
"testing"
"GoNavi-Wails/internal/connection"
)
func TestWrapConnectError_MongoNoSSL_RemovesMisleadingSSLLabel(t *testing.T) {
config := connection.ConnectionConfig{
Type: "mongodb",
UseSSL: false,
}
sourceErr := errors.New("MongoDB 连接失败:SSL 主库凭据验证失败: mock error")
wrapped := wrapConnectError(config, sourceErr)
text := wrapped.Error()
if strings.Contains(text, "SSL 主库凭据") {
t.Fatalf("expected ssl label to be removed when TLS disabled, got: %s", text)
}
if !strings.Contains(text, "主库凭据验证失败") {
t.Fatalf("expected auth label to remain, got: %s", text)
}
}
func TestWrapConnectError_MongoURIForcesTLS_KeepsSSLLabel(t *testing.T) {
config := connection.ConnectionConfig{
Type: "mongodb",
UseSSL: false,
URI: "mongodb://user:pass@127.0.0.1:27017/admin?tls=true",
}
sourceErr := errors.New("MongoDB 连接失败:SSL 主库凭据验证失败: mock error")
wrapped := wrapConnectError(config, sourceErr)
text := wrapped.Error()
if !strings.Contains(text, "SSL 主库凭据") {
t.Fatalf("expected ssl label to remain when URI enables TLS, got: %s", text)
}
}
func TestWrapConnectError_MongoSRVDefaultTLS_KeepsSSLLabel(t *testing.T) {
config := connection.ConnectionConfig{
Type: "mongodb",
UseSSL: false,
URI: "mongodb+srv://user:pass@cluster0.example.com/admin",
}
sourceErr := errors.New("MongoDB 连接失败:SSL 主库凭据验证失败: mock error")
wrapped := wrapConnectError(config, sourceErr)
text := wrapped.Error()
if !strings.Contains(text, "SSL 主库凭据") {
t.Fatalf("expected ssl label to remain for mongodb+srv default TLS, got: %s", text)
}
}
func TestWithLogHintError_OmitEmptyLogPath(t *testing.T) {
dir := t.TempDir()
logPath := filepath.Join(dir, "gonavi.log")
if err := os.WriteFile(logPath, nil, 0o644); err != nil {
t.Fatalf("write empty log failed: %v", err)
}
err := withLogHint{err: errors.New("连接失败"), logPath: logPath}
text := err.Error()
if strings.Contains(text, "详细日志:") {
t.Fatalf("expected no log hint for empty file, got: %s", text)
}
}
func TestWithLogHintError_IncludeNonEmptyLogPath(t *testing.T) {
dir := t.TempDir()
logPath := filepath.Join(dir, "gonavi.log")
if err := os.WriteFile(logPath, []byte("log entry\n"), 0o644); err != nil {
t.Fatalf("write log failed: %v", err)
}
err := withLogHint{err: errors.New("连接失败"), logPath: logPath}
text := err.Error()
if !strings.Contains(text, "详细日志:"+logPath) {
t.Fatalf("expected log hint with path, got: %s", text)
}
}

View File

@@ -1,6 +1,7 @@
package app
import (
"strconv"
"strings"
"GoNavi-Wails/internal/connection"
@@ -20,6 +21,11 @@ func normalizeRunConfig(config connection.ConnectionConfig, dbName string) conne
case "dameng":
// Dameng reuses the schema parameter to keep existing behavior; dbName refers to the schema.
runConfig.Database = name
case "redis":
runConfig.Database = name
if idx, err := strconv.Atoi(name); err == nil && idx >= 0 && idx <= 15 {
runConfig.RedisDB = idx
}
default:
// oracle: dbName refers to the schema/owner and must not overwrite config.Database (the service name)
// sqlite: no Database needs to be set

View File

@@ -12,8 +12,35 @@ import (
func resolveDialConfigWithProxy(raw connection.ConnectionConfig) (connection.ConnectionConfig, error) {
config := raw
if config.UseHTTPTunnel {
if config.UseProxy {
return connection.ConnectionConfig{}, fmt.Errorf("HTTP 隧道与普通代理不能同时启用")
}
tunnelHost := strings.TrimSpace(config.HTTPTunnel.Host)
if tunnelHost == "" {
return connection.ConnectionConfig{}, fmt.Errorf("HTTP 隧道主机不能为空")
}
tunnelPort := config.HTTPTunnel.Port
if tunnelPort <= 0 {
tunnelPort = 8080
}
if tunnelPort > 65535 {
return connection.ConnectionConfig{}, fmt.Errorf("HTTP 隧道端口无效:%d", config.HTTPTunnel.Port)
}
config.UseProxy = true
config.Proxy = connection.ProxyConfig{
Type: "http",
Host: tunnelHost,
Port: tunnelPort,
User: strings.TrimSpace(config.HTTPTunnel.User),
Password: config.HTTPTunnel.Password,
}
}
if !config.UseProxy {
config.Proxy = connection.ProxyConfig{}
config.UseHTTPTunnel = false
config.HTTPTunnel = connection.HTTPTunnelConfig{}
return config, nil
}
@@ -22,6 +49,8 @@ func resolveDialConfigWithProxy(raw connection.ConnectionConfig) (connection.Con
return connection.ConnectionConfig{}, err
}
config.Proxy = normalizedProxy
config.UseHTTPTunnel = false
config.HTTPTunnel = connection.HTTPTunnelConfig{}
if config.UseSSH {
sshPort := config.SSH.Port
@@ -44,8 +73,8 @@ func resolveDialConfigWithProxy(raw connection.ConnectionConfig) (connection.Con
// File-based / custom DSN types do not use the standard host:port form and are not rewritten at this layer.
return config, nil
}
if normalizedType == "mongodb" && config.MongoSRV {
// Mongo SRV proxying is handled by the driver-side Dialer to avoid breaking DNS SRV topology discovery
if normalizedType == "mongodb" {
// MongoDB proxying is handled uniformly by the driver-side Dialer; keep the original target address rather than rewriting it to a local forward address
return config, nil
}

View File

@@ -0,0 +1,64 @@
package app
import (
"reflect"
"testing"
"GoNavi-Wails/internal/connection"
)
func TestResolveDialConfigWithProxy_MongoKeepsTargetAddress(t *testing.T) {
hosts := []string{"10.20.30.40:27017", "10.20.30.41:27017"}
raw := connection.ConnectionConfig{
Type: "mongodb",
Host: "10.20.30.40",
Port: 27017,
UseProxy: true,
Proxy: connection.ProxyConfig{
Type: "socks5",
Host: "127.0.0.1",
Port: 1080,
},
Hosts: hosts,
}
got, err := resolveDialConfigWithProxy(raw)
if err != nil {
t.Fatalf("resolveDialConfigWithProxy returned error: %v", err)
}
if got.Host != raw.Host || got.Port != raw.Port {
t.Fatalf("mongo target address should be kept, got=%s:%d want=%s:%d", got.Host, got.Port, raw.Host, raw.Port)
}
if !got.UseProxy {
t.Fatalf("mongo should keep UseProxy=true for driver-level dialer")
}
if !reflect.DeepEqual(got.Hosts, hosts) {
t.Fatalf("mongo hosts should be kept, got=%v want=%v", got.Hosts, hosts)
}
}
func TestResolveDialConfigWithProxy_MongoSRVKeepsTargetAddress(t *testing.T) {
raw := connection.ConnectionConfig{
Type: "mongodb",
Host: "cluster0.example.com",
Port: 27017,
MongoSRV: true,
UseProxy: true,
Proxy: connection.ProxyConfig{
Type: "http",
Host: "127.0.0.1",
Port: 7890,
},
}
got, err := resolveDialConfigWithProxy(raw)
if err != nil {
t.Fatalf("resolveDialConfigWithProxy returned error: %v", err)
}
if got.Host != raw.Host || got.Port != raw.Port {
t.Fatalf("mongo SRV target address should be kept, got=%s:%d want=%s:%d", got.Host, got.Port, raw.Host, raw.Port)
}
if !got.UseProxy {
t.Fatalf("mongo SRV should keep UseProxy=true for driver-level dialer")
}
}

View File

@@ -72,25 +72,30 @@ func setGlobalProxyConfig(enabled bool, proxyConfig connection.ProxyConfig) (glo
}
func (a *App) ConfigureGlobalProxy(enabled bool, proxyConfig connection.ProxyConfig) connection.QueryResult {
before := currentGlobalProxyConfig()
snapshot, err := setGlobalProxyConfig(enabled, proxyConfig)
if err != nil {
return connection.QueryResult{Success: false, Message: err.Error()}
}
if snapshot.Enabled {
authState := ""
if strings.TrimSpace(snapshot.Proxy.User) != "" {
authState = "(认证:已配置)"
// The frontend may re-trigger sync with an unchanged config (e.g. strict mode or state replay),
// so log idempotently here to avoid flooding the log.
if !globalProxySnapshotEqual(before, snapshot) {
if snapshot.Enabled {
authState := ""
if strings.TrimSpace(snapshot.Proxy.User) != "" {
authState = "(认证:已配置)"
}
logger.Infof(
"全局代理已启用:%s://%s:%d%s",
strings.ToLower(strings.TrimSpace(snapshot.Proxy.Type)),
strings.TrimSpace(snapshot.Proxy.Host),
snapshot.Proxy.Port,
authState,
)
} else {
logger.Infof("全局代理已关闭")
}
logger.Infof(
"全局代理已启用:%s://%s:%d%s",
strings.ToLower(strings.TrimSpace(snapshot.Proxy.Type)),
strings.TrimSpace(snapshot.Proxy.Host),
snapshot.Proxy.Port,
authState,
)
} else {
logger.Infof("全局代理已关闭")
}
return connection.QueryResult{
@@ -100,6 +105,24 @@ func (a *App) ConfigureGlobalProxy(enabled bool, proxyConfig connection.ProxyCon
}
}
func globalProxySnapshotEqual(a, b globalProxySnapshot) bool {
if a.Enabled != b.Enabled {
return false
}
if !a.Enabled {
return true
}
return proxyConfigEqual(a.Proxy, b.Proxy)
}
func proxyConfigEqual(a, b connection.ProxyConfig) bool {
return strings.EqualFold(strings.TrimSpace(a.Type), strings.TrimSpace(b.Type)) &&
strings.TrimSpace(a.Host) == strings.TrimSpace(b.Host) &&
a.Port == b.Port &&
strings.TrimSpace(a.User) == strings.TrimSpace(b.User) &&
a.Password == b.Password
}
func (a *App) GetGlobalProxyConfig() connection.QueryResult {
return connection.QueryResult{
Success: true,
@@ -110,7 +133,7 @@ func (a *App) GetGlobalProxyConfig() connection.QueryResult {
func applyGlobalProxyToConnection(config connection.ConnectionConfig) connection.ConnectionConfig {
effective := config
if effective.UseProxy {
if effective.UseProxy || effective.UseHTTPTunnel {
return effective
}
if isFileDatabaseType(effective.Type) {

View File

@@ -3,6 +3,7 @@ package app
import (
"context"
"fmt"
"strconv"
"strings"
"time"
@@ -12,6 +13,16 @@ import (
"GoNavi-Wails/internal/utils"
)
const testConnectionTimeoutUpperBoundSeconds = 12
func normalizeTestConnectionConfig(config connection.ConnectionConfig) connection.ConnectionConfig {
normalized := config
if normalized.Timeout <= 0 || normalized.Timeout > testConnectionTimeoutUpperBoundSeconds {
normalized.Timeout = testConnectionTimeoutUpperBoundSeconds
}
return normalized
}
// Generic DB Methods
func (a *App) DBConnect(config connection.ConnectionConfig) connection.QueryResult {
@@ -27,13 +38,16 @@ func (a *App) DBConnect(config connection.ConnectionConfig) connection.QueryResu
}
func (a *App) TestConnection(config connection.ConnectionConfig) connection.QueryResult {
_, err := a.getDatabaseForcePing(config)
testConfig := normalizeTestConnectionConfig(config)
started := time.Now()
logger.Infof("TestConnection 开始:%s", formatConnSummary(testConfig))
_, err := a.getDatabaseForcePing(testConfig)
if err != nil {
logger.Error(err, "TestConnection 连接测试失败:%s", formatConnSummary(config))
logger.Error(err, "TestConnection 连接测试失败:耗时=%s %s", time.Since(started).Round(time.Millisecond), formatConnSummary(testConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
logger.Infof("TestConnection 连接测试成功:%s", formatConnSummary(config))
logger.Infof("TestConnection 连接测试成功:耗时=%s %s", time.Since(started).Round(time.Millisecond), formatConnSummary(testConfig))
return connection.QueryResult{Success: true, Message: "连接成功"}
}
@@ -416,12 +430,7 @@ func (a *App) DBQueryWithCancel(config connection.ConnectionConfig, dbName strin
a.queryMu.Unlock()
}()
lowerQuery := strings.TrimSpace(strings.ToLower(query))
isReadQuery := strings.HasPrefix(lowerQuery, "select") || strings.HasPrefix(lowerQuery, "show") || strings.HasPrefix(lowerQuery, "describe") || strings.HasPrefix(lowerQuery, "explain")
// find/count/aggregate inside MongoDB JSON commands also count as read queries
if !isReadQuery && strings.ToLower(strings.TrimSpace(runConfig.Type)) == "mongodb" && strings.HasPrefix(strings.TrimSpace(query), "{") {
isReadQuery = true
}
isReadQuery := isReadOnlySQLQuery(runConfig.Type, query)
runReadQuery := func(inst db.Database) ([]map[string]interface{}, []string, error) {
if q, ok := inst.(interface {
@@ -500,11 +509,7 @@ func (a *App) DBQueryIsolated(config connection.ConnectionConfig, dbName string,
ctx, cancel := utils.ContextWithTimeout(time.Duration(timeoutSeconds) * time.Second)
defer cancel()
lowerQuery := strings.TrimSpace(strings.ToLower(query))
isReadQuery := strings.HasPrefix(lowerQuery, "select") || strings.HasPrefix(lowerQuery, "show") || strings.HasPrefix(lowerQuery, "describe") || strings.HasPrefix(lowerQuery, "explain")
if !isReadQuery && strings.ToLower(strings.TrimSpace(runConfig.Type)) == "mongodb" && strings.HasPrefix(strings.TrimSpace(query), "{") {
isReadQuery = true
}
isReadQuery := isReadOnlySQLQuery(runConfig.Type, query)
if isReadQuery {
var data []map[string]interface{}
@@ -547,8 +552,33 @@ func sqlSnippet(query string) string {
return q[:max] + "..."
}
func ensureNonNilSlice[T any](items []T) []T {
if items == nil {
return make([]T, 0)
}
return items
}
func (a *App) DBGetDatabases(config connection.ConnectionConfig) connection.QueryResult {
runConfig := normalizeRunConfig(config, "")
if strings.EqualFold(strings.TrimSpace(runConfig.Type), "redis") {
runConfig.Type = "redis"
client, err := a.getRedisClient(runConfig)
if err != nil {
logger.Error(err, "DBGetDatabases 获取 Redis 连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
dbs, err := client.GetDatabases()
if err != nil {
logger.Error(err, "DBGetDatabases 获取 Redis 库列表失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
resData := make([]map[string]string, 0, len(dbs))
for _, item := range dbs {
resData = append(resData, map[string]string{"Database": strconv.Itoa(item.Index)})
}
return connection.QueryResult{Success: true, Data: resData}
}
dbInst, err := a.getDatabase(runConfig)
if err != nil {
logger.Error(err, "DBGetDatabases 获取连接失败:%s", formatConnSummary(runConfig))
@@ -571,7 +601,7 @@ func (a *App) DBGetDatabases(config connection.ConnectionConfig) connection.Quer
return connection.QueryResult{Success: false, Message: err.Error()}
}
var resData []map[string]string
resData := make([]map[string]string, 0, len(dbs))
for _, name := range dbs {
resData = append(resData, map[string]string{"Database": name})
}
@@ -581,6 +611,48 @@ func (a *App) DBGetDatabases(config connection.ConnectionConfig) connection.Quer
func (a *App) DBGetTables(config connection.ConnectionConfig, dbName string) connection.QueryResult {
runConfig := normalizeRunConfig(config, dbName)
if strings.EqualFold(strings.TrimSpace(runConfig.Type), "redis") {
runConfig.Type = "redis"
client, err := a.getRedisClient(runConfig)
if err != nil {
logger.Error(err, "DBGetTables 获取 Redis 连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
cursor := uint64(0)
tables := make([]string, 0, 128)
seen := make(map[string]struct{}, 128)
for {
result, err := client.ScanKeys("*", cursor, 1000)
if err != nil {
logger.Error(err, "DBGetTables 扫描 Redis Key 失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
for _, item := range result.Keys {
key := strings.TrimSpace(item.Key)
if key == "" {
continue
}
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
tables = append(tables, key)
}
if strings.TrimSpace(result.Cursor) == "" || strings.TrimSpace(result.Cursor) == "0" {
break
}
next, err := strconv.ParseUint(strings.TrimSpace(result.Cursor), 10, 64)
if err != nil || next == cursor {
break
}
cursor = next
}
resData := make([]map[string]string, 0, len(tables))
for _, name := range tables {
resData = append(resData, map[string]string{"Table": name})
}
return connection.QueryResult{Success: true, Data: resData}
}
dbInst, err := a.getDatabase(runConfig)
if err != nil {
@@ -604,7 +676,7 @@ func (a *App) DBGetTables(config connection.ConnectionConfig, dbName string) con
return connection.QueryResult{Success: false, Message: err.Error()}
}
var resData []map[string]string
resData := make([]map[string]string, 0, len(tables))
for _, name := range tables {
resData = append(resData, map[string]string{"Table": name})
}
@@ -786,7 +858,7 @@ func (a *App) DBGetColumns(config connection.ConnectionConfig, dbName string, ta
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: columns}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(columns)}
}
func (a *App) DBGetIndexes(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
@@ -803,7 +875,7 @@ func (a *App) DBGetIndexes(config connection.ConnectionConfig, dbName string, ta
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: indexes}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(indexes)}
}
func (a *App) DBGetForeignKeys(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
@@ -820,7 +892,7 @@ func (a *App) DBGetForeignKeys(config connection.ConnectionConfig, dbName string
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: fks}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(fks)}
}
func (a *App) DBGetTriggers(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
@@ -837,7 +909,7 @@ func (a *App) DBGetTriggers(config connection.ConnectionConfig, dbName string, t
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: triggers}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(triggers)}
}
func (a *App) DropView(config connection.ConnectionConfig, dbName string, viewName string) connection.QueryResult {
@@ -975,5 +1047,5 @@ func (a *App) DBGetAllColumns(config connection.ConnectionConfig, dbName string)
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: cols}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(cols)}
}
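The `ensureNonNilSlice` helper above exists because `encoding/json` marshals a nil slice as `null` while an empty slice becomes `[]`, and the frontend iterating `QueryResult.Data` expects the latter. A minimal standalone sketch (illustrative, not repository code) showing the difference:

package main

import (
	"encoding/json"
	"fmt"
)

// ensureNonNilSlice mirrors the helper in the diff: replace a nil slice
// with an allocated empty one so it serializes as [] instead of null.
func ensureNonNilSlice[T any](items []T) []T {
	if items == nil {
		return make([]T, 0)
	}
	return items
}

func main() {
	var cols []string // nil slice
	raw, _ := json.Marshal(cols)
	fixed, _ := json.Marshal(ensureNonNilSlice(cols))
	fmt.Println(string(raw), string(fixed)) // null []
}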

View File

@@ -0,0 +1,31 @@
package app
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestNormalizeTestConnectionConfig_DefaultToUpperBound(t *testing.T) {
config := connection.ConnectionConfig{Type: "mongodb", Timeout: 0}
got := normalizeTestConnectionConfig(config)
if got.Timeout != testConnectionTimeoutUpperBoundSeconds {
t.Fatalf("expected timeout=%d, got=%d", testConnectionTimeoutUpperBoundSeconds, got.Timeout)
}
}
func TestNormalizeTestConnectionConfig_KeepSmallerTimeout(t *testing.T) {
config := connection.ConnectionConfig{Type: "mongodb", Timeout: 6}
got := normalizeTestConnectionConfig(config)
if got.Timeout != 6 {
t.Fatalf("expected timeout=6, got=%d", got.Timeout)
}
}
func TestNormalizeTestConnectionConfig_ClampLargeTimeout(t *testing.T) {
config := connection.ConnectionConfig{Type: "mongodb", Timeout: 60}
got := normalizeTestConnectionConfig(config)
if got.Timeout != testConnectionTimeoutUpperBoundSeconds {
t.Fatalf("expected timeout=%d, got=%d", testConnectionTimeoutUpperBoundSeconds, got.Timeout)
}
}

View File

@@ -2536,6 +2536,9 @@ func installOptionalDriverAgentFromLocalPath(definition driverDefinition, filePa
return installedDriverPackage{}, fmt.Errorf("导入本地驱动代理失败:%w", copyErr)
}
}
if validateErr := db.ValidateOptionalDriverAgentExecutable(driverType, executablePath); validateErr != nil {
return installedDriverPackage{}, validateErr
}
hash, hashErr := hashFileSHA256(executablePath)
if hashErr != nil {
@@ -2789,15 +2792,19 @@ func ensureOptionalDriverAgentBinary(a *App, definition driverDefinition, execut
driverType := normalizeDriverType(definition.Type)
displayName := resolveDriverDisplayName(definition)
forceSourceBuild := shouldForceSourceBuildForVersion(driverType, selectedVersion)
preferSourceBuildBeforeDownload := shouldPreferSourceBuildBeforeDownload(driverType, selectedVersion)
skipReuseCandidate := shouldSkipReusableAgentCandidate(driverType, selectedVersion)
info, err := os.Stat(executablePath)
if err == nil && !info.IsDir() {
hash, hashErr := hashFileSHA256(executablePath)
if hashErr != nil {
return "", "", fmt.Errorf("读取已安装 %s 驱动代理摘要失败:%w", displayName, hashErr)
if validateErr := db.ValidateOptionalDriverAgentExecutable(driverType, executablePath); validateErr != nil {
_ = os.Remove(executablePath)
} else {
// 用户点击“安装/重装”时应强制刷新驱动代理,避免沿用旧二进制导致修复不生效。
if removeErr := os.Remove(executablePath); removeErr != nil {
return "", "", fmt.Errorf("清理已安装 %s 驱动代理失败:%w", displayName, removeErr)
}
}
return fmt.Sprintf("local://existing/%s-driver-agent", driverType), hash, nil
}
if err == nil && info.IsDir() {
return "", "", fmt.Errorf("%s 驱动代理路径被目录占用:%s", displayName, executablePath)
@@ -2814,6 +2821,10 @@ func ensureOptionalDriverAgentBinary(a *App, definition driverDefinition, execut
if copyErr := copyAgentBinary(sourcePath, executablePath); copyErr != nil {
return "", "", fmt.Errorf("复制预置 %s 驱动代理失败:%w", displayName, copyErr)
}
if validateErr := db.ValidateOptionalDriverAgentExecutable(driverType, executablePath); validateErr != nil {
_ = os.Remove(executablePath)
return "", "", validateErr
}
hash, hashErr := hashFileSHA256(executablePath)
if hashErr != nil {
return "", "", fmt.Errorf("计算预置 %s 驱动代理摘要失败:%w", displayName, hashErr)
@@ -2823,6 +2834,22 @@ func ensureOptionalDriverAgentBinary(a *App, definition driverDefinition, execut
}
var downloadErrs []string
var sourceBuildAttempted bool
var sourceBuildErr error
if !forceSourceBuild && preferSourceBuildBeforeDownload {
sourceBuildAttempted = true
if a != nil {
a.emitDriverDownloadProgress(driverType, "downloading", 16, 100, fmt.Sprintf("优先使用本地源码构建 %s 驱动代理", displayName))
}
hash, buildErr := buildOptionalDriverAgentFromSource(definition, executablePath, selectedVersion)
if buildErr == nil {
return fmt.Sprintf("local://go-build/%s-driver-agent", driverType), hash, nil
}
sourceBuildErr = buildErr
logger.Warnf("预先本地构建 %s 驱动代理失败,将继续尝试下载预编译包:%v", displayName, buildErr)
}
if !forceSourceBuild {
downloadURLs := resolveOptionalDriverAgentDownloadURLs(definition, downloadURL)
if len(downloadURLs) > 0 {
@@ -2855,9 +2882,15 @@ func ensureOptionalDriverAgentBinary(a *App, definition driverDefinition, execut
a.emitDriverDownloadProgress(driverType, "downloading", 92, 100, "未命中预编译包,尝试开发态本地构建")
}
hash, buildErr := buildOptionalDriverAgentFromSource(definition, executablePath, selectedVersion)
if buildErr == nil {
return fmt.Sprintf("local://go-build/%s-driver-agent", driverType), hash, nil
var buildErr error
if sourceBuildAttempted {
buildErr = sourceBuildErr
} else {
hash, runErr := buildOptionalDriverAgentFromSource(definition, executablePath, selectedVersion)
buildErr = runErr
if buildErr == nil {
return fmt.Sprintf("local://go-build/%s-driver-agent", driverType), hash, nil
}
}
var parts []string
@@ -2901,6 +2934,10 @@ func downloadOptionalDriverAgentBinary(a *App, definition driverDefinition, urlT
if chmodErr := os.Chmod(executablePath, 0o755); chmodErr != nil && stdRuntime.GOOS != "windows" {
return "", fmt.Errorf("设置代理权限失败:%w", chmodErr)
}
if validateErr := db.ValidateOptionalDriverAgentExecutable(driverType, executablePath); validateErr != nil {
_ = os.Remove(executablePath)
return "", validateErr
}
return hash, nil
}
@@ -3009,6 +3046,10 @@ func downloadOptionalDriverAgentFromBundle(a *App, definition driverDefinition,
if chmodErr := os.Chmod(executablePath, 0o755); chmodErr != nil && stdRuntime.GOOS != "windows" {
return "", "", fmt.Errorf("设置驱动代理权限失败:%w", chmodErr)
}
if validateErr := db.ValidateOptionalDriverAgentExecutable(driverType, executablePath); validateErr != nil {
_ = os.Remove(executablePath)
return "", "", validateErr
}
hash, err := hashFileSHA256(executablePath)
if err != nil {
return "", "", fmt.Errorf("计算驱动代理摘要失败:%w", err)
@@ -3067,12 +3108,25 @@ func shouldForceSourceBuildForVersion(driverType string, selectedVersion string)
return resolveMongoDriverMajorFromVersion(selectedVersion) == 1
}
func shouldSkipReusableAgentCandidate(driverType string, selectedVersion string) bool {
if normalizeDriverType(driverType) != "mongodb" {
func shouldPreferSourceBuildBeforeDownload(driverType string, selectedVersion string) bool {
_ = selectedVersion
switch normalizeDriverType(driverType) {
case "kingbase":
// 金仓迭代期优先本地源码构建,避免下载到旧版本预编译代理导致修复不生效。
return true
default:
return false
}
}
func shouldSkipReusableAgentCandidate(driverType string, selectedVersion string) bool {
_ = selectedVersion
return true
switch normalizeDriverType(driverType) {
case "mongodb", "kingbase":
return true
default:
return false
}
}
func optionalDriverBuildTag(driverType string, selectedVersion string) (string, error) {
@@ -3334,6 +3388,7 @@ func resolveOptionalDriverAgentDownloadURLs(definition driverDefinition, rawURL
}
func findExistingOptionalDriverAgentCandidate(definition driverDefinition, targetPath string) (string, bool) {
driverType := normalizeDriverType(definition.Type)
targetAbs, _ := filepath.Abs(targetPath)
candidates := resolveOptionalDriverAgentCandidatePaths(definition)
for _, candidate := range candidates {
@@ -3349,9 +3404,13 @@ func findExistingOptionalDriverAgentCandidate(definition driverDefinition, targe
continue
}
info, statErr := os.Stat(absPath)
if statErr == nil && !info.IsDir() {
return absPath, true
if statErr != nil || info.IsDir() {
continue
}
if validateErr := db.ValidateOptionalDriverAgentExecutable(driverType, absPath); validateErr != nil {
continue
}
return absPath, true
}
return "", false
}
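Taken together, the hunks above pin down a fixed acquisition order for a driver-agent binary: an optional source-build-first pass (currently only Kingbase), then the prebuilt download, then a source-build fallback that reuses the earlier build failure instead of rebuilding. A reduced sketch of that control flow, with the decision helpers collapsed to booleans (an assumption-laden illustration, not the repository's code):

package main

import "fmt"

// acquireOrder sketches the decision sequence in ensureOptionalDriverAgentBinary:
// prefer-source-first runs a build before any download, and the final fallback
// is skipped when that earlier build already failed (its error is reused).
func acquireOrder(forceSourceBuild, preferSourceFirst bool) []string {
	if forceSourceBuild {
		return []string{"source-build"}
	}
	var steps []string
	if preferSourceFirst {
		steps = append(steps, "source-build") // e.g. kingbase during fast iteration
	}
	steps = append(steps, "download-prebuilt")
	if !preferSourceFirst {
		steps = append(steps, "source-build-fallback")
	}
	return steps
}

func main() {
	fmt.Println(acquireOrder(false, true))  // [source-build download-prebuilt]
	fmt.Println(acquireOrder(false, false)) // [download-prebuilt source-build-fallback]
}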

View File

@@ -23,12 +23,20 @@ var (
// getRedisClient gets or creates a Redis client from cache
func (a *App) getRedisClient(config connection.ConnectionConfig) (redis.RedisClient, error) {
key := getRedisClientCacheKey(config)
effectiveConfig := applyGlobalProxyToConnection(config)
connectConfig, proxyErr := resolveDialConfigWithProxy(effectiveConfig)
if proxyErr != nil {
wrapped := wrapConnectError(effectiveConfig, proxyErr)
logger.Error(wrapped, "Redis 代理准备失败:%s", formatRedisConnSummary(effectiveConfig))
return nil, wrapped
}
key := getRedisClientCacheKey(connectConfig)
shortKey := key
if len(shortKey) > 12 {
shortKey = shortKey[:12]
}
logger.Infof("获取 Redis 连接:%s 缓存Key=%s", formatRedisConnSummary(config), shortKey)
logger.Infof("获取 Redis 连接:%s 缓存Key=%s", formatRedisConnSummary(effectiveConfig), shortKey)
redisCacheMu.Lock()
defer redisCacheMu.Unlock()
@@ -47,21 +55,20 @@ func (a *App) getRedisClient(config connection.ConnectionConfig) (redis.RedisCli
logger.Infof("创建 Redis 客户端实例缓存Key=%s", shortKey)
client := redis.NewRedisClient()
if err := client.Connect(config); err != nil {
logger.Error(err, "Redis 连接失败:%s 缓存Key=%s", formatRedisConnSummary(config), shortKey)
return nil, err
if err := client.Connect(connectConfig); err != nil {
wrapped := wrapConnectError(effectiveConfig, err)
logger.Error(wrapped, "Redis 连接失败:%s 缓存Key=%s", formatRedisConnSummary(effectiveConfig), shortKey)
return nil, wrapped
}
redisCache[key] = client
logger.Infof("Redis 连接成功并写入缓存:%s 缓存Key=%s", formatRedisConnSummary(config), shortKey)
logger.Infof("Redis 连接成功并写入缓存:%s 缓存Key=%s", formatRedisConnSummary(effectiveConfig), shortKey)
return client, nil
}
func getRedisClientCacheKey(config connection.ConnectionConfig) string {
if !config.UseSSH {
config.SSH = connection.SSHConfig{}
}
b, _ := json.Marshal(config)
normalized := normalizeCacheKeyConfig(config)
b, _ := json.Marshal(normalized)
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:])
}
@@ -91,6 +98,26 @@ func formatRedisConnSummary(config connection.ConnectionConfig) string {
b.WriteString(" 用户=")
b.WriteString(config.SSH.User)
}
if config.UseProxy {
b.WriteString(" 代理=")
b.WriteString(strings.ToLower(strings.TrimSpace(config.Proxy.Type)))
b.WriteString("://")
b.WriteString(config.Proxy.Host)
b.WriteString(":")
b.WriteString(strconv.Itoa(config.Proxy.Port))
if strings.TrimSpace(config.Proxy.User) != "" {
b.WriteString(" 代理认证=已配置")
}
}
if config.UseHTTPTunnel {
b.WriteString(" HTTP隧道=")
b.WriteString(strings.TrimSpace(config.HTTPTunnel.Host))
b.WriteString(":")
b.WriteString(strconv.Itoa(config.HTTPTunnel.Port))
if strings.TrimSpace(config.HTTPTunnel.User) != "" {
b.WriteString(" HTTP隧道认证=已配置")
}
}
return b.String()
}
@@ -426,6 +453,23 @@ func (a *App) RedisRenameKey(config connection.ConnectionConfig, oldKey, newKey
return connection.QueryResult{Success: true, Message: "重命名成功"}
}
// RedisKeyExists checks whether a key already exists
func (a *App) RedisKeyExists(config connection.ConnectionConfig, key string) connection.QueryResult {
config.Type = "redis"
client, err := a.getRedisClient(config)
if err != nil {
return connection.QueryResult{Success: false, Message: err.Error()}
}
exists, err := client.KeyExists(key)
if err != nil {
logger.Error(err, "RedisKeyExists 检查失败key=%s", key)
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: map[string]bool{"exists": exists}}
}
// RedisDeleteHashField deletes fields from a hash
func (a *App) RedisDeleteHashField(config connection.ConnectionConfig, key string, fields []string) connection.QueryResult {
config.Type = "redis"
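The cache key above is derived by normalizing the config, JSON-marshaling it, and hashing with SHA-256; the logs print only a 12-character prefix. A self-contained sketch of that scheme, with the field set reduced for illustration (not the real `ConnectionConfig`):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

type miniConfig struct {
	Host    string `json:"host"`
	Port    int    `json:"port"`
	RedisDB int    `json:"redisDB"`
}

// cacheKey mirrors getRedisClientCacheKey: marshal the normalized config and
// hash it, so configs that normalize identically share one cached client.
func cacheKey(c miniConfig) string {
	b, _ := json.Marshal(c)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	key := cacheKey(miniConfig{Host: "127.0.0.1", Port: 6379, RedisDB: 0})
	fmt.Println(key[:12]) // the 12-char prefix logged as 缓存Key
}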

View File

@@ -5,6 +5,66 @@ import (
"unicode"
)
func leadingSQLKeyword(query string) string {
text := strings.TrimSpace(query)
for len(text) > 0 {
trimmed := strings.TrimLeft(text, " \t\r\n")
if trimmed == "" {
return ""
}
text = trimmed
switch {
case strings.HasPrefix(text, "--"):
if idx := strings.IndexByte(text, '\n'); idx >= 0 {
text = text[idx+1:]
continue
}
return ""
case strings.HasPrefix(text, "#"):
if idx := strings.IndexByte(text, '\n'); idx >= 0 {
text = text[idx+1:]
continue
}
return ""
case strings.HasPrefix(text, "/*"):
if idx := strings.Index(text, "*/"); idx >= 0 {
text = text[idx+2:]
continue
}
return ""
}
break
}
if text == "" {
return ""
}
for i, r := range text {
if unicode.IsLetter(r) || unicode.IsDigit(r) || r == '_' {
continue
}
if i == 0 {
return ""
}
return strings.ToLower(text[:i])
}
return strings.ToLower(text)
}
func isReadOnlySQLQuery(dbType string, query string) bool {
if strings.ToLower(strings.TrimSpace(dbType)) == "mongodb" && strings.HasPrefix(strings.TrimSpace(query), "{") {
return true
}
switch leadingSQLKeyword(query) {
case "select", "with", "show", "describe", "desc", "explain", "pragma", "values":
return true
default:
return false
}
}
func sanitizeSQLForPgLike(dbType string, query string) string {
switch strings.ToLower(strings.TrimSpace(dbType)) {
case "postgres", "kingbase", "highgo", "vastbase":

View File

@@ -18,39 +18,49 @@ type ProxyConfig struct {
Password string `json:"password,omitempty"`
}
// HTTPTunnelConfig holds independent HTTP CONNECT tunnel details
type HTTPTunnelConfig struct {
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user,omitempty"`
Password string `json:"password,omitempty"`
}
// ConnectionConfig holds database connection details including SSH
type ConnectionConfig struct {
Type string `json:"type"`
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user"`
Password string `json:"password"`
SavePassword bool `json:"savePassword,omitempty"` // Persist password in saved connection
Database string `json:"database"`
UseSSL bool `json:"useSSL,omitempty"` // MySQL-like SSL/TLS switch
SSLMode string `json:"sslMode,omitempty"` // preferred | required | skip-verify | disable
SSLCertPath string `json:"sslCertPath,omitempty"` // TLS client certificate path (e.g., Dameng)
SSLKeyPath string `json:"sslKeyPath,omitempty"` // TLS client private key path (e.g., Dameng)
UseSSH bool `json:"useSSH"`
SSH SSHConfig `json:"ssh"`
UseProxy bool `json:"useProxy,omitempty"`
Proxy ProxyConfig `json:"proxy,omitempty"`
Driver string `json:"driver,omitempty"` // For custom connection
DSN string `json:"dsn,omitempty"` // For custom connection
Timeout int `json:"timeout,omitempty"` // Connection timeout in seconds (default: 30)
RedisDB int `json:"redisDB,omitempty"` // Redis database index (0-15)
URI string `json:"uri,omitempty"` // Connection URI for copy/paste
Hosts []string `json:"hosts,omitempty"` // Multi-host addresses: host:port
Topology string `json:"topology,omitempty"` // single | replica | cluster
MySQLReplicaUser string `json:"mysqlReplicaUser,omitempty"` // MySQL replica auth user
MySQLReplicaPassword string `json:"mysqlReplicaPassword,omitempty"` // MySQL replica auth password
ReplicaSet string `json:"replicaSet,omitempty"` // MongoDB replica set name
AuthSource string `json:"authSource,omitempty"` // MongoDB authSource
ReadPreference string `json:"readPreference,omitempty"` // MongoDB readPreference
MongoSRV bool `json:"mongoSrv,omitempty"` // MongoDB use mongodb+srv URI scheme
MongoAuthMechanism string `json:"mongoAuthMechanism,omitempty"` // MongoDB authMechanism
MongoReplicaUser string `json:"mongoReplicaUser,omitempty"` // MongoDB replica auth user
MongoReplicaPassword string `json:"mongoReplicaPassword,omitempty"` // MongoDB replica auth password
Type string `json:"type"`
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user"`
Password string `json:"password"`
SavePassword bool `json:"savePassword,omitempty"` // Persist password in saved connection
Database string `json:"database"`
UseSSL bool `json:"useSSL,omitempty"` // MySQL-like SSL/TLS switch
SSLMode string `json:"sslMode,omitempty"` // preferred | required | skip-verify | disable
SSLCertPath string `json:"sslCertPath,omitempty"` // TLS client certificate path (e.g., Dameng)
SSLKeyPath string `json:"sslKeyPath,omitempty"` // TLS client private key path (e.g., Dameng)
UseSSH bool `json:"useSSH"`
SSH SSHConfig `json:"ssh"`
UseProxy bool `json:"useProxy,omitempty"`
Proxy ProxyConfig `json:"proxy,omitempty"`
UseHTTPTunnel bool `json:"useHttpTunnel,omitempty"`
HTTPTunnel HTTPTunnelConfig `json:"httpTunnel,omitempty"`
Driver string `json:"driver,omitempty"` // For custom connection
DSN string `json:"dsn,omitempty"` // For custom connection
Timeout int `json:"timeout,omitempty"` // Connection timeout in seconds (default: 30)
RedisDB int `json:"redisDB,omitempty"` // Redis database index (0-15)
URI string `json:"uri,omitempty"` // Connection URI for copy/paste
Hosts []string `json:"hosts,omitempty"` // Multi-host addresses: host:port
Topology string `json:"topology,omitempty"` // single | replica | cluster
MySQLReplicaUser string `json:"mysqlReplicaUser,omitempty"` // MySQL replica auth user
MySQLReplicaPassword string `json:"mysqlReplicaPassword,omitempty"` // MySQL replica auth password
ReplicaSet string `json:"replicaSet,omitempty"` // MongoDB replica set name
AuthSource string `json:"authSource,omitempty"` // MongoDB authSource
ReadPreference string `json:"readPreference,omitempty"` // MongoDB readPreference
MongoSRV bool `json:"mongoSrv,omitempty"` // MongoDB use mongodb+srv URI scheme
MongoAuthMechanism string `json:"mongoAuthMechanism,omitempty"` // MongoDB authMechanism
MongoReplicaUser string `json:"mongoReplicaUser,omitempty"` // MongoDB replica auth user
MongoReplicaPassword string `json:"mongoReplicaPassword,omitempty"` // MongoDB replica auth password
}
// QueryResult is the standard response format for Wails methods
@@ -80,6 +90,7 @@ type IndexDefinition struct {
NonUnique int `json:"nonUnique"`
SeqInIndex int `json:"seqInIndex"`
IndexType string `json:"indexType"`
SubPart int `json:"subPart,omitempty"`
}
// ForeignKeyDefinition represents a foreign key
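For reference, a small sketch (not repository code) of what the new tunnel fields serialize to over the Wails bridge: `omitempty` keeps unset credentials and a disabled flag off the wire.

package main

import (
	"encoding/json"
	"fmt"
)

type HTTPTunnelConfig struct {
	Host     string `json:"host"`
	Port     int    `json:"port"`
	User     string `json:"user,omitempty"`
	Password string `json:"password,omitempty"`
}

// connStub carries just the two new fields for illustration.
type connStub struct {
	UseHTTPTunnel bool             `json:"useHttpTunnel,omitempty"`
	HTTPTunnel    HTTPTunnelConfig `json:"httpTunnel,omitempty"`
}

func main() {
	b, _ := json.Marshal(connStub{
		UseHTTPTunnel: true,
		HTTPTunnel:    HTTPTunnelConfig{Host: "proxy.internal", Port: 8080},
	})
	// {"useHttpTunnel":true,"httpTunnel":{"host":"proxy.internal","port":8080}}
	fmt.Println(string(b))
}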

View File

@@ -8,6 +8,7 @@ import (
"fmt"
"net"
"net/url"
"sort"
"strconv"
"strings"
"time"
@@ -107,7 +108,9 @@ func (c *ClickHouseDB) buildClickHouseOptions(config connection.ConnectionConfig
if readTimeout < minClickHouseReadTimeout {
readTimeout = minClickHouseReadTimeout
}
protocol := detectClickHouseProtocol(config)
opts := &clickhouse.Options{
Protocol: protocol,
Addr: []string{
net.JoinHostPort(config.Host, strconv.Itoa(config.Port)),
},
@@ -125,6 +128,46 @@ func (c *ClickHouseDB) buildClickHouseOptions(config connection.ConnectionConfig
return opts
}
func detectClickHouseProtocol(config connection.ConnectionConfig) clickhouse.Protocol {
uriText := strings.ToLower(strings.TrimSpace(config.URI))
if strings.HasPrefix(uriText, "http://") || strings.HasPrefix(uriText, "https://") {
return clickhouse.HTTP
}
if config.Port == 8123 || config.Port == 8443 {
return clickhouse.HTTP
}
return clickhouse.Native
}
func isClickHouseProtocolMismatch(err error) bool {
if err == nil {
return false
}
text := strings.ToLower(strings.TrimSpace(err.Error()))
if text == "" {
return false
}
return strings.Contains(text, "unexpected packet [72]") ||
(strings.Contains(text, "unexpected packet") && strings.Contains(text, "handshake")) ||
strings.Contains(text, "http response to https client") ||
strings.Contains(text, "malformed http response")
}
func withClickHouseProtocol(config connection.ConnectionConfig, protocol clickhouse.Protocol) connection.ConnectionConfig {
next := config
switch protocol {
case clickhouse.HTTP:
if next.Port == 0 {
next.Port = 8123
}
default:
if next.Port == 0 {
next.Port = defaultClickHousePort
}
}
return next
}
func (c *ClickHouseDB) Connect(config connection.ConnectionConfig) error {
if supported, reason := DriverRuntimeSupportStatus("clickhouse"); !supported {
if strings.TrimSpace(reason) == "" {
@@ -176,23 +219,41 @@ func (c *ClickHouseDB) Connect(config connection.ConnectionConfig) error {
var failures []string
for idx, attempt := range attempts {
c.conn = clickhouse.OpenDB(c.buildClickHouseOptions(attempt))
if err := c.Ping(); err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
if c.conn != nil {
_ = c.conn.Close()
c.conn = nil
primaryProtocol := detectClickHouseProtocol(attempt)
protocols := []clickhouse.Protocol{primaryProtocol}
if primaryProtocol == clickhouse.Native {
protocols = append(protocols, clickhouse.HTTP)
} else {
protocols = append(protocols, clickhouse.Native)
}
for pIdx, protocol := range protocols {
protocolConfig := withClickHouseProtocol(attempt, protocol)
c.conn = clickhouse.OpenDB(c.buildClickHouseOptions(protocolConfig))
if err := c.Ping(); err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接验证失败(protocol=%s): %v", idx+1, protocol.String(), err))
if c.conn != nil {
_ = c.conn.Close()
c.conn = nil
}
if pIdx == 0 && !isClickHouseProtocolMismatch(err) {
// 首次失败不具备协议误配特征时直接中止,避免无谓重试备用协议。
break
}
continue
}
continue
if idx > 0 {
logger.Warnf("ClickHouse SSL 优先连接失败,已回退至明文连接")
}
if pIdx > 0 {
logger.Warnf("ClickHouse 已自动切换连接协议为 %s常见于 8123/8443 HTTP 端口)", protocol.String())
}
return nil
}
if idx > 0 {
logger.Warnf("ClickHouse SSL 优先连接失败,已回退至明文连接")
}
return nil
}
_ = c.Close()
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, ""))
return fmt.Errorf("连接建立后验证失败(可检查 ClickHouse 端口与协议是否匹配Native=9000/9440HTTP=8123/8443%s", strings.Join(failures, ""))
}
func (c *ClickHouseDB) Close() error {
@@ -618,3 +679,134 @@ func isClickHouseTruthy(value interface{}) bool {
return normalized == "1" || normalized == "true" || normalized == "yes" || normalized == "y"
}
}
func (c *ClickHouseDB) ApplyChanges(tableName string, changes connection.ChangeSet) error {
if c.conn == nil {
return fmt.Errorf("connection not open")
}
database, table, err := c.resolveDatabaseAndTable(c.database, tableName)
if err != nil {
return err
}
qualifiedTable := fmt.Sprintf("%s.%s", quoteClickHouseIdentifier(database), quoteClickHouseIdentifier(table))
for _, pk := range changes.Deletes {
whereExpr := buildClickHouseWhereClause(pk)
if whereExpr == "" {
continue
}
query := fmt.Sprintf("ALTER TABLE %s DELETE WHERE %s", qualifiedTable, whereExpr)
if _, err := c.conn.Exec(query); err != nil {
return fmt.Errorf("delete error: %v; sql=%s", err, query)
}
}
for _, update := range changes.Updates {
setExpr := buildClickHouseAssignments(update.Values)
whereExpr := buildClickHouseWhereClause(update.Keys)
if setExpr == "" || whereExpr == "" {
continue
}
query := fmt.Sprintf("ALTER TABLE %s UPDATE %s WHERE %s", qualifiedTable, setExpr, whereExpr)
if _, err := c.conn.Exec(query); err != nil {
return fmt.Errorf("update error: %v; sql=%s", err, query)
}
}
for _, row := range changes.Inserts {
query, err := buildClickHouseInsertSQL(qualifiedTable, row)
if err != nil {
return err
}
if query == "" {
continue
}
if _, err := c.conn.Exec(query); err != nil {
return fmt.Errorf("insert error: %v; sql=%s", err, query)
}
}
return nil
}
func buildClickHouseInsertSQL(qualifiedTable string, row map[string]interface{}) (string, error) {
if len(row) == 0 {
return "", nil
}
cols := make([]string, 0, len(row))
for k := range row {
if strings.TrimSpace(k) == "" {
continue
}
cols = append(cols, k)
}
if len(cols) == 0 {
return "", nil
}
sort.Strings(cols)
quotedCols := make([]string, 0, len(cols))
values := make([]string, 0, len(cols))
for _, col := range cols {
quotedCols = append(quotedCols, quoteClickHouseIdentifier(col))
values = append(values, clickHouseLiteral(row[col]))
}
return fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s)", qualifiedTable, strings.Join(quotedCols, ", "), strings.Join(values, ", ")), nil
}
func buildClickHouseAssignments(values map[string]interface{}) string {
if len(values) == 0 {
return ""
}
cols := make([]string, 0, len(values))
for k := range values {
if strings.TrimSpace(k) == "" {
continue
}
cols = append(cols, k)
}
sort.Strings(cols)
parts := make([]string, 0, len(cols))
for _, col := range cols {
parts = append(parts, fmt.Sprintf("%s = %s", quoteClickHouseIdentifier(col), clickHouseLiteral(values[col])))
}
return strings.Join(parts, ", ")
}
func buildClickHouseWhereClause(keys map[string]interface{}) string {
if len(keys) == 0 {
return ""
}
cols := make([]string, 0, len(keys))
for k := range keys {
if strings.TrimSpace(k) == "" {
continue
}
cols = append(cols, k)
}
sort.Strings(cols)
parts := make([]string, 0, len(cols))
for _, col := range cols {
parts = append(parts, fmt.Sprintf("%s = %s", quoteClickHouseIdentifier(col), clickHouseLiteral(keys[col])))
}
return strings.Join(parts, " AND ")
}
func clickHouseLiteral(value interface{}) string {
switch val := value.(type) {
case nil:
return "NULL"
case bool:
if val {
return "1"
}
return "0"
case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, float32, float64:
return fmt.Sprintf("%v", val)
case time.Time:
return fmt.Sprintf("'%s'", val.Format("2006-01-02 15:04:05"))
case []byte:
return fmt.Sprintf("'%s'", strings.ReplaceAll(string(val), "'", "''"))
default:
return fmt.Sprintf("'%s'", strings.ReplaceAll(fmt.Sprintf("%v", val), "'", "''"))
}
}
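The protocol fallback above hinges on detectClickHouseProtocol plus the mismatch heuristic. A standalone sketch of the port/URI detection rule, with a local Protocol type standing in for clickhouse.Protocol (assumption for self-containment):

package main

import (
	"fmt"
	"strings"
)

type Protocol int

const (
	Native Protocol = iota // clickhouse-native, ports 9000/9440
	HTTP                   // HTTP interface, ports 8123/8443
)

// detect mirrors detectClickHouseProtocol: an explicit http(s):// URI or a
// conventional HTTP port selects the HTTP protocol, everything else Native.
func detect(uri string, port int) Protocol {
	u := strings.ToLower(strings.TrimSpace(uri))
	if strings.HasPrefix(u, "http://") || strings.HasPrefix(u, "https://") {
		return HTTP
	}
	if port == 8123 || port == 8443 {
		return HTTP
	}
	return Native
}

func main() {
	fmt.Println(detect("", 9000) == Native)                       // true
	fmt.Println(detect("https://ch.example.com:8443", 0) == HTTP) // true
	fmt.Println(detect("", 8123) == HTTP)                         // true
}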

View File

@@ -204,22 +204,9 @@ func (d *DamengDB) Exec(query string) (int64, error) {
}
func (d *DamengDB) GetDatabases() ([]string, error) {
// DM: List Users/Schemas
data, _, err := d.Query("SELECT username FROM dba_users")
if err != nil {
// Fallback if dba_users not accessible
data, _, err = d.Query("SELECT username FROM all_users")
if err != nil {
return nil, err
}
}
var dbs []string
for _, row := range data {
if val, ok := row["USERNAME"]; ok {
dbs = append(dbs, fmt.Sprintf("%v", val))
}
}
return dbs, nil
// 达梦在本项目中将 schema/owner 作为“数据库”展示口径。
// 先查当前 schema / 当前用户,再聚合可见用户与 owner,避免权限受限时返回空列表。
return collectDamengDatabaseNames(d.Query)
}
func (d *DamengDB) GetTables(dbName string) ([]string, error) {

View File

@@ -0,0 +1,91 @@
package db
import (
"fmt"
"sort"
"strings"
)
var damengDatabaseQueries = []string{
"SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') AS DATABASE_NAME FROM DUAL",
"SELECT SYS_CONTEXT('USERENV', 'CURRENT_USER') AS DATABASE_NAME FROM DUAL",
"SELECT USERNAME AS DATABASE_NAME FROM USER_USERS",
"SELECT USERNAME AS DATABASE_NAME FROM ALL_USERS ORDER BY USERNAME",
"SELECT USERNAME AS DATABASE_NAME FROM DBA_USERS ORDER BY USERNAME",
"SELECT USERNAME AS DATABASE_NAME FROM SYS.DBA_USERS ORDER BY USERNAME",
"SELECT DISTINCT OWNER AS DATABASE_NAME FROM ALL_OBJECTS ORDER BY OWNER",
"SELECT DISTINCT OWNER AS DATABASE_NAME FROM ALL_TABLES ORDER BY OWNER",
}
type damengQueryFunc func(query string) ([]map[string]interface{}, []string, error)
func collectDamengDatabaseNames(query damengQueryFunc) ([]string, error) {
seen := make(map[string]struct{})
dbs := make([]string, 0, 64)
var lastErr error
for _, q := range damengDatabaseQueries {
data, _, err := query(q)
if err != nil {
lastErr = err
continue
}
for _, row := range data {
name := getDamengRowString(row,
"DATABASE_NAME",
"USERNAME",
"OWNER",
"SCHEMA_NAME",
"CURRENT_SCHEMA",
"CURRENT_USER",
)
if name == "" {
for _, v := range row {
text := strings.TrimSpace(fmt.Sprintf("%v", v))
if text == "" || strings.EqualFold(text, "<nil>") {
continue
}
name = text
break
}
}
if name == "" {
continue
}
key := strings.ToUpper(name)
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
dbs = append(dbs, name)
}
}
if len(dbs) == 0 && lastErr != nil {
return nil, lastErr
}
sort.Slice(dbs, func(i, j int) bool {
return strings.ToUpper(dbs[i]) < strings.ToUpper(dbs[j])
})
return dbs, nil
}
func getDamengRowString(row map[string]interface{}, keys ...string) string {
if len(row) == 0 {
return ""
}
for _, key := range keys {
for k, v := range row {
if !strings.EqualFold(strings.TrimSpace(k), strings.TrimSpace(key)) {
continue
}
text := strings.TrimSpace(fmt.Sprintf("%v", v))
if text == "" || strings.EqualFold(text, "<nil>") {
return ""
}
return text
}
}
return ""
}

View File

@@ -0,0 +1,73 @@
package db
import (
"errors"
"reflect"
"testing"
)
func TestCollectDamengDatabaseNames_UsesCurrentSchemaFallback(t *testing.T) {
t.Parallel()
got, err := collectDamengDatabaseNames(func(query string) ([]map[string]interface{}, []string, error) {
switch query {
case damengDatabaseQueries[0]:
return []map[string]interface{}{{"DATABASE_NAME": "APP_SCHEMA"}}, nil, nil
case damengDatabaseQueries[1]:
return []map[string]interface{}{{"DATABASE_NAME": "app_schema"}}, nil, nil
default:
return nil, nil, errors.New("permission denied")
}
})
if err != nil {
t.Fatalf("collectDamengDatabaseNames 返回错误: %v", err)
}
want := []string{"APP_SCHEMA"}
if !reflect.DeepEqual(got, want) {
t.Fatalf("unexpected database names, got=%v want=%v", got, want)
}
}
func TestCollectDamengDatabaseNames_CollectsOwnersWhenVisible(t *testing.T) {
t.Parallel()
got, err := collectDamengDatabaseNames(func(query string) ([]map[string]interface{}, []string, error) {
switch query {
case damengDatabaseQueries[0], damengDatabaseQueries[1], damengDatabaseQueries[2], damengDatabaseQueries[3], damengDatabaseQueries[4], damengDatabaseQueries[5]:
return []map[string]interface{}{}, nil, nil
case damengDatabaseQueries[6]:
return []map[string]interface{}{{"OWNER": "BIZ"}, {"OWNER": "audit"}}, nil, nil
case damengDatabaseQueries[7]:
return []map[string]interface{}{{"OWNER": "BIZ"}}, nil, nil
default:
return nil, nil, nil
}
})
if err != nil {
t.Fatalf("collectDamengDatabaseNames 返回错误: %v", err)
}
want := []string{"audit", "BIZ"}
if !reflect.DeepEqual(got, want) {
t.Fatalf("unexpected database names, got=%v want=%v", got, want)
}
}
func TestCollectDamengDatabaseNames_ReturnsErrorWhenNoNameResolved(t *testing.T) {
t.Parallel()
expectErr := errors.New("last query failed")
got, err := collectDamengDatabaseNames(func(query string) ([]map[string]interface{}, []string, error) {
if query == damengDatabaseQueries[len(damengDatabaseQueries)-1] {
return nil, nil, expectErr
}
return nil, nil, errors.New("permission denied")
})
if err == nil {
t.Fatalf("期望返回错误,实际 got=%v", got)
}
if !errors.Is(err, expectErr) {
t.Fatalf("错误不符合预期: %v", err)
}
}

View File

@@ -9,7 +9,6 @@ import (
"strings"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/logger"
"GoNavi-Wails/internal/ssh"
"GoNavi-Wails/internal/utils"
@@ -135,26 +134,26 @@ func collectDirosAddresses(config connection.ConnectionConfig) []string {
return result
}
func (d *DirosDB) getDSN(config connection.ConnectionConfig) string {
func (d *DirosDB) getDSN(config connection.ConnectionConfig) (string, error) {
database := config.Database
protocol := "tcp"
address := normalizeMySQLAddress(config.Host, config.Port)
if config.UseSSH {
netName, err := ssh.RegisterSSHNetwork(config.SSH)
if err == nil {
protocol = netName
address = normalizeMySQLAddress(config.Host, config.Port)
} else {
logger.Warnf("注册 Doris SSH 网络失败,将尝试直连:地址=%s:%d 用户=%s原因%v", config.Host, config.Port, config.User, err)
if err != nil {
return "", fmt.Errorf("创建 SSH 隧道失败:%w", err)
}
protocol = netName
}
timeout := getConnectTimeoutSeconds(config)
tlsMode := resolveMySQLTLSMode(config)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
return fmt.Sprintf(
"%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode),
), nil
}
func resolveDirosCredential(config connection.ConnectionConfig, addressIndex int) (string, string) {
@@ -192,7 +191,11 @@ func (d *DirosDB) Connect(config connection.ConnectionConfig) error {
candidateConfig.Port = port
candidateConfig.User, candidateConfig.Password = resolveDirosCredential(runConfig, index)
dsn := d.getDSN(candidateConfig)
dsn, err := d.getDSN(candidateConfig)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s 生成连接串失败: %v", address, err))
continue
}
db, err := sql.Open(dirosDriverName, dsn)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s 打开失败: %v", address, err))

View File

@@ -0,0 +1,74 @@
package db
import (
"debug/pe"
"fmt"
"runtime"
"strings"
)
const (
peMachineI386 uint16 = 0x014c
peMachineAmd64 uint16 = 0x8664
peMachineArm64 uint16 = 0xaa64
)
func windowsMachineLabel(machine uint16) string {
switch machine {
case peMachineI386:
return "windows-386"
case peMachineAmd64:
return "windows-amd64"
case peMachineArm64:
return "windows-arm64"
default:
return fmt.Sprintf("windows-unknown(0x%04x)", machine)
}
}
func expectedWindowsMachineForGoArch(goarch string) (uint16, string, bool) {
switch strings.ToLower(strings.TrimSpace(goarch)) {
case "386":
return peMachineI386, "windows-386", true
case "amd64":
return peMachineAmd64, "windows-amd64", true
case "arm64":
return peMachineArm64, "windows-arm64", true
default:
return 0, "", false
}
}
func validateWindowsExecutableMachine(pathText string) error {
file, err := pe.Open(pathText)
if err != nil {
return fmt.Errorf("无法识别为有效的 Windows 可执行文件:%w", err)
}
defer file.Close()
expectedMachine, expectedLabel, ok := expectedWindowsMachineForGoArch(runtime.GOARCH)
if !ok {
return nil
}
actualMachine := file.FileHeader.Machine
if actualMachine != expectedMachine {
return fmt.Errorf("可执行文件架构不兼容(文件=%s当前进程=%s", windowsMachineLabel(actualMachine), expectedLabel)
}
return nil
}
// ValidateOptionalDriverAgentExecutable 校验可选驱动代理二进制是否可在当前进程中执行。
// 当前主要用于 Windows 下的 PE 架构兼容性校验,避免升级后复用到错误架构的旧代理。
func ValidateOptionalDriverAgentExecutable(driverType string, executablePath string) error {
pathText := strings.TrimSpace(executablePath)
if pathText == "" {
return fmt.Errorf("%s 驱动代理路径为空", driverDisplayName(driverType))
}
if runtime.GOOS != "windows" {
return nil
}
if err := validateWindowsExecutableMachine(pathText); err != nil {
return fmt.Errorf("%s 驱动代理不可用:%w", driverDisplayName(driverType), err)
}
return nil
}
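A runnable sketch of the same idea for experimenting outside the app: open a PE header with debug/pe and print the machine field (0x8664 = amd64, 0xaa64 = arm64). The binary name and CLI shape here are hypothetical.

package main

import (
	"debug/pe"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: pecheck <path-to-exe>")
		return
	}
	f, err := pe.Open(os.Args[1])
	if err != nil {
		fmt.Println("not a valid Windows executable:", err)
		return
	}
	defer f.Close()
	// Machine distinguishes 386/amd64/arm64 builds of the same agent.
	fmt.Printf("machine=0x%04x\n", f.FileHeader.Machine)
}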

View File

@@ -194,6 +194,9 @@ func optionalGoDriverRuntimeReady(driverType string) (bool, string) {
if statErr != nil || info.IsDir() {
return false, fmt.Sprintf("%s 驱动代理缺失,请在驱动管理中重新安装启用", driverDisplayName(normalized))
}
if validateErr := ValidateOptionalDriverAgentExecutable(normalized, executablePath); validateErr != nil {
return false, fmt.Sprintf("%s请在驱动管理中重新安装启用", validateErr.Error())
}
return true, ""
}

View File

@@ -65,11 +65,22 @@ func TestManagedDriverRequiresInstallMarker(t *testing.T) {
if err != nil {
t.Fatalf("解析 mariadb 代理路径失败: %v", err)
}
if err := os.WriteFile(executablePath, []byte("placeholder"), 0o755); err != nil {
t.Fatalf("写入 mariadb 代理占位文件失败: %v", err)
}
if runtime.GOOS == "windows" {
_ = os.Chmod(executablePath, 0o644)
selfPath, selfErr := os.Executable()
if selfErr != nil {
t.Fatalf("获取测试进程路径失败: %v", selfErr)
}
content, readErr := os.ReadFile(selfPath)
if readErr != nil {
t.Fatalf("读取测试进程失败: %v", readErr)
}
if err := os.WriteFile(executablePath, content, 0o755); err != nil {
t.Fatalf("写入 mariadb 代理占位可执行文件失败: %v", err)
}
} else {
if err := os.WriteFile(executablePath, []byte("placeholder"), 0o755); err != nil {
t.Fatalf("写入 mariadb 代理占位文件失败: %v", err)
}
}
supported, reason := DriverRuntimeSupportStatus("mariadb")

View File

@@ -0,0 +1,206 @@
package db
import "strings"
func normalizeKingbaseIdentCommon(raw string) string {
value := strings.TrimSpace(raw)
if value == "" {
return ""
}
// 兼容被多次 JSON 序列化后的转义引号:
// \\\"schema\\\" -> \"schema\" -> "schema"
for i := 0; i < 8; i++ {
next := strings.TrimSpace(value)
next = strings.ReplaceAll(next, `\\\"`, `\"`)
next = strings.ReplaceAll(next, `\"`, `"`)
if next == value {
break
}
value = next
}
value = strings.TrimSpace(value)
stripWrapperOnce := func(text string) string {
t := strings.TrimSpace(text)
if strings.HasPrefix(t, `\`) && len(t) > 1 {
t = strings.TrimSpace(strings.TrimPrefix(t, `\`))
}
if strings.HasSuffix(t, `\`) && len(t) > 1 {
t = strings.TrimSpace(strings.TrimSuffix(t, `\`))
}
if len(t) >= 4 && strings.HasPrefix(t, `\"`) && strings.HasSuffix(t, `\"`) {
return strings.TrimSpace(t[2 : len(t)-2])
}
if len(t) >= 2 && strings.HasPrefix(t, `"`) && strings.HasSuffix(t, `"`) {
return strings.TrimSpace(t[1 : len(t)-1])
}
if len(t) >= 2 && strings.HasPrefix(t, "`") && strings.HasSuffix(t, "`") {
return strings.TrimSpace(t[1 : len(t)-1])
}
if len(t) >= 2 && strings.HasPrefix(t, "[") && strings.HasSuffix(t, "]") {
return strings.TrimSpace(t[1 : len(t)-1])
}
return t
}
for i := 0; i < 8; i++ {
next := stripWrapperOnce(value)
if next == value {
break
}
value = next
}
value = strings.TrimSpace(value)
// 兼容错误的二次引用与残留反斜杠。
value = strings.ReplaceAll(value, `\"`, `"`)
value = strings.ReplaceAll(value, `""`, "")
value = strings.TrimSpace(value)
for i := 0; i < 8; i++ {
next := strings.TrimSpace(value)
changed := false
if strings.HasPrefix(next, `\`) && len(next) > 1 {
next = strings.TrimSpace(strings.TrimPrefix(next, `\`))
changed = true
}
if strings.HasSuffix(next, `\`) && len(next) > 1 {
next = strings.TrimSpace(strings.TrimSuffix(next, `\`))
changed = true
}
if !changed || next == value {
break
}
value = next
}
return strings.TrimSpace(value)
}
func splitKingbaseQualifiedNameCommon(raw string) (schema string, table string) {
text := strings.TrimSpace(raw)
if text == "" {
return "", ""
}
sep := findKingbaseQualifiedSeparator(text)
if sep < 0 {
return "", normalizeKingbaseIdentCommon(text)
}
schemaPart := normalizeKingbaseIdentCommon(text[:sep])
tablePart := normalizeKingbaseIdentCommon(text[sep+1:])
if tablePart == "" {
if schemaPart == "" {
return "", normalizeKingbaseIdentCommon(text)
}
return "", schemaPart
}
if schemaPart == "" {
return "", tablePart
}
return schemaPart, tablePart
}
func findKingbaseQualifiedSeparator(raw string) int {
inDouble := false
inBacktick := false
inBracket := false
escaped := false
for i := 0; i < len(raw); i++ {
ch := raw[i]
if escaped {
escaped = false
continue
}
if ch == '\\' {
escaped = true
continue
}
if inDouble {
if ch == '"' {
// SQL 双引号转义:"" 代表字面量 "
if i+1 < len(raw) && raw[i+1] == '"' {
i++
continue
}
inDouble = false
}
continue
}
if inBacktick {
if ch == '`' {
inBacktick = false
}
continue
}
if inBracket {
if ch == ']' {
inBracket = false
}
continue
}
switch ch {
case '"':
inDouble = true
case '`':
inBacktick = true
case '[':
inBracket = true
case '.':
return i
}
}
return -1
}
// buildKingbaseSearchPathCommon 统一构建 Kingbase search_path。
// 返回 search_path SQL 片段和规范化后的 schema 列表(用于调试/扩展)。
func buildKingbaseSearchPathCommon(rawSchemas []string) (string, []string) {
if len(rawSchemas) == 0 {
return "", nil
}
seen := make(map[string]struct{}, len(rawSchemas)+1)
quotedParts := make([]string, 0, len(rawSchemas)+1)
normalizedSchemas := make([]string, 0, len(rawSchemas)+1)
appendSchema := func(raw string) {
cleaned := normalizeKingbaseIdentCommon(raw)
if cleaned == "" {
return
}
if strings.EqualFold(cleaned, "public") {
cleaned = "public"
}
key := strings.ToLower(cleaned)
if _, ok := seen[key]; ok {
return
}
seen[key] = struct{}{}
normalizedSchemas = append(normalizedSchemas, cleaned)
escaped := strings.ReplaceAll(cleaned, `"`, `""`)
quotedParts = append(quotedParts, `"`+escaped+`"`)
}
for _, raw := range rawSchemas {
appendSchema(raw)
}
if _, ok := seen["public"]; !ok {
appendSchema("public")
}
if len(quotedParts) == 0 {
return "", normalizedSchemas
}
return strings.Join(quotedParts, ", "), normalizedSchemas
}
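The normalization above defends against the bug class named in the commit message: quoting a schema name that already carries quotes yields an invalid ""schema"" entry. A minimal reproduction of the naive pattern (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// naiveQuote is the buggy pattern: escape + wrap without normalizing first,
// so an already-quoted input gets re-escaped.
func naiveQuote(s string) string {
	return `"` + strings.ReplaceAll(s, `"`, `""`) + `"`
}

func main() {
	fmt.Println(naiveQuote("ldf_server"))   // "ldf_server" (fine)
	fmt.Println(naiveQuote(`"ldf_server"`)) // """ldf_server""" (broken search_path entry)
	// buildKingbaseSearchPathCommon avoids this by stripping wrappers first.
}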

View File

@@ -0,0 +1,92 @@
package db
import "testing"
func TestNormalizeKingbaseIdentCommon(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
{name: "plain", in: "ldf_server", want: "ldf_server"},
{name: "quoted", in: `"ldf_server"`, want: "ldf_server"},
{name: "escaped quoted", in: `\"ldf_server\"`, want: "ldf_server"},
{name: "double escaped quoted", in: `\\\"ldf_server\\\"`, want: "ldf_server"},
{name: "double quoted", in: `""ldf_server""`, want: "ldf_server"},
{name: "backtick quoted", in: "`ldf_server`", want: "ldf_server"},
{name: "bracket quoted", in: "[ldf_server]", want: "ldf_server"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := normalizeKingbaseIdentCommon(tt.in); got != tt.want {
t.Fatalf("normalizeKingbaseIdentCommon(%q)=%q,want=%q", tt.in, got, tt.want)
}
})
}
}
func TestSplitKingbaseQualifiedNameCommon(t *testing.T) {
tests := []struct {
name string
in string
wantSchema string
wantTable string
}{
{name: "plain", in: "ldf_server.andon_events", wantSchema: "ldf_server", wantTable: "andon_events"},
{name: "quoted", in: `"ldf_server"."andon_events"`, wantSchema: "ldf_server", wantTable: "andon_events"},
{name: "escaped quoted", in: `\"ldf_server\".\"andon_events\"`, wantSchema: "ldf_server", wantTable: "andon_events"},
{name: "double escaped quoted", in: `\\\"ldf_server\\\".\\\"andon_events\\\"`, wantSchema: "ldf_server", wantTable: "andon_events"},
{name: "space around dot", in: ` "ldf_server" . "andon_events" `, wantSchema: "ldf_server", wantTable: "andon_events"},
{name: "table only", in: "andon_events", wantSchema: "", wantTable: "andon_events"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotSchema, gotTable := splitKingbaseQualifiedNameCommon(tt.in)
if gotSchema != tt.wantSchema || gotTable != tt.wantTable {
t.Fatalf("splitKingbaseQualifiedNameCommon(%q)=(%q,%q),want=(%q,%q)", tt.in, gotSchema, gotTable, tt.wantSchema, tt.wantTable)
}
})
}
}
func TestBuildKingbaseSearchPathCommon(t *testing.T) {
tests := []struct {
name string
in []string
want string
wantLen int
}{
{
name: "normal schemas",
in: []string{"ldf_server", "public"},
want: `"ldf_server", "public"`,
wantLen: 2,
},
{
name: "quoted and escaped schemas should not be double quoted",
in: []string{`"ldf_server"`, `""bcs_barcode""`, `\"public\"`},
want: `"ldf_server", "bcs_barcode", "public"`,
wantLen: 3,
},
{
name: "dedupe ignoring case and keep public fallback",
in: []string{"LDF_SERVER", "ldf_server", "PUBLIC"},
want: `"LDF_SERVER", "public"`,
wantLen: 2,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, parts := buildKingbaseSearchPathCommon(tt.in)
if got != tt.want {
t.Fatalf("buildKingbaseSearchPathCommon(%v)=%q,want=%q", tt.in, got, tt.want)
}
if len(parts) != tt.wantLen {
t.Fatalf("buildKingbaseSearchPathCommon(%v) parts=%v, len=%d, wantLen=%d", tt.in, parts, len(parts), tt.wantLen)
}
})
}
}

View File

@@ -7,6 +7,7 @@ import (
"database/sql"
"fmt"
"net"
"regexp"
"strconv"
"strings"
"time"
@@ -136,11 +137,83 @@ func (k *KingbaseDB) Connect(config connection.ConnectionConfig) error {
if idx > 0 {
logger.Warnf("人大金仓 SSL 优先连接失败,已回退至明文连接")
}
// 获取 schema 列表以重构带有 search_path 的连接池
searchPathStr := k.getSearchPathStr()
if searchPathStr != "" {
// 将 search_path 参数拼入 DSN
finalDSN := dsn + " search_path=" + quoteConnValue(searchPathStr)
if finalDB, err := sql.Open("kingbase", finalDSN); err == nil {
k.pingTimeout = getConnectTimeout(attempt)
finalDB.SetConnMaxLifetime(5 * time.Minute)
// 临时将 k.conn 指向 finalDB 来做 ping 测试
oldConn := k.conn
k.conn = finalDB
if err := k.Ping(); err == nil {
// 成功使用带 search_path 的连接池
_ = oldConn.Close()
logger.Infof("人大金仓已配置连接级 search_path%s", searchPathStr)
} else {
_ = finalDB.Close()
k.conn = oldConn
}
}
}
if searchPathStr != "" {
timeout := k.pingTimeout
if timeout <= 0 {
timeout = 5 * time.Second
}
ctx, cancel := utils.ContextWithTimeout(timeout)
defer cancel()
if _, err := k.conn.ExecContext(ctx, fmt.Sprintf("SET search_path TO %s", searchPathStr)); err != nil {
logger.Warnf("人大金仓显式设置 search_path 失败:%v", err)
} else {
logger.Infof("人大金仓已设置默认 search_path%s", searchPathStr)
}
}
return nil
}
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, ""))
}
// getSearchPathStr 查询当前数据库中所有用户 schema配置 DSN 的 search_path。
// KingBase 默认 search_path 为 "$user", public,对于自定义 schema 下的表不可见。
func (k *KingbaseDB) getSearchPathStr() string {
if k.conn == nil {
return ""
}
query := `SELECT nspname FROM pg_namespace
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND nspname NOT LIKE 'pg_%'
ORDER BY nspname`
rows, err := k.conn.Query(query)
if err != nil {
logger.Warnf("人大金仓查询用户 schema 失败,跳过 search_path 设置:%v", err)
return ""
}
defer rows.Close()
var rawSchemas []string
for rows.Next() {
var name string
if err := rows.Scan(&name); err != nil {
continue
}
name = strings.TrimSpace(name)
if name != "" {
rawSchemas = append(rawSchemas, name)
}
}
searchPath, _ := buildKingbaseSearchPathCommon(rawSchemas)
return searchPath
}
func (k *KingbaseDB) Close() error {
// Close SSH forwarder first if exists
if k.forwarder != nil {
@@ -305,10 +378,30 @@ func (k *KingbaseDB) GetColumns(dbName, tableName string) ([]connection.ColumnDe
return strings.ReplaceAll(s, "'", "''")
}
query := fmt.Sprintf(`SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = '%s' AND table_name = '%s'
ORDER BY ordinal_position`, esc(schema), esc(table))
query := fmt.Sprintf(`
SELECT
a.attname AS column_name,
pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,
CASE WHEN a.attnotnull THEN 'NO' ELSE 'YES' END AS is_nullable,
pg_get_expr(ad.adbin, ad.adrelid) AS column_default,
col_description(a.attrelid, a.attnum) AS comment,
CASE WHEN pk.attname IS NOT NULL THEN 'PRI' ELSE '' END AS column_key
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
LEFT JOIN pg_attrdef ad ON ad.adrelid = c.oid AND ad.adnum = a.attnum
LEFT JOIN (
SELECT i.indrelid, a3.attname
FROM pg_index i
JOIN pg_attribute a3 ON a3.attrelid = i.indrelid AND a3.attnum = ANY(i.indkey)
WHERE i.indisprimary
) pk ON pk.indrelid = c.oid AND pk.attname = a.attname
WHERE c.relkind IN ('r', 'p')
AND n.nspname = '%s'
AND c.relname = '%s'
AND a.attnum > 0
AND NOT a.attisdropped
ORDER BY a.attnum`, esc(schema), esc(table))
data, _, err := k.Query(query)
if err != nil {
@@ -321,11 +414,21 @@ func (k *KingbaseDB) GetColumns(dbName, tableName string) ([]connection.ColumnDe
Name: fmt.Sprintf("%v", row["column_name"]),
Type: fmt.Sprintf("%v", row["data_type"]),
Nullable: fmt.Sprintf("%v", row["is_nullable"]),
Key: fmt.Sprintf("%v", row["column_key"]),
Extra: "",
Comment: "",
}
if row["column_default"] != nil {
def := fmt.Sprintf("%v", row["column_default"])
col.Default = &def
if strings.HasPrefix(strings.ToLower(strings.TrimSpace(def)), "nextval(") {
col.Extra = "auto_increment"
}
}
if v, ok := row["comment"]; ok && v != nil {
col.Comment = fmt.Sprintf("%v", v)
}
columns = append(columns, col)
@@ -347,10 +450,30 @@ func (k *KingbaseDB) getColumnsWithCurrentSchema(tableName string) ([]connection
}
// 使用 current_schema() 获取当前schema
query := fmt.Sprintf(`SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = current_schema() AND table_name = '%s'
ORDER BY ordinal_position`, esc(table))
query := fmt.Sprintf(`
SELECT
a.attname AS column_name,
pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,
CASE WHEN a.attnotnull THEN 'NO' ELSE 'YES' END AS is_nullable,
pg_get_expr(ad.adbin, ad.adrelid) AS column_default,
col_description(a.attrelid, a.attnum) AS comment,
CASE WHEN pk.attname IS NOT NULL THEN 'PRI' ELSE '' END AS column_key
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
LEFT JOIN pg_attrdef ad ON ad.adrelid = c.oid AND ad.adnum = a.attnum
LEFT JOIN (
SELECT i.indrelid, a3.attname
FROM pg_index i
JOIN pg_attribute a3 ON a3.attrelid = i.indrelid AND a3.attnum = ANY(i.indkey)
WHERE i.indisprimary
) pk ON pk.indrelid = c.oid AND pk.attname = a.attname
WHERE c.relkind IN ('r', 'p')
AND n.nspname = current_schema()
AND c.relname = '%s'
AND a.attnum > 0
AND NOT a.attisdropped
ORDER BY a.attnum`, esc(table))
data, _, err := k.Query(query)
if err != nil {
@@ -363,11 +486,21 @@ func (k *KingbaseDB) getColumnsWithCurrentSchema(tableName string) ([]connection
Name: fmt.Sprintf("%v", row["column_name"]),
Type: fmt.Sprintf("%v", row["data_type"]),
Nullable: fmt.Sprintf("%v", row["is_nullable"]),
Key: fmt.Sprintf("%v", row["column_key"]),
Extra: "",
Comment: "",
}
if row["column_default"] != nil {
def := fmt.Sprintf("%v", row["column_default"])
col.Default = &def
if strings.HasPrefix(strings.ToLower(strings.TrimSpace(def)), "nextval(") {
col.Extra = "auto_increment"
}
}
if v, ok := row["comment"]; ok && v != nil {
col.Comment = fmt.Sprintf("%v", v)
}
columns = append(columns, col)
@@ -623,28 +756,16 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
}
defer tx.Rollback()
quoteIdent := func(name string) string {
n := strings.TrimSpace(name)
n = strings.Trim(n, "\"")
n = strings.ReplaceAll(n, "\"", "\"\"")
if n == "" {
return "\"\""
}
return `"` + n + `"`
}
schema := ""
table := strings.TrimSpace(tableName)
if parts := strings.SplitN(table, ".", 2); len(parts) == 2 {
schema = strings.TrimSpace(parts[0])
table = strings.TrimSpace(parts[1])
schema, table := splitKingbaseQualifiedTable(tableName)
if table == "" {
return fmt.Errorf("table name required")
}
qualifiedTable := ""
if schema != "" {
qualifiedTable = fmt.Sprintf("%s.%s", quoteIdent(schema), quoteIdent(table))
qualifiedTable = fmt.Sprintf("%s.%s", quoteKingbaseIdent(schema), quoteKingbaseIdent(table))
} else {
qualifiedTable = quoteIdent(table)
qualifiedTable = quoteKingbaseIdent(table)
}
// 1. Deletes
@@ -654,7 +775,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
idx := 0
for k, v := range pk {
idx++
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteIdent(k), idx))
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteKingbaseIdent(k), idx))
args = append(args, v)
}
if len(wheres) == 0 {
@@ -662,7 +783,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
}
query := fmt.Sprintf("DELETE FROM %s WHERE %s", qualifiedTable, strings.Join(wheres, " AND "))
if _, err := tx.Exec(query, args...); err != nil {
return fmt.Errorf("delete error: %v", err)
return fmt.Errorf("delete error: %v; sql=%s", err, query)
}
}
@@ -674,7 +795,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
for k, v := range update.Values {
idx++
sets = append(sets, fmt.Sprintf("%s = $%d", quoteIdent(k), idx))
sets = append(sets, fmt.Sprintf("%s = $%d", quoteKingbaseIdent(k), idx))
args = append(args, v)
}
@@ -685,7 +806,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
var wheres []string
for k, v := range update.Keys {
idx++
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteIdent(k), idx))
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteKingbaseIdent(k), idx))
args = append(args, v)
}
@@ -695,7 +816,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
query := fmt.Sprintf("UPDATE %s SET %s WHERE %s", qualifiedTable, strings.Join(sets, ", "), strings.Join(wheres, " AND "))
if _, err := tx.Exec(query, args...); err != nil {
return fmt.Errorf("update error: %v", err)
return fmt.Errorf("update error: %v; sql=%s", err, query)
}
}
@@ -708,7 +829,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
for k, v := range row {
idx++
cols = append(cols, quoteIdent(k))
cols = append(cols, quoteKingbaseIdent(k))
placeholders = append(placeholders, fmt.Sprintf("$%d", idx))
args = append(args, v)
}
@@ -719,13 +840,73 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
query := fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s)", qualifiedTable, strings.Join(cols, ", "), strings.Join(placeholders, ", "))
if _, err := tx.Exec(query, args...); err != nil {
return fmt.Errorf("insert error: %v", err)
return fmt.Errorf("insert error: %v; sql=%s", err, query)
}
}
return tx.Commit()
}
func normalizeKingbaseIdentifier(raw string) string {
return normalizeKingbaseIdentCommon(raw)
}
// kingbaseIdentNeedsQuote 判断标识符是否需要双引号包裹。
// 与前端 sql.ts 中 needsQuote 逻辑保持一致。
func kingbaseIdentNeedsQuote(ident string) bool {
if ident == "" {
return false
}
// 不是合法裸标识符格式(必须以字母或下划线开头,仅含字母、数字、下划线)
if matched, _ := regexp.MatchString(`^[a-zA-Z_][a-zA-Z0-9_]*$`, ident); !matched {
return true
}
// 包含大写字母时需要引号保护(KingbaseES/PostgreSQL 默认将未加引号的标识符折叠为小写)
for _, r := range ident {
if r >= 'A' && r <= 'Z' {
return true
}
}
// 是 SQL 保留字
return isKingbaseReservedWord(ident)
}
// isKingbaseReservedWord 检查是否为常见 SQL 保留字(简化版,与前端保持一致)。
func isKingbaseReservedWord(ident string) bool {
switch strings.ToLower(ident) {
case "select", "from", "where", "table", "index", "user", "order", "group", "by",
"limit", "offset", "and", "or", "not", "null", "true", "false", "key",
"primary", "foreign", "references", "default", "constraint",
"create", "drop", "alter", "insert", "update", "delete", "set", "values", "into",
"join", "left", "right", "inner", "outer", "on", "as", "is", "in", "like",
"between", "case", "when", "then", "else", "end", "having", "distinct",
"all", "any", "exists", "union", "except", "intersect",
"column", "check", "unique", "with", "grant", "revoke", "trigger",
"begin", "commit", "rollback", "schema", "database", "view", "function",
"procedure", "sequence", "type", "domain", "role", "session", "current",
"authorization", "cross", "full", "natural", "some", "cast", "fetch",
"for", "to", "do", "if", "return", "returns", "declare", "cursor", "server", "owner":
return true
}
return false
}
func quoteKingbaseIdent(name string) string {
n := normalizeKingbaseIdentifier(name)
if n == "" {
return "\"\""
}
if !kingbaseIdentNeedsQuote(n) {
return n
}
n = strings.ReplaceAll(n, `"`, `""`)
return `"` + n + `"`
}
func splitKingbaseQualifiedTable(tableName string) (schema string, table string) {
return splitKingbaseQualifiedNameCommon(tableName)
}
func (k *KingbaseDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
// In this project's semantics, dbName denotes the database; the schema comes from table_schema. Return columns from all user schemas for query hints.
query := `

View File

@@ -0,0 +1,117 @@
//go:build gonavi_full_drivers || gonavi_kingbase_driver
package db
import "testing"
func TestNormalizeKingbaseIdentifier(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
{name: "plain", in: "ldf_server", want: "ldf_server"},
{name: "quoted", in: `"ldf_server"`, want: "ldf_server"},
{name: "double quoted", in: `""ldf_server""`, want: "ldf_server"},
{name: "quad quoted", in: `""""ldf_server""""`, want: "ldf_server"},
{name: "escaped quoted", in: `\"ldf_server\"`, want: "ldf_server"},
{name: "double escaped quoted", in: `\\\"ldf_server\\\"`, want: "ldf_server"},
{name: "backtick quoted", in: "`ldf_server`", want: "ldf_server"},
{name: "bracket quoted", in: "[ldf_server]", want: "ldf_server"},
{name: "embedded double quotes", in: `ldf""server`, want: "ldfserver"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := normalizeKingbaseIdentifier(tt.in); got != tt.want {
t.Fatalf("normalizeKingbaseIdentifier(%q) = %q, want %q", tt.in, got, tt.want)
}
})
}
}
func TestQuoteKingbaseIdent(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
// Pure lowercase + underscores: no quoting
{name: "plain lowercase", in: "ldf_server", want: "ldf_server"},
{name: "plain lowercase 2", in: "bcs_barcode", want: "bcs_barcode"},
{name: "double quoted input", in: `""ldf_server""`, want: "ldf_server"},
{name: "escaped quoted input", in: `\"ldf_server\"`, want: "ldf_server"},
// Contains uppercase letters: quoted
{name: "uppercase", in: "LDF_Server", want: `"LDF_Server"`},
{name: "mixed case", in: "myTable", want: `"myTable"`},
// SQL reserved words: quoted
{name: "reserved word order", in: "order", want: `"order"`},
{name: "reserved word user", in: "user", want: `"user"`},
{name: "reserved word table", in: "table", want: `"table"`},
{name: "reserved word select", in: "select", want: `"select"`},
// Contains special characters: quoted
{name: "with hyphen", in: "my-table", want: `"my-table"`},
{name: "with space", in: "my table", want: `"my table"`},
{name: "with embedded quote", in: `ab"cd`, want: `"ab""cd"`},
// Empty value
{name: "empty", in: "", want: `""`},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := quoteKingbaseIdent(tt.in); got != tt.want {
t.Fatalf("quoteKingbaseIdent(%q) = %q, want %q", tt.in, got, tt.want)
}
})
}
}
func TestKingbaseIdentNeedsQuote(t *testing.T) {
tests := []struct {
name string
in string
want bool
}{
{name: "plain lowercase", in: "ldf_server", want: false},
{name: "starts with underscore", in: "_col", want: false},
{name: "with digits", in: "col123", want: false},
{name: "uppercase", in: "MyTable", want: true},
{name: "reserved word", in: "order", want: true},
{name: "with hyphen", in: "my-col", want: true},
{name: "starts with digit", in: "123col", want: true},
{name: "empty", in: "", want: false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := kingbaseIdentNeedsQuote(tt.in); got != tt.want {
t.Fatalf("kingbaseIdentNeedsQuote(%q) = %v, want %v", tt.in, got, tt.want)
}
})
}
}
func TestSplitKingbaseQualifiedTable(t *testing.T) {
tests := []struct {
name string
in string
wantSchema string
wantTable string
}{
{name: "plain qualified", in: "ldf_server.t_user", wantSchema: "ldf_server", wantTable: "t_user"},
{name: "double quoted qualified", in: `""ldf_server"".""t_user""`, wantSchema: "ldf_server", wantTable: "t_user"},
{name: "escaped qualified", in: `\"ldf_server\".\"t_user\"`, wantSchema: "ldf_server", wantTable: "t_user"},
{name: "double escaped qualified", in: `\\\"ldf_server\\\".\\\"t_user\\\"`, wantSchema: "ldf_server", wantTable: "t_user"},
{name: "bracket qualified", in: "[ldf_server].[t_user]", wantSchema: "ldf_server", wantTable: "t_user"},
{name: "table only", in: `""t_user""`, wantSchema: "", wantTable: "t_user"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotSchema, gotTable := splitKingbaseQualifiedTable(tt.in)
if gotSchema != tt.wantSchema || gotTable != tt.wantTable {
t.Fatalf("splitKingbaseQualifiedTable(%q) = (%q, %q), want (%q, %q)", tt.in, gotSchema, gotTable, tt.wantSchema, tt.wantTable)
}
})
}
}

View File

@@ -11,7 +11,6 @@ import (
"time"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/logger"
"GoNavi-Wails/internal/ssh"
"GoNavi-Wails/internal/utils"
@@ -25,30 +24,33 @@ type MariaDB struct {
pingTimeout time.Duration
}
func (m *MariaDB) getDSN(config connection.ConnectionConfig) string {
func (m *MariaDB) getDSN(config connection.ConnectionConfig) (string, error) {
database := config.Database
protocol := "tcp"
address := fmt.Sprintf("%s:%d", config.Host, config.Port)
if config.UseSSH {
netName, err := ssh.RegisterSSHNetwork(config.SSH)
if err == nil {
protocol = netName
address = fmt.Sprintf("%s:%d", config.Host, config.Port)
} else {
logger.Warnf("注册 SSH 网络失败,将尝试直连:地址=%s:%d 用户=%s原因%v", config.Host, config.Port, config.User, err)
if err != nil {
return "", fmt.Errorf("创建 SSH 隧道失败:%w", err)
}
protocol = netName
}
timeout := getConnectTimeoutSeconds(config)
tlsMode := resolveMySQLTLSMode(config)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
return fmt.Sprintf(
"%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode),
), nil
}
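For reference, the protocol segment of this DSN names a custom dialer registered with go-sql-driver/mysql; the real code routes it through the SSH client. A sketch with placeholder values (the plain dialer body here is an illustrative stand-in):

mysql.RegisterDialContext("ssh_example", func(ctx context.Context, addr string) (net.Conn, error) {
    var d net.Dialer // the real code dials through the SSH tunnel instead
    return d.DialContext(ctx, "tcp", addr)
})
dsn := "root:secret@ssh_example(127.0.0.1:3306)/appdb" // protocol segment selects the registered dialer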
func (m *MariaDB) Connect(config connection.ConnectionConfig) error {
dsn := m.getDSN(config)
dsn, err := m.getDSN(config)
if err != nil {
return err
}
db, err := sql.Open("mysql", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
@@ -250,12 +252,22 @@ func (m *MariaDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefini
}
}
subPart := 0
if val, ok := row["Sub_part"]; ok && val != nil {
if f, ok := val.(float64); ok {
subPart = int(f)
} else if i, ok := val.(int64); ok {
subPart = int(i)
}
}
idx := connection.IndexDefinition{
Name: fmt.Sprintf("%v", row["Key_name"]),
ColumnName: fmt.Sprintf("%v", row["Column_name"]),
NonUnique: nonUnique,
SeqInIndex: seq,
IndexType: fmt.Sprintf("%v", row["Index_type"]),
SubPart: subPart,
}
indexes = append(indexes, idx)
}
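Sub_part shows up as float64 when the row came through JSON decoding and as int64 from the native driver, hence the two-armed type switch above. A tiny helper could fold both arms (a sketch, not part of this diff):

func toDriverInt(v interface{}) int {
    switch n := v.(type) {
    case float64: // JSON-decoded rows
        return int(n)
    case int64: // native driver values
        return int(n)
    default:
        return 0
    }
}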
@@ -323,7 +335,7 @@ func (m *MariaDB) ApplyChanges(tableName string, changes connection.ChangeSet) e
var args []interface{}
for k, v := range pk {
wheres = append(wheres, fmt.Sprintf("`%s` = ?", k))
args = append(args, normalizeMySQLDateTimeValue(v))
args = append(args, normalizeMySQLComplexValue(normalizeMySQLDateTimeValue(v)))
}
if len(wheres) == 0 {
continue
@@ -341,7 +353,7 @@ func (m *MariaDB) ApplyChanges(tableName string, changes connection.ChangeSet) e
for k, v := range update.Values {
sets = append(sets, fmt.Sprintf("`%s` = ?", k))
args = append(args, normalizeMySQLDateTimeValue(v))
args = append(args, normalizeMySQLComplexValue(normalizeMySQLDateTimeValue(v)))
}
if len(sets) == 0 {
@@ -351,7 +363,7 @@ func (m *MariaDB) ApplyChanges(tableName string, changes connection.ChangeSet) e
var wheres []string
for k, v := range update.Keys {
wheres = append(wheres, fmt.Sprintf("`%s` = ?", k))
args = append(args, normalizeMySQLDateTimeValue(v))
args = append(args, normalizeMySQLComplexValue(normalizeMySQLDateTimeValue(v)))
}
if len(wheres) == 0 {
@@ -373,7 +385,7 @@ func (m *MariaDB) ApplyChanges(tableName string, changes connection.ChangeSet) e
for k, v := range row {
cols = append(cols, fmt.Sprintf("`%s`", k))
placeholders = append(placeholders, "?")
args = append(args, normalizeMySQLDateTimeValue(v))
args = append(args, normalizeMySQLComplexValue(normalizeMySQLDateTimeValue(v)))
}
if len(cols) == 0 {

View File

@@ -151,10 +151,14 @@ func applyMongoURI(config connection.ConnectionConfig) connection.ConnectionConf
}
}
if len(config.Hosts) == 0 && len(hostsFromURI) > 0 {
explicitHost := strings.TrimSpace(config.Host) != ""
explicitHosts := len(config.Hosts) > 0
// Explicitly supplied host/hosts take precedence over the URI, so a form-filled host is not overwritten by localhost from the URI.
if !explicitHost && !explicitHosts && len(hostsFromURI) > 0 {
config.Hosts = hostsFromURI
}
if strings.TrimSpace(config.Host) == "" && len(hostsFromURI) > 0 {
if !explicitHost && !explicitHosts && len(hostsFromURI) > 0 {
host, port, ok := parseHostPortWithDefault(hostsFromURI[0], defaultPort)
if ok {
config.Host = host
@@ -251,6 +255,11 @@ func (m *MongoDB) getURI(config connection.ConnectionConfig) string {
params.Set("authMechanism", authMechanism)
}
// Standalone mode with no replica set name: enable directConnection so the driver does not auto-follow replica set member discovery
if strings.TrimSpace(config.Topology) != "replica" && strings.TrimSpace(config.ReplicaSet) == "" && !config.MongoSRV {
params.Set("directConnection", "true")
}
if encoded := params.Encode(); encoded != "" {
uri += "?" + encoded
}
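With that branch taken, a standalone target ends up with directConnection appended to the query string. Roughly, using illustrative values:

params := url.Values{}
params.Set("directConnection", "true")
uri := "mongodb://10.10.10.10:27017/admin?" + params.Encode()
// mongodb://10.10.10.10:27017/admin?directConnection=true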
@@ -276,9 +285,44 @@ func buildMongoAuthAttempts(config connection.ConnectionConfig) []connection.Con
return attempts
}
func mongoURIForcesTLS(uriText string) bool {
trimmed := strings.TrimSpace(uriText)
if trimmed == "" {
return false
}
parsed, err := url.Parse(trimmed)
if err != nil {
return false
}
query := parsed.Query()
for _, key := range []string{"tls", "ssl"} {
value := strings.ToLower(strings.TrimSpace(query.Get(key)))
switch value {
case "1", "true", "t", "yes", "y", "required":
return true
}
}
return false
}
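mongoURIForcesTLS only inspects the query string, so the common spellings are treated uniformly. A quick sanity check within this package, with hypothetical URIs:

for _, uri := range []string{
    "mongodb://h:27017/?tls=true",     // true
    "mongodb://h:27017/?ssl=required", // true
    "mongodb://h:27017/?tls=false",    // false
    "mongodb://h:27017/",              // false
} {
    fmt.Println(uri, mongoURIForcesTLS(uri))
}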
func mongoAttemptSSLLabel(config connection.ConnectionConfig, fallbackToPlain bool) string {
if fallbackToPlain {
return "明文回退"
}
if mongoURIForcesTLS(config.URI) {
return "SSL"
}
enabled, _ := resolveMongoTLSSettings(config)
if enabled {
return "SSL"
}
return "明文"
}
func (m *MongoDB) Connect(config connection.ConnectionConfig) error {
runConfig := applyMongoURI(config)
connectConfig := runConfig
sshRouteHint := ""
if runConfig.UseSSH && runConfig.MongoSRV {
return fmt.Errorf("MongoDB SRV 记录模式暂不支持 SSH 隧道")
@@ -319,6 +363,7 @@ func (m *MongoDB) Connect(config connection.ConnectionConfig) error {
localConfig.URI = ""
localConfig.Hosts = []string{normalizeMongoAddress(host, port)}
connectConfig = localConfig
sshRouteHint = fmt.Sprintf("SSH隧道 %s -> %s:%d", forwarder.LocalAddr, targetHost, targetPort)
logger.Infof("MongoDB 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, targetHost, targetPort)
}
@@ -332,20 +377,32 @@ func (m *MongoDB) Connect(config connection.ConnectionConfig) error {
if shouldTrySSLPreferredFallback(connectConfig) {
sslAttempts = append(sslAttempts, withSSLDisabled(connectConfig))
}
totalAttempts := 0
for _, attemptConfig := range sslAttempts {
totalAttempts += len(buildMongoAuthAttempts(attemptConfig))
}
attemptNo := 0
var errorDetails []string
for sslIndex, sslConfig := range sslAttempts {
sslLabel := "SSL"
if sslIndex > 0 {
sslLabel = "明文回退"
}
sslLabel := mongoAttemptSSLLabel(sslConfig, sslIndex > 0)
attemptConfigs := buildMongoAuthAttempts(sslConfig)
for index, attemptConfig := range attemptConfigs {
attemptNo++
authLabel := "主库凭据"
if index > 0 {
authLabel = "从库凭据"
}
targets := collectMongoSeeds(attemptConfig)
if len(targets) == 0 {
targets = append(targets, normalizeMongoAddress(attemptConfig.Host, attemptConfig.Port))
}
attemptStarted := time.Now()
logger.Infof(
"MongoDB 连接尝试:%d/%d 模式=%s 凭据=%s 目标=%s 代理=%t",
attemptNo, totalAttempts, sslLabel, authLabel, strings.Join(targets, ","), attemptConfig.UseProxy,
)
if sslIndex > 0 {
attemptConfig.URI = ""
@@ -364,7 +421,13 @@ func (m *MongoDB) Connect(config connection.ConnectionConfig) error {
}
client, err := mongo.Connect(clientOpts)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s %s连接失败: %v", sslLabel, authLabel, err))
logger.Warnf("MongoDB 连接尝试失败:%d/%d 模式=%s 凭据=%s 耗时=%s 错误=%v",
attemptNo, totalAttempts, sslLabel, authLabel, time.Since(attemptStarted).Round(time.Millisecond), err)
detail := fmt.Sprintf("%s %s连接失败: %v", sslLabel, authLabel, err)
if sshRouteHint != "" {
detail = fmt.Sprintf("%s%s", detail, sshRouteHint)
}
errorDetails = append(errorDetails, detail)
continue
}
@@ -374,9 +437,17 @@ func (m *MongoDB) Connect(config connection.ConnectionConfig) error {
_ = client.Disconnect(ctx)
cancel()
m.client = nil
errorDetails = append(errorDetails, fmt.Sprintf("%s %s验证失败: %v", sslLabel, authLabel, err))
logger.Warnf("MongoDB 连接尝试验证失败:%d/%d 模式=%s 凭据=%s 耗时=%s 错误=%v",
attemptNo, totalAttempts, sslLabel, authLabel, time.Since(attemptStarted).Round(time.Millisecond), err)
detail := fmt.Sprintf("%s %s验证失败: %v", sslLabel, authLabel, err)
if sshRouteHint != "" {
detail = fmt.Sprintf("%s%s", detail, sshRouteHint)
}
errorDetails = append(errorDetails, detail)
continue
}
logger.Infof("MongoDB 连接尝试成功:%d/%d 模式=%s 凭据=%s 耗时=%s",
attemptNo, totalAttempts, sslLabel, authLabel, time.Since(attemptStarted).Round(time.Millisecond))
if sslIndex > 0 {
logger.Warnf("MongoDB SSL 优先连接失败,已回退至明文连接")
}

View File

@@ -0,0 +1,39 @@
//go:build gonavi_full_drivers || gonavi_mongodb_driver
package db
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestApplyMongoURI_ExplicitHostDoesNotAdoptURIHosts(t *testing.T) {
config := connection.ConnectionConfig{
Host: "10.10.10.10",
Port: 27017,
URI: "mongodb://localhost:27017/admin",
}
got := applyMongoURI(config)
if got.Host != "10.10.10.10" {
t.Fatalf("expected host to remain explicit, got %q", got.Host)
}
if len(got.Hosts) != 0 {
t.Fatalf("expected hosts to remain empty when explicit host exists, got %v", got.Hosts)
}
}
func TestApplyMongoURI_ExplicitHostsDoesNotAdoptURIHosts(t *testing.T) {
config := connection.ConnectionConfig{
Host: "10.10.10.10",
Port: 27017,
Hosts: []string{"10.10.10.10:27017", "10.10.10.11:27017"},
URI: "mongodb://localhost:27017,localhost:27018/admin?replicaSet=rs0",
}
got := applyMongoURI(config)
if len(got.Hosts) != 2 || got.Hosts[0] != "10.10.10.10:27017" {
t.Fatalf("expected explicit hosts to stay untouched, got %v", got.Hosts)
}
}

View File

@@ -152,10 +152,14 @@ func applyMongoURI(config connection.ConnectionConfig) connection.ConnectionConf
}
}
if len(config.Hosts) == 0 && len(hostsFromURI) > 0 {
explicitHost := strings.TrimSpace(config.Host) != ""
explicitHosts := len(config.Hosts) > 0
// Explicitly supplied host/hosts take precedence over the URI, so a form-filled host is not overwritten by localhost from the URI.
if !explicitHost && !explicitHosts && len(hostsFromURI) > 0 {
config.Hosts = hostsFromURI
}
if strings.TrimSpace(config.Host) == "" && len(hostsFromURI) > 0 {
if !explicitHost && !explicitHosts && len(hostsFromURI) > 0 {
host, port, ok := parseHostPortWithDefault(hostsFromURI[0], defaultPort)
if ok {
config.Host = host
@@ -252,6 +256,11 @@ func (m *MongoDBV1) getURI(config connection.ConnectionConfig) string {
params.Set("authMechanism", authMechanism)
}
// Standalone mode with no replica set name: enable directConnection so the driver does not auto-follow replica set member discovery
if strings.TrimSpace(config.Topology) != "replica" && strings.TrimSpace(config.ReplicaSet) == "" && !config.MongoSRV {
params.Set("directConnection", "true")
}
if encoded := params.Encode(); encoded != "" {
uri += "?" + encoded
}
@@ -277,9 +286,44 @@ func buildMongoAuthAttempts(config connection.ConnectionConfig) []connection.Con
return attempts
}
func mongoURIForcesTLS(uriText string) bool {
trimmed := strings.TrimSpace(uriText)
if trimmed == "" {
return false
}
parsed, err := url.Parse(trimmed)
if err != nil {
return false
}
query := parsed.Query()
for _, key := range []string{"tls", "ssl"} {
value := strings.ToLower(strings.TrimSpace(query.Get(key)))
switch value {
case "1", "true", "t", "yes", "y", "required":
return true
}
}
return false
}
func mongoAttemptSSLLabel(config connection.ConnectionConfig, fallbackToPlain bool) string {
if fallbackToPlain {
return "明文回退"
}
if mongoURIForcesTLS(config.URI) {
return "SSL"
}
enabled, _ := resolveMongoTLSSettings(config)
if enabled {
return "SSL"
}
return "明文"
}
func (m *MongoDBV1) Connect(config connection.ConnectionConfig) error {
runConfig := applyMongoURI(config)
connectConfig := runConfig
sshRouteHint := ""
if runConfig.UseSSH && runConfig.MongoSRV {
return fmt.Errorf("MongoDB SRV 记录模式暂不支持 SSH 隧道")
@@ -320,6 +364,7 @@ func (m *MongoDBV1) Connect(config connection.ConnectionConfig) error {
localConfig.URI = ""
localConfig.Hosts = []string{normalizeMongoAddress(host, port)}
connectConfig = localConfig
sshRouteHint = fmt.Sprintf("SSH隧道 %s -> %s:%d", forwarder.LocalAddr, targetHost, targetPort)
logger.Infof("MongoDB 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, targetHost, targetPort)
}
@@ -333,20 +378,32 @@ func (m *MongoDBV1) Connect(config connection.ConnectionConfig) error {
if shouldTrySSLPreferredFallback(connectConfig) {
sslAttempts = append(sslAttempts, withSSLDisabled(connectConfig))
}
totalAttempts := 0
for _, attemptConfig := range sslAttempts {
totalAttempts += len(buildMongoAuthAttempts(attemptConfig))
}
attemptNo := 0
var errorDetails []string
for sslIndex, sslConfig := range sslAttempts {
sslLabel := "SSL"
if sslIndex > 0 {
sslLabel = "明文回退"
}
sslLabel := mongoAttemptSSLLabel(sslConfig, sslIndex > 0)
attemptConfigs := buildMongoAuthAttempts(sslConfig)
for index, attemptConfig := range attemptConfigs {
attemptNo++
authLabel := "主库凭据"
if index > 0 {
authLabel = "从库凭据"
}
targets := collectMongoSeeds(attemptConfig)
if len(targets) == 0 {
targets = append(targets, normalizeMongoAddress(attemptConfig.Host, attemptConfig.Port))
}
attemptStarted := time.Now()
logger.Infof(
"MongoDB(v1) 连接尝试:%d/%d 模式=%s 凭据=%s 目标=%s 代理=%t",
attemptNo, totalAttempts, sslLabel, authLabel, strings.Join(targets, ","), attemptConfig.UseProxy,
)
if sslIndex > 0 {
attemptConfig.URI = ""
@@ -367,7 +424,13 @@ func (m *MongoDBV1) Connect(config connection.ConnectionConfig) error {
client, err := mongo.Connect(connectCtx, clientOpts)
connectCancel()
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s %s连接失败: %v", sslLabel, authLabel, err))
logger.Warnf("MongoDB(v1) 连接尝试失败:%d/%d 模式=%s 凭据=%s 耗时=%s 错误=%v",
attemptNo, totalAttempts, sslLabel, authLabel, time.Since(attemptStarted).Round(time.Millisecond), err)
detail := fmt.Sprintf("%s %s连接失败: %v", sslLabel, authLabel, err)
if sshRouteHint != "" {
detail = fmt.Sprintf("%s%s", detail, sshRouteHint)
}
errorDetails = append(errorDetails, detail)
continue
}
@@ -377,9 +440,17 @@ func (m *MongoDBV1) Connect(config connection.ConnectionConfig) error {
_ = client.Disconnect(ctx)
cancel()
m.client = nil
errorDetails = append(errorDetails, fmt.Sprintf("%s %s验证失败: %v", sslLabel, authLabel, err))
logger.Warnf("MongoDB(v1) 连接尝试验证失败:%d/%d 模式=%s 凭据=%s 耗时=%s 错误=%v",
attemptNo, totalAttempts, sslLabel, authLabel, time.Since(attemptStarted).Round(time.Millisecond), err)
detail := fmt.Sprintf("%s %s验证失败: %v", sslLabel, authLabel, err)
if sshRouteHint != "" {
detail = fmt.Sprintf("%s%s", detail, sshRouteHint)
}
errorDetails = append(errorDetails, detail)
continue
}
logger.Infof("MongoDB(v1) 连接尝试成功:%d/%d 模式=%s 凭据=%s 耗时=%s",
attemptNo, totalAttempts, sslLabel, authLabel, time.Since(attemptStarted).Round(time.Millisecond))
if sslIndex > 0 {
logger.Warnf("MongoDB(v1) SSL 优先连接失败,已回退至明文连接")
}

View File

@@ -0,0 +1,25 @@
//go:build gonavi_mongodb_driver_v1
package db
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestApplyMongoURIV1_ExplicitHostDoesNotAdoptURIHosts(t *testing.T) {
config := connection.ConnectionConfig{
Host: "10.10.10.10",
Port: 27017,
URI: "mongodb://localhost:27017/admin",
}
got := applyMongoURI(config)
if got.Host != "10.10.10.10" {
t.Fatalf("expected host to remain explicit, got %q", got.Host)
}
if len(got.Hosts) != 0 {
t.Fatalf("expected hosts to remain empty when explicit host exists, got %v", got.Hosts)
}
}

View File

@@ -3,6 +3,7 @@ package db
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"net/url"
"strconv"
@@ -168,26 +169,26 @@ func collectMySQLAddresses(config connection.ConnectionConfig) []string {
return result
}
func (m *MySQLDB) getDSN(config connection.ConnectionConfig) string {
func (m *MySQLDB) getDSN(config connection.ConnectionConfig) (string, error) {
database := config.Database
protocol := "tcp"
address := normalizeMySQLAddress(config.Host, config.Port)
if config.UseSSH {
netName, err := ssh.RegisterSSHNetwork(config.SSH)
if err == nil {
protocol = netName
address = normalizeMySQLAddress(config.Host, config.Port)
} else {
logger.Warnf("注册 SSH 网络失败,将尝试直连:地址=%s:%d 用户=%s原因%v", config.Host, config.Port, config.User, err)
if err != nil {
return "", fmt.Errorf("创建 SSH 隧道失败:%w", err)
}
protocol = netName
}
timeout := getConnectTimeoutSeconds(config)
tlsMode := resolveMySQLTLSMode(config)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
return fmt.Sprintf(
"%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode),
), nil
}
func resolveMySQLCredential(config connection.ConnectionConfig, addressIndex int) (string, string) {
@@ -225,7 +226,11 @@ func (m *MySQLDB) Connect(config connection.ConnectionConfig) error {
candidateConfig.Port = port
candidateConfig.User, candidateConfig.Password = resolveMySQLCredential(runConfig, index)
dsn := m.getDSN(candidateConfig)
dsn, err := m.getDSN(candidateConfig)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s 生成连接串失败: %v", address, err))
continue
}
db, err := sql.Open("mysql", dsn)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s 打开失败: %v", address, err))
@@ -441,12 +446,22 @@ func (m *MySQLDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefini
}
}
subPart := 0
if val, ok := row["Sub_part"]; ok && val != nil {
if f, ok := val.(float64); ok {
subPart = int(f)
} else if i, ok := val.(int64); ok {
subPart = int(i)
}
}
idx := connection.IndexDefinition{
Name: fmt.Sprintf("%v", row["Key_name"]),
ColumnName: fmt.Sprintf("%v", row["Column_name"]),
NonUnique: nonUnique,
SeqInIndex: seq,
IndexType: fmt.Sprintf("%v", row["Index_type"]),
SubPart: subPart,
}
indexes = append(indexes, idx)
}
@@ -606,6 +621,18 @@ func (m *MySQLDB) ApplyChanges(tableName string, changes connection.ChangeSet) e
return tx.Commit()
}
func normalizeMySQLComplexValue(value interface{}) interface{} {
switch v := value.(type) {
case map[string]interface{}, []interface{}:
if data, err := json.Marshal(v); err == nil {
return string(data)
}
return fmt.Sprintf("%v", value)
default:
return value
}
}
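So a JSON column edited as an object in the grid is bound as its serialized text rather than a Go map, which the MySQL driver cannot bind directly. The observable behavior, within this package:

v := normalizeMySQLComplexValue(map[string]interface{}{"tags": []interface{}{"a", "b"}})
fmt.Printf("%T %s\n", v, v) // string {"tags":["a","b"]}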
func normalizeMySQLDateTimeValue(value interface{}) interface{} {
text, ok := value.(string)
if !ok {
@@ -670,7 +697,7 @@ func (m *MySQLDB) loadColumnTypeMap(tableName string) map[string]string {
func normalizeMySQLValueForInsert(columnName string, value interface{}, columnTypeMap map[string]string) (interface{}, bool) {
columnType := strings.ToLower(strings.TrimSpace(columnTypeMap[strings.ToLower(strings.TrimSpace(columnName))]))
if !isMySQLTemporalColumnType(columnType) {
return value, false
return normalizeMySQLComplexValue(value), false
}
text, ok := value.(string)
if ok && strings.TrimSpace(text) == "" {

View File

@@ -0,0 +1,26 @@
package db
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestMySQLDSN_UseSSH_ShouldFailWhenSSHInvalid(t *testing.T) {
m := &MySQLDB{}
_, err := m.getDSN(connection.ConnectionConfig{
Host: "127.0.0.1",
Port: 3306,
User: "root",
UseSSH: true,
SSH: connection.SSHConfig{
Host: "127.0.0.1",
Port: 0, // invalid port, should fail immediately
User: "bad",
Password: "bad",
},
})
if err == nil {
t.Fatalf("expected error when UseSSH=true and SSH config invalid")
}
}

View File

@@ -9,8 +9,11 @@ import (
"io"
"os"
"os/exec"
"reflect"
"runtime"
"strings"
"sync"
"syscall"
"time"
"GoNavi-Wails/internal/connection"
@@ -94,6 +97,9 @@ func newOptionalDriverAgentClient(driverType string, executablePath string) (*op
return nil, fmt.Errorf("创建 %s 驱动代理 stderr 失败:%w", driverDisplayName(driverType), err)
}
if err := cmd.Start(); err != nil {
if isWindowsExecutableMachineMismatch(err) {
return nil, fmt.Errorf("启动 %s 驱动代理失败:%w检测到驱动代理与当前系统架构不兼容请在驱动管理中重新安装启用", driverDisplayName(driverType), err)
}
return nil, fmt.Errorf("启动 %s 驱动代理失败:%w", driverDisplayName(driverType), err)
}
@@ -107,6 +113,30 @@ func newOptionalDriverAgentClient(driverType string, executablePath string) (*op
return client, nil
}
func isWindowsExecutableMachineMismatch(err error) bool {
if err == nil || runtime.GOOS != "windows" {
return false
}
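// Windows system error 216 is ERROR_EXE_MACHINE_TYPE_MISMATCH ("This version of %1 is not compatible with the version of Windows you're running"), which is what launching a wrong-architecture agent binary raises.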
var errno syscall.Errno
if errors.As(err, &errno) && errno == syscall.Errno(216) {
return true
}
text := strings.ToLower(strings.TrimSpace(err.Error()))
if text == "" {
return false
}
if strings.Contains(text, "not compatible with the version of windows") {
return true
}
if strings.Contains(text, "win32") && strings.Contains(text, "compatible") {
return true
}
if strings.Contains(text, "不是有效的win32应用程序") || strings.Contains(text, "无法在win32模式下运行") {
return true
}
return false
}
func (c *optionalDriverAgentClient) captureStderr(stderr io.Reader) {
scanner := bufio.NewScanner(stderr)
buffer := make([]byte, 0, 8<<10)
@@ -116,6 +146,7 @@ func (c *optionalDriverAgentClient) captureStderr(stderr io.Reader) {
if line == "" {
continue
}
logger.Warnf("%s 驱动代理 stderr: %s", driverDisplayName(c.driver), line)
c.stderrMu.Lock()
if c.stderr.Len() > 0 {
c.stderr.WriteString(" | ")
@@ -239,6 +270,7 @@ func (d *OptionalDriverAgentDB) Connect(config connection.ConnectionConfig) erro
return err
}
d.client = client
d.ensureKingbaseSearchPath(config)
return nil
}
@@ -459,6 +491,16 @@ func (d *OptionalDriverAgentDB) ApplyChanges(tableName string, changes connectio
if err != nil {
return err
}
if strings.EqualFold(d.driverType, "kingbase") {
if normalized := normalizeKingbaseAgentTableName(tableName); normalized != "" {
tableName = normalized
}
if normalized, normErr := d.normalizeKingbaseAgentChangeSet(tableName, changes); normErr == nil {
changes = normalized
} else {
logger.Warnf("Kingbase ApplyChanges 字段名规范化失败:%v", normErr)
}
}
return client.call(optionalAgentRequest{
Method: optionalAgentMethodApplyChanges,
TableName: tableName,
@@ -473,6 +515,250 @@ func (d *OptionalDriverAgentDB) requireClient() (*optionalDriverAgentClient, err
return d.client, nil
}
func (d *OptionalDriverAgentDB) ensureKingbaseSearchPath(config connection.ConnectionConfig) {
if !strings.EqualFold(d.driverType, "kingbase") {
return
}
client, err := d.requireClient()
if err != nil || client == nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
schemas, err := d.listKingbaseSchemas(ctx)
if err != nil || len(schemas) == 0 {
if err != nil {
logger.Warnf("人大金仓驱动代理探测 schema 失败:%v", err)
}
return
}
searchPath := buildKingbaseSearchPathFromSchemas(schemas)
if strings.TrimSpace(searchPath) == "" {
return
}
if _, err := d.ExecContext(ctx, fmt.Sprintf("SET search_path TO %s", searchPath)); err != nil {
logger.Warnf("人大金仓驱动代理设置 search_path 失败:%v", err)
return
}
logger.Infof("人大金仓驱动代理已设置默认 search_path%s", searchPath)
}
func (d *OptionalDriverAgentDB) listKingbaseSchemas(ctx context.Context) ([]string, error) {
query := `SELECT nspname FROM pg_namespace
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND nspname NOT LIKE 'pg_%'
ORDER BY nspname`
rows, _, err := d.QueryContext(ctx, query)
if err != nil {
return nil, err
}
schemas := make([]string, 0, len(rows))
for _, row := range rows {
matched := false
for key, val := range row {
if strings.EqualFold(key, "nspname") || strings.EqualFold(key, "schema") {
name := strings.TrimSpace(fmt.Sprintf("%v", val))
if name != "" {
schemas = append(schemas, name)
}
matched = true
break
}
}
// Fall back to the sole column only when no named column matched;
// otherwise a single-column "nspname" row would be appended twice.
if !matched && len(row) == 1 {
for _, val := range row {
name := strings.TrimSpace(fmt.Sprintf("%v", val))
if name != "" {
schemas = append(schemas, name)
}
break
}
}
}
return schemas, nil
}
func buildKingbaseSearchPathFromSchemas(schemas []string) string {
searchPath, _ := buildKingbaseSearchPathCommon(schemas)
return searchPath
}
func quoteKingbaseAgentIdent(name string) string {
n := normalizeKingbaseAgentIdent(name)
if n == "" {
return "\"\""
}
n = strings.ReplaceAll(n, `"`, `""`)
return `"` + n + `"`
}
func normalizeKingbaseAgentTableName(raw string) string {
schema, table := splitKingbaseQualifiedNameCommon(raw)
if table == "" {
return ""
}
if schema == "" {
return table
}
return schema + "." + table
}
func normalizeKingbaseAgentIdent(raw string) string {
return normalizeKingbaseIdentCommon(raw)
}
type kingbaseAgentColumnIndex struct {
exact map[string]string
compact map[string]string
}
func buildKingbaseAgentColumnIndex(columns []string) kingbaseAgentColumnIndex {
exact := make(map[string]string, len(columns))
compact := make(map[string]string, len(columns))
compactSeen := make(map[string]string, len(columns))
compactDup := make(map[string]struct{}, len(columns))
for _, col := range columns {
name := normalizeKingbaseAgentIdent(col)
if name == "" {
continue
}
lower := strings.ToLower(name)
if _, ok := exact[lower]; !ok {
exact[lower] = name
}
key := normalizeKingbaseAgentCompactKey(name)
if key == "" {
continue
}
if prev, ok := compactSeen[key]; ok && !strings.EqualFold(prev, name) {
compactDup[key] = struct{}{}
continue
}
compactSeen[key] = name
}
if len(compactDup) > 0 {
for key := range compactDup {
delete(compactSeen, key)
}
}
for key, value := range compactSeen {
compact[key] = value
}
return kingbaseAgentColumnIndex{exact: exact, compact: compact}
}
func normalizeKingbaseAgentCompactKey(raw string) string {
name := normalizeKingbaseAgentIdent(raw)
if name == "" {
return ""
}
name = strings.ToLower(strings.TrimSpace(name))
name = strings.Join(strings.Fields(name), "")
name = strings.ReplaceAll(name, "_", "")
return name
}
func resolveKingbaseAgentColumnName(name string, index kingbaseAgentColumnIndex) string {
cleaned := normalizeKingbaseAgentIdent(name)
if cleaned == "" {
return name
}
lower := strings.ToLower(cleaned)
if actual, ok := index.exact[lower]; ok {
return actual
}
compact := normalizeKingbaseAgentCompactKey(cleaned)
if actual, ok := index.compact[compact]; ok {
return actual
}
return cleaned
}
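The exact-then-compact lookup means a display name that differs from the physical column only in case, spacing, or underscores still resolves. Within this package, assuming the index is built from the table's real column list:

index := buildKingbaseAgentColumnIndex([]string{"andon_events_id", "event_name"})
fmt.Println(resolveKingbaseAgentColumnName("EVENT_NAME", index)) // event_name (exact, case-folded)
fmt.Println(resolveKingbaseAgentColumnName("Event Name", index)) // event_name (compact key)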
func normalizeKingbaseAgentChangeSetByColumns(changes connection.ChangeSet, columns []string) (connection.ChangeSet, error) {
index := buildKingbaseAgentColumnIndex(columns)
if len(index.exact) == 0 && len(index.compact) == 0 {
return changes, nil
}
mapRow := func(row map[string]interface{}) (map[string]interface{}, error) {
if row == nil {
return row, nil
}
out := make(map[string]interface{}, len(row))
for key, value := range row {
nextKey := resolveKingbaseAgentColumnName(key, index)
if existing, ok := out[nextKey]; ok && !reflect.DeepEqual(existing, value) {
return nil, fmt.Errorf("duplicate mapped column %q", nextKey)
}
out[nextKey] = value
}
return out, nil
}
next := connection.ChangeSet{
Inserts: make([]map[string]interface{}, 0, len(changes.Inserts)),
Updates: make([]connection.UpdateRow, 0, len(changes.Updates)),
Deletes: make([]map[string]interface{}, 0, len(changes.Deletes)),
}
for _, row := range changes.Inserts {
mapped, err := mapRow(row)
if err != nil {
return changes, err
}
next.Inserts = append(next.Inserts, mapped)
}
for _, upd := range changes.Updates {
keys, err := mapRow(upd.Keys)
if err != nil {
return changes, err
}
values, err := mapRow(upd.Values)
if err != nil {
return changes, err
}
next.Updates = append(next.Updates, connection.UpdateRow{
Keys: keys,
Values: values,
})
}
for _, row := range changes.Deletes {
mapped, err := mapRow(row)
if err != nil {
return changes, err
}
next.Deletes = append(next.Deletes, mapped)
}
return next, nil
}
func (d *OptionalDriverAgentDB) normalizeKingbaseAgentChangeSet(tableName string, changes connection.ChangeSet) (connection.ChangeSet, error) {
columns, err := d.GetColumns("", tableName)
if err != nil {
return changes, err
}
if len(columns) == 0 {
return changes, nil
}
names := make([]string, 0, len(columns))
for _, col := range columns {
name := strings.TrimSpace(col.Name)
if name == "" {
continue
}
names = append(names, name)
}
return normalizeKingbaseAgentChangeSetByColumns(changes, names)
}
func timeoutMsFromContext(ctx context.Context) int64 {
deadline, ok := ctx.Deadline()
if !ok {

View File

@@ -1,32 +1,67 @@
package db
import (
"context"
"testing"
"time"
"GoNavi-Wails/internal/connection"
)
func TestTimeoutMsFromContext_NoDeadline(t *testing.T) {
if got := timeoutMsFromContext(context.Background()); got != 0 {
t.Fatalf("无 deadline 时应返回 0got=%d", got)
func TestNormalizeKingbaseAgentTableName(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
{name: "plain", in: "ldf_server.andon_events", want: "ldf_server.andon_events"},
{name: "quoted", in: `"ldf_server"."andon_events"`, want: "ldf_server.andon_events"},
{name: "double quoted", in: `""ldf_server"".""andon_events""`, want: "ldf_server.andon_events"},
{name: "escaped", in: `\"ldf_server\".\"andon_events\"`, want: "ldf_server.andon_events"},
{name: "double escaped", in: `\\\"ldf_server\\\".\\\"andon_events\\\"`, want: "ldf_server.andon_events"},
{name: "space around dot", in: ` "ldf_server" . "andon_events" `, want: "ldf_server.andon_events"},
{name: "table only", in: `bcs_barcode`, want: "bcs_barcode"},
{name: "table only quoted", in: `"bcs_barcode"`, want: "bcs_barcode"},
{name: "table only double quoted", in: `""bcs_barcode""`, want: "bcs_barcode"},
{name: "table only double escaped", in: `\\\"bcs_barcode\\\"`, want: "bcs_barcode"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := normalizeKingbaseAgentTableName(tt.in); got != tt.want {
t.Fatalf("normalizeKingbaseAgentTableName(%q) = %q, want %q", tt.in, got, tt.want)
}
})
}
}
func TestTimeoutMsFromContext_WithDeadline(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
func TestNormalizeKingbaseAgentChangeSetByColumns(t *testing.T) {
columns := []string{"andon_events_id", "event_name", "event_code"}
input := connection.ChangeSet{
Inserts: []map[string]interface{}{
{"event name": "物料1", "event_code": "EV-0001", "andon_events_id": 1},
},
Updates: []connection.UpdateRow{
{Keys: map[string]interface{}{"andon_events_id": 1}, Values: map[string]interface{}{"event name": "物料2"}},
},
Deletes: []map[string]interface{}{
{"andon_events_id": 1},
},
}
got := timeoutMsFromContext(ctx)
if got <= 0 {
t.Fatalf("有 deadline 时应返回正值got=%d", got)
}
}
func TestTimeoutMsFromContext_ExpiredDeadline(t *testing.T) {
ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
defer cancel()
if got := timeoutMsFromContext(ctx); got != 1 {
t.Fatalf("过期 deadline 应返回 1got=%d", got)
out, err := normalizeKingbaseAgentChangeSetByColumns(input, columns)
if err != nil {
t.Fatalf("normalizeKingbaseAgentChangeSetByColumns error: %v", err)
}
if _, ok := out.Inserts[0]["event_name"]; !ok {
t.Fatalf("expected insert to map \"event name\" -> \"event_name\"")
}
if _, ok := out.Inserts[0]["event name"]; ok {
t.Fatalf("unexpected insert key \"event name\" after normalization")
}
if _, ok := out.Updates[0].Values["event_name"]; !ok {
t.Fatalf("expected update values to map \"event name\" -> \"event_name\"")
}
if _, ok := out.Updates[0].Values["event name"]; ok {
t.Fatalf("unexpected update value key \"event name\" after normalization")
}
}

View File

@@ -8,6 +8,7 @@ import (
"reflect"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
@@ -30,12 +31,44 @@ func normalizeQueryValue(v interface{}) interface{} {
}
func normalizeQueryValueWithDBType(v interface{}, databaseTypeName string) interface{} {
if tm, ok := v.(time.Time); ok {
return normalizeTemporalValueForDisplay(tm, databaseTypeName)
}
if b, ok := v.([]byte); ok {
return bytesToDisplayValue(b, databaseTypeName)
}
return normalizeCompositeQueryValue(v)
}
func normalizeTemporalValueForDisplay(value time.Time, databaseTypeName string) interface{} {
if value.IsZero() {
if zeroValue, ok := zeroTemporalDisplayValue(databaseTypeName); ok {
return zeroValue
}
}
return value.Format(time.RFC3339Nano)
}
func zeroTemporalDisplayValue(databaseTypeName string) (string, bool) {
typeName := strings.ToUpper(strings.TrimSpace(databaseTypeName))
if typeName == "" {
return "0000-00-00 00:00:00", true
}
switch {
case strings.Contains(typeName, "TIMESTAMP") || strings.Contains(typeName, "DATETIME"):
return "0000-00-00 00:00:00", true
case typeName == "DATE" || typeName == "NEWDATE":
return "0000-00-00", true
case strings.Contains(typeName, "TIME"):
return "00:00:00", true
case strings.Contains(typeName, "YEAR"):
return "0000", true
default:
return "", false
}
}
func normalizeCompositeQueryValue(v interface{}) interface{} {
if v == nil {
return nil
@@ -86,6 +119,16 @@ func normalizeCompositeQueryValue(v interface{}) interface{} {
items[i] = normalizeQueryValue(rv.Index(i).Interface())
}
return items
case reflect.Struct:
// Some drivers (e.g. Kingbase) return complex struct values; passing them through verbatim makes frontend rendering and comparison costs spike.
// Degrade them uniformly to readable strings so deep object serialization does not stall the UI.
if tm, ok := v.(time.Time); ok {
return normalizeTemporalValueForDisplay(tm, "")
}
if stringer, ok := v.(fmt.Stringer); ok {
return stringer.String()
}
return fmt.Sprintf("%v", v)
default:
return normalizeUnsafeIntegerForJS(rv, v)
}

View File

@@ -2,7 +2,9 @@ package db
import (
"encoding/json"
"fmt"
"testing"
"time"
)
type duckMapLike map[any]any
@@ -165,3 +167,61 @@ func TestNormalizeQueryValueWithDBType_JSONNumber(t *testing.T) {
})
}
}
type customStructValue struct {
Name string
Age int
}
func (v customStructValue) String() string {
return fmt.Sprintf("%s-%d", v.Name, v.Age)
}
func TestNormalizeQueryValueWithDBType_StructToString(t *testing.T) {
got := normalizeQueryValueWithDBType(customStructValue{Name: "alice", Age: 18}, "")
if got != "alice-18" {
t.Fatalf("结构体应降级为可读字符串,实际=%v(%T)", got, got)
}
}
func TestNormalizeQueryValueWithDBType_TimeStructToRFC3339(t *testing.T) {
input := time.Date(2026, 3, 5, 18, 30, 15, 123456789, time.UTC)
got := normalizeQueryValueWithDBType(input, "")
text, ok := got.(string)
if !ok {
t.Fatalf("time.Time 应转为字符串,实际=%v(%T)", got, got)
}
if text != "2026-03-05T18:30:15.123456789Z" {
t.Fatalf("time.Time 规整值异常,实际=%s", text)
}
}
func TestNormalizeQueryValueWithDBType_ZeroTemporalValues(t *testing.T) {
zero := time.Time{}
cases := []struct {
name string
dbType string
wantText string
}{
{name: "date", dbType: "DATE", wantText: "0000-00-00"},
{name: "newdate", dbType: "NEWDATE", wantText: "0000-00-00"},
{name: "datetime", dbType: "DATETIME", wantText: "0000-00-00 00:00:00"},
{name: "timestamp", dbType: "TIMESTAMP", wantText: "0000-00-00 00:00:00"},
{name: "time", dbType: "TIME", wantText: "00:00:00"},
{name: "year", dbType: "YEAR", wantText: "0000"},
{name: "unknown", dbType: "", wantText: "0000-00-00 00:00:00"},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := normalizeQueryValueWithDBType(zero, tc.dbType)
text, ok := got.(string)
if !ok {
t.Fatalf("期望 string实际=%v(%T)", got, got)
}
if text != tc.wantText {
t.Fatalf("dbType=%s 期望=%s实际=%s", tc.dbType, tc.wantText, text)
}
})
}
}

View File

@@ -0,0 +1,168 @@
//go:build gonavi_full_drivers || gonavi_tdengine_driver
package db
import (
"context"
"database/sql"
"database/sql/driver"
"fmt"
"strings"
"sync"
"testing"
"GoNavi-Wails/internal/connection"
)
const tdengineRecordingDriverName = "gonavi_tdengine_recording"
var (
registerTDengineRecordingDriverOnce sync.Once
tdengineRecordingDriverMu sync.Mutex
tdengineRecordingDriverSeq int
tdengineRecordingDriverStates = map[string]*tdengineRecordingState{}
)
type tdengineRecordingState struct {
mu sync.Mutex
queries []string
execErr error
}
func (s *tdengineRecordingState) snapshotQueries() []string {
s.mu.Lock()
defer s.mu.Unlock()
queries := make([]string, len(s.queries))
copy(queries, s.queries)
return queries
}
type tdengineRecordingDriver struct{}
func (tdengineRecordingDriver) Open(name string) (driver.Conn, error) {
tdengineRecordingDriverMu.Lock()
state := tdengineRecordingDriverStates[name]
tdengineRecordingDriverMu.Unlock()
if state == nil {
return nil, fmt.Errorf("recording state not found: %s", name)
}
return &tdengineRecordingConn{state: state}, nil
}
type tdengineRecordingConn struct {
state *tdengineRecordingState
}
func (c *tdengineRecordingConn) Prepare(query string) (driver.Stmt, error) {
return nil, fmt.Errorf("prepare not supported in tdengine recording driver: %s", query)
}
func (c *tdengineRecordingConn) Close() error { return nil }
func (c *tdengineRecordingConn) Begin() (driver.Tx, error) {
return nil, fmt.Errorf("transactions not supported in tdengine recording driver")
}
func (c *tdengineRecordingConn) ExecContext(_ context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
if len(args) > 0 {
return nil, fmt.Errorf("unexpected exec args: %d", len(args))
}
c.state.mu.Lock()
defer c.state.mu.Unlock()
if c.state.execErr != nil {
return nil, c.state.execErr
}
c.state.queries = append(c.state.queries, query)
return driver.RowsAffected(1), nil
}
var _ driver.ExecerContext = (*tdengineRecordingConn)(nil)
func openTDengineRecordingDB(t *testing.T) (*sql.DB, *tdengineRecordingState) {
t.Helper()
registerTDengineRecordingDriverOnce.Do(func() {
sql.Register(tdengineRecordingDriverName, tdengineRecordingDriver{})
})
tdengineRecordingDriverMu.Lock()
tdengineRecordingDriverSeq++
dsn := fmt.Sprintf("tdengine-recording-%d", tdengineRecordingDriverSeq)
state := &tdengineRecordingState{}
tdengineRecordingDriverStates[dsn] = state
tdengineRecordingDriverMu.Unlock()
dbConn, err := sql.Open(tdengineRecordingDriverName, dsn)
if err != nil {
t.Fatalf("打开 recording db 失败: %v", err)
}
t.Cleanup(func() {
_ = dbConn.Close()
tdengineRecordingDriverMu.Lock()
delete(tdengineRecordingDriverStates, dsn)
tdengineRecordingDriverMu.Unlock()
})
return dbConn, state
}
func TestTDengineApplyChanges_InsertsIntoQualifiedTable(t *testing.T) {
t.Parallel()
dbConn, state := openTDengineRecordingDB(t)
td := &TDengineDB{conn: dbConn}
changes := connection.ChangeSet{
Inserts: []map[string]interface{}{
{
"ts": "2026-03-09 10:00:00",
"value": 12.5,
"device": "sensor-a",
"enabled": true,
},
},
}
if err := td.ApplyChanges("analytics.metrics", changes); err != nil {
t.Fatalf("ApplyChanges 返回错误: %v", err)
}
queries := state.snapshotQueries()
if len(queries) != 1 {
t.Fatalf("期望执行 1 条 SQL实际 %d 条: %#v", len(queries), queries)
}
want := "INSERT INTO `analytics`.`metrics` (`device`, `enabled`, `ts`, `value`) VALUES ('sensor-a', 1, '2026-03-09 10:00:00', 12.5)"
if queries[0] != want {
t.Fatalf("插入 SQL 不符合预期\nwant: %s\n got: %s", want, queries[0])
}
}
func TestTDengineApplyChanges_RejectsMixedUpdatesWithoutPartialWrite(t *testing.T) {
t.Parallel()
dbConn, state := openTDengineRecordingDB(t)
td := &TDengineDB{conn: dbConn}
changes := connection.ChangeSet{
Inserts: []map[string]interface{}{{
"ts": "2026-03-09 10:00:00",
"value": 12.5,
}},
Updates: []connection.UpdateRow{{
Keys: map[string]interface{}{"ts": "2026-03-09 10:00:00"},
Values: map[string]interface{}{"value": 18.8},
}},
}
err := td.ApplyChanges("metrics", changes)
if err == nil {
t.Fatalf("期望 mixed changes 被拒绝")
}
if !strings.Contains(err.Error(), "UPDATE/DELETE") {
t.Fatalf("错误信息未说明限制边界: %v", err)
}
if queries := state.snapshotQueries(); len(queries) != 0 {
t.Fatalf("期望拒绝 mixed changes 时不执行任何 SQL实际=%#v", queries)
}
}

View File

@@ -7,6 +7,7 @@ import (
"database/sql"
"fmt"
"net"
"sort"
"strconv"
"strings"
"time"
@@ -362,6 +363,83 @@ func (t *TDengineDB) GetTriggers(dbName, tableName string) ([]connection.Trigger
return []connection.TriggerDefinition{}, nil
}
func (t *TDengineDB) ApplyChanges(tableName string, changes connection.ChangeSet) error {
if t.conn == nil {
return fmt.Errorf("connection not open")
}
if strings.TrimSpace(tableName) == "" {
return fmt.Errorf("table name required")
}
if len(changes.Updates) > 0 || len(changes.Deletes) > 0 {
return fmt.Errorf("TDengine 目标端当前仅支持 INSERT 写入,暂不支持 UPDATE/DELETE 差异同步,请改用仅插入或全量覆盖模式")
}
qualifiedTable := quoteTDengineTable("", tableName)
for _, row := range changes.Inserts {
query, err := buildTDengineInsertSQL(qualifiedTable, row)
if err != nil {
return err
}
if query == "" {
continue
}
if _, err := t.conn.Exec(query); err != nil {
return fmt.Errorf("insert error: %v; sql=%s", err, query)
}
}
return nil
}
func buildTDengineInsertSQL(qualifiedTable string, row map[string]interface{}) (string, error) {
if strings.TrimSpace(qualifiedTable) == "" {
return "", fmt.Errorf("qualified table required")
}
if len(row) == 0 {
return "", nil
}
cols := make([]string, 0, len(row))
for key := range row {
if strings.TrimSpace(key) == "" {
continue
}
cols = append(cols, key)
}
if len(cols) == 0 {
return "", nil
}
sort.Strings(cols)
quotedCols := make([]string, 0, len(cols))
values := make([]string, 0, len(cols))
for _, col := range cols {
quotedCols = append(quotedCols, fmt.Sprintf("`%s`", escapeBacktickIdent(col)))
values = append(values, tdengineLiteral(row[col]))
}
return fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s)", qualifiedTable, strings.Join(quotedCols, ", "), strings.Join(values, ", ")), nil
}
func tdengineLiteral(value interface{}) string {
switch val := value.(type) {
case nil:
return "NULL"
case bool:
if val {
return "1"
}
return "0"
case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, float32, float64:
return fmt.Sprintf("%v", val)
case time.Time:
return fmt.Sprintf("'%s'", val.Format("2006-01-02 15:04:05"))
case []byte:
return fmt.Sprintf("'%s'", strings.ReplaceAll(string(val), "'", "''"))
default:
return fmt.Sprintf("'%s'", strings.ReplaceAll(fmt.Sprintf("%v", val), "'", "''"))
}
}
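Because these TDengine statements are assembled as literal SQL rather than placeholder-bound, tdengineLiteral is the single escaping point. A few representative renderings, within this package:

fmt.Println(tdengineLiteral(nil))    // NULL
fmt.Println(tdengineLiteral(true))   // 1
fmt.Println(tdengineLiteral(12.5))   // 12.5
fmt.Println(tdengineLiteral("it's")) // 'it''s'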
func getValueFromRow(row map[string]interface{}, keys ...string) (interface{}, bool) {
if len(row) == 0 {
return nil, false

View File

@@ -14,8 +14,9 @@ import (
)
const (
envLogDir = "GONAVI_LOG_DIR"
appDirName = "GoNavi"
envLogDir = "GONAVI_LOG_DIR"
appHiddenDir = ".GoNavi"
appLogDirName = "Logs"
logFileName = "gonavi.log"
logRotateMaxBytes = 10 * 1024 * 1024 // 10MB
@@ -37,7 +38,7 @@ func Init() {
defer logMu.Unlock()
logPath = path
logInst = log.New(out, "", log.Ldate|log.Ltime|log.Lmicroseconds)
logInst.Printf("[信息] 日志初始化完成,日志文件:%s", logPath)
logInst.Printf("[INFO] 日志初始化完成,日志文件:%s", logPath)
})
}
@@ -62,15 +63,15 @@ func Close() {
}
func Infof(format string, args ...any) {
printf("信息", format, args...)
printf("INFO", format, args...)
}
func Warnf(format string, args ...any) {
printf("警告", format, args...)
printf("WARN", format, args...)
}
func Errorf(format string, args ...any) {
printf("错误", format, args...)
printf("ERROR", format, args...)
}
func Error(err error, format string, args ...any) {
@@ -115,37 +116,58 @@ func ErrorChain(err error) string {
func printf(level string, format string, args ...any) {
Init()
logMu.Lock()
defer logMu.Unlock()
inst := logInst
logMu.Unlock()
if inst == nil {
return
}
inst.Printf("[%s] %s", level, fmt.Sprintf(format, args...))
if logFile != nil {
_ = logFile.Sync()
}
}
func initOutput() (string, io.Writer) {
dir := strings.TrimSpace(os.Getenv(envLogDir))
if dir == "" {
base, err := os.UserConfigDir()
if err != nil || strings.TrimSpace(base) == "" {
base = os.TempDir()
}
dir = filepath.Join(base, appDirName, "logs")
dir = defaultLogDir()
}
if path, writer, ok := openLogFile(dir); ok {
return path, writer
}
fallbackDir := filepath.Join(os.TempDir(), appHiddenDir, appLogDirName)
if path, writer, ok := openLogFile(fallbackDir); ok {
return path, writer
}
return "", os.Stderr
}
func defaultLogDir() string {
home, err := os.UserHomeDir()
if err != nil || strings.TrimSpace(home) == "" {
return filepath.Join(os.TempDir(), appHiddenDir, appLogDirName)
}
return filepath.Join(home, appHiddenDir, appLogDirName)
}
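GONAVI_LOG_DIR short-circuits the home-directory default, which is handy for tests and sandboxed packaging. A caller-side sketch (the path is an arbitrary assumption); it must run before the first log call, since Init is guarded by a sync.Once:

os.Setenv("GONAVI_LOG_DIR", "/tmp/gonavi-test-logs")
logger.Infof("goes to /tmp/gonavi-test-logs/gonavi.log")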
func openLogFile(dir string) (string, io.Writer, bool) {
if strings.TrimSpace(dir) == "" {
return "", nil, false
}
if err := os.MkdirAll(dir, 0o755); err != nil {
return filepath.Join(dir, logFileName), os.Stderr
return "", nil, false
}
path := filepath.Join(dir, logFileName)
rotateIfNeeded(path, dir)
f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
if err != nil {
return path, os.Stderr
return "", nil, false
}
logFile = f
return path, f
return path, f, true
}
func rotateIfNeeded(path, dir string) {

View File

@@ -3,8 +3,10 @@ package redis
import (
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"net/url"
"strconv"
"strings"
"sync"
@@ -17,6 +19,8 @@ import (
"github.com/redis/go-redis/v9"
)
var ErrRedisKeyGone = errors.New("Redis Key 不存在或已过期")
// RedisClientImpl implements RedisClient using go-redis
type RedisClientImpl struct {
client redis.UniversalClient
@@ -174,8 +178,31 @@ func (r *RedisClientImpl) toDisplayKey(key string) string {
return strings.TrimPrefix(key, prefix)
}
// sanitizeRedisPassword defensively URL-decodes a Redis password.
// When the password contains URL-encoded sequences (such as %40), try to decode them back to the original characters.
// This guards against passwords that were run through encodeURIComponent while the frontend built the URI.
func sanitizeRedisPassword(password string) string {
if password == "" {
return password
}
// Only attempt URL decoding when the password contains '%'; invalid escape sequences fail decoding and fall through below
if !strings.Contains(password, "%") {
return password
}
decoded, err := url.QueryUnescape(password)
if err != nil {
// Decoding failed; keep the original password
return password
}
if decoded != password {
logger.Warnf("Redis 密码检测到 URL 编码,已自动解码(原长度=%d 解码后长度=%d", len(password), len(decoded))
}
return decoded
}
// Connect establishes a connection to Redis
func (r *RedisClientImpl) Connect(config connection.ConnectionConfig) error {
config.Password = sanitizeRedisPassword(config.Password)
r.config = config
if r.config.RedisDB < 0 || r.config.RedisDB > 15 {
r.config.RedisDB = 0
@@ -448,20 +475,29 @@ func (r *RedisClientImpl) loadRedisKeyInfos(ctx context.Context, keys []string)
if ttlErr != nil && ttlErr != redis.Nil {
ttlValue = -2
}
ttlSeconds := toRedisTTLSeconds(ttlValue)
if isRedisKeyGone(keyType, ttlSeconds) {
continue
}
result = append(result, RedisKeyInfo{
Key: r.toDisplayKey(key),
Type: keyType,
TTL: toRedisTTLSeconds(ttlValue),
TTL: ttlSeconds,
})
}
return result
}
for i, key := range keys {
keyType := typeResults[i].Val()
ttlSeconds := toRedisTTLSeconds(ttlResults[i].Val())
if isRedisKeyGone(keyType, ttlSeconds) {
continue
}
result = append(result, RedisKeyInfo{
Key: r.toDisplayKey(key),
Type: typeResults[i].Val(),
TTL: toRedisTTLSeconds(ttlResults[i].Val()),
Type: keyType,
TTL: ttlSeconds,
})
}
return result
@@ -477,6 +513,17 @@ func toRedisTTLSeconds(ttl time.Duration) int64 {
return int64(ttl.Seconds())
}
func isRedisKeyGone(keyType string, ttl int64) bool {
return keyType == "none" || ttl == -2
}
func normalizeRedisGetValueError(keyType string, ttl int64) error {
if isRedisKeyGone(keyType, ttl) {
return ErrRedisKeyGone
}
return nil
}
// GetKeyType returns the type of a key
func (r *RedisClientImpl) GetKeyType(key string) (string, error) {
if r.client == nil {
@@ -570,6 +617,9 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
}
ttl, _ := r.GetTTL(key)
if err := normalizeRedisGetValueError(keyType, ttl); err != nil {
return nil, err
}
physicalKey := r.toPhysicalKey(key)
result := &RedisValue{

View File

@@ -0,0 +1,121 @@
package redis
import (
"errors"
"testing"
)
func TestSanitizeRedisPassword(t *testing.T) {
tests := []struct {
name string
input string
expected string
}{
{
name: "empty password",
input: "",
expected: "",
},
{
name: "plain password without special chars",
input: "mypassword123",
expected: "mypassword123",
},
{
name: "password with @ not encoded",
input: "p@ssword",
expected: "p@ssword",
},
{
name: "password with @ URL-encoded as %40",
input: "p%40ssword",
expected: "p@ssword",
},
{
name: "password with multiple encoded chars",
input: "p%40ss%23word",
expected: "p@ss#word",
},
{
name: "password with + encoded as %2B",
input: "p%2Bss",
expected: "p+ss",
},
{
name: "password that is purely encoded",
input: "%40%23%24",
expected: "@#$",
},
{
name: "password with invalid percent encoding",
input: "p%ZZssword",
expected: "p%ZZssword",
},
{
name: "password with trailing percent",
input: "password%",
expected: "password%",
},
{
name: "password with literal percent not encoding anything",
input: "100%safe",
expected: "100%safe",
},
{
name: "password with space encoded as %20",
input: "my%20pass",
expected: "my pass",
},
{
name: "complex password with mixed content",
input: "P%40ss%23w0rd!",
expected: "P@ss#w0rd!",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := sanitizeRedisPassword(tt.input)
if result != tt.expected {
t.Errorf("sanitizeRedisPassword(%q) = %q, want %q", tt.input, result, tt.expected)
}
})
}
}
func TestIsRedisKeyGone(t *testing.T) {
tests := []struct {
name string
keyType string
ttl int64
want bool
}{
{name: "type none", keyType: "none", ttl: -2, want: true},
{name: "type none without ttl", keyType: "none", ttl: -1, want: true},
{name: "missing by ttl", keyType: "string", ttl: -2, want: true},
{name: "normal string", keyType: "string", ttl: 30, want: false},
{name: "permanent hash", keyType: "hash", ttl: -1, want: false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := isRedisKeyGone(tt.keyType, tt.ttl); got != tt.want {
t.Fatalf("isRedisKeyGone(%q, %d)=%v, want %v", tt.keyType, tt.ttl, got, tt.want)
}
})
}
}
func TestNormalizeRedisGetValueError(t *testing.T) {
err := normalizeRedisGetValueError("none", -2)
if !errors.Is(err, ErrRedisKeyGone) {
t.Fatalf("expected ErrRedisKeyGone, got %v", err)
}
if err == nil || err.Error() != "Redis Key 不存在或已过期" {
t.Fatalf("unexpected error text: %v", err)
}
if normalizeRedisGetValueError("hash", -1) != nil {
t.Fatal("expected nil for supported existing key")
}
}

View File

@@ -2,10 +2,13 @@ package ssh
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"net"
"os"
"strconv"
"sync"
"time"
@@ -69,7 +72,7 @@ func connectSSH(config connection.SSHConfig) (*ssh.Client, error) {
}
}
}
if config.Password != "" {
authMethods = append(authMethods, ssh.Password(config.Password))
}
@@ -105,7 +108,7 @@ func RegisterSSHNetwork(sshConfig connection.SSHConfig) (string, error) {
// Generate unique network name
netName := fmt.Sprintf("ssh_%s_%d", sshConfig.Host, time.Now().UnixNano())
logger.Infof("注册 SSH 网络:%s地址=%s:%d 用户=%s", netName, sshConfig.Host, sshConfig.Port, sshConfig.User)
mysql.RegisterDialContext(netName, func(ctx context.Context, addr string) (net.Conn, error) {
return dialContext(ctx, client, "tcp", addr)
})
@@ -115,12 +118,58 @@ func RegisterSSHNetwork(sshConfig connection.SSHConfig) (string, error) {
// sshClientCache stores SSH clients to avoid creating multiple connections
var (
sshClientCache = make(map[string]*ssh.Client)
sshClientCache = make(map[sshClientCacheKey]*ssh.Client)
sshClientCacheMu sync.RWMutex
localForwarders = make(map[string]*LocalForwarder)
localForwarders = make(map[forwarderCacheKey]*LocalForwarder)
forwarderMu sync.RWMutex
)
type sshClientCacheKey struct {
host string
port int
user string
auth string
}
type forwarderCacheKey struct {
ssh sshClientCacheKey
remoteHost string
remotePort int
}
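// sshAuthFingerprint folds the credential material (password, key path, plus the key file's mtime and size)
// into a short digest, so cache keys change when credentials change without embedding secrets in the key itself.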
func sshAuthFingerprint(config connection.SSHConfig) string {
hasher := sha256.New()
_, _ = hasher.Write([]byte(config.Password))
_, _ = hasher.Write([]byte{0})
_, _ = hasher.Write([]byte(config.KeyPath))
if config.KeyPath != "" {
if st, err := os.Stat(config.KeyPath); err == nil {
_, _ = hasher.Write([]byte{0})
_, _ = hasher.Write([]byte(st.ModTime().UTC().Format(time.RFC3339Nano)))
_, _ = hasher.Write([]byte{0})
_, _ = hasher.Write([]byte(strconv.FormatInt(st.Size(), 10)))
} else {
_, _ = hasher.Write([]byte{0})
_, _ = hasher.Write([]byte("stat_err"))
}
}
sum := hasher.Sum(nil)
return hex.EncodeToString(sum[:8])
}
func newSSHClientCacheKey(config connection.SSHConfig) sshClientCacheKey {
return sshClientCacheKey{
host: config.Host,
port: config.Port,
user: config.User,
auth: sshAuthFingerprint(config),
}
}
func formatSSHClientKeyForLog(key sshClientCacheKey) string {
return fmt.Sprintf("%s:%d 用户=%s", key.host, key.port, key.user)
}
// LocalForwarder represents a local port forwarder through SSH
type LocalForwarder struct {
LocalAddr string
@@ -249,9 +298,13 @@ func (f *LocalForwarder) IsClosed() bool {
// GetOrCreateLocalForwarder returns a cached forwarder or creates a new one
func GetOrCreateLocalForwarder(sshConfig connection.SSHConfig, remoteHost string, remotePort int) (*LocalForwarder, error) {
key := fmt.Sprintf("%s:%d:%s->%s:%d",
sshConfig.Host, sshConfig.Port, sshConfig.User,
remoteHost, remotePort)
key := forwarderCacheKey{
ssh: newSSHClientCacheKey(sshConfig),
remoteHost: remoteHost,
remotePort: remotePort,
}
logKey := fmt.Sprintf("%s:%d:%s->%s:%d",
sshConfig.Host, sshConfig.Port, sshConfig.User, remoteHost, remotePort)
forwarderMu.RLock()
forwarder, exists := localForwarders[key]
@@ -259,7 +312,7 @@ func GetOrCreateLocalForwarder(sshConfig connection.SSHConfig, remoteHost string
// Check if exists and is still valid
if exists && forwarder != nil && !forwarder.IsClosed() {
logger.Infof("复用已有端口转发:%s", key)
logger.Infof("复用已有端口转发:%s", logKey)
return forwarder, nil
}
@@ -287,24 +340,18 @@ func CloseAllForwarders() {
forwarderMu.Lock()
defer forwarderMu.Unlock()
for key, forwarder := range localForwarders {
for _, forwarder := range localForwarders {
if forwarder != nil {
_ = forwarder.Close()
logger.Infof("已关闭端口转发:%s", key)
logger.Infof("已关闭端口转发:本地 %s -> 远程 %s", forwarder.LocalAddr, forwarder.RemoteAddr)
}
}
localForwarders = make(map[string]*LocalForwarder)
}
// getSSHClientCacheKey generates a unique cache key for SSH config
func getSSHClientCacheKey(config connection.SSHConfig) string {
return fmt.Sprintf("%s:%d:%s", config.Host, config.Port, config.User)
localForwarders = make(map[forwarderCacheKey]*LocalForwarder)
}
// GetOrCreateSSHClient returns a cached SSH client or creates a new one
func GetOrCreateSSHClient(config connection.SSHConfig) (*ssh.Client, error) {
key := getSSHClientCacheKey(config)
key := newSSHClientCacheKey(config)
sshClientCacheMu.RLock()
client, exists := sshClientCache[key]
@@ -315,11 +362,11 @@ func GetOrCreateSSHClient(config connection.SSHConfig) (*ssh.Client, error) {
session, err := client.NewSession()
if err == nil {
session.Close()
logger.Infof("复用已有 SSH 连接:%s", key)
logger.Infof("复用已有 SSH 连接:%s", formatSSHClientKeyForLog(key))
return client, nil
}
// Connection is dead, remove from cache
logger.Warnf("SSH 连接已断开,重新建立:%s (错误: %v)", key, err)
logger.Warnf("SSH 连接已断开,重新建立:%s (错误: %v)", formatSSHClientKeyForLog(key), err)
sshClientCacheMu.Lock()
delete(sshClientCache, key)
sshClientCacheMu.Unlock()
@@ -338,7 +385,7 @@ func GetOrCreateSSHClient(config connection.SSHConfig) (*ssh.Client, error) {
sshClientCache[key] = client
sshClientCacheMu.Unlock()
logger.Infof("已缓存 SSH 连接:%s", key)
logger.Infof("已缓存 SSH 连接:%s", formatSSHClientKeyForLog(key))
return client, nil
}
@@ -367,9 +414,8 @@ func CloseAllSSHClients() {
for key, client := range sshClientCache {
if client != nil {
_ = client.Close()
logger.Infof("已关闭 SSH 连接:%s", key)
logger.Infof("已关闭 SSH 连接:%s", formatSSHClientKeyForLog(key))
}
}
sshClientCache = make(map[string]*ssh.Client)
sshClientCache = make(map[sshClientCacheKey]*ssh.Client)
}
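
GetOrCreateSSHClient keeps the usual RWMutex get-or-create shape: a read-locked fast path for cache hits, then a write-locked store after dialing. A reduced sketch of that shape; as in the code above, two goroutines can both miss and dial concurrently, with the later store winning — acceptable here because either client is usable:

package main

import (
	"fmt"
	"sync"
)

type client struct{ id int }

var (
	mu    sync.RWMutex
	cache = map[string]*client{}
	dials int
)

// getOrCreate: cheap concurrent reads under RLock, exclusive write on miss.
func getOrCreate(key string) *client {
	mu.RLock()
	c, ok := cache[key]
	mu.RUnlock()
	if ok {
		return c // fast path: reuse the cached client
	}
	dials++ // stand-in for the expensive dial, done outside the lock
	c = &client{id: dials}
	mu.Lock()
	cache[key] = c
	mu.Unlock()
	return c
}

func main() {
	first := getOrCreate("a")
	fmt.Println(first == getOrCreate("a")) // true: second call hits the cache
}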

View File

@@ -0,0 +1,46 @@
package ssh
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestNewSSHClientCacheKey_DiffPassword(t *testing.T) {
a := newSSHClientCacheKey(connection.SSHConfig{
Host: "127.0.0.1",
Port: 22,
User: "root",
Password: "a",
})
b := newSSHClientCacheKey(connection.SSHConfig{
Host: "127.0.0.1",
Port: 22,
User: "root",
Password: "b",
})
if a == b {
t.Fatalf("expected different cache key when password differs")
}
if a.host != b.host || a.port != b.port || a.user != b.user {
t.Fatalf("expected host/port/user to stay identical")
}
}
func TestNewSSHClientCacheKey_DiffKeyPath(t *testing.T) {
a := newSSHClientCacheKey(connection.SSHConfig{
Host: "127.0.0.1",
Port: 22,
User: "root",
KeyPath: "/tmp/a.key",
})
b := newSSHClientCacheKey(connection.SSHConfig{
Host: "127.0.0.1",
Port: 22,
User: "root",
KeyPath: "/tmp/b.key",
})
if a == b {
t.Fatalf("expected different cache key when keyPath differs")
}
}

View File

@@ -1,22 +1,27 @@
package sync
import (
"GoNavi-Wails/internal/db"
"GoNavi-Wails/internal/logger"
"fmt"
"strings"
)
type TableDiffSummary struct {
Table string `json:"table"`
PKColumn string `json:"pkColumn,omitempty"`
CanSync bool `json:"canSync"`
Inserts int `json:"inserts"`
Updates int `json:"updates"`
Deletes int `json:"deletes"`
Same int `json:"same"`
Message string `json:"message,omitempty"`
HasSchema bool `json:"hasSchema,omitempty"`
Table string `json:"table"`
PKColumn string `json:"pkColumn,omitempty"`
CanSync bool `json:"canSync"`
Inserts int `json:"inserts"`
Updates int `json:"updates"`
Deletes int `json:"deletes"`
Same int `json:"same"`
Message string `json:"message,omitempty"`
HasSchema bool `json:"hasSchema,omitempty"`
TargetTableExists bool `json:"targetTableExists,omitempty"`
PlannedAction string `json:"plannedAction,omitempty"`
Warnings []string `json:"warnings,omitempty"`
UnsupportedObjects []string `json:"unsupportedObjects,omitempty"`
IndexesToCreate int `json:"indexesToCreate,omitempty"`
IndexesSkipped int `json:"indexesSkipped,omitempty"`
}
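
Every field added to TableDiffSummary is tagged omitempty, so a zero-valued TargetTableExists, PlannedAction, or Warnings simply vanishes from the JSON and existing consumers keep receiving the payload they already parse. A trimmed illustration of that wire behavior (field subset chosen for brevity):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copy of TableDiffSummary with two of the new optional fields.
type tableDiffSummary struct {
	Table             string   `json:"table"`
	CanSync           bool     `json:"canSync"`
	TargetTableExists bool     `json:"targetTableExists,omitempty"`
	PlannedAction     string   `json:"plannedAction,omitempty"`
	Warnings          []string `json:"warnings,omitempty"`
}

func main() {
	out, _ := json.Marshal(tableDiffSummary{Table: "users", CanSync: true})
	// omitempty drops the zero-valued new fields entirely.
	fmt.Println(string(out)) // {"table":"users","canSync":true}
}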
type SyncAnalyzeResult struct {
@@ -27,6 +32,12 @@ type SyncAnalyzeResult struct {
func (s *SyncEngine) Analyze(config SyncConfig) SyncAnalyzeResult {
result := SyncAnalyzeResult{Success: true, Tables: []TableDiffSummary{}}
if isRedisToMongoKeyspacePair(config) {
return s.analyzeRedisToMongo(config)
}
if isMongoToRedisKeyspacePair(config) {
return s.analyzeMongoToRedis(config)
}
contentRaw := strings.ToLower(strings.TrimSpace(config.Content))
syncSchema := false
@@ -48,25 +59,23 @@ func (s *SyncEngine) Analyze(config SyncConfig) SyncAnalyzeResult {
totalTables := len(config.Tables)
s.progress(config.JobID, 0, totalTables, "", "差异分析开始")
sourceDB, err := db.NewDatabase(config.SourceConfig.Type)
sourceDB, err := newSyncDatabase(config.SourceConfig.Type)
if err != nil {
logger.Error(err, "初始化源数据库驱动失败:类型=%s", config.SourceConfig.Type)
return SyncAnalyzeResult{Success: false, Message: "初始化源数据库驱动失败: " + err.Error()}
}
targetDB, err := db.NewDatabase(config.TargetConfig.Type)
targetDB, err := newSyncDatabase(config.TargetConfig.Type)
if err != nil {
logger.Error(err, "初始化目标数据库驱动失败:类型=%s", config.TargetConfig.Type)
return SyncAnalyzeResult{Success: false, Message: "初始化目标数据库驱动失败: " + err.Error()}
}
// Connect Source
if err := sourceDB.Connect(config.SourceConfig); err != nil {
logger.Error(err, "源数据库连接失败:%s", formatConnSummaryForSync(config.SourceConfig))
return SyncAnalyzeResult{Success: false, Message: "源数据库连接失败: " + err.Error()}
}
defer sourceDB.Close()
// Connect Target
if err := targetDB.Connect(config.TargetConfig); err != nil {
logger.Error(err, "目标数据库连接失败:%s", formatConnSummaryForSync(config.TargetConfig))
return SyncAnalyzeResult{Success: false, Message: "目标数据库连接失败: " + err.Error()}
@@ -88,51 +97,76 @@ func (s *SyncEngine) Analyze(config SyncConfig) SyncAnalyzeResult {
HasSchema: syncSchema,
}
sourceSchema, sourceTable := normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
targetSchema, targetTable := normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
sourceQueryTable := qualifiedNameForQuery(config.SourceConfig.Type, sourceSchema, sourceTable, tableName)
targetQueryTable := qualifiedNameForQuery(config.TargetConfig.Type, targetSchema, targetTable, tableName)
cols, err := sourceDB.GetColumns(sourceSchema, sourceTable)
plan, cols, _, err := buildSchemaMigrationPlan(config, tableName, sourceDB, targetDB)
if err != nil {
summary.Message = "获取源表字段失败: " + err.Error()
summary.Message = err.Error()
result.Tables = append(result.Tables, summary)
return
}
summary.TargetTableExists = plan.TargetTableExists
summary.PlannedAction = plan.PlannedAction
summary.Warnings = append(summary.Warnings, plan.Warnings...)
summary.UnsupportedObjects = append(summary.UnsupportedObjects, plan.UnsupportedObjects...)
summary.IndexesToCreate = plan.IndexesToCreate
summary.IndexesSkipped = plan.IndexesSkipped
if !plan.TargetTableExists && !plan.AutoCreate {
summary.Message = firstNonEmpty(plan.PlannedAction, "目标表不存在,无法执行同步")
result.Tables = append(result.Tables, summary)
return
}
if !syncData {
summary.CanSync = true
summary.Message = "仅同步结构,未执行数据差异分析"
summary.Message = firstNonEmpty(plan.PlannedAction, "仅同步结构,未执行数据差异分析")
result.Tables = append(result.Tables, summary)
return
}
tableMode := normalizeSyncMode(config.Mode)
pkCols := make([]string, 0, 2)
for _, c := range cols {
if c.Key == "PRI" || c.Key == "PK" {
pkCols = append(pkCols, c.Name)
}
}
if len(pkCols) == 0 {
summary.Message = "无主键,不支持数据对比/同步"
result.Tables = append(result.Tables, summary)
return
}
if len(pkCols) > 1 {
summary.Message = fmt.Sprintf("复合主键(%s暂不支持数据对比/同步", strings.Join(pkCols, ","))
result.Tables = append(result.Tables, summary)
return
}
summary.PKColumn = pkCols[0]
// Query data for diff
sourceRows, _, err := sourceDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.SourceConfig.Type, sourceQueryTable)))
sourceRows, _, err := sourceDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.SourceConfig.Type, plan.SourceQueryTable)))
if err != nil {
summary.Message = "读取源表失败: " + err.Error()
result.Tables = append(result.Tables, summary)
return
}
targetRows, _, err := targetDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.TargetConfig.Type, targetQueryTable)))
if !plan.TargetTableExists && plan.AutoCreate {
summary.CanSync = true
summary.Inserts = len(sourceRows)
summary.Message = firstNonEmpty(plan.PlannedAction, "目标表不存在,执行时将自动建表并导入全部源数据")
result.Tables = append(result.Tables, summary)
return
}
if tableMode != "insert_update" {
summary.CanSync = true
summary.Inserts = len(sourceRows)
summary.Message = firstNonEmpty(plan.PlannedAction, "当前模式无需差异对比,将按源表数据执行导入")
result.Tables = append(result.Tables, summary)
return
}
if len(pkCols) == 0 {
summary.Message = "无主键,不支持差异对比同步;如需直接导入请使用仅插入或全量覆盖模式"
result.Tables = append(result.Tables, summary)
return
}
if len(pkCols) > 1 {
summary.Message = fmt.Sprintf("复合主键(%s暂不支持差异对比同步", strings.Join(pkCols, ","))
result.Tables = append(result.Tables, summary)
return
}
summary.PKColumn = pkCols[0]
targetRows, _, err := targetDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.TargetConfig.Type, plan.TargetQueryTable)))
if err != nil {
summary.Message = "读取目标表失败: " + err.Error()
result.Tables = append(result.Tables, summary)
@@ -188,6 +222,9 @@ func (s *SyncEngine) Analyze(config SyncConfig) SyncAnalyzeResult {
}
summary.CanSync = true
if strings.TrimSpace(summary.Message) == "" {
summary.Message = firstNonEmpty(plan.PlannedAction, "差异分析完成")
}
result.Tables = append(result.Tables, summary)
}()
}
@@ -196,3 +233,12 @@ func (s *SyncEngine) Analyze(config SyncConfig) SyncAnalyzeResult {
result.Message = fmt.Sprintf("已完成 %d 张表的差异分析", len(result.Tables))
return result
}
func firstNonEmpty(values ...string) string {
for _, value := range values {
if strings.TrimSpace(value) != "" {
return value
}
}
return ""
}

View File

@@ -0,0 +1,741 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"fmt"
"regexp"
"strings"
)
func buildMySQLToClickHousePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(config.SourceConfig.Type, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(config.TargetConfig.Type, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if config.AutoAddColumns {
addSQL, addWarnings := buildMySQLToClickHouseAddColumnSQL(plan.TargetQueryTable, sourceCols, targetCols)
plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
plan.Warnings = append(plan.Warnings, addWarnings...)
if len(addSQL) > 0 {
plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
}
}
plan.Warnings = append(plan.Warnings, "ClickHouse 目标端建议优先使用仅插入或全量覆盖;更新/删除语义与传统关系型存在差异")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings, unsupported := buildMySQLToClickHouseCreateTableSQL(plan.TargetQueryTable, sourceCols)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildPGLikeToClickHousePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
sourceType := resolveMigrationDBType(config.SourceConfig)
targetType := resolveMigrationDBType(config.TargetConfig)
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if config.AutoAddColumns {
addSQL, addWarnings := buildPGLikeToClickHouseAddColumnSQL(plan.TargetQueryTable, sourceCols, targetCols)
plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
plan.Warnings = append(plan.Warnings, addWarnings...)
if len(addSQL) > 0 {
plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
}
}
plan.Warnings = append(plan.Warnings, "ClickHouse 目标端建议优先使用仅插入或全量覆盖;更新/删除语义与传统关系型存在差异")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings, unsupported := buildPGLikeToClickHouseCreateTableSQL(plan.TargetQueryTable, sourceCols)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildClickHouseToMySQLPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(config.SourceConfig.Type, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(config.TargetConfig.Type, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if config.AutoAddColumns {
addSQL, addWarnings := buildClickHouseToMySQLAddColumnSQL(plan.TargetQueryTable, sourceCols, targetCols)
plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
plan.Warnings = append(plan.Warnings, addWarnings...)
if len(addSQL) > 0 {
plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
}
}
plan.Warnings = append(plan.Warnings, "ClickHouse 源端索引/约束元数据有限,反向迁移将以字段和数据为主")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings := buildClickHouseToMySQLCreateTableSQL(plan.TargetQueryTable, sourceCols)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildClickHouseToPGLikePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
sourceType := resolveMigrationDBType(config.SourceConfig)
targetType := resolveMigrationDBType(config.TargetConfig)
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if config.AutoAddColumns {
addSQL, addWarnings := buildClickHouseToPGLikeAddColumnSQL(targetType, plan.TargetQueryTable, sourceCols, targetCols)
plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
plan.Warnings = append(plan.Warnings, addWarnings...)
if len(addSQL) > 0 {
plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
}
}
plan.Warnings = append(plan.Warnings, "ClickHouse 源端索引/约束元数据有限,反向迁移将以字段和数据为主")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings, unsupported := buildClickHouseToPGLikeCreateTableSQL(targetType, plan.TargetQueryTable, sourceCols)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildPGLikeToClickHouseAddColumnSQL(targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
targetSet := make(map[string]struct{}, len(targetCols))
for _, col := range targetCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
targetSet[key] = struct{}{}
}
var sqlList []string
var warnings []string
for _, col := range sourceCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
if _, ok := targetSet[key]; ok {
continue
}
colType, mapWarnings := mapPGLikeColumnToClickHouse(col)
warnings = append(warnings, mapWarnings...)
sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s",
quoteQualifiedIdentByType("clickhouse", targetQueryTable),
quoteIdentByType("clickhouse", col.Name),
colType,
))
}
return sqlList, dedupeStrings(warnings)
}
func buildMySQLToClickHouseAddColumnSQL(targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
targetSet := make(map[string]struct{}, len(targetCols))
for _, col := range targetCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
targetSet[key] = struct{}{}
}
var sqlList []string
var warnings []string
for _, col := range sourceCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
if _, ok := targetSet[key]; ok {
continue
}
colType, mapWarnings := mapMySQLColumnToClickHouse(col)
warnings = append(warnings, mapWarnings...)
sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s",
quoteQualifiedIdentByType("clickhouse", targetQueryTable),
quoteIdentByType("clickhouse", col.Name),
colType,
))
}
return sqlList, dedupeStrings(warnings)
}
func buildClickHouseToPGLikeAddColumnSQL(targetType string, targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
targetSet := make(map[string]struct{}, len(targetCols))
for _, col := range targetCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
targetSet[key] = struct{}{}
}
var sqlList []string
var warnings []string
for _, col := range sourceCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
if _, ok := targetSet[key]; ok {
continue
}
colType, mapWarnings := mapClickHouseColumnToPGLike(col)
warnings = append(warnings, mapWarnings...)
sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
quoteQualifiedIdentByType(targetType, targetQueryTable),
quoteIdentByType(targetType, col.Name),
colType,
))
}
return sqlList, dedupeStrings(warnings)
}
func buildClickHouseToMySQLAddColumnSQL(targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
targetSet := make(map[string]struct{}, len(targetCols))
for _, col := range targetCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
targetSet[key] = struct{}{}
}
var sqlList []string
var warnings []string
for _, col := range sourceCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
if _, ok := targetSet[key]; ok {
continue
}
colType, mapWarnings := mapClickHouseColumnToMySQL(col)
warnings = append(warnings, mapWarnings...)
sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
quoteQualifiedIdentByType("mysql", targetQueryTable),
quoteIdentByType("mysql", col.Name),
colType,
))
}
return sqlList, dedupeStrings(warnings)
}
func buildPGLikeToClickHouseCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition) (string, []string, []string) {
columnDefs := make([]string, 0, len(sourceCols))
warnings := make([]string, 0)
unsupported := make([]string, 0)
orderByCols := make([]string, 0)
for _, col := range sourceCols {
def, colWarnings := buildPGLikeToClickHouseColumnDefinition(col)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("clickhouse", col.Name), def))
if col.Key == "PRI" || col.Key == "PK" {
orderByCols = append(orderByCols, quoteIdentByType("clickhouse", col.Name))
}
}
orderExpr := "tuple()"
if len(orderByCols) > 0 {
orderExpr = "(" + strings.Join(orderByCols, ", ") + ")"
} else {
warnings = append(warnings, "源表未识别到主键ClickHouse 将使用 ORDER BY tuple() 建表,后续查询性能可能受影响")
}
warnings = append(warnings, "ClickHouse 不保留关系型外键/唯一约束语义,将仅迁移字段与数据")
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n) ENGINE = MergeTree() ORDER BY %s", quoteQualifiedIdentByType("clickhouse", targetQueryTable), strings.Join(columnDefs, ",\n "), orderExpr)
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildMySQLToClickHouseCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition) (string, []string, []string) {
columnDefs := make([]string, 0, len(sourceCols))
warnings := make([]string, 0)
unsupported := make([]string, 0)
orderByCols := make([]string, 0)
for _, col := range sourceCols {
def, colWarnings := buildMySQLToClickHouseColumnDefinition(col)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("clickhouse", col.Name), def))
if col.Key == "PRI" || col.Key == "PK" {
orderByCols = append(orderByCols, quoteIdentByType("clickhouse", col.Name))
}
}
orderExpr := "tuple()"
if len(orderByCols) > 0 {
orderExpr = "(" + strings.Join(orderByCols, ", ") + ")"
} else {
warnings = append(warnings, "源表未识别到主键ClickHouse 将使用 ORDER BY tuple() 建表,后续查询性能可能受影响")
}
warnings = append(warnings, "ClickHouse 不保留关系型外键/唯一约束语义,将仅迁移字段与数据")
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n) ENGINE = MergeTree() ORDER BY %s", quoteQualifiedIdentByType("clickhouse", targetQueryTable), strings.Join(columnDefs, ",\n "), orderExpr)
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildClickHouseToPGLikeCreateTableSQL(targetType string, targetQueryTable string, sourceCols []connection.ColumnDefinition) (string, []string, []string) {
columnDefs := make([]string, 0, len(sourceCols)+1)
warnings := make([]string, 0)
unsupported := []string{"ClickHouse ORDER BY/PARTITION/TTL/Projection/物化视图 语义当前不会自动迁移到 PG-like"}
pkCols := make([]string, 0)
for _, col := range sourceCols {
def, colWarnings := buildClickHouseToPGLikeColumnDefinition(col)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType(targetType, col.Name), def))
if col.Key == "PRI" || col.Key == "PK" {
pkCols = append(pkCols, quoteIdentByType(targetType, col.Name))
}
}
if len(pkCols) > 0 {
columnDefs = append(columnDefs, fmt.Sprintf("PRIMARY KEY (%s)", strings.Join(pkCols, ", ")))
} else {
warnings = append(warnings, "ClickHouse 源端未返回主键信息,目标 PG-like 表将不自动创建主键")
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType(targetType, targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildClickHouseToMySQLCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition) (string, []string) {
columnDefs := make([]string, 0, len(sourceCols)+1)
warnings := make([]string, 0)
pkCols := make([]string, 0)
for _, col := range sourceCols {
def, colWarnings := buildClickHouseToMySQLColumnDefinition(col)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("mysql", col.Name), def))
if col.Key == "PRI" || col.Key == "PK" {
pkCols = append(pkCols, quoteIdentByType("mysql", col.Name))
}
}
if len(pkCols) > 0 {
columnDefs = append(columnDefs, fmt.Sprintf("PRIMARY KEY (%s)", strings.Join(pkCols, ", ")))
} else {
warnings = append(warnings, "ClickHouse 源端未返回主键信息,目标 MySQL 表将不自动创建主键")
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("mysql", targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings)
}
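
To make the emitted DDL concrete, here is the assembly logic of the ClickHouse create-table builders reduced to a standalone sketch; the column definitions and backtick quoting are hypothetical stand-ins for what quoteIdentByType/quoteQualifiedIdentByType would produce:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical mapped columns: id is the primary key, name is nullable.
	columnDefs := []string{"`id` Int64", "`name` Nullable(String)"}
	orderBy := []string{"`id`"}

	orderExpr := "tuple()" // fallback when no primary key was detected
	if len(orderBy) > 0 {
		orderExpr = "(" + strings.Join(orderBy, ", ") + ")"
	}
	createSQL := fmt.Sprintf(
		"CREATE TABLE %s (\n  %s\n) ENGINE = MergeTree() ORDER BY %s",
		"`db`.`users`", strings.Join(columnDefs, ",\n  "), orderExpr,
	)
	fmt.Println(createSQL)
	// CREATE TABLE `db`.`users` (
	//   `id` Int64,
	//   `name` Nullable(String)
	// ) ENGINE = MergeTree() ORDER BY (`id`)
}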
func buildPGLikeToClickHouseColumnDefinition(col connection.ColumnDefinition) (string, []string) {
targetType, warnings := mapPGLikeColumnToClickHouse(col)
parts := []string{targetType}
return strings.Join(parts, " "), dedupeStrings(warnings)
}
func buildMySQLToClickHouseColumnDefinition(col connection.ColumnDefinition) (string, []string) {
targetType, warnings := mapMySQLColumnToClickHouse(col)
// Nullable(...) wrapping is already applied inside mapMySQLColumnToClickHouse,
// so the mapped type is the complete definition and no NOT NULL suffix is needed.
return targetType, dedupeStrings(warnings)
}
func buildClickHouseToPGLikeColumnDefinition(col connection.ColumnDefinition) (string, []string) {
targetType, warnings := mapClickHouseColumnToPGLike(col)
parts := []string{targetType}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
parts = append(parts, "NOT NULL")
}
return strings.Join(parts, " "), dedupeStrings(warnings)
}
func buildClickHouseToMySQLColumnDefinition(col connection.ColumnDefinition) (string, []string) {
targetType, warnings := mapClickHouseColumnToMySQL(col)
parts := []string{targetType}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
parts = append(parts, "NOT NULL")
}
return strings.Join(parts, " "), dedupeStrings(warnings)
}
func mapPGLikeColumnToClickHouse(col connection.ColumnDefinition) (string, []string) {
raw := strings.ToLower(strings.TrimSpace(col.Type))
warnings := make([]string, 0)
if raw == "" {
return "String", []string{fmt.Sprintf("字段 %s 类型为空,已降级为 String", col.Name)}
}
baseType := "String"
switch {
case raw == "boolean" || strings.HasPrefix(raw, "bool"):
baseType = "UInt8"
case raw == "smallint":
baseType = "Int16"
case raw == "integer" || raw == "int4":
baseType = "Int32"
case raw == "bigint" || raw == "int8":
baseType = "Int64"
case strings.HasPrefix(raw, "numeric"), strings.HasPrefix(raw, "decimal"):
baseType = replaceTypeBase(raw, []string{"numeric", "decimal"}, "Decimal")
case raw == "real" || raw == "float4":
baseType = "Float32"
case raw == "double precision" || raw == "float8":
baseType = "Float64"
case raw == "date":
baseType = "Date"
case strings.HasPrefix(raw, "timestamp") || strings.Contains(raw, "without time zone") || strings.Contains(raw, "with time zone"):
baseType = "DateTime"
case strings.HasPrefix(raw, "time"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 String", col.Name, col.Type))
baseType = "String"
case strings.HasPrefix(raw, "character varying"), strings.HasPrefix(raw, "varchar("), strings.HasPrefix(raw, "character("), strings.HasPrefix(raw, "char("), raw == "character", raw == "text", raw == "uuid":
baseType = "String"
case raw == "json" || raw == "jsonb" || raw == "bytea":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 String", col.Name, col.Type))
baseType = "String"
case strings.HasSuffix(raw, "[]") || strings.HasPrefix(raw, "array"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 String", col.Name, col.Type))
baseType = "String"
case raw == "user-defined":
warnings = append(warnings, fmt.Sprintf("字段 %s 为用户自定义类型,已降级为 String", col.Name))
baseType = "String"
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门映射,已降级为 String", col.Name, col.Type))
baseType = "String"
}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "YES") && !strings.HasPrefix(strings.ToLower(baseType), "nullable(") {
baseType = fmt.Sprintf("Nullable(%s)", baseType)
}
if strings.Contains(strings.ToLower(strings.TrimSpace(col.Extra)), "identity") || strings.Contains(strings.ToLower(strings.TrimSpace(col.Extra)), "auto_increment") {
warnings = append(warnings, fmt.Sprintf("字段 %s 的 identity/自增语义在 ClickHouse 中不保留", col.Name))
}
return baseType, dedupeStrings(warnings)
}
func mapMySQLColumnToClickHouse(col connection.ColumnDefinition) (string, []string) {
raw := strings.ToLower(strings.TrimSpace(col.Type))
warnings := make([]string, 0)
if raw == "" {
return "String", []string{fmt.Sprintf("字段 %s 类型为空,已降级为 String", col.Name)}
}
unsigned := strings.Contains(raw, "unsigned")
clean := strings.ReplaceAll(raw, " unsigned", "")
clean = strings.ReplaceAll(clean, " zerofill", "")
baseType := "String"
switch {
case strings.HasPrefix(clean, "tinyint(1)"):
baseType = "UInt8"
case strings.HasPrefix(clean, "tinyint"):
if unsigned {
baseType = "UInt8"
} else {
baseType = "Int8"
}
case strings.HasPrefix(clean, "smallint"):
if unsigned {
baseType = "UInt16"
} else {
baseType = "Int16"
}
case strings.HasPrefix(clean, "mediumint"), strings.HasPrefix(clean, "int"), strings.HasPrefix(clean, "integer"):
if unsigned {
baseType = "UInt32"
} else {
baseType = "Int32"
}
case strings.HasPrefix(clean, "bigint"):
if unsigned {
baseType = "UInt64"
} else {
baseType = "Int64"
}
case strings.HasPrefix(clean, "decimal"), strings.HasPrefix(clean, "numeric"):
baseType = replaceTypeBase(strings.Title(clean), []string{"Decimal", "Numeric"}, "Decimal")
case strings.HasPrefix(clean, "float"):
baseType = "Float32"
case strings.HasPrefix(clean, "double"):
baseType = "Float64"
case strings.HasPrefix(clean, "date"):
baseType = "Date"
case strings.HasPrefix(clean, "datetime"), strings.HasPrefix(clean, "timestamp"):
baseType = "DateTime"
case strings.HasPrefix(clean, "time"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 time 已降级为 String", col.Name))
baseType = "String"
case strings.HasPrefix(clean, "json"), strings.HasPrefix(clean, "enum"), strings.HasPrefix(clean, "set"), strings.HasPrefix(clean, "char"), strings.HasPrefix(clean, "varchar"), strings.Contains(clean, "text"):
baseType = "String"
case strings.Contains(clean, "blob"), strings.Contains(clean, "binary"):
warnings = append(warnings, fmt.Sprintf("字段 %s 二进制类型已降级为 String", col.Name))
baseType = "String"
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门映射,已降级为 String", col.Name, col.Type))
baseType = "String"
}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "YES") && !strings.HasPrefix(strings.ToLower(baseType), "nullable(") {
baseType = fmt.Sprintf("Nullable(%s)", baseType)
}
if strings.Contains(strings.ToLower(strings.TrimSpace(col.Extra)), "auto_increment") {
warnings = append(warnings, fmt.Sprintf("字段 %s 的 AUTO_INCREMENT 在 ClickHouse 中不保留自增语义", col.Name))
}
return baseType, dedupeStrings(warnings)
}
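
The integer branch above does two things worth isolating: strip the unsigned/zerofill modifiers before prefix-matching, then wrap the result in Nullable(...) only when the source column allows NULL. A condensed, self-contained version of just that branch:

package main

import (
	"fmt"
	"strings"
)

// mapIntLike condenses the integer handling of mapMySQLColumnToClickHouse.
func mapIntLike(raw string, nullable bool) string {
	unsigned := strings.Contains(raw, "unsigned")
	clean := strings.ReplaceAll(raw, " unsigned", "")
	base := "String" // fallback, as in the full mapper
	switch {
	case strings.HasPrefix(clean, "bigint"):
		base = "Int64"
		if unsigned {
			base = "UInt64"
		}
	case strings.HasPrefix(clean, "int"):
		base = "Int32"
		if unsigned {
			base = "UInt32"
		}
	}
	if nullable {
		base = fmt.Sprintf("Nullable(%s)", base)
	}
	return base
}

func main() {
	fmt.Println(mapIntLike("bigint unsigned", false)) // UInt64
	fmt.Println(mapIntLike("int(11)", true))          // Nullable(Int32)
}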
var clickHouseDecimalPattern = regexp.MustCompile(`^(decimal|numeric)\((\d+)\s*,\s*(\d+)\)$`)
var clickHouseStringArgsPattern = regexp.MustCompile(`^fixedstring\((\d+)\)$`)
func mapClickHouseColumnToPGLike(col connection.ColumnDefinition) (string, []string) {
raw := strings.TrimSpace(col.Type)
lower := strings.ToLower(raw)
warnings := make([]string, 0)
if strings.HasPrefix(lower, "nullable(") && strings.HasSuffix(lower, ")") {
raw = strings.TrimSpace(raw[len("Nullable(") : len(raw)-1])
lower = strings.ToLower(raw)
}
for {
if strings.HasPrefix(lower, "lowcardinality(") && strings.HasSuffix(lower, ")") {
raw = strings.TrimSpace(raw[len("LowCardinality(") : len(raw)-1])
lower = strings.ToLower(raw)
continue
}
break
}
switch {
case lower == "bool" || lower == "boolean":
return "boolean", warnings
case lower == "int8":
return "smallint", warnings
case lower == "uint8":
return "smallint", warnings
case lower == "int16":
return "smallint", warnings
case lower == "uint16":
return "integer", warnings
case lower == "int32":
return "integer", warnings
case lower == "uint32":
return "bigint", warnings
case lower == "int64":
return "bigint", warnings
case lower == "uint64":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已映射为 numeric(20,0) 以避免无符号溢出", col.Name, col.Type))
return "numeric(20,0)", warnings
case lower == "float32":
return "real", warnings
case lower == "float64":
return "double precision", warnings
case lower == "date":
return "date", warnings
case strings.HasPrefix(lower, "datetime"):
return "timestamp", warnings
case lower == "string":
return "text", warnings
case lower == "uuid":
return "uuid", warnings
case lower == "json", strings.HasPrefix(lower, "map("), strings.HasPrefix(lower, "array("), strings.HasPrefix(lower, "tuple("), strings.HasPrefix(lower, "nested("):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 jsonb", col.Name, col.Type))
return "jsonb", warnings
case strings.HasPrefix(lower, "enum8("), strings.HasPrefix(lower, "enum16("):
warnings = append(warnings, fmt.Sprintf("字段 %s 枚举类型 %s 已降级为 varchar(255)", col.Name, col.Type))
return "varchar(255)", warnings
case clickHouseDecimalPattern.MatchString(lower):
parts := clickHouseDecimalPattern.FindStringSubmatch(lower)
return fmt.Sprintf("numeric(%s,%s)", parts[2], parts[3]), warnings
case clickHouseStringArgsPattern.MatchString(lower):
parts := clickHouseStringArgsPattern.FindStringSubmatch(lower)
return fmt.Sprintf("varchar(%s)", parts[1]), warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 PG-like 映射,已降级为 text", col.Name, col.Type))
return "text", warnings
}
}
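
Both ClickHouse mappers first peel the Nullable(...) and LowCardinality(...) wrappers before matching the inner type. A slightly generalized sketch of that unwrapping (the code above strips a single Nullable layer, then loops on LowCardinality; this version loops on both):

package main

import (
	"fmt"
	"strings"
)

// unwrap peels Nullable(...) and LowCardinality(...) wrappers so the
// inner ClickHouse type can be matched directly.
func unwrap(raw string) (inner string, nullable bool) {
	inner = strings.TrimSpace(raw)
	for {
		lower := strings.ToLower(inner)
		switch {
		case strings.HasPrefix(lower, "nullable(") && strings.HasSuffix(lower, ")"):
			nullable = true
			inner = strings.TrimSpace(inner[len("Nullable(") : len(inner)-1])
		case strings.HasPrefix(lower, "lowcardinality(") && strings.HasSuffix(lower, ")"):
			inner = strings.TrimSpace(inner[len("LowCardinality(") : len(inner)-1])
		default:
			return inner, nullable
		}
	}
}

func main() {
	fmt.Println(unwrap("Nullable(LowCardinality(String))")) // String true
}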
func mapClickHouseColumnToMySQL(col connection.ColumnDefinition) (string, []string) {
raw := strings.TrimSpace(col.Type)
lower := strings.ToLower(raw)
warnings := make([]string, 0)
nullable := false
if strings.HasPrefix(lower, "nullable(") && strings.HasSuffix(lower, ")") {
nullable = true
raw = strings.TrimSpace(raw[len("Nullable(") : len(raw)-1])
lower = strings.ToLower(raw)
}
for {
if strings.HasPrefix(lower, "lowcardinality(") && strings.HasSuffix(lower, ")") {
raw = strings.TrimSpace(raw[len("LowCardinality(") : len(raw)-1])
lower = strings.ToLower(raw)
continue
}
break
}
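// Only the Nullable(...) wrapper is stripped at this point; the MySQL
// NOT NULL decision is made from col.Nullable in buildClickHouseToMySQLColumnDefinition.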
_ = nullable
switch {
case lower == "bool" || lower == "boolean" || lower == "uint8":
return "tinyint(1)", warnings
case lower == "int8":
return "tinyint", warnings
case lower == "uint16":
return "smallint unsigned", warnings
case lower == "int16":
return "smallint", warnings
case lower == "uint32":
return "int unsigned", warnings
case lower == "int32":
return "int", warnings
case lower == "uint64":
return "bigint unsigned", warnings
case lower == "int64":
return "bigint", warnings
case lower == "float32":
return "float", warnings
case lower == "float64":
return "double", warnings
case lower == "date":
return "date", warnings
case strings.HasPrefix(lower, "datetime"):
return "datetime", warnings
case lower == "string":
return "text", warnings
case lower == "uuid":
return "char(36)", warnings
case lower == "json", strings.HasPrefix(lower, "map("), strings.HasPrefix(lower, "array("), strings.HasPrefix(lower, "tuple("), strings.HasPrefix(lower, "nested("):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 json", col.Name, col.Type))
return "json", warnings
case clickHouseDecimalPattern.MatchString(lower):
parts := clickHouseDecimalPattern.FindStringSubmatch(lower)
return fmt.Sprintf("decimal(%s,%s)", parts[2], parts[3]), warnings
case clickHouseStringArgsPattern.MatchString(lower):
parts := clickHouseStringArgsPattern.FindStringSubmatch(lower)
return fmt.Sprintf("varchar(%s)", parts[1]), warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门映射,已降级为 text", col.Name, col.Type))
return "text", warnings
}
}

View File

@@ -0,0 +1,379 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"fmt"
"strings"
)
type genericLegacyPlanner struct{}
type mysqlToPGLikePlanner struct{}
type mysqlToClickHousePlanner struct{}
type pgLikeToClickHousePlanner struct{}
type clickHouseToMySQLPlanner struct{}
type clickHouseToPGLikePlanner struct{}
type mysqlToMongoPlanner struct{}
type pgLikeToMongoPlanner struct{}
type clickHouseToMongoPlanner struct{}
type tdengineToMongoPlanner struct{}
type mongoToMySQLPlanner struct{}
type mongoToPGLikePlanner struct{}
type pgLikeToMySQLPlanner struct{}
type tdengineToMySQLPlanner struct{}
type tdengineToPGLikePlanner struct{}
type mongoToRelationalPlanner struct{}
func buildSchemaMigrationPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
ctx := MigrationBuildContext{
Config: config,
TableName: tableName,
SourceDB: sourceDB,
TargetDB: targetDB,
}
planner := resolveMigrationPlanner(ctx)
if planner == nil {
return buildSchemaMigrationPlanLegacy(config, tableName, sourceDB, targetDB)
}
return planner.BuildPlan(ctx)
}
func resolveMigrationPlanner(ctx MigrationBuildContext) MigrationPlanner {
planners := []MigrationPlanner{
mysqlToPGLikePlanner{},
mySQLLikeToTDenginePlanner{},
pgLikeToTDenginePlanner{},
clickHouseToTDenginePlanner{},
tdengineToTDenginePlanner{},
tdengineToPGLikePlanner{},
tdengineToMySQLPlanner{},
mysqlToClickHousePlanner{},
pgLikeToClickHousePlanner{},
clickHouseToMySQLPlanner{},
clickHouseToPGLikePlanner{},
mysqlToMongoPlanner{},
pgLikeToMongoPlanner{},
clickHouseToMongoPlanner{},
tdengineToMongoPlanner{},
mongoToMySQLPlanner{},
mongoToPGLikePlanner{},
pgLikeToMySQLPlanner{},
mongoToRelationalPlanner{},
genericLegacyPlanner{},
}
bestLevel := MigrationSupportLevelUnsupported
var bestPlanner MigrationPlanner
for _, planner := range planners {
level := planner.SupportLevel(ctx)
if migrationSupportRank(level) > migrationSupportRank(bestLevel) {
bestLevel = level
bestPlanner = planner
}
}
return bestPlanner
}
func migrationSupportRank(level MigrationSupportLevel) int {
switch level {
case MigrationSupportLevelFull:
return 4
case MigrationSupportLevelPlanned:
return 3
case MigrationSupportLevelPartial:
return 2
default:
return 1
}
}
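
resolveMigrationPlanner is a max-by-rank scan: because the comparison is strictly greater-than, the first planner registered at the best level wins ties, which is why genericLegacyPlanner — always Partial — sits last as the catch-all. The selection logic in isolation:

package main

import "fmt"

type level int

const (
	unsupported level = iota + 1
	partial
	planned
	full
)

type candidate struct {
	name  string
	level level
}

// pickBest mirrors resolveMigrationPlanner: scan in registration order,
// keep only strictly higher levels, so earlier candidates win ties.
func pickBest(candidates []candidate) (best candidate) {
	best.level = unsupported
	for _, c := range candidates {
		if c.level > best.level {
			best = c
		}
	}
	return best
}

func main() {
	got := pickBest([]candidate{
		{"mysql-pglike-planner", full},
		{"generic-legacy-planner", partial}, // catch-all, always Partial
	})
	fmt.Println(got.name) // mysql-pglike-planner
}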
func isMySQLLikeType(dbType string) bool {
return isMySQLLikeWritableTargetType(dbType)
}
func classifyMigrationDataModel(dbType string) MigrationDataModel {
switch normalizeMigrationDBType(dbType) {
case "mysql", "mariadb", "postgres", "kingbase", "highgo", "vastbase", "oracle", "sqlserver", "dameng", "sqlite", "duckdb":
return MigrationDataModelRelational
case "mongodb":
return MigrationDataModelDocument
case "clickhouse", "diros", "sphinx":
return MigrationDataModelColumnar
case "tdengine":
return MigrationDataModelTimeSeries
case "redis":
return MigrationDataModelKeyValue
default:
return MigrationDataModelCustom
}
}
func (genericLegacyPlanner) Name() string { return "generic-legacy-planner" }
func (genericLegacyPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
_ = ctx
return MigrationSupportLevelPartial
}
func (genericLegacyPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildSchemaMigrationPlanLegacy(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (mysqlToPGLikePlanner) Name() string { return "mysql-pglike-planner" }
func (mysqlToPGLikePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isMySQLLikeSourceType(sourceType) && isPGLikeTarget(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (mysqlToPGLikePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildMySQLToPGLikePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (tdengineToMySQLPlanner) Name() string { return "tdengine-mysql-planner" }
func (tdengineToMySQLPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "tdengine" && isMySQLLikeWritableTargetType(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (tdengineToMySQLPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTDengineToMySQLPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (tdengineToPGLikePlanner) Name() string { return "tdengine-pglike-planner" }
func (tdengineToPGLikePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "tdengine" && isPGLikeTarget(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (tdengineToPGLikePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTDengineToPGLikePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (mysqlToClickHousePlanner) Name() string { return "mysql-clickhouse-planner" }
func (mysqlToClickHousePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isMySQLCoreType(sourceType) && targetType == "clickhouse" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (mysqlToClickHousePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildMySQLToClickHousePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (pgLikeToClickHousePlanner) Name() string { return "pglike-clickhouse-planner" }
func (pgLikeToClickHousePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isPGLikeSource(sourceType) && targetType == "clickhouse" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (pgLikeToClickHousePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildPGLikeToClickHousePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (clickHouseToMySQLPlanner) Name() string { return "clickhouse-mysql-planner" }
func (clickHouseToMySQLPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "clickhouse" && isMySQLLikeWritableTargetType(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (clickHouseToMySQLPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildClickHouseToMySQLPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (clickHouseToPGLikePlanner) Name() string { return "clickhouse-pglike-planner" }
func (clickHouseToPGLikePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "clickhouse" && isPGLikeTarget(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (clickHouseToPGLikePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildClickHouseToPGLikePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (mysqlToMongoPlanner) Name() string { return "mysql-mongo-planner" }
func (mysqlToMongoPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isMySQLCoreType(sourceType) && targetType == "mongodb" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (mysqlToMongoPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildMySQLToMongoPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (pgLikeToMongoPlanner) Name() string { return "pglike-mongo-planner" }
func (pgLikeToMongoPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isPGLikeSource(sourceType) && targetType == "mongodb" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (pgLikeToMongoPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildPGLikeToMongoPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (clickHouseToMongoPlanner) Name() string { return "clickhouse-mongo-planner" }
func (clickHouseToMongoPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "clickhouse" && targetType == "mongodb" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (clickHouseToMongoPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildClickHouseToMongoPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (tdengineToMongoPlanner) Name() string { return "tdengine-mongo-planner" }
func (tdengineToMongoPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "tdengine" && targetType == "mongodb" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (tdengineToMongoPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTDengineToMongoPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (mongoToMySQLPlanner) Name() string { return "mongo-mysql-planner" }
func (mongoToMySQLPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "mongodb" && isMySQLLikeWritableTargetType(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (mongoToMySQLPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildMongoToMySQLPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (mongoToPGLikePlanner) Name() string { return "mongo-pglike-planner" }
func (mongoToPGLikePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "mongodb" && isPGLikeTarget(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (mongoToPGLikePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildMongoToPGLikePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (pgLikeToMySQLPlanner) Name() string { return "pglike-mysql-planner" }
func (pgLikeToMySQLPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isPGLikeSource(sourceType) && isMySQLLikeWritableTargetType(targetType) {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (pgLikeToMySQLPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildPGLikeToMySQLPlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (mongoToRelationalPlanner) Name() string { return "mongo-relational-inference-planner" }
func (mongoToRelationalPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if !shouldUseSchemaInference(sourceType, targetType) {
return MigrationSupportLevelUnsupported
}
return MigrationSupportLevelPlanned
}
func (mongoToRelationalPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
inference, err := inferSchemaForPair(sourceType, targetType, ctx.TableName)
if err != nil {
return SchemaMigrationPlan{}, nil, nil, err
}
plan := SchemaMigrationPlan{}
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, ctx.Config.SourceConfig.Database, ctx.TableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, ctx.Config.TargetConfig.Database, ctx.TableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, ctx.TableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, ctx.TableName)
plan.PlannedAction = "当前库对已进入迁移内核规划阶段,等待 schema 推断与目标方言生成器落地"
for _, issue := range inference.Issues {
msg := strings.TrimSpace(issue.Message)
if msg == "" {
continue
}
plan.Warnings = append(plan.Warnings, msg)
}
plan.Warnings = append(plan.Warnings, fmt.Sprintf("迁移对象=%s目标类型=%s当前仅提供规划入口暂不执行自动建表", inference.Object.Kind, targetType))
return dedupeSchemaMigrationPlan(plan), nil, nil, nil
}
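
Everything a new source/target pair needs is the three-method MigrationPlanner contract visible above: Name, SupportLevel, and BuildPlan. A trimmed sketch of wiring in a hypothetical pair (types here are local stand-ins; the real interface and levels live elsewhere in the sync package):

package main

import "fmt"

type supportLevel int

const (
	levelUnsupported supportLevel = iota
	levelPartial
	levelPlanned
	levelFull
)

type buildContext struct{ source, target string }

// migrationPlanner is trimmed to the two methods resolution needs;
// the real interface also declares BuildPlan.
type migrationPlanner interface {
	Name() string
	SupportLevel(ctx buildContext) supportLevel
}

// redisToMySQLPlanner is a hypothetical new pair: implement the methods,
// append a value to the planners slice, and resolution picks it up.
type redisToMySQLPlanner struct{}

func (redisToMySQLPlanner) Name() string { return "redis-mysql-planner" }

func (redisToMySQLPlanner) SupportLevel(ctx buildContext) supportLevel {
	if ctx.source == "redis" && ctx.target == "mysql" {
		return levelPlanned
	}
	return levelUnsupported
}

func main() {
	var p migrationPlanner = redisToMySQLPlanner{}
	fmt.Println(p.Name(), p.SupportLevel(buildContext{"redis", "mysql"}) == levelPlanned)
}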

View File

@@ -0,0 +1,447 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"strings"
"testing"
)
func TestClassifyMigrationDataModel(t *testing.T) {
t.Parallel()
cases := map[string]MigrationDataModel{
"mysql": MigrationDataModelRelational,
"postgres": MigrationDataModelRelational,
"kingbase": MigrationDataModelRelational,
"mongodb": MigrationDataModelDocument,
"clickhouse": MigrationDataModelColumnar,
"tdengine": MigrationDataModelTimeSeries,
"redis": MigrationDataModelKeyValue,
"custom": MigrationDataModelCustom,
}
for input, want := range cases {
input, want := input, want
t.Run(input, func(t *testing.T) {
t.Parallel()
got := classifyMigrationDataModel(input)
if got != want {
t.Fatalf("unexpected data model, input=%s got=%s want=%s", input, got, want)
}
})
}
}
func TestResolveMigrationPlanner_PrefersMySQLKingbasePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "kingbase"},
},
})
if planner == nil {
t.Fatalf("expected planner")
}
if planner.Name() != "mysql-pglike-planner" {
t.Fatalf("unexpected planner: %s", planner.Name())
}
}
func TestResolveMigrationPlanner_UsesSchemaInferencePlannerForMongoToMySQL(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mongodb"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil {
t.Fatalf("expected planner")
}
if planner.Name() != "mongo-mysql-planner" {
t.Fatalf("unexpected planner: %s", planner.Name())
}
}
func TestInferSchemaForPair_MongoToMySQLReturnsPlannedWarning(t *testing.T) {
t.Parallel()
result, err := inferSchemaForPair("mongodb", "mysql", "users")
if err != nil {
t.Fatalf("inferSchemaForPair returned error: %v", err)
}
if !result.NeedsReview {
t.Fatalf("expected needs review")
}
if result.Object.Name != "users" {
t.Fatalf("unexpected object name: %s", result.Object.Name)
}
if len(result.Issues) == 0 || !strings.Contains(result.Issues[0].Message, "Schema inference") {
t.Fatalf("unexpected issues: %+v", result.Issues)
}
}
func TestResolveMigrationPlanner_UsesPGLikeMySQLPlannerForKingbaseToMySQL(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "kingbase"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil {
t.Fatalf("expected planner")
}
if planner.Name() != "pglike-mysql-planner" {
t.Fatalf("unexpected planner: %s", planner.Name())
}
}
func TestResolveMigrationPlanner_UsesMySQLPGLikePlannerForMySQLToPostgres(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "postgres"},
},
})
if planner == nil {
t.Fatalf("expected planner")
}
if planner.Name() != "mysql-pglike-planner" {
t.Fatalf("unexpected planner: %s", planner.Name())
}
}
func TestResolveMigrationPlanner_UsesMySQLClickHousePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "clickhouse"},
},
})
if planner == nil {
t.Fatalf("expected planner")
}
if planner.Name() != "mysql-clickhouse-planner" {
t.Fatalf("unexpected planner: %s", planner.Name())
}
}
func TestResolveMigrationPlanner_UsesClickHouseMySQLPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil {
t.Fatalf("expected planner")
}
if planner.Name() != "clickhouse-mysql-planner" {
t.Fatalf("unexpected planner: %s", planner.Name())
}
}
func TestResolveMigrationPlanner_UsesMySQLMongoPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb"},
},
})
if planner == nil || planner.Name() != "mysql-mongo-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMongoMySQLPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mongodb"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil || planner.Name() != "mongo-mysql-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMongoPGLikePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mongodb"},
TargetConfig: connection.ConnectionConfig{Type: "postgres"},
},
})
if planner == nil || planner.Name() != "mongo-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesPGLikeMongoPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb"},
},
})
if planner == nil || planner.Name() != "pglike-mongo-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesClickHouseMongoPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb"},
},
})
if planner == nil || planner.Name() != "clickhouse-mongo-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesTDengineMongoPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb"},
},
})
if planner == nil || planner.Name() != "tdengine-mongo-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMySQLPGLikePlannerForDirosToPostgres(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "diros"},
TargetConfig: connection.ConnectionConfig{Type: "postgres"},
},
})
if planner == nil || planner.Name() != "mysql-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesPGLikeMySQLPlannerForPostgresToDiros(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres"},
TargetConfig: connection.ConnectionConfig{Type: "diros"},
},
})
if planner == nil || planner.Name() != "pglike-mysql-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMySQLPGLikePlannerForMySQLToDuckDB(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "duckdb"},
},
})
if planner == nil || planner.Name() != "mysql-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesPGLikeClickHousePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres"},
TargetConfig: connection.ConnectionConfig{Type: "clickhouse"},
},
})
if planner == nil || planner.Name() != "pglike-clickhouse-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesPGLikeMySQLPlannerForDuckDBToMySQL(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "duckdb"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil || planner.Name() != "pglike-mysql-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMySQLPGLikePlannerForSphinxToPostgres(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "sphinx"},
TargetConfig: connection.ConnectionConfig{Type: "postgres"},
},
})
if planner == nil || planner.Name() != "mysql-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesPGLikeMySQLPlannerForCustomKingbaseToMySQL(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "custom", Driver: "kingbase8"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil || planner.Name() != "pglike-mysql-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMySQLPGLikePlannerForMySQLToCustomPostgres(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "custom", Driver: "postgresql"},
},
})
if planner == nil || planner.Name() != "mysql-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesTDengineMySQLPlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine"},
TargetConfig: connection.ConnectionConfig{Type: "mysql"},
},
})
if planner == nil || planner.Name() != "tdengine-mysql-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesTDenginePGLikePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine"},
TargetConfig: connection.ConnectionConfig{Type: "kingbase"},
},
})
if planner == nil || planner.Name() != "tdengine-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesMySQLLikeTDenginePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine"},
},
})
if planner == nil || planner.Name() != "mysqllike-tdengine-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesPGLikeTDenginePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine"},
},
})
if planner == nil || planner.Name() != "pglike-tdengine-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesClickHouseTDenginePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine"},
},
})
if planner == nil || planner.Name() != "clickhouse-tdengine-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesClickHousePGLikePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse"},
TargetConfig: connection.ConnectionConfig{Type: "postgres"},
},
})
if planner == nil || planner.Name() != "clickhouse-pglike-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}
func TestResolveMigrationPlanner_UsesTDengineTDenginePlanner(t *testing.T) {
t.Parallel()
planner := resolveMigrationPlanner(MigrationBuildContext{
Config: SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine"},
},
})
if planner == nil || planner.Name() != "tdengine-tdengine-planner" {
t.Fatalf("unexpected planner: %v", planner)
}
}


@@ -0,0 +1,104 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
)
type MigrationDataModel string
const (
MigrationDataModelRelational MigrationDataModel = "relational"
MigrationDataModelDocument MigrationDataModel = "document"
MigrationDataModelColumnar MigrationDataModel = "columnar"
MigrationDataModelTimeSeries MigrationDataModel = "timeseries"
MigrationDataModelKeyValue MigrationDataModel = "keyvalue"
MigrationDataModelCustom MigrationDataModel = "custom"
)
type MigrationObjectKind string
const (
MigrationObjectKindTable MigrationObjectKind = "table"
MigrationObjectKindCollection MigrationObjectKind = "collection"
MigrationObjectKindKeyspace MigrationObjectKind = "keyspace"
)
type MigrationSupportLevel string
const (
MigrationSupportLevelFull MigrationSupportLevel = "full"
MigrationSupportLevelPartial MigrationSupportLevel = "partial"
MigrationSupportLevelPlanned MigrationSupportLevel = "planned"
MigrationSupportLevelUnsupported MigrationSupportLevel = "unsupported"
)
type CanonicalFieldSpec struct {
Name string
SourceType string
CanonicalType string
Nullable bool
DefaultValue *string
AutoIncrement bool
Comment string
NestedPath string
Confidence float64
}
type CanonicalIndexSpec struct {
Name string
Kind string
Columns []string
Expression string
PrefixLength int
Supported bool
DegradeStrategy string
Unique bool
}
type CanonicalConstraintSpec struct {
Name string
Kind string
Columns []string
RefName string
}
type CanonicalObjectSpec struct {
Name string
Schema string
Kind MigrationObjectKind
Fields []CanonicalFieldSpec
PrimaryKey []string
Indexes []CanonicalIndexSpec
Constraints []CanonicalConstraintSpec
Comments []string
SourceHints map[string]string
}
type SchemaInferenceIssue struct {
Field string
Level string
Message string
Resolution string
}
type SchemaInferenceResult struct {
Object CanonicalObjectSpec
Issues []SchemaInferenceIssue
SampleSize int
Confidence float64
NeedsReview bool
}
type MigrationBuildContext struct {
Config SyncConfig
TableName string
SourceDB db.Database
TargetDB db.Database
}
type MigrationPlanner interface {
Name() string
SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel
BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error)
}
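For illustration only (a hypothetical planner, not one shipped in this diff), a new source/target pair plugs into the kernel by satisfying this three-method interface:

type noopPlanner struct{}

func (noopPlanner) Name() string { return "noop-planner" }
func (noopPlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
	return MigrationSupportLevelUnsupported // declines every pair
}
func (noopPlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
	return SchemaMigrationPlan{}, nil, nil, nil
}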


@@ -0,0 +1,603 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"encoding/json"
"fmt"
"sort"
"strings"
"time"
)
func buildMySQLToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}
func buildPGLikeToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}
func buildClickHouseToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}
func buildTDengineToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTabularToMongoPlan(config, tableName, sourceDB, targetDB)
}
func buildTabularToMongoPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
sourceType := resolveMigrationDBType(config.SourceConfig)
targetType := resolveMigrationDBType(config.TargetConfig)
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标集合导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetExists, err := inspectMongoCollection(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("检查目标集合失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
plan.Warnings = append(plan.Warnings, "MongoDB 为弱 schema 目标,字段结构以写入文档为准,不执行目标列校验")
return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标集合不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标集合已存在,执行时不会自动创建")
return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标集合不存在,将自动创建集合后导入"
createCmd, err := buildMongoCreateCollectionCommand(plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, err
}
plan.PreDataSQL = append(plan.PreDataSQL, createCmd)
if config.CreateIndexes {
indexCmds, warnings, unsupported, created, skipped, err := buildMongoIndexCommands(sourceDB, plan.SourceSchema, plan.SourceTable, plan.TargetTable)
if err != nil {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("读取源表索引失败,已跳过索引迁移:%v", err))
} else {
plan.PostDataSQL = append(plan.PostDataSQL, indexCmds...)
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
plan.IndexesToCreate = created
plan.IndexesSkipped = skipped
}
}
return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, nil, nil
}
}
func buildMongoToMySQLPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(config.SourceConfig.Type, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(config.TargetConfig.Type, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, warnings, err := inferMongoCollectionColumns(sourceDB, plan.SourceTable)
if err != nil {
return plan, nil, nil, err
}
plan.Warnings = append(plan.Warnings, warnings...)
if len(sourceCols) == 0 {
return plan, nil, nil, fmt.Errorf("源集合未推断出可迁移字段: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if config.AutoAddColumns {
addSQL, addWarnings := buildMongoToMySQLAddColumnSQL(plan.TargetQueryTable, sourceCols, targetCols)
plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
plan.Warnings = append(plan.Warnings, addWarnings...)
if len(addSQL) > 0 {
plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
}
}
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, postSQL, moreWarnings, unsupported, idxCreate, idxSkip, err := buildMongoToMySQLCreateTablePlan(config, plan.TargetQueryTable, sourceCols, sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, sourceCols, targetCols, err
}
plan.CreateTableSQL = createSQL
plan.PostDataSQL = append(plan.PostDataSQL, postSQL...)
plan.Warnings = append(plan.Warnings, moreWarnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
plan.IndexesToCreate = idxCreate
plan.IndexesSkipped = idxSkip
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func inspectMongoCollection(database db.Database, dbName, collection string) (bool, error) {
items, err := database.GetTables(dbName)
if err != nil {
return false, err
}
target := strings.TrimSpace(collection)
for _, item := range items {
if strings.EqualFold(strings.TrimSpace(item), target) {
return true, nil
}
}
return false, nil
}
func buildMongoCreateCollectionCommand(collection string) (string, error) {
cmd := map[string]interface{}{"create": strings.TrimSpace(collection)}
data, err := json.Marshal(cmd)
if err != nil {
return "", err
}
return string(data), nil
}
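The string returned here is the raw MongoDB runCommand document. A quick worked example, written in the style of this package's tests (the test function itself is illustrative and assumes the standard testing import):

func TestBuildMongoCreateCollectionCommand_Shape(t *testing.T) {
	cmd, err := buildMongoCreateCollectionCommand("users")
	// json.Marshal of the single-key map yields exactly this document
	if err != nil || cmd != `{"create":"users"}` {
		t.Fatalf("unexpected command: %s err=%v", cmd, err)
	}
}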
func buildMongoIndexCommands(sourceDB db.Database, dbName, tableName, targetCollection string) ([]string, []string, []string, int, int, error) {
indexes, err := sourceDB.GetIndexes(dbName, tableName)
if err != nil {
return nil, nil, nil, 0, 0, err
}
grouped := groupIndexDefinitions(indexes)
cmds := make([]string, 0, len(grouped))
warnings := make([]string, 0)
unsupported := make([]string, 0)
created := 0
skipped := 0
for _, idx := range grouped {
name := strings.TrimSpace(idx.Name)
if name == "" || strings.EqualFold(name, "primary") {
continue
}
if len(idx.Columns) == 0 {
skipped++
unsupported = append(unsupported, fmt.Sprintf("索引 %s 缺少列定义,已跳过", name))
continue
}
kind := strings.ToLower(strings.TrimSpace(idx.IndexType))
if idx.SubPart > 0 {
skipped++
unsupported = append(unsupported, fmt.Sprintf("索引 %s 使用前缀长度MongoDB 目标暂不支持等价迁移", name))
continue
}
if kind != "" && kind != "btree" {
warnings = append(warnings, fmt.Sprintf("索引 %s 类型=%s 将按普通索引迁移到 MongoDB", name, idx.IndexType))
}
keySpec := make(map[string]int)
for _, col := range idx.Columns {
keySpec[col] = 1
}
command := map[string]interface{}{
"createIndexes": strings.TrimSpace(targetCollection),
"indexes": []map[string]interface{}{{
"name": name,
"key": keySpec,
"unique": idx.Unique,
}},
}
data, err := json.Marshal(command)
if err != nil {
skipped++
unsupported = append(unsupported, fmt.Sprintf("索引 %s 生成 MongoDB createIndexes 命令失败:%v", name, err))
continue
}
cmds = append(cmds, string(data))
created++
}
return cmds, dedupeStrings(warnings), dedupeStrings(unsupported), created, skipped, nil
}
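As a worked example of the command shape produced above (Go's encoding/json serializes map keys in sorted order), a unique single-column B-tree index named idx_email on column email, targeting a collection named users, would serialize to:

{"createIndexes":"users","indexes":[{"key":{"email":1},"name":"idx_email","unique":true}]}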
func inferMongoCollectionColumns(sourceDB db.Database, collection string) ([]connection.ColumnDefinition, []string, error) {
query := fmt.Sprintf(`{"find":"%s","filter":{},"limit":200}`, strings.TrimSpace(collection))
rows, _, err := sourceDB.Query(query)
if err != nil {
return nil, nil, fmt.Errorf("读取源集合样本失败: %w", err)
}
if len(rows) == 0 {
return []connection.ColumnDefinition{{Name: "_id", Type: "varchar(64)", Nullable: "NO", Key: "PRI"}}, []string{"源集合暂无样本数据,仅按 `_id` 生成基础主键列"}, nil
}
fieldNames := make(map[string]struct{})
for _, row := range rows {
for key := range row {
fieldNames[key] = struct{}{}
}
}
orderedFields := make([]string, 0, len(fieldNames))
for key := range fieldNames {
orderedFields = append(orderedFields, key)
}
sort.Strings(orderedFields)
if containsString(orderedFields, "_id") {
orderedFields = moveStringToFront(orderedFields, "_id")
}
columns := make([]connection.ColumnDefinition, 0, len(orderedFields))
warnings := make([]string, 0)
for _, field := range orderedFields {
typeName, nullable, fieldWarnings := inferMongoFieldType(rows, field)
warnings = append(warnings, fieldWarnings...)
col := connection.ColumnDefinition{
Name: field,
Type: typeName,
Nullable: ternaryString(nullable, "YES", "NO"),
Key: "",
Extra: "",
}
if field == "_id" {
col.Key = "PRI"
col.Nullable = "NO"
}
columns = append(columns, col)
}
return columns, dedupeStrings(warnings), nil
}
func inferMongoFieldType(rows []map[string]interface{}, field string) (string, bool, []string) {
nullable := false
hasString, hasBool, hasInt, hasFloat, hasTime, hasComplex := false, false, false, false, false, false
for _, row := range rows {
value, ok := row[field]
if !ok || value == nil {
nullable = true
continue
}
switch value.(type) {
case bool:
hasBool = true
case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
hasInt = true
case float32, float64:
hasFloat = true
case time.Time:
hasTime = true
case map[string]interface{}, []interface{}:
hasComplex = true
default:
hasString = true
}
}
kinds := 0
for _, flag := range []bool{hasString, hasBool, hasInt, hasFloat, hasTime, hasComplex} {
if flag {
kinds++
}
}
warnings := make([]string, 0)
if kinds > 1 {
warnings = append(warnings, fmt.Sprintf("字段 %s 存在多种 BSON 值类型,已按兼容类型降级", field))
}
if field == "_id" {
return "varchar(64)", false, warnings
}
switch {
case hasComplex:
return "json", nullable, warnings
case hasTime:
return "datetime", nullable, warnings
case hasFloat:
return "double", nullable, warnings
case hasInt:
return "bigint", nullable, warnings
case hasBool:
return "tinyint(1)", nullable, warnings
default:
return "varchar(255)", nullable, warnings
}
}
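A worked example of the precedence above, again in the package's own test style (the test name is illustrative): when a sampled field mixes int and float values and is absent from some documents, the float branch wins and the column is marked nullable:

func TestInferMongoFieldType_MixedNumeric(t *testing.T) {
	rows := []map[string]interface{}{
		{"price": 10},   // int sample
		{"price": 10.5}, // float64 sample
		{},              // field missing => nullable
	}
	typ, nullable, warns := inferMongoFieldType(rows, "price")
	// two numeric kinds were seen, so one mixed-type warning is emitted
	if typ != "double" || !nullable || len(warns) != 1 {
		t.Fatalf("unexpected inference: %s %v %v", typ, nullable, warns)
	}
}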
func buildMongoToMySQLAddColumnSQL(targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
targetSet := make(map[string]struct{}, len(targetCols))
for _, col := range targetCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
targetSet[key] = struct{}{}
}
var sqlList []string
for _, col := range sourceCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
if _, ok := targetSet[key]; ok {
continue
}
sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
quoteQualifiedIdentByType("mysql", targetQueryTable),
quoteIdentByType("mysql", col.Name),
strings.TrimSpace(col.Type),
))
}
return sqlList, nil
}
func buildMongoToMySQLCreateTablePlan(config SyncConfig, targetQueryTable string, sourceCols []connection.ColumnDefinition, sourceDB db.Database, sourceSchema, sourceTable string) (string, []string, []string, []string, int, int, error) {
columnDefs := make([]string, 0, len(sourceCols)+1)
warnings := make([]string, 0)
unsupported := make([]string, 0)
pkCols := make([]string, 0, 1)
for _, col := range sourceCols {
columnDef := fmt.Sprintf("%s %s", quoteIdentByType("mysql", col.Name), strings.TrimSpace(col.Type))
if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
columnDef += " NOT NULL"
}
columnDefs = append(columnDefs, columnDef)
if col.Key == "PRI" || col.Key == "PK" {
pkCols = append(pkCols, quoteIdentByType("mysql", col.Name))
}
}
if len(pkCols) > 0 {
columnDefs = append(columnDefs, fmt.Sprintf("PRIMARY KEY (%s)", strings.Join(pkCols, ", ")))
} else {
warnings = append(warnings, "MongoDB 源集合未推断出稳定主键,目标表将不自动创建主键")
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("mysql", targetQueryTable), strings.Join(columnDefs, ",\n "))
if !config.CreateIndexes {
return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
}
indexes, err := sourceDB.GetIndexes(sourceSchema, sourceTable)
if err != nil {
warnings = append(warnings, fmt.Sprintf("读取源集合索引失败,已跳过索引迁移:%v", err))
return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
}
grouped := groupIndexDefinitions(indexes)
postSQL := make([]string, 0, len(grouped))
created := 0
skipped := 0
for _, idx := range grouped {
name := strings.TrimSpace(idx.Name)
if name == "" || strings.EqualFold(name, "_id_") || strings.EqualFold(name, "primary") {
continue
}
if len(idx.Columns) == 0 {
skipped++
unsupported = append(unsupported, fmt.Sprintf("索引 %s 缺少列定义,已跳过", name))
continue
}
quotedCols := make([]string, 0, len(idx.Columns))
for _, col := range idx.Columns {
quotedCols = append(quotedCols, quoteIdentByType("mysql", col))
}
prefix := "CREATE INDEX"
if idx.Unique {
prefix = "CREATE UNIQUE INDEX"
}
postSQL = append(postSQL, fmt.Sprintf("%s %s ON %s (%s)", prefix, quoteIdentByType("mysql", name), quoteQualifiedIdentByType("mysql", targetQueryTable), strings.Join(quotedCols, ", ")))
created++
}
return createSQL, postSQL, dedupeStrings(warnings), dedupeStrings(unsupported), created, skipped, nil
}
func containsString(items []string, target string) bool {
for _, item := range items {
if item == target {
return true
}
}
return false
}
func moveStringToFront(items []string, target string) []string {
out := make([]string, 0, len(items))
for _, item := range items {
if item == target {
continue
}
out = append(out, item)
}
return append([]string{target}, out...)
}
func buildMongoToPGLikePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
targetType := strings.ToLower(strings.TrimSpace(config.TargetConfig.Type))
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(config.SourceConfig.Type, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(config.TargetConfig.Type, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, warnings, err := inferMongoCollectionColumns(sourceDB, plan.SourceTable)
if err != nil {
return plan, nil, nil, err
}
plan.Warnings = append(plan.Warnings, warnings...)
if len(sourceCols) == 0 {
return plan, nil, nil, fmt.Errorf("源集合未推断出可迁移字段: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if config.AutoAddColumns {
addSQL, addWarnings := buildMongoToPGLikeAddColumnSQL(targetType, plan.TargetQueryTable, sourceCols, targetCols)
plan.PreDataSQL = append(plan.PreDataSQL, addSQL...)
plan.Warnings = append(plan.Warnings, addWarnings...)
if len(addSQL) > 0 {
plan.PlannedAction = fmt.Sprintf("补齐缺失字段(%d)后导入", len(addSQL))
}
}
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, postSQL, moreWarnings, unsupported, idxCreate, idxSkip, err := buildMongoToPGLikeCreateTablePlan(targetType, config, plan.TargetQueryTable, sourceCols, sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, sourceCols, targetCols, err
}
plan.CreateTableSQL = createSQL
plan.PostDataSQL = append(plan.PostDataSQL, postSQL...)
plan.Warnings = append(plan.Warnings, moreWarnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
plan.IndexesToCreate = idxCreate
plan.IndexesSkipped = idxSkip
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildMongoToPGLikeAddColumnSQL(targetType string, targetQueryTable string, sourceCols, targetCols []connection.ColumnDefinition) ([]string, []string) {
targetSet := make(map[string]struct{}, len(targetCols))
for _, col := range targetCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
targetSet[key] = struct{}{}
}
var sqlList []string
var warnings []string
for _, col := range sourceCols {
key := strings.ToLower(strings.TrimSpace(col.Name))
if key == "" {
continue
}
if _, ok := targetSet[key]; ok {
continue
}
colType, mapWarnings := mapMongoInferredColumnToPGLike(col)
warnings = append(warnings, mapWarnings...)
sqlList = append(sqlList, fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
quoteQualifiedIdentByType(targetType, targetQueryTable),
quoteIdentByType(targetType, col.Name),
colType,
))
}
return sqlList, dedupeStrings(warnings)
}
func buildMongoToPGLikeCreateTablePlan(targetType string, config SyncConfig, targetQueryTable string, sourceCols []connection.ColumnDefinition, sourceDB db.Database, sourceSchema, sourceTable string) (string, []string, []string, []string, int, int, error) {
columnDefs := make([]string, 0, len(sourceCols)+1)
warnings := make([]string, 0)
unsupported := make([]string, 0)
pkCols := make([]string, 0, 1)
for _, col := range sourceCols {
colType, colWarnings := mapMongoInferredColumnToPGLike(col)
warnings = append(warnings, colWarnings...)
parts := []string{colType}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
parts = append(parts, "NOT NULL")
}
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType(targetType, col.Name), strings.Join(parts, " ")))
if col.Key == "PRI" || col.Key == "PK" {
pkCols = append(pkCols, quoteIdentByType(targetType, col.Name))
}
}
if len(pkCols) > 0 {
columnDefs = append(columnDefs, fmt.Sprintf("PRIMARY KEY (%s)", strings.Join(pkCols, ", ")))
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType(targetType, targetQueryTable), strings.Join(columnDefs, ",\n "))
if !config.CreateIndexes {
return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
}
indexes, err := sourceDB.GetIndexes(sourceSchema, sourceTable)
if err != nil {
warnings = append(warnings, fmt.Sprintf("读取源集合索引失败,已跳过索引迁移:%v", err))
return createSQL, nil, dedupeStrings(warnings), dedupeStrings(unsupported), 0, 0, nil
}
grouped := groupIndexDefinitions(indexes)
postSQL := make([]string, 0, len(grouped))
created := 0
skipped := 0
for _, idx := range grouped {
name := strings.TrimSpace(idx.Name)
if name == "" || strings.EqualFold(name, "_id_") || strings.EqualFold(name, "primary") {
continue
}
if len(idx.Columns) == 0 {
skipped++
unsupported = append(unsupported, fmt.Sprintf("索引 %s 缺少列定义,已跳过", name))
continue
}
quotedCols := make([]string, 0, len(idx.Columns))
for _, col := range idx.Columns {
quotedCols = append(quotedCols, quoteIdentByType(targetType, col))
}
prefix := "CREATE INDEX"
if idx.Unique {
prefix = "CREATE UNIQUE INDEX"
}
postSQL = append(postSQL, fmt.Sprintf("%s %s ON %s (%s)", prefix, quoteIdentByType(targetType, name), quoteQualifiedIdentByType(targetType, targetQueryTable), strings.Join(quotedCols, ", ")))
created++
}
return createSQL, postSQL, dedupeStrings(warnings), dedupeStrings(unsupported), created, skipped, nil
}
func mapMongoInferredColumnToPGLike(col connection.ColumnDefinition) (string, []string) {
raw := strings.ToLower(strings.TrimSpace(col.Type))
warnings := make([]string, 0)
switch {
case strings.HasPrefix(raw, "varchar"):
return col.Type, warnings
case raw == "json":
return "jsonb", warnings
case raw == "datetime":
return "timestamp", warnings
case raw == "tinyint(1)":
return "boolean", warnings
case raw == "double":
return "double precision", warnings
case raw == "bigint":
return "bigint", warnings
default:
return col.Type, warnings
}
}
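The mapping above simply rewrites the MySQL-flavored types emitted by inferMongoCollectionColumns into PG-like equivalents. A minimal test-style check (illustrative, assuming the standard testing import):

func TestMapMongoInferredColumnToPGLike_JSON(t *testing.T) {
	pgType, _ := mapMongoInferredColumnToPGLike(connection.ColumnDefinition{Type: "json"})
	// likewise "datetime" -> "timestamp", "tinyint(1)" -> "boolean",
	// "double" -> "double precision"; unrecognized types pass through unchanged
	if pgType != "jsonb" {
		t.Fatalf("unexpected mapping: %s", pgType)
	}
}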

File diff suppressed because it is too large


@@ -0,0 +1,58 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"fmt"
"strings"
)
func supportsAutoAddColumnsForPair(sourceType string, targetType string) bool {
source := normalizeMigrationDBType(sourceType)
target := normalizeMigrationDBType(targetType)
if isMySQLLikeWritableTargetType(target) {
return isMySQLCoreType(source)
}
if isPGLikeTarget(target) {
return isMySQLLikeSourceType(source)
}
return false
}
func buildAddColumnSQLForPair(sourceType string, targetType string, targetQueryTable string, sourceCol connection.ColumnDefinition) (string, error) {
source := normalizeMigrationDBType(sourceType)
target := normalizeMigrationDBType(targetType)
switch {
case isMySQLCoreType(source) && isMySQLLikeWritableTargetType(target):
colType := sanitizeMySQLColumnType(sourceCol.Type)
return fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
quoteQualifiedIdentByType("mysql", targetQueryTable),
quoteIdentByType("mysql", sourceCol.Name),
colType,
), nil
case isMySQLLikeSourceType(source) && isPGLikeTarget(target):
colType, _, warnings := mapMySQLColumnToKingbase(sourceCol)
if len(warnings) > 0 && strings.Contains(strings.Join(warnings, " "), "identity") {
// 对已有目标表补字段时保守处理,不补建自增语义。
}
return fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s NULL",
quoteQualifiedIdentByType(target, targetQueryTable),
quoteIdentByType(target, sourceCol.Name),
colType,
), nil
default:
return "", fmt.Errorf("当前不支持 source=%s target=%s 的自动补字段", sourceType, targetType)
}
}
func executeSQLStatements(execFn func(string) (int64, error), statements []string) error {
for _, stmt := range statements {
trimmed := strings.TrimSpace(stmt)
if trimmed == "" {
continue
}
if _, err := execFn(trimmed); err != nil {
return err
}
}
return nil
}
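A minimal sketch of driving the helper above; applyPreDataSQL and the printing executor are hypothetical stand-ins (the real caller would forward each statement to its target connection), and plan.PreDataSQL is the pre-data statement list built by the planners:

func applyPreDataSQL(plan SchemaMigrationPlan) error {
	exec := func(stmt string) (int64, error) {
		fmt.Println("would execute:", stmt) // stand-in executor for the sketch
		return 0, nil
	}
	// stops at the first failing statement, before the data copy begins
	return executeSQLStatements(exec, plan.PreDataSQL)
}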


@@ -0,0 +1,53 @@
package sync
import (
"fmt"
"strings"
)
type SchemaInferenceStrategy string
const (
SchemaInferenceStrategySample SchemaInferenceStrategy = "sample"
SchemaInferenceStrategyStrict SchemaInferenceStrategy = "strict"
)
func shouldUseSchemaInference(sourceType string, targetType string) bool {
sourceModel := classifyMigrationDataModel(sourceType)
targetModel := classifyMigrationDataModel(targetType)
return sourceModel == MigrationDataModelDocument && targetModel == MigrationDataModelRelational
}
func inferMigrationObjectKind(sourceType string, targetType string) MigrationObjectKind {
sourceModel := classifyMigrationDataModel(sourceType)
targetModel := classifyMigrationDataModel(targetType)
switch {
case sourceModel == MigrationDataModelDocument || targetModel == MigrationDataModelDocument:
return MigrationObjectKindCollection
case sourceModel == MigrationDataModelKeyValue || targetModel == MigrationDataModelKeyValue:
return MigrationObjectKindKeyspace
default:
return MigrationObjectKindTable
}
}
func inferSchemaForPair(sourceType string, targetType string, objectName string) (SchemaInferenceResult, error) {
if !shouldUseSchemaInference(sourceType, targetType) {
return SchemaInferenceResult{}, fmt.Errorf("当前迁移对 %s -> %s 不需要 schema 推断", sourceType, targetType)
}
return SchemaInferenceResult{
Object: CanonicalObjectSpec{
Name: strings.TrimSpace(objectName),
Kind: MigrationObjectKindCollection,
Fields: []CanonicalFieldSpec{},
},
Issues: []SchemaInferenceIssue{
{
Level: "info",
Message: "MongoDB -> 关系型数据库的 schema 推断能力尚在建设中,当前仅提供内核入口。",
Resolution: "后续将基于样本数据生成列定义与类型降级策略。",
},
},
NeedsReview: true,
}, nil
}


@@ -0,0 +1,296 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"fmt"
"strconv"
"strings"
)
func buildTDengineToMySQLPlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
sourceType := resolveMigrationDBType(config.SourceConfig)
targetType := resolveMigrationDBType(config.TargetConfig)
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
plan.Warnings = append(plan.Warnings, tdengineSemanticWarnings(sourceCols)...)
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if strategy != "existing_only" {
plan.Warnings = append(plan.Warnings, "TDengine 源端当前不自动补齐已有目标表字段,请先确认目标表结构")
}
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings, unsupported := buildTDengineToMySQLCreateTableSQL(plan.TargetQueryTable, sourceCols)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildTDengineToPGLikePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
sourceType := resolveMigrationDBType(config.SourceConfig)
targetType := resolveMigrationDBType(config.TargetConfig)
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
plan.Warnings = append(plan.Warnings, tdengineSemanticWarnings(sourceCols)...)
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if strategy != "existing_only" {
plan.Warnings = append(plan.Warnings, "TDengine 源端当前不自动补齐已有目标表字段,请先确认目标表结构")
}
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings, unsupported := buildTDengineToPGLikeCreateTableSQL(targetType, plan.TargetQueryTable, sourceCols)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func buildTDengineToMySQLCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition) (string, []string, []string) {
columnDefs := make([]string, 0, len(sourceCols))
warnings := make([]string, 0)
unsupported := []string{"TDengine 的索引/外键/触发器/超级表/TTL 等时序语义当前不会自动迁移"}
for _, col := range sourceCols {
def, colWarnings := buildTDengineToMySQLColumnDefinition(col)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("mysql", col.Name), def))
warnings = append(warnings, colWarnings...)
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("mysql", targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildTDengineToPGLikeCreateTableSQL(targetType string, targetQueryTable string, sourceCols []connection.ColumnDefinition) (string, []string, []string) {
columnDefs := make([]string, 0, len(sourceCols))
warnings := make([]string, 0)
unsupported := []string{"TDengine 的索引/外键/触发器/超级表/TTL 等时序语义当前不会自动迁移"}
for _, col := range sourceCols {
def, colWarnings := buildTDengineToPGLikeColumnDefinition(col)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType(targetType, col.Name), def))
warnings = append(warnings, colWarnings...)
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType(targetType, targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildTDengineToMySQLColumnDefinition(col connection.ColumnDefinition) (string, []string) {
targetType, warnings := mapTDengineColumnToMySQL(col)
parts := []string{targetType}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
parts = append(parts, "NOT NULL")
} else {
parts = append(parts, "NULL")
}
return strings.Join(parts, " "), warnings
}
func buildTDengineToPGLikeColumnDefinition(col connection.ColumnDefinition) (string, []string) {
targetType, warnings := mapTDengineColumnToPGLike(col)
parts := []string{targetType}
if strings.EqualFold(strings.TrimSpace(col.Nullable), "NO") {
parts = append(parts, "NOT NULL")
} else {
parts = append(parts, "NULL")
}
return strings.Join(parts, " "), warnings
}
func tdengineSemanticWarnings(sourceCols []connection.ColumnDefinition) []string {
warnings := []string{"TDengine 到关系型目标库当前仅迁移列与数据超级表、TAG 关联、保留策略等时序语义会降级或丢失"}
for _, col := range sourceCols {
if isTDengineTagColumn(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 为 TDengine TAG 列,迁移到关系型目标后将降级为普通字段", col.Name))
}
}
return dedupeStrings(warnings)
}
func isTDengineTagColumn(col connection.ColumnDefinition) bool {
return strings.EqualFold(strings.TrimSpace(col.Key), "TAG") || strings.Contains(strings.ToUpper(strings.TrimSpace(col.Extra)), "TAG")
}
func parseTDengineType(raw string) (string, int) {
cleaned := strings.TrimSpace(strings.ToUpper(raw))
if cleaned == "" {
return "", 0
}
base := cleaned
length := 0
if idx := strings.Index(base, "("); idx >= 0 {
end := strings.Index(base[idx+1:], ")")
if end >= 0 {
lengthText := strings.TrimSpace(base[idx+1 : idx+1+end])
if v, err := strconv.Atoi(lengthText); err == nil {
length = v
}
}
base = strings.TrimSpace(base[:idx])
}
return base, length
}
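A worked example of the parser above, which splits a TDengine type declaration into its upper-cased base name and optional length (test-style sketch, assuming the standard testing import):

func TestParseTDengineType_Sketch(t *testing.T) {
	if base, length := parseTDengineType("NCHAR(64)"); base != "NCHAR" || length != 64 {
		t.Fatalf("unexpected parse: %s %d", base, length)
	}
	// no parenthesized length: base is upper-cased, length stays zero
	if base, length := parseTDengineType("double"); base != "DOUBLE" || length != 0 {
		t.Fatalf("unexpected parse: %s %d", base, length)
	}
}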
func mapTDengineColumnToMySQL(col connection.ColumnDefinition) (string, []string) {
base, length := parseTDengineType(col.Type)
warnings := make([]string, 0)
if isTDengineTagColumn(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 为 TDengine TAG 列,已按普通列映射", col.Name))
}
switch base {
case "BOOL", "BOOLEAN":
return "tinyint(1)", warnings
case "TINYINT":
return "tinyint", warnings
case "UTINYINT":
return "tinyint unsigned", warnings
case "SMALLINT":
return "smallint", warnings
case "USMALLINT":
return "smallint unsigned", warnings
case "INT", "INTEGER":
return "int", warnings
case "UINT":
return "int unsigned", warnings
case "BIGINT":
return "bigint", warnings
case "UBIGINT":
return "bigint unsigned", warnings
case "FLOAT":
return "float", warnings
case "DOUBLE":
return "double", warnings
case "DECIMAL", "NUMERIC":
if length > 0 {
return strings.ToLower(strings.TrimSpace(col.Type)), warnings
}
return "decimal(38,10)", warnings
case "TIMESTAMP":
return "datetime", warnings
case "DATE":
return "date", warnings
case "JSON":
return "json", warnings
case "BINARY", "NCHAR", "VARCHAR", "VARBINARY":
if length > 0 && length <= 65535 {
return fmt.Sprintf("varchar(%d)", length), warnings
}
return "text", warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 MySQL 映射,已降级为 text", col.Name, col.Type))
return "text", warnings
}
}
func mapTDengineColumnToPGLike(col connection.ColumnDefinition) (string, []string) {
base, length := parseTDengineType(col.Type)
warnings := make([]string, 0)
if isTDengineTagColumn(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 为 TDengine TAG 列,已按普通列映射", col.Name))
}
switch base {
case "BOOL", "BOOLEAN":
return "boolean", warnings
case "TINYINT", "UTINYINT", "SMALLINT":
return "smallint", warnings
case "USMALLINT", "INT", "INTEGER":
return "integer", warnings
case "UINT", "BIGINT":
return "bigint", warnings
case "UBIGINT":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 UBIGINT 已映射为 numeric(20,0) 以避免无符号溢出", col.Name))
return "numeric(20,0)", warnings
case "FLOAT":
return "real", warnings
case "DOUBLE":
return "double precision", warnings
case "DECIMAL", "NUMERIC":
if length > 0 {
return strings.ToLower(strings.TrimSpace(col.Type)), warnings
}
return "numeric(38,10)", warnings
case "TIMESTAMP":
return "timestamp", warnings
case "DATE":
return "date", warnings
case "JSON":
return "jsonb", warnings
case "BINARY", "NCHAR", "VARCHAR", "VARBINARY":
if length > 0 {
return fmt.Sprintf("varchar(%d)", length), warnings
}
return "text", warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 PG-like 映射,已降级为 text", col.Name, col.Type))
return "text", warnings
}
}
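A quick worked example of the PG-like mapping above, including the unsigned-overflow fallback (illustrative test sketch):

func TestMapTDengineColumnToPGLike_UBigint(t *testing.T) {
	pgType, warns := mapTDengineColumnToPGLike(connection.ColumnDefinition{Name: "v", Type: "UBIGINT"})
	// UBIGINT cannot fit in a signed bigint, so it falls back to numeric(20,0)
	// and records exactly one warning explaining the degradation
	if pgType != "numeric(20,0)" || len(warns) != 1 {
		t.Fatalf("unexpected mapping: %s %v", pgType, warns)
	}
}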


@@ -0,0 +1,657 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"fmt"
"strconv"
"strings"
)
type mySQLLikeToTDenginePlanner struct{}
type pgLikeToTDenginePlanner struct{}
type clickHouseToTDenginePlanner struct{}
type tdengineToTDenginePlanner struct{}
func (mySQLLikeToTDenginePlanner) Name() string { return "mysqllike-tdengine-planner" }
func (mySQLLikeToTDenginePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isMySQLLikeSourceType(sourceType) && targetType == "tdengine" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (mySQLLikeToTDenginePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildMySQLLikeToTDenginePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (pgLikeToTDenginePlanner) Name() string { return "pglike-tdengine-planner" }
func (pgLikeToTDenginePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if isPGLikeSource(sourceType) && targetType == "tdengine" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (pgLikeToTDenginePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildPGLikeToTDenginePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func buildMySQLLikeToTDenginePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildSourceToTDenginePlan(config, tableName, sourceDB, targetDB, isMySQLLikeTDengineTimestampCandidate, buildMySQLLikeToTDengineCreateTableSQL)
}
func buildPGLikeToTDenginePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildSourceToTDenginePlan(config, tableName, sourceDB, targetDB, isPGLikeTDengineTimestampCandidate, buildPGLikeToTDengineCreateTableSQL)
}
func buildClickHouseToTDenginePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildSourceToTDenginePlan(config, tableName, sourceDB, targetDB, isClickHouseTDengineTimestampCandidate, buildClickHouseToTDengineCreateTableSQL)
}
func buildTDengineToTDenginePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildSourceToTDenginePlan(config, tableName, sourceDB, targetDB, isTDengineTDengineTimestampCandidate, buildTDengineToTDengineCreateTableSQL)
}
func (clickHouseToTDenginePlanner) Name() string { return "clickhouse-tdengine-planner" }
func (clickHouseToTDenginePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "clickhouse" && targetType == "tdengine" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (clickHouseToTDenginePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildClickHouseToTDenginePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
func (tdengineToTDenginePlanner) Name() string { return "tdengine-tdengine-planner" }
func (tdengineToTDenginePlanner) SupportLevel(ctx MigrationBuildContext) MigrationSupportLevel {
sourceType := resolveMigrationDBType(ctx.Config.SourceConfig)
targetType := resolveMigrationDBType(ctx.Config.TargetConfig)
if sourceType == "tdengine" && targetType == "tdengine" {
return MigrationSupportLevelFull
}
return MigrationSupportLevelUnsupported
}
func (tdengineToTDenginePlanner) BuildPlan(ctx MigrationBuildContext) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
return buildTDengineToTDenginePlan(ctx.Config, ctx.TableName, ctx.SourceDB, ctx.TargetDB)
}
type tdengineTimestampCandidate func(connection.ColumnDefinition) bool
type tdengineCreateTableBuilder func(string, []connection.ColumnDefinition, int) (string, []string, []string)
func buildSourceToTDenginePlan(config SyncConfig, tableName string, sourceDB db.Database, targetDB db.Database, isTimestamp tdengineTimestampCandidate, buildCreateSQL tdengineCreateTableBuilder) (SchemaMigrationPlan, []connection.ColumnDefinition, []connection.ColumnDefinition, error) {
plan := SchemaMigrationPlan{}
sourceType := resolveMigrationDBType(config.SourceConfig)
targetType := resolveMigrationDBType(config.TargetConfig)
plan.SourceSchema, plan.SourceTable = normalizeSchemaAndTable(sourceType, config.SourceConfig.Database, tableName)
plan.TargetSchema, plan.TargetTable = normalizeSchemaAndTable(targetType, config.TargetConfig.Database, tableName)
plan.SourceQueryTable = qualifiedNameForQuery(sourceType, plan.SourceSchema, plan.SourceTable, tableName)
plan.TargetQueryTable = qualifiedNameForQuery(targetType, plan.TargetSchema, plan.TargetTable, tableName)
plan.PlannedAction = "使用已有目标表导入"
sourceCols, sourceExists, err := inspectTableColumns(sourceDB, plan.SourceSchema, plan.SourceTable)
if err != nil {
return plan, nil, nil, fmt.Errorf("获取源表字段失败: %w", err)
}
if !sourceExists {
return plan, nil, nil, fmt.Errorf("源表不存在或无列定义: %s", tableName)
}
plan.Warnings = append(plan.Warnings, tdengineTargetBaseWarnings()...)
timestampIndex := findTDengineTimestampColumn(sourceCols, isTimestamp)
if timestampIndex < 0 {
plan.Warnings = append(plan.Warnings, tdengineTargetMissingTimeWarning())
}
targetCols, targetExists, err := inspectTableColumns(targetDB, plan.TargetSchema, plan.TargetTable)
if err != nil {
return plan, sourceCols, nil, fmt.Errorf("获取目标表字段失败: %w", err)
}
plan.TargetTableExists = targetExists
strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
if targetExists {
missing := diffMissingColumnNames(sourceCols, targetCols)
if len(missing) > 0 {
plan.Warnings = append(plan.Warnings, fmt.Sprintf("目标表缺失字段 %d 个:%s", len(missing), strings.Join(missing, ", ")))
}
if strategy != "existing_only" {
plan.Warnings = append(plan.Warnings, "TDengine 目标端当前不自动补齐已有目标表字段,请先确认目标表结构")
}
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
switch strategy {
case "existing_only":
plan.PlannedAction = "目标表不存在,需先手工创建"
plan.Warnings = append(plan.Warnings, "当前策略要求目标表已存在,执行时不会自动建表")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
case "smart", "auto_create_if_missing":
if timestampIndex < 0 {
plan.PlannedAction = "源表未识别到可映射为 TDengine 首列的时间列,无法自动建表"
plan.UnsupportedObjects = append(plan.UnsupportedObjects, "TDengine regular table 首列必须为 TIMESTAMP,当前源表缺少可直接映射的时间列")
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
plan.AutoCreate = true
plan.PlannedAction = "目标表不存在,将自动建表后导入"
createSQL, warnings, unsupported := buildCreateSQL(plan.TargetQueryTable, sourceCols, timestampIndex)
plan.CreateTableSQL = createSQL
plan.Warnings = append(plan.Warnings, warnings...)
plan.UnsupportedObjects = append(plan.UnsupportedObjects, unsupported...)
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
default:
return dedupeSchemaMigrationPlan(plan), sourceCols, targetCols, nil
}
}
func tdengineTargetBaseWarnings() []string {
return []string{
"TDengine 目标端当前仅支持 INSERT 写入;若存在差异 update/delete执行期会被拒绝",
"TDengine 目标端 auto-create 当前仅创建基础表索引、外键、触发器、supertable/TAGS/TTL 不会自动迁移",
}
}
func tdengineTargetMissingTimeWarning() string {
return "源表缺少可映射的时间列,自动建表将不可用;如需继续,请先人工准备 TDengine 目标表与时间列"
}
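// findTDengineTimestampColumn picks the column to promote to TDengine's
// leading TIMESTAMP: well-known time column names win first (in the order of
// the preferred list), then the first column accepted by the candidate
// predicate; -1 means no usable time column was found.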
func findTDengineTimestampColumn(sourceCols []connection.ColumnDefinition, candidate tdengineTimestampCandidate) int {
preferred := []string{"ts", "timestamp", "event_time", "eventtime", "created_at", "create_time", "occurred_at"}
for _, name := range preferred {
for idx, col := range sourceCols {
if !candidate(col) {
continue
}
if strings.EqualFold(strings.TrimSpace(col.Name), name) {
return idx
}
}
}
for idx, col := range sourceCols {
if candidate(col) {
return idx
}
}
return -1
}
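// reorderTDengineColumns returns a copy of sourceCols with the timestamp
// column hoisted to index 0; an index that is already 0 or out of range
// yields a plain clone.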
func reorderTDengineColumns(sourceCols []connection.ColumnDefinition, timestampIndex int) []connection.ColumnDefinition {
if timestampIndex <= 0 || timestampIndex >= len(sourceCols) {
cloned := make([]connection.ColumnDefinition, len(sourceCols))
copy(cloned, sourceCols)
return cloned
}
ordered := make([]connection.ColumnDefinition, 0, len(sourceCols))
ordered = append(ordered, sourceCols[timestampIndex])
for idx, col := range sourceCols {
if idx == timestampIndex {
continue
}
ordered = append(ordered, col)
}
return ordered
}
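// The four CREATE TABLE builders below share one shape: hoist the timestamp
// column to the front, map each source column through the dialect-specific
// mapper, and report deduplicated warnings plus the object kinds (indexes,
// constraints, engine settings) that are not migrated automatically.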
func buildMySQLLikeToTDengineCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition, timestampIndex int) (string, []string, []string) {
ordered := reorderTDengineColumns(sourceCols, timestampIndex)
columnDefs := make([]string, 0, len(ordered))
warnings := make([]string, 0)
unsupported := []string{"源表索引/外键/触发器/唯一约束/自增语义当前不会自动迁移到 TDengine"}
if timestampIndex != 0 && timestampIndex >= 0 && timestampIndex < len(sourceCols) {
warnings = append(warnings, fmt.Sprintf("TDengine 基础表要求时间列优先,已将字段 %s 调整为首列", sourceCols[timestampIndex].Name))
}
for idx, col := range ordered {
def, colWarnings := mapMySQLLikeColumnToTDengine(col, idx == 0)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("tdengine", col.Name), def))
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("tdengine", targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildPGLikeToTDengineCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition, timestampIndex int) (string, []string, []string) {
ordered := reorderTDengineColumns(sourceCols, timestampIndex)
columnDefs := make([]string, 0, len(ordered))
warnings := make([]string, 0)
unsupported := []string{"源表索引/外键/触发器/唯一约束/identity/sequence 语义当前不会自动迁移到 TDengine"}
if timestampIndex != 0 && timestampIndex >= 0 && timestampIndex < len(sourceCols) {
warnings = append(warnings, fmt.Sprintf("TDengine 基础表要求时间列优先,已将字段 %s 调整为首列", sourceCols[timestampIndex].Name))
}
for idx, col := range ordered {
def, colWarnings := mapPGLikeColumnToTDengine(col, idx == 0)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("tdengine", col.Name), def))
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("tdengine", targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildClickHouseToTDengineCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition, timestampIndex int) (string, []string, []string) {
ordered := reorderTDengineColumns(sourceCols, timestampIndex)
columnDefs := make([]string, 0, len(ordered))
warnings := make([]string, 0)
unsupported := []string{"源表 ORDER BY/PARTITION/TTL/Projection/物化视图 语义当前不会自动迁移到 TDengine"}
if timestampIndex != 0 && timestampIndex >= 0 && timestampIndex < len(sourceCols) {
warnings = append(warnings, fmt.Sprintf("TDengine 基础表要求时间列优先,已将字段 %s 调整为首列", sourceCols[timestampIndex].Name))
}
for idx, col := range ordered {
def, colWarnings := mapClickHouseColumnToTDengine(col, idx == 0)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("tdengine", col.Name), def))
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("tdengine", targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func buildTDengineToTDengineCreateTableSQL(targetQueryTable string, sourceCols []connection.ColumnDefinition, timestampIndex int) (string, []string, []string) {
ordered := reorderTDengineColumns(sourceCols, timestampIndex)
columnDefs := make([]string, 0, len(ordered))
warnings := make([]string, 0)
unsupported := []string{"源表 supertable/TAGS/TTL/保留策略/索引 语义当前不会自动迁移到 TDengine regular table"}
if timestampIndex != 0 && timestampIndex >= 0 && timestampIndex < len(sourceCols) {
warnings = append(warnings, fmt.Sprintf("TDengine 基础表要求时间列优先,已将字段 %s 调整为首列", sourceCols[timestampIndex].Name))
}
for idx, col := range ordered {
def, colWarnings := mapTDengineColumnToTDengine(col, idx == 0)
warnings = append(warnings, colWarnings...)
columnDefs = append(columnDefs, fmt.Sprintf("%s %s", quoteIdentByType("tdengine", col.Name), def))
}
createSQL := fmt.Sprintf("CREATE TABLE %s (\n %s\n)", quoteQualifiedIdentByType("tdengine", targetQueryTable), strings.Join(columnDefs, ",\n "))
return createSQL, dedupeStrings(warnings), dedupeStrings(unsupported)
}
func isMySQLLikeTDengineTimestampCandidate(col connection.ColumnDefinition) bool {
raw := strings.ToLower(strings.TrimSpace(col.Type))
clean := strings.ReplaceAll(raw, " unsigned", "")
clean = strings.ReplaceAll(clean, " zerofill", "")
return strings.HasPrefix(clean, "timestamp") || strings.HasPrefix(clean, "datetime")
}
func isPGLikeTDengineTimestampCandidate(col connection.ColumnDefinition) bool {
raw := strings.ToLower(strings.TrimSpace(col.Type))
return strings.HasPrefix(raw, "timestamp")
}
func isClickHouseTDengineTimestampCandidate(col connection.ColumnDefinition) bool {
lower, _ := unwrapClickHouseTDengineType(col.Type)
return strings.HasPrefix(lower, "datetime")
}
func isTDengineTDengineTimestampCandidate(col connection.ColumnDefinition) bool {
base, _ := parseTDengineType(col.Type)
return base == "TIMESTAMP"
}
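// mapMySQLLikeColumnToTDengine maps a single MySQL/MariaDB column type to a
// TDengine column type. The forced first column always becomes TIMESTAMP;
// unsigned integers map to TDengine's unsigned variants; text, blob, json,
// enum and set types degrade to bounded VARCHARs with an explicit warning.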
func mapMySQLLikeColumnToTDengine(col connection.ColumnDefinition, forceTimestamp bool) (string, []string) {
warnings := make([]string, 0)
if forceTimestamp {
if !isMySQLLikeTDengineTimestampCandidate(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已提升为 TDengine 首列 TIMESTAMP", col.Name, col.Type))
}
return "TIMESTAMP", warnings
}
raw := strings.ToLower(strings.TrimSpace(col.Type))
if raw == "" {
return "VARCHAR(1024)", []string{fmt.Sprintf("字段 %s 类型为空,已降级为 VARCHAR(1024)", col.Name)}
}
unsigned := strings.Contains(raw, "unsigned")
clean := strings.ReplaceAll(raw, " unsigned", "")
clean = strings.ReplaceAll(clean, " zerofill", "")
isAutoIncrement := strings.Contains(strings.ToLower(strings.TrimSpace(col.Extra)), "auto_increment")
if isAutoIncrement {
warnings = append(warnings, fmt.Sprintf("字段 %s 自增语义不会迁移到 TDengine", col.Name))
}
if col.Key == "PRI" || col.Key == "PK" {
warnings = append(warnings, fmt.Sprintf("字段 %s 主键语义不会按关系型约束迁移到 TDengine", col.Name))
}
switch {
case strings.HasPrefix(clean, "tinyint(1)") && !unsigned && !isAutoIncrement:
return "BOOL", warnings
case strings.HasPrefix(clean, "tinyint"):
if unsigned {
return "UTINYINT", warnings
}
return "TINYINT", warnings
case strings.HasPrefix(clean, "smallint"):
if unsigned {
return "USMALLINT", warnings
}
return "SMALLINT", warnings
case strings.HasPrefix(clean, "mediumint"), strings.HasPrefix(clean, "int"), strings.HasPrefix(clean, "integer"):
if unsigned {
return "UINT", warnings
}
return "INT", warnings
case strings.HasPrefix(clean, "bigint"):
if unsigned {
return "UBIGINT", warnings
}
return "BIGINT", warnings
case strings.HasPrefix(clean, "decimal"), strings.HasPrefix(clean, "numeric"):
return normalizeTDengineDecimalType(clean), warnings
case strings.HasPrefix(clean, "float"):
return "FLOAT", warnings
case strings.HasPrefix(clean, "double"):
return "DOUBLE", warnings
case strings.HasPrefix(clean, "date"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 date 已降级映射为 TIMESTAMP", col.Name))
return "TIMESTAMP", warnings
case strings.HasPrefix(clean, "timestamp"), strings.HasPrefix(clean, "datetime"):
return "TIMESTAMP", warnings
case strings.HasPrefix(clean, "time"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无稳定 TDengine 时间-only 映射,已降级为 VARCHAR(64)", col.Name, col.Type))
return "VARCHAR(64)", warnings
case strings.HasPrefix(clean, "char("), strings.HasPrefix(clean, "varchar("):
return fmt.Sprintf("VARCHAR(%d)", normalizeTDengineVarcharLength(extractFirstTypeLength(clean), 255)), warnings
case strings.HasPrefix(clean, "tinytext"), strings.HasPrefix(clean, "text"), strings.HasPrefix(clean, "mediumtext"), strings.HasPrefix(clean, "longtext"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 VARCHAR(4096)", col.Name, col.Type))
return "VARCHAR(4096)", warnings
case strings.HasPrefix(clean, "json"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 因 TDengine JSON 仅适用于 TAG已降级为 VARCHAR(4096)", col.Name, col.Type))
return "VARCHAR(4096)", warnings
case strings.HasPrefix(clean, "enum"), strings.HasPrefix(clean, "set"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 VARCHAR(255)", col.Name, col.Type))
return "VARCHAR(255)", warnings
case strings.HasPrefix(clean, "binary"), strings.HasPrefix(clean, "varbinary"), strings.HasPrefix(clean, "tinyblob"), strings.HasPrefix(clean, "blob"), strings.HasPrefix(clean, "mediumblob"), strings.HasPrefix(clean, "longblob"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已按字符串语义降级为 VARCHAR(4096)", col.Name, col.Type))
return "VARCHAR(4096)", warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 TDengine 映射,已降级为 VARCHAR(1024)", col.Name, col.Type))
return "VARCHAR(1024)", warnings
}
}
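// mapPGLikeColumnToTDengine is the Postgres-family counterpart: exact scalar
// types map directly, while text, uuid, json/jsonb, bytea, arrays and
// user-defined types degrade to bounded VARCHARs with a warning.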
func mapPGLikeColumnToTDengine(col connection.ColumnDefinition, forceTimestamp bool) (string, []string) {
warnings := make([]string, 0)
if forceTimestamp {
if raw := strings.ToLower(strings.TrimSpace(col.Type)); !strings.HasPrefix(raw, "timestamp") {
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已提升为 TDengine 首列 TIMESTAMP", col.Name, col.Type))
}
return "TIMESTAMP", warnings
}
raw := strings.ToLower(strings.TrimSpace(col.Type))
if raw == "" {
return "VARCHAR(1024)", []string{fmt.Sprintf("字段 %s 类型为空,已降级为 VARCHAR(1024)", col.Name)}
}
if col.Key == "PRI" || col.Key == "PK" {
warnings = append(warnings, fmt.Sprintf("字段 %s 主键语义不会按关系型约束迁移到 TDengine", col.Name))
}
if strings.Contains(strings.ToLower(strings.TrimSpace(col.Extra)), "identity") || strings.Contains(strings.ToLower(strings.TrimSpace(col.Extra)), "auto_increment") {
warnings = append(warnings, fmt.Sprintf("字段 %s 自增/identity 语义不会迁移到 TDengine", col.Name))
}
switch {
case raw == "boolean" || strings.HasPrefix(raw, "bool"):
return "BOOL", warnings
case raw == "smallint":
return "SMALLINT", warnings
case raw == "integer" || raw == "int4":
return "INT", warnings
case raw == "bigint" || raw == "int8":
return "BIGINT", warnings
case strings.HasPrefix(raw, "numeric"), strings.HasPrefix(raw, "decimal"):
return normalizeTDengineDecimalType(raw), warnings
case raw == "real" || raw == "float4":
return "FLOAT", warnings
case raw == "double precision" || raw == "float8":
return "DOUBLE", warnings
case raw == "date":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 date 已降级映射为 TIMESTAMP", col.Name))
return "TIMESTAMP", warnings
case strings.HasPrefix(raw, "timestamp"):
return "TIMESTAMP", warnings
case strings.HasPrefix(raw, "time"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无稳定 TDengine 时间-only 映射,已降级为 VARCHAR(64)", col.Name, col.Type))
return "VARCHAR(64)", warnings
case strings.HasPrefix(raw, "character varying("), strings.HasPrefix(raw, "varchar("), strings.HasPrefix(raw, "character("), strings.HasPrefix(raw, "char("):
return fmt.Sprintf("VARCHAR(%d)", normalizeTDengineVarcharLength(extractFirstTypeLength(raw), 255)), warnings
case raw == "text":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 text 已降级为 VARCHAR(4096)", col.Name))
return "VARCHAR(4096)", warnings
case raw == "uuid":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 uuid 已降级为 VARCHAR(36)", col.Name))
return "VARCHAR(36)", warnings
case raw == "json" || raw == "jsonb":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 因 TDengine JSON 仅适用于 TAG已降级为 VARCHAR(4096)", col.Name, col.Type))
return "VARCHAR(4096)", warnings
case raw == "bytea":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 bytea 已按字符串语义降级为 VARCHAR(4096)", col.Name))
return "VARCHAR(4096)", warnings
case strings.HasSuffix(raw, "[]") || strings.HasPrefix(raw, "array"):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 VARCHAR(4096)", col.Name, col.Type))
return "VARCHAR(4096)", warnings
case raw == "user-defined":
warnings = append(warnings, fmt.Sprintf("字段 %s 为用户自定义类型,已降级为 VARCHAR(1024)", col.Name))
return "VARCHAR(1024)", warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 TDengine 映射,已降级为 VARCHAR(1024)", col.Name, col.Type))
return "VARCHAR(1024)", warnings
}
}
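// mapClickHouseColumnToTDengine first strips Nullable/LowCardinality wrappers,
// then maps ClickHouse scalars one-to-one; Map/Array/Tuple/Nested and JSON
// degrade to VARCHAR(4096), and FixedString lengths are clamped to the
// VARCHAR limit used elsewhere in this file.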
func mapClickHouseColumnToTDengine(col connection.ColumnDefinition, forceTimestamp bool) (string, []string) {
warnings := make([]string, 0)
if forceTimestamp {
if !isClickHouseTDengineTimestampCandidate(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已提升为 TDengine 首列 TIMESTAMP", col.Name, col.Type))
}
return "TIMESTAMP", warnings
}
lower, _ := unwrapClickHouseTDengineType(col.Type)
if lower == "" {
return "VARCHAR(1024)", []string{fmt.Sprintf("字段 %s 类型为空,已降级为 VARCHAR(1024)", col.Name)}
}
switch {
case lower == "bool" || lower == "boolean":
return "BOOL", warnings
case lower == "int8":
return "TINYINT", warnings
case lower == "uint8":
return "UTINYINT", warnings
case lower == "int16":
return "SMALLINT", warnings
case lower == "uint16":
return "USMALLINT", warnings
case lower == "int32":
return "INT", warnings
case lower == "uint32":
return "UINT", warnings
case lower == "int64":
return "BIGINT", warnings
case lower == "uint64":
return "UBIGINT", warnings
case lower == "float32":
return "FLOAT", warnings
case lower == "float64":
return "DOUBLE", warnings
case lower == "date":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 date 已降级映射为 TIMESTAMP", col.Name))
return "TIMESTAMP", warnings
case strings.HasPrefix(lower, "datetime"):
return "TIMESTAMP", warnings
case lower == "string":
return "VARCHAR(1024)", warnings
case lower == "uuid":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 uuid 已降级为 VARCHAR(36)", col.Name))
return "VARCHAR(36)", warnings
case lower == "json", strings.HasPrefix(lower, "map("), strings.HasPrefix(lower, "array("), strings.HasPrefix(lower, "tuple("), strings.HasPrefix(lower, "nested("):
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已降级为 VARCHAR(4096)", col.Name, col.Type))
return "VARCHAR(4096)", warnings
case strings.HasPrefix(lower, "enum8("), strings.HasPrefix(lower, "enum16("):
warnings = append(warnings, fmt.Sprintf("字段 %s 枚举类型 %s 已降级为 VARCHAR(255)", col.Name, col.Type))
return "VARCHAR(255)", warnings
case clickHouseDecimalPattern.MatchString(lower):
parts := clickHouseDecimalPattern.FindStringSubmatch(lower)
return fmt.Sprintf("DECIMAL(%s,%s)", parts[2], parts[3]), warnings
case clickHouseStringArgsPattern.MatchString(lower):
parts := clickHouseStringArgsPattern.FindStringSubmatch(lower)
length, err := strconv.Atoi(parts[1])
if err != nil {
warnings = append(warnings, fmt.Sprintf("字段 %s FixedString 长度解析失败,已降级为 VARCHAR(255)", col.Name))
return "VARCHAR(255)", warnings
}
return fmt.Sprintf("VARCHAR(%d)", normalizeTDengineVarcharLength(length, 255)), warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 TDengine 映射,已降级为 VARCHAR(1024)", col.Name, col.Type))
return "VARCHAR(1024)", warnings
}
}
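// mapTDengineColumnToTDengine handles same-engine copies into a regular
// table: native types pass through, TAG columns lose their TAG semantics, and
// JSON (TAG-only in TDengine) degrades to VARCHAR(4096).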
func mapTDengineColumnToTDengine(col connection.ColumnDefinition, forceTimestamp bool) (string, []string) {
warnings := make([]string, 0)
if forceTimestamp {
if !isTDengineTDengineTimestampCandidate(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 已提升为 TDengine 首列 TIMESTAMP", col.Name, col.Type))
}
return "TIMESTAMP", warnings
}
base, length := parseTDengineType(col.Type)
if base == "" {
return "VARCHAR(1024)", []string{fmt.Sprintf("字段 %s 类型为空,已降级为 VARCHAR(1024)", col.Name)}
}
if isTDengineTagColumn(col) {
warnings = append(warnings, fmt.Sprintf("字段 %s 为 TDengine TAG 列,迁移到 regular table 后将降级为普通字段", col.Name))
}
switch base {
case "BOOL", "BOOLEAN":
return "BOOL", warnings
case "TINYINT":
return "TINYINT", warnings
case "UTINYINT":
return "UTINYINT", warnings
case "SMALLINT":
return "SMALLINT", warnings
case "USMALLINT":
return "USMALLINT", warnings
case "INT", "INTEGER":
return "INT", warnings
case "UINT":
return "UINT", warnings
case "BIGINT":
return "BIGINT", warnings
case "UBIGINT":
return "UBIGINT", warnings
case "FLOAT":
return "FLOAT", warnings
case "DOUBLE":
return "DOUBLE", warnings
case "DECIMAL", "NUMERIC":
return normalizeTDengineDecimalType(col.Type), warnings
case "TIMESTAMP":
return "TIMESTAMP", warnings
case "DATE":
return "DATE", warnings
case "JSON":
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 JSON 在 TDengine regular table 中不保留 TAG 语义,已降级为 VARCHAR(4096)", col.Name))
return "VARCHAR(4096)", warnings
case "BINARY", "NCHAR", "VARCHAR", "VARBINARY":
if length > 0 {
return fmt.Sprintf("%s(%d)", base, normalizeTDengineVarcharLength(length, length)), warnings
}
fallback := 255
if base == "VARCHAR" {
fallback = 1024
}
return fmt.Sprintf("%s(%d)", base, fallback), warnings
default:
warnings = append(warnings, fmt.Sprintf("字段 %s 类型 %s 暂无专门 TDengine 同库映射,已降级为 VARCHAR(1024)", col.Name, col.Type))
return "VARCHAR(1024)", warnings
}
}
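// unwrapClickHouseTDengineType strips any nesting of Nullable(...) and
// LowCardinality(...) wrappers and returns the lower-cased inner type plus
// whether a Nullable wrapper was present.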
func unwrapClickHouseTDengineType(raw string) (string, bool) {
text := strings.TrimSpace(raw)
lower := strings.ToLower(text)
nullable := false
for {
switched := false
if strings.HasPrefix(lower, "nullable(") && strings.HasSuffix(lower, ")") {
text = strings.TrimSpace(text[len("Nullable(") : len(text)-1])
lower = strings.ToLower(text)
nullable = true
switched = true
}
if strings.HasPrefix(lower, "lowcardinality(") && strings.HasSuffix(lower, ")") {
text = strings.TrimSpace(text[len("LowCardinality(") : len(text)-1])
lower = strings.ToLower(text)
switched = true
}
if !switched {
break
}
}
return lower, nullable
}
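// normalizeTDengineDecimalType rewrites numeric(...)/decimal(...) spellings to
// TDengine's DECIMAL, keeping whatever precision/scale suffix the source
// carried; anything unrecognized falls back to DECIMAL(38,10).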
func normalizeTDengineDecimalType(raw string) string {
text := strings.TrimSpace(raw)
if text == "" {
return "DECIMAL(38,10)"
}
lower := strings.ToLower(text)
if strings.HasPrefix(lower, "numeric") {
return "DECIMAL" + text[len("numeric"):]
}
if strings.HasPrefix(lower, "decimal") {
return "DECIMAL" + text[len("decimal"):]
}
return "DECIMAL(38,10)"
}
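// normalizeTDengineVarcharLength clamps a declared string length: non-positive
// lengths use the fallback (itself defaulting to 255) and lengths above 16384
// are capped at 16384.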
func normalizeTDengineVarcharLength(length int, fallback int) int {
if fallback <= 0 {
fallback = 255
}
if length <= 0 {
return fallback
}
if length > 16384 {
return 16384
}
return length
}
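// extractFirstTypeLength parses the first numeric argument of a SQL type
// string, e.g. "varchar(128)" yields 128 and "decimal(10,2)" yields 10; it
// returns 0 when no parsable length is present.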
func extractFirstTypeLength(raw string) int {
start := strings.Index(raw, "(")
if start < 0 {
return 0
}
end := strings.Index(raw[start+1:], ")")
if end < 0 {
return 0
}
inside := strings.TrimSpace(raw[start+1 : start+1+end])
if inside == "" {
return 0
}
parts := strings.SplitN(inside, ",", 2)
length, err := strconv.Atoi(strings.TrimSpace(parts[0]))
if err != nil {
return 0
}
return length
}

View File

@@ -0,0 +1,98 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"strings"
)
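// normalizeMigrationDBType collapses aliases of a configured database type to
// the canonical token used by the migration planners (postgresql -> postgres,
// dm/dm8 -> dameng, sqlite3 -> sqlite, and doris -> the internal "diros"
// token used throughout the planner tables).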
func normalizeMigrationDBType(dbType string) string {
normalized := strings.ToLower(strings.TrimSpace(dbType))
switch normalized {
case "doris":
return "diros"
case "postgresql":
return "postgres"
case "dm", "dm8":
return "dameng"
case "sqlite3":
return "sqlite"
default:
return normalized
}
}
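// resolveMigrationDBType resolves the effective database type for planning:
// non-custom configs use the normalized Type directly, while custom configs
// fall back to the driver name, first through exact aliases and then through
// substring heuristics (note the ordering: "maria" is checked before "mysql",
// and the broad "dm" match comes last).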
func resolveMigrationDBType(config connection.ConnectionConfig) string {
dbType := normalizeMigrationDBType(config.Type)
if dbType != "custom" {
return dbType
}
driver := strings.ToLower(strings.TrimSpace(config.Driver))
switch driver {
case "postgresql", "postgres", "pg", "pq", "pgx":
return "postgres"
case "dm", "dameng", "dm8":
return "dameng"
case "sqlite3", "sqlite":
return "sqlite"
case "sphinxql":
return "sphinx"
case "diros", "doris":
return "diros"
case "kingbase", "kingbase8", "kingbasees", "kingbasev8":
return "kingbase"
case "highgo":
return "highgo"
case "vastbase":
return "vastbase"
case "mysql", "mysql2":
return "mysql"
case "mariadb":
return "mariadb"
}
switch {
case strings.Contains(driver, "postgres"):
return "postgres"
case strings.Contains(driver, "kingbase"):
return "kingbase"
case strings.Contains(driver, "highgo"):
return "highgo"
case strings.Contains(driver, "vastbase"):
return "vastbase"
case strings.Contains(driver, "sqlite"):
return "sqlite"
case strings.Contains(driver, "sphinx"):
return "sphinx"
case strings.Contains(driver, "diros"), strings.Contains(driver, "doris"):
return "diros"
case strings.Contains(driver, "maria"):
return "mariadb"
case strings.Contains(driver, "mysql"):
return "mysql"
case strings.Contains(driver, "dameng"), strings.Contains(driver, "dm"):
return "dameng"
default:
return normalizeMigrationDBType(driver)
}
}
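// isMySQLCoreType reports whether a type speaks MySQL DDL natively (mysql,
// mariadb, diros). Sphinx is additionally accepted as a MySQL-like read
// source below, but not as a writable target.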
func isMySQLCoreType(dbType string) bool {
switch normalizeMigrationDBType(dbType) {
case "mysql", "mariadb", "diros":
return true
default:
return false
}
}
func isMySQLLikeSourceType(dbType string) bool {
if isMySQLCoreType(dbType) {
return true
}
return normalizeMigrationDBType(dbType) == "sphinx"
}
func isMySQLLikeWritableTargetType(dbType string) bool {
return isMySQLCoreType(dbType)
}

View File

@@ -1,7 +1,7 @@
package sync
import (
"GoNavi-Wails/internal/db"
"errors"
"fmt"
"strings"
)
@@ -36,12 +36,18 @@ func (s *SyncEngine) Preview(config SyncConfig, tableName string, limit int) (Ta
if limit > 500 {
limit = 500
}
+if isRedisToMongoKeyspacePair(config) {
+return s.previewRedisToMongo(config, tableName, limit)
+}
+if isMongoToRedisKeyspacePair(config) {
+return s.previewMongoToRedis(config, tableName, limit)
+}
-sourceDB, err := db.NewDatabase(config.SourceConfig.Type)
+sourceDB, err := newSyncDatabase(config.SourceConfig.Type)
if err != nil {
return TableDiffPreview{}, fmt.Errorf("初始化源数据库驱动失败: %w", err)
}
-targetDB, err := db.NewDatabase(config.TargetConfig.Type)
+targetDB, err := newSyncDatabase(config.TargetConfig.Type)
if err != nil {
return TableDiffPreview{}, fmt.Errorf("初始化目标数据库驱动失败: %w", err)
}
@@ -56,14 +62,12 @@ func (s *SyncEngine) Preview(config SyncConfig, tableName string, limit int) (Ta
}
defer targetDB.Close()
-sourceSchema, sourceTable := normalizeSchemaAndTable(config.SourceConfig.Type, config.SourceConfig.Database, tableName)
-targetSchema, targetTable := normalizeSchemaAndTable(config.TargetConfig.Type, config.TargetConfig.Database, tableName)
-sourceQueryTable := qualifiedNameForQuery(config.SourceConfig.Type, sourceSchema, sourceTable, tableName)
-targetQueryTable := qualifiedNameForQuery(config.TargetConfig.Type, targetSchema, targetTable, tableName)
-cols, err := sourceDB.GetColumns(sourceSchema, sourceTable)
+plan, cols, _, err := buildSchemaMigrationPlan(config, tableName, sourceDB, targetDB)
if err != nil {
return TableDiffPreview{}, fmt.Errorf("获取源表字段失败: %w", err)
return TableDiffPreview{}, err
}
+if !plan.TargetTableExists && !plan.AutoCreate {
+return TableDiffPreview{}, errors.New(firstNonEmpty(plan.PlannedAction, "目标表不存在,无法预览差异"))
+}
pkCols := make([]string, 0, 2)
@@ -80,13 +84,17 @@ func (s *SyncEngine) Preview(config SyncConfig, tableName string, limit int) (Ta
}
pkCol := pkCols[0]
-sourceRows, _, err := sourceDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.SourceConfig.Type, sourceQueryTable)))
+sourceRows, _, err := sourceDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(resolveMigrationDBType(config.SourceConfig), plan.SourceQueryTable)))
if err != nil {
return TableDiffPreview{}, fmt.Errorf("读取源表失败: %w", err)
}
-targetRows, _, err := targetDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.TargetConfig.Type, targetQueryTable)))
-if err != nil {
-return TableDiffPreview{}, fmt.Errorf("读取目标表失败: %w", err)
+targetRows := make([]map[string]interface{}, 0)
+if plan.TargetTableExists {
+targetRows, _, err = targetDB.Query(fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(resolveMigrationDBType(config.TargetConfig), plan.TargetQueryTable)))
+if err != nil {
+return TableDiffPreview{}, fmt.Errorf("读取目标表失败: %w", err)
+}
}
targetMap := make(map[string]map[string]interface{}, len(targetRows))
@@ -133,12 +141,7 @@ func (s *SyncEngine) Preview(config SyncConfig, tableName string, limit int) (Ta
if len(changedColumns) > 0 {
out.TotalUpdates++
if len(out.Updates) < limit {
-out.Updates = append(out.Updates, PreviewUpdateRow{
-PK: pkVal,
-ChangedColumns: changedColumns,
-Source: sRow,
-Target: tRow,
-})
+out.Updates = append(out.Updates, PreviewUpdateRow{PK: pkVal, ChangedColumns: changedColumns, Source: sRow, Target: tRow})
}
}
continue

View File

@@ -0,0 +1,490 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
redispkg "GoNavi-Wails/internal/redis"
"fmt"
"sort"
"strings"
"testing"
)
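// fakeRedisMigrationClient is an in-memory stand-in for the Redis client used
// by the migration engine: it records the connect config, serves values from
// a plain map, and implements just enough write methods for the Redis/Mongo
// sync tests to run without a server.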
type fakeRedisMigrationClient struct {
values map[string]*redispkg.RedisValue
scannedKeys []string
connectConfig connection.ConnectionConfig
closed bool
}
func (f *fakeRedisMigrationClient) Connect(config connection.ConnectionConfig) error {
f.connectConfig = config
return nil
}
func (f *fakeRedisMigrationClient) Close() error {
f.closed = true
return nil
}
func (f *fakeRedisMigrationClient) ScanKeys(pattern string, cursor uint64, count int64) (*redispkg.RedisScanResult, error) {
items := make([]redispkg.RedisKeyInfo, 0, len(f.scannedKeys))
for _, key := range f.scannedKeys {
items = append(items, redispkg.RedisKeyInfo{Key: key, Type: "string", TTL: -1})
}
return &redispkg.RedisScanResult{Keys: items, Cursor: "0"}, nil
}
func (f *fakeRedisMigrationClient) GetKeyType(key string) (string, error) {
if value, ok := f.values[key]; ok && value != nil {
return value.Type, nil
}
return "none", nil
}
func (f *fakeRedisMigrationClient) GetValue(key string) (*redispkg.RedisValue, error) {
if value, ok := f.values[key]; ok {
return value, nil
}
return nil, fmt.Errorf("key not found: %s", key)
}
func (f *fakeRedisMigrationClient) DeleteKeys(keys []string) (int64, error) {
var deleted int64
for _, key := range keys {
if _, ok := f.values[key]; ok {
delete(f.values, key)
deleted++
}
}
return deleted, nil
}
func (f *fakeRedisMigrationClient) SetTTL(key string, ttl int64) error {
value, ok := f.values[key]
if !ok {
return nil
}
value.TTL = ttl
return nil
}
func (f *fakeRedisMigrationClient) SetString(key, value string, ttl int64) error {
if f.values == nil {
f.values = map[string]*redispkg.RedisValue{}
}
f.values[key] = &redispkg.RedisValue{Type: "string", TTL: ttl, Value: value, Length: int64(len(value))}
return nil
}
func (f *fakeRedisMigrationClient) SetHashField(key, field, value string) error {
if f.values == nil {
f.values = map[string]*redispkg.RedisValue{}
}
current, ok := f.values[key]
if !ok || current == nil || current.Type != "hash" {
current = &redispkg.RedisValue{Type: "hash", TTL: -1, Value: map[string]string{}}
f.values[key] = current
}
hash, _ := current.Value.(map[string]string)
if hash == nil {
hash = map[string]string{}
}
hash[field] = value
current.Value = hash
current.Length = int64(len(hash))
return nil
}
func (f *fakeRedisMigrationClient) ListPush(key string, values ...string) error {
if f.values == nil {
f.values = map[string]*redispkg.RedisValue{}
}
current, ok := f.values[key]
if !ok || current == nil || current.Type != "list" {
current = &redispkg.RedisValue{Type: "list", TTL: -1, Value: []string{}}
f.values[key] = current
}
list, _ := current.Value.([]string)
list = append(list, values...)
current.Value = list
current.Length = int64(len(list))
return nil
}
func (f *fakeRedisMigrationClient) SetAdd(key string, members ...string) error {
if f.values == nil {
f.values = map[string]*redispkg.RedisValue{}
}
current, ok := f.values[key]
if !ok || current == nil || current.Type != "set" {
current = &redispkg.RedisValue{Type: "set", TTL: -1, Value: []string{}}
f.values[key] = current
}
setValues, _ := current.Value.([]string)
seen := make(map[string]struct{}, len(setValues)+len(members))
for _, item := range setValues {
seen[item] = struct{}{}
}
for _, item := range members {
if _, ok := seen[item]; ok {
continue
}
seen[item] = struct{}{}
setValues = append(setValues, item)
}
sort.Strings(setValues)
current.Value = setValues
current.Length = int64(len(setValues))
return nil
}
func (f *fakeRedisMigrationClient) ZSetAdd(key string, members ...redispkg.ZSetMember) error {
if f.values == nil {
f.values = map[string]*redispkg.RedisValue{}
}
copied := append([]redispkg.ZSetMember(nil), members...)
sort.Slice(copied, func(i, j int) bool {
if copied[i].Score == copied[j].Score {
return copied[i].Member < copied[j].Member
}
return copied[i].Score < copied[j].Score
})
f.values[key] = &redispkg.RedisValue{Type: "zset", TTL: -1, Value: copied, Length: int64(len(copied))}
return nil
}
func (f *fakeRedisMigrationClient) StreamAdd(key string, fields map[string]string, id string) (string, error) {
if f.values == nil {
f.values = map[string]*redispkg.RedisValue{}
}
current, ok := f.values[key]
if !ok || current == nil || current.Type != "stream" {
current = &redispkg.RedisValue{Type: "stream", TTL: -1, Value: []redispkg.StreamEntry{}}
f.values[key] = current
}
entries, _ := current.Value.([]redispkg.StreamEntry)
entryID := id
if entryID == "" {
entryID = fmt.Sprintf("%d-0", len(entries)+1)
}
entries = append(entries, redispkg.StreamEntry{ID: entryID, Fields: fields})
current.Value = entries
current.Length = int64(len(entries))
return entryID, nil
}
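// fakeRedisMongoTargetDB records the Exec statements and the ChangeSet passed
// to ApplyChanges so the tests can assert which collection was written and
// what was inserted or updated.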
type fakeRedisMongoTargetDB struct {
tables []string
queryTable string
queryRows []map[string]interface{}
execs []string
applyTable string
applySet connection.ChangeSet
}
func (f *fakeRedisMongoTargetDB) Connect(config connection.ConnectionConfig) error { return nil }
func (f *fakeRedisMongoTargetDB) Close() error { return nil }
func (f *fakeRedisMongoTargetDB) Ping() error { return nil }
func (f *fakeRedisMongoTargetDB) Query(query string) ([]map[string]interface{}, []string, error) {
queryTable := strings.TrimSpace(f.queryTable)
if queryTable == "" {
queryTable = "redis_db_0_keys"
}
if strings.Contains(query, fmt.Sprintf(`"find":"%s"`, queryTable)) {
return f.queryRows, []string{"_id", "key", "value"}, nil
}
return nil, nil, nil
}
func (f *fakeRedisMongoTargetDB) Exec(query string) (int64, error) {
f.execs = append(f.execs, query)
return 1, nil
}
func (f *fakeRedisMongoTargetDB) GetDatabases() ([]string, error) { return []string{"app"}, nil }
func (f *fakeRedisMongoTargetDB) GetTables(dbName string) ([]string, error) {
return f.tables, nil
}
func (f *fakeRedisMongoTargetDB) GetCreateStatement(dbName, tableName string) (string, error) {
return "", nil
}
func (f *fakeRedisMongoTargetDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
return nil, nil
}
func (f *fakeRedisMongoTargetDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
return nil, nil
}
func (f *fakeRedisMongoTargetDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
return nil, nil
}
func (f *fakeRedisMongoTargetDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return nil, nil
}
func (f *fakeRedisMongoTargetDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return nil, nil
}
func (f *fakeRedisMongoTargetDB) ApplyChanges(tableName string, changes connection.ChangeSet) error {
f.applyTable = tableName
f.applySet = changes
return nil
}
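// fakeMongoRedisSourceDB serves canned Mongo collections: Query matches the
// "find" command against rowsByTable and fails on anything unexpected.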
type fakeMongoRedisSourceDB struct {
tables []string
rowsByTable map[string][]map[string]interface{}
connectConfig connection.ConnectionConfig
}
func (f *fakeMongoRedisSourceDB) Connect(config connection.ConnectionConfig) error {
f.connectConfig = config
return nil
}
func (f *fakeMongoRedisSourceDB) Close() error { return nil }
func (f *fakeMongoRedisSourceDB) Ping() error { return nil }
func (f *fakeMongoRedisSourceDB) Query(query string) ([]map[string]interface{}, []string, error) {
for tableName, rows := range f.rowsByTable {
if strings.Contains(query, fmt.Sprintf(`"find":"%s"`, tableName)) {
return rows, []string{"_id", "key", "type", "ttl", "value"}, nil
}
}
return nil, nil, fmt.Errorf("unexpected query: %s", query)
}
func (f *fakeMongoRedisSourceDB) Exec(query string) (int64, error) { return 0, nil }
func (f *fakeMongoRedisSourceDB) GetDatabases() ([]string, error) { return []string{"app"}, nil }
func (f *fakeMongoRedisSourceDB) GetTables(dbName string) ([]string, error) {
return f.tables, nil
}
func (f *fakeMongoRedisSourceDB) GetCreateStatement(dbName, tableName string) (string, error) {
return "", nil
}
func (f *fakeMongoRedisSourceDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
return nil, nil
}
func (f *fakeMongoRedisSourceDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
return nil, nil
}
func (f *fakeMongoRedisSourceDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
return nil, nil
}
func (f *fakeMongoRedisSourceDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return nil, nil
}
func (f *fakeMongoRedisSourceDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return nil, nil
}
func TestRunSync_RedisToMongoAppliesInsertAndUpdate(t *testing.T) {
fakeRedis := &fakeRedisMigrationClient{
values: map[string]*redispkg.RedisValue{
"user:1": {Type: "hash", TTL: 120, Length: 2, Value: map[string]string{"name": "alice"}},
"user:2": {Type: "string", TTL: -1, Length: 1, Value: "online"},
},
}
fakeTarget := &fakeRedisMongoTargetDB{
tables: []string{"redis_db_0_keys"},
queryRows: []map[string]interface{}{
{"_id": "db0:user:1", "redisDb": 0, "key": "user:1", "type": "hash", "ttl": 120, "length": int64(2), "value": map[string]interface{}{"name": "old"}},
},
}
oldNewRedisClient := newRedisSourceClient
oldNewDatabase := newSyncDatabase
defer func() {
newRedisSourceClient = oldNewRedisClient
newSyncDatabase = oldNewDatabase
}()
newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
newSyncDatabase = func(dbType string) (db.Database, error) { return fakeTarget, nil }
engine := NewSyncEngine(Reporter{})
result := engine.RunSync(SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
Tables: []string{"user:1", "user:2"},
Content: "data",
Mode: "insert_update",
})
if !result.Success {
t.Fatalf("expected success, got: %+v", result)
}
if fakeRedis.connectConfig.RedisDB != 0 {
t.Fatalf("expected redis db 0, got %d", fakeRedis.connectConfig.RedisDB)
}
if fakeTarget.applyTable != "redis_db_0_keys" {
t.Fatalf("unexpected apply table: %s", fakeTarget.applyTable)
}
if len(fakeTarget.applySet.Inserts) != 1 || len(fakeTarget.applySet.Updates) != 1 {
t.Fatalf("unexpected change set: %+v", fakeTarget.applySet)
}
}
func TestRunSync_RedisToMongoUsesConfiguredCollectionName(t *testing.T) {
fakeRedis := &fakeRedisMigrationClient{
values: map[string]*redispkg.RedisValue{
"user:1": {Type: "string", TTL: -1, Length: 1, Value: "online"},
},
}
fakeTarget := &fakeRedisMongoTargetDB{
tables: []string{"custom_keyspace_docs"},
queryTable: "custom_keyspace_docs",
}
oldNewRedisClient := newRedisSourceClient
oldNewDatabase := newSyncDatabase
defer func() {
newRedisSourceClient = oldNewRedisClient
newSyncDatabase = oldNewDatabase
}()
newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
newSyncDatabase = func(dbType string) (db.Database, error) { return fakeTarget, nil }
engine := NewSyncEngine(Reporter{})
result := engine.RunSync(SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
Tables: []string{"user:1"},
Content: "data",
Mode: "insert_update",
MongoCollectionName: "custom_keyspace_docs",
})
if !result.Success {
t.Fatalf("expected success, got: %+v", result)
}
if fakeTarget.applyTable != "custom_keyspace_docs" {
t.Fatalf("unexpected apply table: %s", fakeTarget.applyTable)
}
}
func TestPreview_RedisToMongoReturnsDocumentPreview(t *testing.T) {
fakeRedis := &fakeRedisMigrationClient{
values: map[string]*redispkg.RedisValue{
"session:1": {Type: "string", TTL: 60, Length: 1, Value: "token"},
},
}
fakeTarget := &fakeRedisMongoTargetDB{}
oldNewRedisClient := newRedisSourceClient
oldNewDatabase := newSyncDatabase
defer func() {
newRedisSourceClient = oldNewRedisClient
newSyncDatabase = oldNewDatabase
}()
newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
newSyncDatabase = func(dbType string) (db.Database, error) { return fakeTarget, nil }
engine := NewSyncEngine(Reporter{})
preview, err := engine.Preview(SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
Tables: []string{"session:1"},
Content: "data",
Mode: "insert_update",
}, "session:1", 20)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if preview.PKColumn != "_id" {
t.Fatalf("unexpected pk column: %s", preview.PKColumn)
}
if preview.TotalInserts != 1 || len(preview.Inserts) != 1 {
t.Fatalf("unexpected preview: %+v", preview)
}
if preview.Inserts[0].PK != "db0:session:1" {
t.Fatalf("unexpected preview pk: %+v", preview.Inserts[0])
}
}
func TestRunSync_MongoToRedisAppliesStringAndHash(t *testing.T) {
fakeSource := &fakeMongoRedisSourceDB{
tables: []string{"redis_db_0_keys"},
rowsByTable: map[string][]map[string]interface{}{
"redis_db_0_keys": {
{"_id": "db0:session:1", "key": "session:1", "type": "string", "ttl": int64(60), "value": "token"},
{"_id": "db0:user:1", "key": "user:1", "type": "hash", "ttl": int64(120), "value": map[string]interface{}{"name": "alice", "role": "admin"}},
},
},
}
fakeRedis := &fakeRedisMigrationClient{
values: map[string]*redispkg.RedisValue{
"user:1": {Type: "hash", TTL: 120, Length: 1, Value: map[string]string{"name": "old"}},
},
}
oldNewRedisClient := newRedisSourceClient
oldNewDatabase := newSyncDatabase
defer func() {
newRedisSourceClient = oldNewRedisClient
newSyncDatabase = oldNewDatabase
}()
newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
newSyncDatabase = func(dbType string) (db.Database, error) { return fakeSource, nil }
engine := NewSyncEngine(Reporter{})
result := engine.RunSync(SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
TargetConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
Tables: []string{"redis_db_0_keys"},
Content: "data",
Mode: "insert_update",
})
if !result.Success {
t.Fatalf("expected success, got: %+v", result)
}
if fakeRedis.connectConfig.RedisDB != 0 {
t.Fatalf("expected redis db 0, got %d", fakeRedis.connectConfig.RedisDB)
}
if got := fakeRedis.values["session:1"]; got == nil || got.Type != "string" || got.Value != "token" || got.TTL != 60 {
t.Fatalf("unexpected string value: %+v", got)
}
gotHash, _ := fakeRedis.values["user:1"].Value.(map[string]string)
if gotHash["name"] != "alice" || gotHash["role"] != "admin" {
t.Fatalf("unexpected hash value: %+v", fakeRedis.values["user:1"])
}
if result.RowsInserted != 1 || result.RowsUpdated != 1 {
t.Fatalf("unexpected sync result: %+v", result)
}
}
func TestPreview_MongoToRedisReturnsCollectionPreview(t *testing.T) {
fakeSource := &fakeMongoRedisSourceDB{
tables: []string{"redis_db_0_keys"},
rowsByTable: map[string][]map[string]interface{}{
"redis_db_0_keys": {
{"_id": "db0:session:1", "key": "session:1", "type": "string", "ttl": int64(60), "value": "token"},
},
},
}
fakeRedis := &fakeRedisMigrationClient{values: map[string]*redispkg.RedisValue{}}
oldNewRedisClient := newRedisSourceClient
oldNewDatabase := newSyncDatabase
defer func() {
newRedisSourceClient = oldNewRedisClient
newSyncDatabase = oldNewDatabase
}()
newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
newSyncDatabase = func(dbType string) (db.Database, error) { return fakeSource, nil }
engine := NewSyncEngine(Reporter{})
preview, err := engine.Preview(SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
TargetConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
Tables: []string{"redis_db_0_keys"},
Content: "data",
Mode: "insert_update",
}, "redis_db_0_keys", 20)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if preview.Table != "redis_db_0_keys" || preview.PKColumn != "key" {
t.Fatalf("unexpected preview header: %+v", preview)
}
if preview.TotalInserts != 1 || len(preview.Inserts) != 1 {
t.Fatalf("unexpected preview rows: %+v", preview)
}
if preview.Inserts[0].PK != "session:1" {
t.Fatalf("unexpected preview pk: %+v", preview.Inserts[0])
}
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,957 @@
package sync
import (
"GoNavi-Wails/internal/connection"
"context"
"reflect"
"strings"
"testing"
)
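// fakeMigrationDB implements just enough of db.Database for the planner
// tests: canned columns and indexes are keyed by "db.table", and canned
// query results are keyed by the exact SQL text.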
type fakeMigrationDB struct {
columns map[string][]connection.ColumnDefinition
indexes map[string][]connection.IndexDefinition
queryData map[string][]map[string]interface{}
queryCols map[string][]string
}
func (f *fakeMigrationDB) Connect(config connection.ConnectionConfig) error { return nil }
func (f *fakeMigrationDB) Close() error { return nil }
func (f *fakeMigrationDB) Ping() error { return nil }
func (f *fakeMigrationDB) Query(query string) ([]map[string]interface{}, []string, error) {
if rows, ok := f.queryData[query]; ok {
return rows, f.queryCols[query], nil
}
return nil, nil, nil
}
func (f *fakeMigrationDB) Exec(query string) (int64, error) { return 0, nil }
func (f *fakeMigrationDB) GetDatabases() ([]string, error) { return nil, nil }
func (f *fakeMigrationDB) GetTables(dbName string) ([]string, error) {
return nil, nil
}
func (f *fakeMigrationDB) GetCreateStatement(dbName, tableName string) (string, error) {
return "", nil
}
func (f *fakeMigrationDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
key := dbName + "." + tableName
if rows, ok := f.columns[key]; ok {
return rows, nil
}
return []connection.ColumnDefinition{}, nil
}
func (f *fakeMigrationDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
return nil, nil
}
func (f *fakeMigrationDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
key := dbName + "." + tableName
if rows, ok := f.indexes[key]; ok {
return rows, nil
}
return nil, nil
}
func (f *fakeMigrationDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return nil, nil
}
func (f *fakeMigrationDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return nil, nil
}
func (f *fakeMigrationDB) QueryContext(ctx context.Context, query string) ([]map[string]interface{}, []string, error) {
return f.Query(query)
}
func (f *fakeMigrationDB) ExecContext(ctx context.Context, query string) (int64, error) {
return 0, nil
}
func TestBuildMySQLToKingbaseColumnDefinition_AutoIncrementAndBoolean(t *testing.T) {
t.Parallel()
def, warnings := buildMySQLToKingbaseColumnDefinition(connection.ColumnDefinition{
Name: "id",
Type: "int unsigned",
Nullable: "NO",
Extra: "auto_increment",
})
if !strings.Contains(def, "bigint") || !strings.Contains(def, "GENERATED BY DEFAULT AS IDENTITY") || !strings.Contains(def, "NOT NULL") {
t.Fatalf("unexpected definition: %s", def)
}
if len(warnings) != 0 {
t.Fatalf("unexpected warnings: %v", warnings)
}
def, warnings = buildMySQLToKingbaseColumnDefinition(connection.ColumnDefinition{
Name: "enabled",
Type: "tinyint(1)",
Nullable: "YES",
Default: stringPtr("1"),
})
if !strings.Contains(def, "boolean") || !strings.Contains(def, "DEFAULT TRUE") {
t.Fatalf("unexpected boolean definition: %s", def)
}
if len(warnings) != 0 {
t.Fatalf("unexpected warnings for boolean: %v", warnings)
}
}
func TestBuildMySQLToKingbaseCreateTablePlan_GeneratesAndSkipsIndexes(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
indexes: map[string][]connection.IndexDefinition{
"shop.orders": {
{Name: "PRIMARY", ColumnName: "id", NonUnique: 0, SeqInIndex: 1, IndexType: "BTREE"},
{Name: "idx_user_status", ColumnName: "user_id", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
{Name: "idx_user_status", ColumnName: "status", NonUnique: 1, SeqInIndex: 2, IndexType: "BTREE"},
{Name: "idx_name_prefix", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE", SubPart: 12},
{Name: "idx_fulltext_note", ColumnName: "note", NonUnique: 1, SeqInIndex: 1, IndexType: "FULLTEXT"},
},
},
}
cols := []connection.ColumnDefinition{
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
{Name: "user_id", Type: "bigint", Nullable: "NO"},
{Name: "status", Type: "varchar(32)", Nullable: "YES"},
{Name: "name", Type: "varchar(128)", Nullable: "YES"},
{Name: "note", Type: "text", Nullable: "YES"},
}
cfg := SyncConfig{CreateIndexes: true}
createSQL, postSQL, warnings, unsupported, idxCreate, idxSkip, err := buildMySQLToKingbaseCreateTablePlan(cfg, "public.orders", cols, sourceDB, "shop", "orders")
if err != nil {
t.Fatalf("buildMySQLToKingbaseCreateTablePlan returned error: %v", err)
}
if !strings.Contains(createSQL, `CREATE TABLE "public"."orders"`) {
t.Fatalf("unexpected create SQL: %s", createSQL)
}
if !strings.Contains(createSQL, `PRIMARY KEY ("id")`) {
t.Fatalf("create SQL missing primary key: %s", createSQL)
}
if idxCreate != 1 || idxSkip != 2 {
t.Fatalf("unexpected index summary: create=%d skip=%d", idxCreate, idxSkip)
}
if len(postSQL) != 1 || !strings.Contains(postSQL[0], `CREATE INDEX "idx_user_status"`) {
t.Fatalf("unexpected post SQL: %v", postSQL)
}
if len(warnings) != 0 {
t.Fatalf("unexpected warnings: %v", warnings)
}
wantUnsupported := []string{
"索引 idx_name_prefix 使用前缀长度,当前暂不支持迁移",
"索引 idx_fulltext_note 类型=FULLTEXT当前暂不支持自动迁移",
}
if !reflect.DeepEqual(unsupported, wantUnsupported) {
t.Fatalf("unexpected unsupported objects: got=%v want=%v", unsupported, wantUnsupported)
}
}
func TestBuildSchemaMigrationPlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"shop.orders": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
{Name: "name", Type: "varchar(128)", Nullable: "YES"},
},
},
indexes: map[string][]connection.IndexDefinition{},
}
targetDB := &fakeMigrationDB{columns: map[string][]connection.ColumnDefinition{}}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
TargetConfig: connection.ConnectionConfig{Type: "kingbase", Database: "demo"},
TargetTableStrategy: "smart",
CreateIndexes: true,
}
plan, sourceCols, targetCols, err := buildSchemaMigrationPlan(cfg, "orders", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildSchemaMigrationPlan returned error: %v", err)
}
if len(sourceCols) != 2 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if plan.TargetTableExists {
t.Fatalf("expected target table missing")
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.PlannedAction, "自动建表") {
t.Fatalf("unexpected planned action: %s", plan.PlannedAction)
}
if !strings.Contains(plan.CreateTableSQL, `CREATE TABLE "public"."orders"`) {
t.Fatalf("unexpected create table SQL: %s", plan.CreateTableSQL)
}
}
func stringPtr(v string) *string { return &v }
func TestBuildPGLikeToMySQLCreateTablePlan_GeneratesMySQLDDL(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
indexes: map[string][]connection.IndexDefinition{
"public.users": {
{Name: "users_email_key", ColumnName: "email", NonUnique: 0, SeqInIndex: 1, IndexType: "BTREE"},
{Name: "idx_users_name", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
},
},
}
cols := []connection.ColumnDefinition{
{Name: "id", Type: "integer", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
{Name: "email", Type: "character varying(120)", Nullable: "NO"},
{Name: "name", Type: "text", Nullable: "YES"},
{Name: "profile", Type: "jsonb", Nullable: "YES"},
}
cfg := SyncConfig{CreateIndexes: true}
createSQL, postSQL, warnings, unsupported, idxCreate, idxSkip, err := buildPGLikeToMySQLCreateTablePlan(cfg, "app.users", cols, sourceDB, "public", "users")
if err != nil {
t.Fatalf("buildPGLikeToMySQLCreateTablePlan returned error: %v", err)
}
if !strings.Contains(createSQL, "CREATE TABLE `app`.`users`") {
t.Fatalf("unexpected create SQL: %s", createSQL)
}
if !strings.Contains(createSQL, "`id` int AUTO_INCREMENT NOT NULL") {
t.Fatalf("unexpected id definition: %s", createSQL)
}
if !strings.Contains(createSQL, "`profile` json") {
t.Fatalf("unexpected json definition: %s", createSQL)
}
if idxCreate != 2 || idxSkip != 0 {
t.Fatalf("unexpected index summary: create=%d skip=%d", idxCreate, idxSkip)
}
if len(postSQL) != 2 {
t.Fatalf("unexpected post sql length: %v", postSQL)
}
if len(warnings) != 0 {
t.Fatalf("unexpected warnings: %v", warnings)
}
if len(unsupported) != 0 {
t.Fatalf("unexpected unsupported: %v", unsupported)
}
}
func TestBuildPGLikeToMySQLPlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"public.orders": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
{Name: "amount", Type: "numeric(10,2)", Nullable: "NO"},
},
},
indexes: map[string][]connection.IndexDefinition{},
}
targetDB := &fakeMigrationDB{columns: map[string][]connection.ColumnDefinition{}}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "kingbase", Database: "public"},
TargetConfig: connection.ConnectionConfig{Type: "mysql", Database: "app"},
TargetTableStrategy: "smart",
CreateIndexes: true,
}
plan, sourceCols, targetCols, err := buildPGLikeToMySQLPlan(cfg, "orders", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildPGLikeToMySQLPlan returned error: %v", err)
}
if len(sourceCols) != 2 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if plan.TargetTableExists {
t.Fatalf("expected target table missing")
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `app`.`orders`") {
t.Fatalf("unexpected create table SQL: %s", plan.CreateTableSQL)
}
}
func TestBuildMySQLToPGLikeCreateTablePlan_GeneratesPostgresDDL(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
indexes: map[string][]connection.IndexDefinition{
"shop.orders": {
{Name: "idx_orders_user", ColumnName: "user_id", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
{Name: "idx_orders_user", ColumnName: "status", NonUnique: 1, SeqInIndex: 2, IndexType: "BTREE"},
},
},
}
cols := []connection.ColumnDefinition{
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
{Name: "user_id", Type: "bigint", Nullable: "NO"},
{Name: "status", Type: "varchar(32)", Nullable: "YES"},
{Name: "payload", Type: "json", Nullable: "YES"},
}
cfg := SyncConfig{CreateIndexes: true}
createSQL, postSQL, warnings, unsupported, idxCreate, idxSkip, err := buildMySQLToPGLikeCreateTablePlan("postgres", cfg, "public.orders", cols, sourceDB, "shop", "orders")
if err != nil {
t.Fatalf("buildMySQLToPGLikeCreateTablePlan returned error: %v", err)
}
if !strings.Contains(createSQL, `CREATE TABLE "public"."orders"`) {
t.Fatalf("unexpected create SQL: %s", createSQL)
}
if !strings.Contains(createSQL, `GENERATED BY DEFAULT AS IDENTITY`) {
t.Fatalf("missing identity mapping: %s", createSQL)
}
if !strings.Contains(createSQL, `jsonb`) {
t.Fatalf("missing jsonb mapping: %s", createSQL)
}
if idxCreate != 1 || idxSkip != 0 {
t.Fatalf("unexpected index summary: create=%d skip=%d", idxCreate, idxSkip)
}
if len(postSQL) != 1 || !strings.Contains(postSQL[0], `CREATE INDEX "idx_orders_user"`) {
t.Fatalf("unexpected post SQL: %v", postSQL)
}
if len(warnings) != 0 || len(unsupported) != 0 {
t.Fatalf("unexpected warnings/unsupported: warnings=%v unsupported=%v", warnings, unsupported)
}
}
func TestBuildMySQLToClickHouseCreateTableSQL_GeneratesMergeTree(t *testing.T) {
t.Parallel()
cols := []connection.ColumnDefinition{
{Name: "id", Type: "bigint unsigned", Nullable: "NO", Key: "PRI"},
{Name: "name", Type: "varchar(128)", Nullable: "YES"},
{Name: "payload", Type: "json", Nullable: "YES"},
}
createSQL, warnings, unsupported := buildMySQLToClickHouseCreateTableSQL("analytics.orders", cols)
if !strings.Contains(createSQL, "ENGINE = MergeTree()") {
t.Fatalf("unexpected create SQL: %s", createSQL)
}
if !strings.Contains(createSQL, "ORDER BY (`id`)") {
t.Fatalf("unexpected order by: %s", createSQL)
}
if !strings.Contains(createSQL, "`payload` Nullable(String)") {
t.Fatalf("unexpected json mapping: %s", createSQL)
}
if len(warnings) == 0 {
t.Fatalf("expected warnings for clickhouse semantics")
}
if len(unsupported) != 0 {
t.Fatalf("unexpected unsupported: %v", unsupported)
}
}
func TestBuildClickHouseToMySQLCreateTableSQL_GeneratesMySQLDDL(t *testing.T) {
t.Parallel()
cols := []connection.ColumnDefinition{
{Name: "id", Type: "UInt64", Nullable: "NO", Key: "PRI"},
{Name: "event_time", Type: "DateTime", Nullable: "NO"},
{Name: "payload", Type: "Map(String, String)", Nullable: "YES"},
}
createSQL, warnings := buildClickHouseToMySQLCreateTableSQL("app.metrics", cols)
if !strings.Contains(createSQL, "CREATE TABLE `app`.`metrics`") {
t.Fatalf("unexpected create SQL: %s", createSQL)
}
if !strings.Contains(createSQL, "`id` bigint unsigned NOT NULL") {
t.Fatalf("unexpected uint64 mapping: %s", createSQL)
}
if !strings.Contains(createSQL, "`payload` json") {
t.Fatalf("unexpected complex type mapping: %s", createSQL)
}
if len(warnings) == 0 {
t.Fatalf("expected warning for limited clickhouse reverse semantics")
}
}
func TestBuildMySQLToMongoPlan_AutoCreateCollection(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"shop.users": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
{Name: "name", Type: "varchar(64)", Nullable: "YES"},
},
},
indexes: map[string][]connection.IndexDefinition{
"shop.users": {
{Name: "idx_users_name", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
TargetTableStrategy: "smart",
CreateIndexes: true,
}
plan, sourceCols, targetCols, err := buildMySQLToMongoPlan(cfg, "users", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildMySQLToMongoPlan returned error: %v", err)
}
if len(sourceCols) != 2 || targetCols != nil {
t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
}
if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
t.Fatalf("expected auto create collection command: %+v", plan)
}
if !strings.Contains(plan.PreDataSQL[0], `"create":"users"`) {
t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
}
if len(plan.PostDataSQL) != 1 || !strings.Contains(plan.PostDataSQL[0], `"createIndexes":"users"`) {
t.Fatalf("unexpected index commands: %v", plan.PostDataSQL)
}
}
func TestBuildPGLikeToMongoPlan_AutoCreateCollection(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"public.orders": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
{Name: "name", Type: "varchar(64)", Nullable: "YES"},
},
},
indexes: map[string][]connection.IndexDefinition{
"public.orders": {
{Name: "idx_orders_name", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres", Database: "public"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
TargetTableStrategy: "smart",
CreateIndexes: true,
}
plan, sourceCols, targetCols, err := buildPGLikeToMongoPlan(cfg, "orders", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildPGLikeToMongoPlan returned error: %v", err)
}
if len(sourceCols) != 2 || targetCols != nil {
t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
}
if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
t.Fatalf("expected auto create collection command: %+v", plan)
}
if !strings.Contains(plan.PreDataSQL[0], `"create":"orders"`) {
t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
}
if len(plan.PostDataSQL) != 1 || !strings.Contains(plan.PostDataSQL[0], `"createIndexes":"orders"`) {
t.Fatalf("unexpected index commands: %v", plan.PostDataSQL)
}
}
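
// TestBuildClickHouseToMongoPlan_AutoCreateCollection asserts that a ClickHouse source
// with no index metadata still produces an auto-create "create" command for the
// target collection.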
func TestBuildClickHouseToMongoPlan_AutoCreateCollection(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"analytics.metrics": {
{Name: "id", Type: "UInt64", Nullable: "NO", Key: "PRI"},
{Name: "host", Type: "String", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildClickHouseToMongoPlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildClickHouseToMongoPlan returned error: %v", err)
}
if len(sourceCols) != 2 || targetCols != nil {
t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
}
if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
t.Fatalf("expected auto create collection command: %+v", plan)
}
if !strings.Contains(plan.PreDataSQL[0], `"create":"metrics"`) {
t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
}
}
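
// TestBuildTDengineToMongoPlan_AutoCreateCollection asserts the same auto-create
// behaviour for a TDengine source table.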
func TestBuildTDengineToMongoPlan_AutoCreateCollection(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"src.cpu": {
{Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
{Name: "host", Type: "NCHAR(64)", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "src"},
TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildTDengineToMongoPlan(cfg, "cpu", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildTDengineToMongoPlan returned error: %v", err)
}
if len(sourceCols) != 2 || targetCols != nil {
t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
}
if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
t.Fatalf("expected auto create collection command: %+v", plan)
}
if !strings.Contains(plan.PreDataSQL[0], `"create":"cpu"`) {
t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
}
}
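
// TestBuildMongoToMySQLPlan_InfersColumnsAndCreatesTable asserts that columns are
// inferred by sampling documents from a find query, _id maps to a textual column,
// nested documents degrade to json, and one index statement is scheduled post-copy.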
func TestBuildMongoToMySQLPlan_InfersColumnsAndCreatesTable(t *testing.T) {
t.Parallel()
query := `{"find":"users","filter":{},"limit":200}`
sourceDB := &fakeMigrationDB{
queryData: map[string][]map[string]interface{}{
query: {
{"_id": "a1", "name": "alice", "age": int64(18), "profile": map[string]interface{}{"city": "shanghai"}},
{"_id": "b2", "name": "bob", "profile": map[string]interface{}{"city": "beijing"}},
},
},
queryCols: map[string][]string{query: {"_id", "name", "age", "profile"}},
indexes: map[string][]connection.IndexDefinition{
"crm.users": {{Name: "email_1", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"}},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mongodb", Database: "crm"},
TargetConfig: connection.ConnectionConfig{Type: "mysql", Database: "app"},
TargetTableStrategy: "smart",
CreateIndexes: true,
}
plan, sourceCols, _, err := buildMongoToMySQLPlan(cfg, "users", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildMongoToMySQLPlan returned error: %v", err)
}
if len(sourceCols) == 0 {
t.Fatalf("expected inferred source cols")
}
if !plan.AutoCreate || !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `app`.`users`") {
t.Fatalf("unexpected create table sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`_id` text NOT NULL") && !strings.Contains(plan.CreateTableSQL, "`_id` varchar") {
t.Fatalf("missing inferred _id column: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`profile` json") {
t.Fatalf("expected nested field degrade to json: %s", plan.CreateTableSQL)
}
if len(plan.PostDataSQL) != 1 {
t.Fatalf("expected one post index sql, got=%v", plan.PostDataSQL)
}
}
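
// TestBuildTDengineToMySQLPlan_AutoCreateWhenTargetMissing asserts TIMESTAMP -> datetime
// and NCHAR(64) -> varchar(64) mappings, plus a warning that TDengine TAG columns lose
// their tag semantics on the MySQL side.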
func TestBuildTDengineToMySQLPlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"metrics.cpu": {
{Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
{Name: "host", Type: "NCHAR(64)", Nullable: "YES", Key: "TAG", Extra: "TAG"},
{Name: "usage", Type: "DOUBLE", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "metrics"},
TargetConfig: connection.ConnectionConfig{Type: "mysql", Database: "app"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildTDengineToMySQLPlan(cfg, "cpu", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildTDengineToMySQLPlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `app`.`cpu`") {
t.Fatalf("unexpected create table sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`ts` datetime") {
t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`host` varchar(64)") {
t.Fatalf("expected nchar mapping, got: %s", plan.CreateTableSQL)
}
if len(plan.Warnings) == 0 || !strings.Contains(strings.Join(plan.Warnings, " "), "TAG") {
t.Fatalf("expected TAG warning, got: %v", plan.Warnings)
}
}
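
// TestBuildTDengineToPGLikePlan_AutoCreateWhenTargetMissing asserts quoted PG-style
// identifiers, TIMESTAMP -> timestamp and JSON -> jsonb mappings, and the TAG warning
// for a Kingbase target.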
func TestBuildTDengineToPGLikePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"metrics.cpu": {
{Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
{Name: "payload", Type: "JSON", Nullable: "YES"},
{Name: "host", Type: "BINARY(32)", Nullable: "YES", Key: "TAG", Extra: "TAG"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "metrics"},
TargetConfig: connection.ConnectionConfig{Type: "kingbase", Database: "ignored"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildTDengineToPGLikePlan(cfg, "cpu", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildTDengineToPGLikePlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, `CREATE TABLE "public"."cpu"`) {
t.Fatalf("unexpected create table sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `"ts" timestamp`) {
t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `"payload" jsonb`) {
t.Fatalf("expected json mapping, got: %s", plan.CreateTableSQL)
}
if len(plan.Warnings) == 0 || !strings.Contains(strings.Join(plan.Warnings, " "), "TAG") {
t.Fatalf("expected TAG warning, got: %v", plan.Warnings)
}
}
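
// TestBuildSchemaMigrationPlan_TDengineTargetWarnsInsertOnlyBoundary asserts that
// planning an insert_update sync into TDengine surfaces the insert-only capability
// warning.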
func TestBuildSchemaMigrationPlan_TDengineTargetWarnsInsertOnlyBoundary(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"shop.metrics": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
{Name: "ts", Type: "datetime", Nullable: "NO"},
{Name: "value", Type: "double", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"taos.metrics": {
{Name: "id", Type: "bigint", Nullable: "NO"},
{Name: "ts", Type: "timestamp", Nullable: "NO"},
{Name: "value", Type: "double", Nullable: "YES"},
},
},
}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
Mode: "insert_update",
}
plan, _, _, err := buildSchemaMigrationPlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildSchemaMigrationPlan returned error: %v", err)
}
warnings := strings.Join(plan.Warnings, " ")
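// The planner emits this warning in Chinese; "仅支持 INSERT 写入" means "only INSERT writes are supported".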
if !strings.Contains(warnings, "仅支持 INSERT 写入") {
t.Fatalf("expected TDengine target warning, got: %v", plan.Warnings)
}
}
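
// TestBuildMySQLLikeToTDenginePlan_AutoCreateWhenTargetMissing asserts that the datetime
// column becomes the leading TIMESTAMP column, json degrades to VARCHAR, and the plan
// warns that the TDengine target is insert-only.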
func TestBuildMySQLLikeToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"shop.metrics": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
{Name: "ts", Type: "datetime", Nullable: "NO"},
{Name: "payload", Type: "json", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildMySQLLikeToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildMySQLLikeToTDenginePlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `taos`.`metrics`") {
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`ts` TIMESTAMP") {
t.Fatalf("expected ts first column mapped to TIMESTAMP, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`payload` VARCHAR(") {
t.Fatalf("expected json degrade to VARCHAR, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(strings.Join(plan.Warnings, " "), "insert-only") && !strings.Contains(strings.Join(plan.Warnings, " "), "INSERT") {
t.Fatalf("expected tdengine target warning, got: %v", plan.Warnings)
}
}
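
// TestBuildPGLikeToTDenginePlan_AutoCreateWhenTargetMissing asserts timestamp -> TIMESTAMP
// and jsonb -> VARCHAR mappings when auto-creating the TDengine table.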
func TestBuildPGLikeToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"public.metrics": {
{Name: "event_time", Type: "timestamp without time zone", Nullable: "NO"},
{Name: "name", Type: "character varying(64)", Nullable: "YES"},
{Name: "meta", Type: "jsonb", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres", Database: "ignored"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildPGLikeToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildPGLikeToTDenginePlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `taos`.`metrics`") {
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`event_time` TIMESTAMP") {
t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`meta` VARCHAR(") {
t.Fatalf("expected jsonb degrade to VARCHAR, got: %s", plan.CreateTableSQL)
}
}
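
// TestBuildMySQLLikeToTDenginePlan_RejectsAutoCreateWithoutTimestampColumn asserts that
// auto-create is refused when the source has no time column, since a TDengine table
// requires a leading TIMESTAMP column.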
func TestBuildMySQLLikeToTDenginePlan_RejectsAutoCreateWithoutTimestampColumn(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"shop.metrics": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
{Name: "name", Type: "varchar(64)", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
TargetTableStrategy: "smart",
}
plan, _, _, err := buildMySQLLikeToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildMySQLLikeToTDenginePlan returned error: %v", err)
}
if plan.AutoCreate {
t.Fatalf("expected auto create disabled when source has no timestamp column")
}
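// "时间列" is Chinese for "time column"; the planned action and warning text are localized.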
if !strings.Contains(plan.PlannedAction, "时间列") {
t.Fatalf("unexpected planned action: %s", plan.PlannedAction)
}
if !strings.Contains(strings.Join(plan.Warnings, " "), "时间列") {
t.Fatalf("expected missing timestamp warning, got: %v", plan.Warnings)
}
}
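
// TestBuildClickHouseToTDenginePlan_AutoCreateWhenTargetMissing asserts DateTime64(3) ->
// TIMESTAMP, FixedString(64) -> VARCHAR(64), and complex Map types degrading to VARCHAR.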
func TestBuildClickHouseToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"analytics.metrics": {
{Name: "event_time", Type: "DateTime64(3)", Nullable: "NO"},
{Name: "host", Type: "FixedString(64)", Nullable: "YES"},
{Name: "payload", Type: "Map(String,String)", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildClickHouseToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildClickHouseToTDenginePlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `taos`.`metrics`") {
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`event_time` TIMESTAMP") {
t.Fatalf("expected datetime64 mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`host` VARCHAR(64)") {
t.Fatalf("expected fixedstring mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`payload` VARCHAR(") {
t.Fatalf("expected complex type degrade to VARCHAR, got: %s", plan.CreateTableSQL)
}
}
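
// TestBuildClickHouseToPGLikePlan_AutoCreateWhenTargetMissing asserts the UInt64 ->
// numeric(20,0) range safeguard, DateTime64 -> timestamp, FixedString -> varchar,
// Map -> jsonb, and that the primary key is preserved in the generated DDL.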
func TestBuildClickHouseToPGLikePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"analytics.metrics": {
{Name: "id", Type: "UInt64", Nullable: "NO", Key: "PRI"},
{Name: "event_time", Type: "DateTime64(3)", Nullable: "NO"},
{Name: "host", Type: "FixedString(64)", Nullable: "YES"},
{Name: "payload", Type: "Map(String,String)", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
TargetConfig: connection.ConnectionConfig{Type: "postgres", Database: "public"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildClickHouseToPGLikePlan(cfg, "metrics", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildClickHouseToPGLikePlan returned error: %v", err)
}
if len(sourceCols) != 4 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, `CREATE TABLE "public"."metrics"`) {
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `"id" numeric(20,0)`) {
t.Fatalf("expected uint64 safeguard mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `"event_time" timestamp`) {
t.Fatalf("expected datetime64 mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `"host" varchar(64)`) {
t.Fatalf("expected fixedstring mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `"payload" jsonb`) {
t.Fatalf("expected complex type degrade to jsonb, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, `PRIMARY KEY ("id")`) {
t.Fatalf("expected primary key preservation, got: %s", plan.CreateTableSQL)
}
}
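
// TestBuildPGLikeToClickHousePlan_AutoCreateWhenTargetMissing asserts timestamp ->
// DateTime, jsonb -> Nullable(String), and an ORDER BY clause derived from the
// primary key.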
func TestBuildPGLikeToClickHousePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"public.orders": {
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
{Name: "created_at", Type: "timestamp without time zone", Nullable: "NO"},
{Name: "profile", Type: "jsonb", Nullable: "YES"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "postgres", Database: "public"},
TargetConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildPGLikeToClickHousePlan(cfg, "orders", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildPGLikeToClickHousePlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `analytics`.`orders`") {
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`created_at` DateTime") {
t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`profile` Nullable(String)") {
t.Fatalf("expected jsonb degrade to Nullable(String), got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "ORDER BY (`id`)") {
t.Fatalf("expected primary key order by, got: %s", plan.CreateTableSQL)
}
}
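
// TestBuildTDengineToTDenginePlan_AutoCreateWhenTargetMissing asserts that TIMESTAMP is
// preserved and that a source TAG column degrades to a regular NCHAR column with a
// warning.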
func TestBuildTDengineToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
t.Parallel()
sourceDB := &fakeMigrationDB{
columns: map[string][]connection.ColumnDefinition{
"src.cpu": {
{Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
{Name: "host", Type: "NCHAR(64)", Nullable: "YES"},
{Name: "region", Type: "NCHAR(32)", Nullable: "YES", Key: "TAG"},
},
},
}
targetDB := &fakeMigrationDB{}
cfg := SyncConfig{
SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "src"},
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "dst"},
TargetTableStrategy: "smart",
}
plan, sourceCols, targetCols, err := buildTDengineToTDenginePlan(cfg, "cpu", sourceDB, targetDB)
if err != nil {
t.Fatalf("buildTDengineToTDenginePlan returned error: %v", err)
}
if len(sourceCols) != 3 || len(targetCols) != 0 {
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
}
if !plan.AutoCreate {
t.Fatalf("expected auto create enabled")
}
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `dst`.`cpu`") {
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`ts` TIMESTAMP") {
t.Fatalf("expected timestamp preserved, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(plan.CreateTableSQL, "`region` NCHAR(32)") {
t.Fatalf("expected tag degrade to regular nchar column, got: %s", plan.CreateTableSQL)
}
if !strings.Contains(strings.Join(plan.Warnings, " "), "TAG") {
t.Fatalf("expected TAG degrade warning, got: %v", plan.Warnings)
}
}
