Compare commits

...

70 Commits

Author SHA1 Message Date
Syngnat
1a5bf79dd3 Merge branch 'fix/editor-sql-error-20260306-ygf' into dev 2026-03-06 14:27:39 +08:00
Syngnat
dea096d4c2 feat(release-notes): auto-generate release notes and distinguish config file naming 2026-03-06 14:26:08 +08:00
github-actions[bot]
04f8b266d3 - feat(connection,metadata,kingbase): enhance multi-datasource connectivity and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)
* feat(http-tunnel): support standalone HTTP tunnel connections across multiple data sources

refs #168

* fix(kingbase-data-grid): fix Kingbase table-open lag and reduce object-rendering overhead

refs #178

* fix(kingbase-transaction): fix Kingbase transaction commit syntax errors caused by duplicated quotes

refs #176

* fix(driver-agent): fix Kingbase driver-agent startup failure after upgrading on older Win10 builds

refs #177

* chore(ci): add a manually triggered macOS test build workflow

* chore(ci): allow the test workflow to trigger automatically on the current branch

* fix(query-editor): fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): add a diff-SQL preview to ease review refs #174

* fix(clickhouse-connect): auto-detect the HTTP/Native protocol and fall back when connecting refs #181

* fix(oracle-metadata): fix view and function loading filtering by the wrong schema refs #155

* fix(dameng-databases): fix the incomplete database list when showing all databases refs #154

* fix(connection,db-list): handle empty list returns uniformly and fix the Dameng connection-test error refs #157

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
2026-03-06 13:57:11 +08:00
Syngnat
0246d7fae5 Merge remote-tracking branch 'origin/main' into dev
# Conflicts:
#	CONTRIBUTING.md
#	CONTRIBUTING.zh-CN.md
2026-03-06 11:17:18 +08:00
Syngnat
4aa177ed37 🔧 chore(branch-sync): add permission prerequisites for the main-to-dev back-sync and a failure alert 2026-03-06 11:05:27 +08:00
Syngnat
4f5a7bd94b feat(branch-sync): add an automatic main-to-dev back-sync workflow and sync the Chinese/English contributing guides 2026-03-06 09:40:49 +08:00
Syngnat
00c6f9871f Release/0.5.2 (#183) 2026-03-05 17:17:03 +08:00
Syngnat
6a4b397ecc Merge branch 'feature/suport-clickhouse-20260227-ygf' into dev 2026-03-05 17:15:16 +08:00
Syngnat
3973038aea Merge branch 'main' into dev
# Conflicts:
#	frontend/src/App.tsx
#	frontend/src/components/ConnectionModal.tsx
#	frontend/src/components/DataGrid.tsx
#	frontend/src/components/DataViewer.tsx
#	frontend/src/components/QueryEditor.tsx
#	internal/app/methods_driver.go
#	internal/app/methods_file_export_test.go
#	internal/db/clickhouse_impl.go
#	internal/db/oracle_impl.go
#	internal/redis/redis_impl.go
2026-03-05 17:11:41 +08:00
辣条
71b41459e7 feat(mongodb,connection-tree,query-editor,sidebar,sqlserver,table-designer,ssl): complete MongoDB v1/v2 driver switching and connection duplication, improve the shortcut/search/filter and table-designer experience, and fix SQLServer, SSL, and connection-stability issues (#180)
* feat(mongodb-driver,connection-tree): support MongoDB v1/v2 switching and add connection duplication

* fix(mongodb-query): fix MongoDB filters not taking effect and support executing shell syntax

refs #153

* fix(query-editor): fix SQLServer autocomplete duplicating the dbo prefix on Enter

refs #159

* fix(sqlserver-table-designer): fix the table designer mistakenly using the schema as the database name when reading columns

refs #156

* feat(shortcuts): add shortcut settings supporting SQL execution and sidebar search

refs #158

* fix(sidebar-search): improve scoped-search matching and interaction

refs #158

* fix(filter,connection-recovery): preserve filter state and fix the hang when a connection goes stale

refs #165

Also fixes the sidebar spinning indefinitely after a connection goes stale and failing to recover after a disconnect

* feat(table-designer): unify the table-designer UI style and improve the add-field interaction

- Unify the visual style of the designer page with the data panel, covering the toolbar, tabs, table, and edit areas

- Replace the default hard borders with transparent backgrounds and thin separators for a more consistent look

- Auto-scroll to and highlight a newly added field, and auto-focus its input

- Add "insert after selected field", supporting insertion at the selected field's position

* feat(data-grid-filter): filter fields support quick search

- Enable a searchable (showSearch) field dropdown in filter conditions

- Support case-insensitive fuzzy matching on field names

- Quickly locate target fields in tables with many columns, cutting dropdown search time

refs #171

* fix(db-ssl): support SSL/TLS connections across data sources and complete the Dameng certificate configuration

refs #167

* fix(sidebar): fix table loading failing due to null.map while databases load

* fix(query-editor): merge the run buttons while keeping the SQL stop-execution entry
2026-03-05 16:52:06 +08:00
ljyf5593
69942bb77e * feat: add the ability to cancel a running SQL execution (#172)
Co-authored-by: liujie <469282686@qq.com>
2026-03-05 15:28:34 +08:00
ljyf5593
f372b20a68 fix: fix connection export generating an empty JSON array (#169)
Co-authored-by: liujie <469282686@qq.com>
2026-03-05 12:01:58 +08:00
Toskysun
e6da986927 feat: add HTML export (#164)
- Backend: add an html branch in writeRowsToFile
- Backend: implement writeRowsToHTML, generating a standalone HTML file with embedded CSS
- Backend: implement formatExportHTMLCell for HTML escaping and newline handling
- Backend: add tests verifying XSS escaping, style presence, newline handling, and empty-value display
- Frontend: add an HTML option to all DataGrid export menus (context menu, toolbar, cell menu)
- Frontend: add an HTML option to the Sidebar table-node context menu
- Styling: responsive table design with zebra striping, hover effects, and a sticky header
- Security: all user data is HTML-escaped to prevent XSS
2026-03-04 17:46:18 +08:00
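The escaping step this commit describes (escape user data first, then convert newlines so multi-line cells survive in the HTML table) can be sketched with the Go standard library. The function name comes from the commit, but its body, the `<br>` conversion, and the `&nbsp;` empty-value placeholder are assumptions for illustration:

```go
package main

import (
	"fmt"
	"html"
	"strings"
)

// formatExportHTMLCell is a hypothetical sketch of the escaping the commit
// describes: HTML-escape the value first (preventing XSS), then turn
// newlines into <br> so multi-line values render inside the table cell.
func formatExportHTMLCell(v string) string {
	if v == "" {
		return "&nbsp;" // assumed empty-value display
	}
	escaped := html.EscapeString(v)
	return strings.ReplaceAll(escaped, "\n", "<br>")
}

func main() {
	fmt.Println(formatExportHTMLCell("<script>alert(1)</script>"))
	fmt.Println(formatExportHTMLCell("line1\nline2"))
}
```

The order matters: escaping before the newline substitution guarantees the inserted `<br>` tags are the only raw HTML in the output.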
Toskysun
4570516678 feat: one-click export of filtered table results (#161)
* 🔧 chore(gitignore): ignore AI context docs to avoid polluting version control

Add CLAUDE.md and its subdirectory variants to .gitignore, preventing temporary context files generated during AI-assisted development from being committed accidentally.

- Ignore CLAUDE.md in the repository root
- Ignore CLAUDE.md files in all subdirectories

* feat: one-click export of filtered table results

- Add export of filtered results in table-browse mode
- DataViewer generates complete SQL including the filter conditions
- DataGrid dynamically shows a grouped export menu (filtered results + full table)
- Support the CSV, Excel, JSON, and Markdown formats
- Add a warning about uncommitted edits
- Reuse the existing ExportQuery backend method; no backend changes needed

Implementation details:
- Build SQL with buildWhereSQL and buildOrderBySQL
- Support the sort buffer optimization for MySQL/MariaDB
- The grouped menu design avoids accidental operations
- Exported file names carry a _filtered suffix

Closes #issue
2026-03-04 13:54:51 +08:00
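The commit names buildWhereSQL as the helper that turns active filters into the exported SQL. A minimal sketch of that idea follows; the Filter struct, quoting style, and signature are assumptions (the real helper lives in the frontend), shown here only to illustrate how filter conditions compose into a WHERE clause:

```go
package main

import (
	"fmt"
	"strings"
)

// Filter is a hypothetical shape for one active filter condition.
type Filter struct {
	Column, Op, Value string
}

// buildWhereSQL sketches composing a WHERE clause from active filters,
// AND-joining one fragment per condition. Returns "" when no filter is set.
func buildWhereSQL(filters []Filter) string {
	if len(filters) == 0 {
		return ""
	}
	parts := make([]string, 0, len(filters))
	for _, f := range filters {
		parts = append(parts, fmt.Sprintf("`%s` %s '%s'", f.Column, f.Op, f.Value))
	}
	return " WHERE " + strings.Join(parts, " AND ")
}

func main() {
	sql := "SELECT * FROM `users`" + buildWhereSQL([]Filter{
		{"age", ">", "30"},
		{"city", "=", "Beijing"},
	})
	fmt.Println(sql)
}
```

An empty filter set yields the full-table query, which matches the grouped "filtered results + full table" export menu the commit describes.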
凌封
8c91d8929b Feature/add aibook (#160)
* feat: add a Tech Circle link

* feat: support mouse-wheel scrolling on the horizontal scrollbar with large datasets (refs #146)

* feat: add connection tag grouping, supporting tag create/edit/delete, drag-to-group, and right-click move-to-tag refs #148
2026-03-04 11:50:34 +08:00
Syngnat
786835c9bc 📝 docs(contributing): add Chinese and English contributing guides and unify the README entry
- Add the English CONTRIBUTING.md as the official contribution document
- Add the Chinese CONTRIBUTING.zh-CN.md as the Chinese contribution guide
- Point the contribution entries in README and README.zh-CN to the matching-language documents
2026-03-03 15:49:58 +08:00
Syngnat
f2fc7cbd05 Release/0.5.1 (#152)
* 🐛 fix(data-viewer): fix ClickHouse tail-pagination failures and improve DuckDB complex-type compatibility

- DataViewer adds a reverse-pagination strategy for ClickHouse, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT by column type and retry with complex types cast to VARCHAR
- Pagination state is backfilled uniformly through currentPage, avoiding inconsistent page/total derivation
- Improve query error logging and retry paths, reducing lag and false errors on large tables

* feat(frontend-driver): driver manager supports quick search with improved info display

- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" count and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes

* 🔧 fix(connection-modal): fix multi-datasource URI import parsing and correct Oracle service-name validation

- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection modal

* 🐛 fix(query-export): fix query-result export hanging and gate export paths by datasource capability

- Add a reliable fallback for query-result export, closing the loading state on error instead of spinning forever
- DataGrid export branches by datasource capability, preferring the backend ExportQuery and keeping result-set export as a fallback
- QueryEditor passes the result-export SQL, keeping the export scope consistent with the current result
- Backend adds key ExportData/ExportQuery logs, improving export-path observability

* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals

- Decode proxy response data with UseNumber, avoiding the default float64 precision loss
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix DataViewer total parsing so huge values are no longer coerced to Number for pagination
- refs #142

* 🐛 fix(driver-manager): remove duplicate driver-manager network warnings and strengthen proxy guidance

- Add download-path domain probing to distinguish "GitHub reachable but the driver download path unreachable"
- When the network is unreachable, keep only the red strong alert and remove the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network-check and directory hints, fixing visual inconsistency while loading
- refs #141

* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions

- Refactor tab drag-sorting into a configurable drag engine
- Standardize drag vs. click event boundaries for consistent interaction
- Unify the dark/transparent styling strategy across components, reducing hard-coded colors
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144

* ♻️ refactor(update-state): refactor the online-update state flow and unify progress display per version

- Refactor update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing cross-version state mixing
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close and hide-to-background behavior
- Keep the existing install flow and add an open-directory capability

* 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add a floating button with shadow/border/transparent background adapted to light and dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): support logical multi-DB isolation and db0–db15 switching in cluster mode

- Frontend restores db0–db15 database selection and display for Redis clusters
- Backend adds a cluster logical-DB namespace prefix map, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- The cluster command channel supports SELECT for logical DB switching and FLUSHDB for clearing a logical DB
- refs #145

* feat(DataGrid): virtual-scrolling performance for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (switches automatically at ≥500 rows), fixing lag on tables with tens of thousands of rows
- In virtual mode EditableCell renders as a div, and CSS selectors move from element level to class level to fit the virtual DOM
- Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
- Fix the low-contrast column-name hover Tooltip in light transparent mode
- Add light-theme global scrollbar styles for transparent mode (App.css)
- App.tsx theme tokens and component style tweaks
- refs #147

* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged

* 🔧 fix(ci): fix the DuckDB driver build chain on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep Windows ARM64 skipping the DuckDB build, matching platform support

* 🔧 fix(ci): fix the DuckDB driver build toolchain on Windows AMD64

- Switch the DuckDB build chain from MINGW64 to MSYS2 UCRT64
- Correct the gcc and g++ probe paths for Windows AMD64
- Add a DuckDB compiler version verification step

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
2026-03-03 15:25:25 +08:00
Syngnat
462ca57907 🔧 fix(ci): fix the DuckDB driver build toolchain on Windows AMD64
- Switch the DuckDB build chain from MINGW64 to MSYS2 UCRT64
- Correct the gcc and g++ probe paths for Windows AMD64
- Add a DuckDB compiler version verification step
2026-03-03 15:22:02 +08:00
Syngnat
4bfdb2cb6c Release/0.5.1 (#150)
* 🐛 fix(data-viewer): fix ClickHouse tail-pagination failures and improve DuckDB complex-type compatibility

- DataViewer adds a reverse-pagination strategy for ClickHouse, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT by column type and retry with complex types cast to VARCHAR
- Pagination state is backfilled uniformly through currentPage, avoiding inconsistent page/total derivation
- Improve query error logging and retry paths, reducing lag and false errors on large tables

* feat(frontend-driver): driver manager supports quick search with improved info display

- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" count and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes

* 🔧 fix(connection-modal): fix multi-datasource URI import parsing and correct Oracle service-name validation

- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection modal

* 🐛 fix(query-export): fix query-result export hanging and gate export paths by datasource capability

- Add a reliable fallback for query-result export, closing the loading state on error instead of spinning forever
- DataGrid export branches by datasource capability, preferring the backend ExportQuery and keeping result-set export as a fallback
- QueryEditor passes the result-export SQL, keeping the export scope consistent with the current result
- Backend adds key ExportData/ExportQuery logs, improving export-path observability

* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals

- Decode proxy response data with UseNumber, avoiding the default float64 precision loss
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix DataViewer total parsing so huge values are no longer coerced to Number for pagination
- refs #142

* 🐛 fix(driver-manager): remove duplicate driver-manager network warnings and strengthen proxy guidance

- Add download-path domain probing to distinguish "GitHub reachable but the driver download path unreachable"
- When the network is unreachable, keep only the red strong alert and remove the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network-check and directory hints, fixing visual inconsistency while loading
- refs #141

* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions

- Refactor tab drag-sorting into a configurable drag engine
- Standardize drag vs. click event boundaries for consistent interaction
- Unify the dark/transparent styling strategy across components, reducing hard-coded colors
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144

* ♻️ refactor(update-state): refactor the online-update state flow and unify progress display per version

- Refactor update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing cross-version state mixing
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close and hide-to-background behavior
- Keep the existing install flow and add an open-directory capability

* 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add a floating button with shadow/border/transparent background adapted to light and dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): support logical multi-DB isolation and db0–db15 switching in cluster mode

- Frontend restores db0–db15 database selection and display for Redis clusters
- Backend adds a cluster logical-DB namespace prefix map, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- The cluster command channel supports SELECT for logical DB switching and FLUSHDB for clearing a logical DB
- refs #145

* feat(DataGrid): virtual-scrolling performance for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (switches automatically at ≥500 rows), fixing lag on tables with tens of thousands of rows
- In virtual mode EditableCell renders as a div, and CSS selectors move from element level to class level to fit the virtual DOM
- Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
- Fix the low-contrast column-name hover Tooltip in light transparent mode
- Add light-theme global scrollbar styles for transparent mode (App.css)
- App.tsx theme tokens and component style tweaks
- refs #147

* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged

* 🔧 fix(ci): fix the DuckDB driver build chain on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep Windows ARM64 skipping the DuckDB build, matching platform support
2026-03-03 15:06:16 +08:00
Syngnat
6918b56ed9 Merge remote-tracking branch 'origin/feature/suport-clickhouse-20260227-ygf' into feature/suport-clickhouse-20260227-ygf 2026-03-03 14:58:48 +08:00
Syngnat
1afb8850ad 🔧 fix(ci): fix the DuckDB driver build chain on Windows AMD64
- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep Windows ARM64 skipping the DuckDB build, matching platform support
2026-03-03 14:58:37 +08:00
Syngnat
3284eeba17 🔧 fix(ci): fix the DuckDB driver build chain on Windows AMD64
- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence and version checks for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep Windows ARM64 skipping the DuckDB build, matching platform support
2026-03-03 14:58:19 +08:00
Syngnat
494484eb92 Release/0.5.1 (#149)
* 🐛 fix(data-viewer): fix ClickHouse tail-pagination failures and improve DuckDB complex-type compatibility

- DataViewer adds a reverse-pagination strategy for ClickHouse, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT by column type and retry with complex types cast to VARCHAR
- Pagination state is backfilled uniformly through currentPage, avoiding inconsistent page/total derivation
- Improve query error logging and retry paths, reducing lag and false errors on large tables

* feat(frontend-driver): driver manager supports quick search with improved info display

- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" count and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes

* 🔧 fix(connection-modal): fix multi-datasource URI import parsing and correct Oracle service-name validation

- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection modal

* 🐛 fix(query-export): fix query-result export hanging and gate export paths by datasource capability

- Add a reliable fallback for query-result export, closing the loading state on error instead of spinning forever
- DataGrid export branches by datasource capability, preferring the backend ExportQuery and keeping result-set export as a fallback
- QueryEditor passes the result-export SQL, keeping the export scope consistent with the current result
- Backend adds key ExportData/ExportQuery logs, improving export-path observability

* 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals

- Decode proxy response data with UseNumber, avoiding the default float64 precision loss
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix DataViewer total parsing so huge values are no longer coerced to Number for pagination
- refs #142

* 🐛 fix(driver-manager): remove duplicate driver-manager network warnings and strengthen proxy guidance

- Add download-path domain probing to distinguish "GitHub reachable but the driver download path unreachable"
- When the network is unreachable, keep only the red strong alert and remove the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network-check and directory hints, fixing visual inconsistency while loading
- refs #141

* ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions

- Refactor tab drag-sorting into a configurable drag engine
- Standardize drag vs. click event boundaries for consistent interaction
- Unify the dark/transparent styling strategy across components, reducing hard-coded colors
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144

* ♻️ refactor(update-state): refactor the online-update state flow and unify progress display per version

- Refactor update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing cross-version state mixing
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close and hide-to-background behavior
- Keep the existing install flow and add an open-directory capability

* 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add a floating button with shadow/border/transparent background adapted to light and dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): support logical multi-DB isolation and db0–db15 switching in cluster mode

- Frontend restores db0–db15 database selection and display for Redis clusters
- Backend adds a cluster logical-DB namespace prefix map, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- The cluster command channel supports SELECT for logical DB switching and FLUSHDB for clearing a logical DB
- refs #145

* feat(DataGrid): virtual-scrolling performance for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (switches automatically at ≥500 rows), fixing lag on tables with tens of thousands of rows
- In virtual mode EditableCell renders as a div, and CSS selectors move from element level to class level to fit the virtual DOM
- Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
- Fix the low-contrast column-name hover Tooltip in light transparent mode
- Add light-theme global scrollbar styles for transparent mode (App.css)
- App.tsx theme tokens and component style tweaks
- refs #147

* 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
2026-03-03 14:35:17 +08:00
Syngnat
6156884455 Merge branch 'feature/suport-clickhouse-20260227-ygf' into dev1 2026-03-03 14:23:04 +08:00
Syngnat
a54b8906a3 Revert "feat: add Tech Circle content to About"
This reverts commit 9a684cd82c.
2026-03-03 14:18:53 +08:00
Syngnat
f477feab2f 🔧 chore(app): clean up App.tsx type warnings and consolidate the frontend shell
- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Handle floating Promises explicitly to avoid warnings
- Keep existing update-check and proxy-settings behavior unchanged
2026-03-03 14:11:35 +08:00
Syngnat
e76e174bfe feat(DataGrid): virtual-scrolling performance for large tables and UI consistency fixes
- Enable dynamic virtual scrolling (switches automatically at ≥500 rows), fixing lag on tables with tens of thousands of rows
- In virtual mode EditableCell renders as a div, and CSS selectors move from element level to class level to fit the virtual DOM
- Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
- Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
- Fix the low-contrast column-name hover Tooltip in light transparent mode
- Add light-theme global scrollbar styles for transparent mode (App.css)
- App.tsx theme tokens and component style tweaks
- refs #147
2026-03-03 13:49:31 +08:00
Syngnat
b904c0b107 feat(redis-cluster): support logical multi-DB isolation and db0–db15 switching in cluster mode
- Frontend restores db0–db15 database selection and display for Redis clusters
- Backend adds a cluster logical-DB namespace prefix map, unifying key/pattern read/write isolation
- Cover key-mapping rules for scan, read, write, delete, rename, and other core operations
- The cluster command channel supports SELECT for logical DB switching and FLUSHDB for clearing a logical DB
- refs #145
2026-03-03 09:42:49 +08:00
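Cluster-mode Redis has a single keyspace, so the logical db0–db15 this commit describes can be emulated by prefixing keys with a per-DB namespace. A minimal sketch follows; the `{dbN}:` prefix format and the raw-keyspace treatment of db0 are assumptions, since the commit only states that a prefix map isolates key/pattern reads and writes:

```go
package main

import (
	"fmt"
	"strings"
)

// mapKey maps a user-visible key into the logical-DB namespace. The hash-tag
// braces in the assumed prefix would also keep each logical DB's keys on a
// predictable cluster slot for the prefix itself.
func mapKey(db int, key string) string {
	if db == 0 {
		return key // assumption: db0 maps to the raw keyspace
	}
	return fmt.Sprintf("{db%d}:%s", db, key)
}

// unmapKey strips the logical-DB prefix before returning keys to the client,
// so SCAN results look like a plain per-DB keyspace.
func unmapKey(db int, stored string) string {
	if db == 0 {
		return stored
	}
	return strings.TrimPrefix(stored, fmt.Sprintf("{db%d}:", db))
}

func main() {
	k := mapKey(3, "user:42")
	fmt.Println(k, unmapKey(3, k))
}
```

With such a mapping, SELECT becomes a change of the active prefix and FLUSHDB becomes a scan-and-delete over one prefix, matching the commit's description of logical switching and clearing.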
Syngnat
c02e7c12e8 🎨 style(sidebar-log): restyle the SQL execution log entry as a floating pill
- Remove the full-width log entry container at the bottom of the sidebar
- Add a floating button with shadow/border/transparent background adapted to light and dark themes
- Reserve bottom space in the tree area so the entry does not cover content
2026-03-02 17:45:09 +08:00
Syngnat
a87c801e66 ♻️ refactor(update-state): refactor the online-update state flow and unify progress display per version
- Refactor update-check and download state synchronization, reducing frontend/backend state divergence
- Bind progress display strictly to latestVersion, preventing cross-version state mixing
- Improve silent-check state backfill when the About dialog opens
- Unify download-dialog close and hide-to-background behavior
- Keep the existing install flow and add an open-directory capability
2026-03-02 17:26:40 +08:00
Syngnat
7f00139847 ♻️ refactor(frontend-interaction): unify tab drag-and-drop and dark-theme interactions
- Refactor tab drag-sorting into a configurable drag engine
- Standardize drag vs. click event boundaries for consistent interaction
- Unify the dark/transparent styling strategy across components, reducing hard-coded colors
- Improve visual consistency of the Redis/table/connection panels in transparent mode
- refs #144
2026-03-02 16:34:09 +08:00
Syngnat
78c5351399 🐛 fix(driver-manager): remove duplicate driver-manager network warnings and strengthen proxy guidance
- Add download-path domain probing to distinguish "GitHub reachable but the driver download path unreachable"
- When the network is unreachable, keep only the red strong alert and remove the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users to the GoNavi global proxy first
- Unify icon sizes for the network-check and directory hints, fixing visual inconsistency while loading
- refs #141
2026-03-02 15:58:58 +08:00
Syngnat
e2acfa51eb Merge pull request #143 from fengin/feature/addAibook
feat: add Tech Circle content to About
2026-03-02 14:46:03 +08:00
fengin
9a684cd82c feat: add Tech Circle content to About 2026-03-02 14:42:42 +08:00
Syngnat
e3b142053f 🐛 fix(precision): fix big-integer precision loss in the query path and pagination totals
- Decode proxy response data with UseNumber, avoiding the default float64 precision loss
- Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe range to strings
- Fix DataViewer total parsing so huge values are no longer coerced to Number for pagination
- refs #142
2026-03-02 14:40:59 +08:00
Syngnat
3ca898a950 🐛 fix(query-export): fix query-result export hanging and gate export paths by datasource capability
- Add a reliable fallback for query-result export, closing the loading state on error instead of spinning forever
- DataGrid export branches by datasource capability, preferring the backend ExportQuery and keeping result-set export as a fallback
- QueryEditor passes the result-export SQL, keeping the export scope consistent with the current result
- Backend adds key ExportData/ExportQuery logs, improving export-path observability
2026-03-02 14:18:44 +08:00
Syngnat
84688e995a 🔧 fix(connection-modal): fix multi-datasource URI import parsing and correct Oracle service-name validation
- Add a single-host URI parsing map covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
- Require a service name for Oracle connections, removing the implicit "fall back to username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection modal
2026-03-02 11:46:59 +08:00
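The single-host URI parsing the commit extracts into parseSingleHostUri can be sketched with Go's net/url, which already splits scheme, userinfo, host, port, and path. The ConnInfo field names below are assumptions (the real helper lives in the frontend connection modal); the sketch only illustrates the host/port/user/password/database backfill the commit describes:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// ConnInfo is a hypothetical shape for the fields backfilled into the
// connection form from a pasted URI.
type ConnInfo struct {
	Scheme, Host, Port, User, Password, Database string
}

// parseSingleHostURI parses a single-host URI such as
// postgres://user:pw@host:5432/mydb into its connection fields.
func parseSingleHostURI(raw string) (ConnInfo, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return ConnInfo{}, err
	}
	pw, _ := u.User.Password()
	return ConnInfo{
		Scheme:   u.Scheme,
		Host:     u.Hostname(),
		Port:     u.Port(),
		User:     u.User.Username(),
		Password: pw,
		Database: strings.TrimPrefix(u.Path, "/"), // path holds the database name
	}, nil
}

func main() {
	info, _ := parseSingleHostURI("postgres://alice:s3cret@db.local:5432/sales")
	fmt.Printf("%+v\n", info)
}
```

A per-scheme map (postgres, sqlserver, redis, dm, and so on) can then normalize scheme aliases and supply default ports on top of this shared parser, matching the mapping the commit adds.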
Syngnat
4d0940636d feat(frontend-driver): driver manager supports quick search with improved info display
- Add a search box to quickly locate drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" count and a no-results hint
- Improve header layout and visual alignment in transparent/dark modes
2026-03-02 11:10:48 +08:00
Syngnat
26b79adc5f 🐛 fix(data-viewer): fix ClickHouse tail-pagination failures and improve DuckDB complex-type compatibility
- DataViewer adds a reverse-pagination strategy for ClickHouse, fixing query failures on the last and next-to-last pages
- On DuckDB query failure, generate a safe SELECT by column type and retry with complex types cast to VARCHAR
- Pagination state is backfilled uniformly through currentPage, avoiding inconsistent page/total derivation
- Improve query error logging and retry paths, reducing lag and false errors on large tables
2026-03-02 10:49:23 +08:00
Syngnat
90aa3561be Merge pull request #140 from Syngnat/release/0.5.0
Release/0.5.0
2026-02-28 15:57:40 +08:00
Syngnat
ec59023736 📝 docs(i18n-readme): establish a bilingual README structure and unify capability descriptions
- The English primary README covers project positioning, performance characteristics, and install instructions
- The Chinese README covers the same information density, avoiding content gaps
- Both documents share the supported-datasource table and driver-mode descriptions
- Add language-switch links so readers can switch quickly
2026-02-28 15:54:46 +08:00
Syngnat
4a96cb93d2 🎨 style(connection-modal): refine new-connection modal sizing and category-bar visual consistency
- Adjust the step1/step2 modal size parameters for a better layout
- Add a minimum height to the step1 content area to reduce crowding
- Make the category-bar divider theme-aware, removing the jarring white line in dark mode
2026-02-28 15:36:26 +08:00
Syngnat
4c322db9d0 💥 breaking(driver-manager): unify Doris driver naming and drop legacy diros package compatibility
- Unify Doris display and connection naming across frontend and backend, fixing the diros typo
- Rename GitHub Actions driver artifacts to doris-driver-agent-*
- Keep the internal gonavi_diros_driver mapping in the build to avoid breaking the build chain
- Driver install/download/extract paths recognize only doris asset names and no longer accept legacy diros packages
- Unify built-in and documented manifest download addresses to builtin://activate/doris
- close #132
2026-02-28 15:27:58 +08:00
Syngnat
ed18c8285f 🐛 fix(db-compat): fix PostgreSQL-family CREATE TABLE compatibility and optimize DuckDB large-table counts
- Unify the DDL fallback logic of DBShowCreateTable and the export path, fixing Kingbase/Postgres placeholder statements
- Improve the custom-driver mapping to postgres/kingbase/highgo/vastbase and add regression tests
- Disable DuckDB's automatic background COUNT(*), avoiding paging and query lag on large files
- Add approximate-total display, manual exact counting, and a cancel-count interaction
- Add the DBQueryIsolated isolated-connection query capability and sync the frontend wailsjs bindings
- refs #136
2026-02-28 15:00:13 +08:00
Syngnat
5f8cedabd8 🐛 fix(update-proxy): fix update checks failing with TLS unknown-authority errors behind a local proxy
- Add a loopback-proxy compatibility fallback to the global-proxy HTTP transport
- Restrict the fallback to unknown authority errors on GET/HEAD requests only
- Keep the default TLS validation policy and log warnings for audit and diagnosis
- refs #139
2026-02-28 13:55:42 +08:00
Syngnat
20923989b9 🐛 fix(connection-modal): fix white SSH/proxy section backgrounds in transparent dark mode
- Wire theme and transparency state into the connection modal, computing section backgrounds per mode
- Replace the SSH and proxy containers with adaptive styles and consistent border layering
- Keep connection-test and save logic unchanged; this is a display-layer fix only
2026-02-28 13:37:19 +08:00
Syngnat
210106cde7 🐛 fix(driver-modal): fix abnormal contrast of the driver log dialog in the transparent dark theme
- Make the log content container adaptive to dark/light modes
- Use the global appearance transparency parameter when rendering the log background
- Keep driver install and log collection logic unchanged; this is a display-layer fix only
2026-02-28 13:27:49 +08:00
Syngnat
87aac277ec 🎨 style(redis-viewer): align the Redis drag divider with the sidebar width-resize styling
- Match the divider width to the host sidebar
- Unify the divider background as transparent, removing the high-contrast hover effect
- Keep the drag hit area and hint text, improving overall style consistency
2026-02-28 12:53:06 +08:00
Syngnat
4de3f408c5 🐛 fix(redis-scan): fix incomplete namespace loading with large keyspaces
- Pass Redis SCAN cursors as strings between frontend and backend, avoiding Number precision loss
- RedisScanKeys accepts both string and number cursors, degrading with a warning on invalid cursors
- Add unit tests for cursor parsing
- refs #135
2026-02-28 12:32:22 +08:00
Syngnat
439625a49c 🔧 fix(duckdb-pagination): fix pagination broken by abnormal DuckDB totals
- Correct DataViewer pagination state when hasMore conflicts with totalKnown
- Improve DuckDB COUNT(*) result parsing, tolerating field-name and numeric-type differences
- Scope the pagination fallback to DuckDB only, avoiding impact on other databases
- Fix the pagination text shown when total=0
- refs #136
2026-02-28 12:14:34 +08:00
Syngnat
884d72f3d3 ♻️ refactor(clickhouse): use structured Options instead of DSN connection construction
- Consolidate connection-parameter generation in buildClickHouseOptions
- Switch the connection entry point to clickhouse.OpenDB(Options)
- Remove the write_timeout/read_timeout/dial_timeout DSN pass-through
- Rewrite the related ClickHouse test assertions accordingly
- refs #138
2026-02-28 11:56:59 +08:00
Syngnat
98c1600e13 feat(driver-manager): enhance local driver import and unify the scrolling experience
- Add a bulk driver-directory import entry with an overwrite-installed toggle and deduplication
- Inline local import focuses on the single-file case; directory and single-file import flows are unified
- Lock version selection for installed drivers to prevent accidental changes after install
- Add driver-download network checks and log visibility, speeding up diagnosis
- Refactor the driver manager's horizontal scrollbar, fixing double/disappearing/misplaced scrollbar issues
2026-02-28 11:33:21 +08:00
Syngnat
eb594b7741 Merge pull request #134 from Syngnat/release/0.4.9
release/0.4.9
2026-02-27 17:39:17 +08:00
Syngnat
587ed3444b ️ perf(ci-assets): 完整化驱动打包资产覆盖范围
- 将 clickhouse 纳入可选驱动构建数组
- 提升发布资产完整性与可用性
- 减少驱动安装阶段因资产缺失导致的失败
2026-02-27 17:37:40 +08:00
Syngnat
e366a61910 Merge pull request #133 from Syngnat/release/0.4.9
Release/0.4.9
2026-02-27 17:24:09 +08:00
Syngnat
5986b71c4d ⚡️ perf(redis-datagrid): optimize search and context-menu responsiveness with large datasets
- RedisViewer introduces lightweight tree nodes, virtual scrolling, and a large-keyspace performance mode, reducing key-list lag
- Redis search loads by pattern in stages with out-of-order request protection, preventing search-result write-back jitter
- The Redis backend ScanKeys adds a time budget and round limit for search mode, preferring results that can continue paging
- DataGrid stabilizes Context/rowSelection/onRow references and adds shouldCellUpdate, reducing full-table re-renders on right-click
2026-02-27 17:22:38 +08:00
Syngnat
cb18bc3067 feat(driver-proxy): add a ClickHouse datasource and a standalone global proxy entry
- Add the optional ClickHouse driver implementation and optional-driver-agent provider, completing driver registration and manifest configuration
- Complete ClickHouse connection and SQL adaptation: default connection port/user, LIMIT, identifier quoting, and read-only edit restrictions
- Add backend global-proxy capability with persisted frontend configuration; update checks and driver network requests go through the proxy client
2026-02-27 16:39:13 +08:00
Syngnat
d676ac9084 Merge remote-tracking branch 'origin/main' 2026-02-27 14:25:02 +08:00
Syngnat
7fcbcb2471 Merge branch 'release/0.4.8' 2026-02-27 14:24:44 +08:00
Syngnat
c680e50e74 Merge pull request #131 from Syngnat/release/0.4.8
Release/0.4.8
2026-02-27 14:24:16 +08:00
Syngnat
4cb5071b0b Merge pull request #124 from Syngnat/release/0.4.7
Release/0.4.7
2026-02-27 09:51:49 +08:00
Syngnat
7ae5341c1c Merge pull request #121 from Syngnat/release/0.4.7
Release/0.4.7
2026-02-26 14:28:10 +08:00
Syngnat
01940e74b7 🐛 fix(release.yml): fix build failure caused by the unbound empty tag array
- Split the Build step into tagged and untagged branches
- Avoid the unbound variable error on TAG_ARGS[@] under set -u
- Keep the webkit2_41 tagged build path unchanged
2026-02-14 15:51:07 +08:00
Syngnat
30210bc40e Merge pull request #111 from Syngnat/release/0.4.6
Release/0.4.6
2026-02-14 15:47:38 +08:00
Syngnat
e90a3e2db6 Merge pull request #110 from Syngnat/release/0.4.5
Release/0.4.5
2026-02-14 11:47:59 +08:00
Syngnat
5df95730d8 Merge pull request #109 from Syngnat/release/0.4.4
feat(drivers): support starting data sources on demand and shrink the release package via external driver agents
2026-02-13 17:26:13 +08:00
Syngnat
67a9c454d0 Merge remote-tracking branch 'origin/main' 2026-02-12 10:39:46 +08:00
Syngnat
c17493952b Merge branch 'release/0.4.3' 2026-02-12 10:39:30 +08:00
Syngnat
dd258bd46c Merge pull request #102 from Syngnat/release/0.4.3
release/0.4.3
2026-02-12 10:38:57 +08:00
Syngnat
505c89066b Merge pull request #101 from Syngnat/release/0.4.3
Release/0.4.3
2026-02-12 09:28:33 +08:00
89 changed files with 16021 additions and 1668 deletions

.github/release.yaml (new file, +26)

@@ -0,0 +1,26 @@
changelog:
categories:
- title: 新功能
labels:
- feature
- enhancement
- feat
- title: 问题修复
labels:
- bug
- fix
- title: 文档与流程
labels:
- docs
- documentation
- ci
- workflow
- chore
- title: 重构与优化
labels:
- refactor
- perf
- optimization
- title: 其他更新
labels:
- '*'


@@ -131,15 +131,91 @@ jobs:
- name: Install Wails
run: go install -v github.com/wailsapp/wails/v2/cmd/wails@latest
- name: Setup MSYS2 Toolchain For DuckDB (Windows AMD64)
id: msys2_duckdb
if: ${{ matrix.build_optional_agents && matrix.platform == 'windows/amd64' }}
continue-on-error: true
uses: msys2/setup-msys2@v2
with:
msystem: UCRT64
update: true
install: >-
mingw-w64-ucrt-x86_64-gcc
- name: Configure DuckDB CGO Toolchain (Windows AMD64)
if: ${{ matrix.build_optional_agents && matrix.platform == 'windows/amd64' }}
shell: pwsh
run: |
function Find-MingwBin([string[]]$candidates) {
foreach ($bin in $candidates) {
if ([string]::IsNullOrWhiteSpace($bin)) {
continue
}
$gcc = Join-Path $bin 'gcc.exe'
$gxx = Join-Path $bin 'g++.exe'
if ((Test-Path $gcc) -and (Test-Path $gxx)) {
return $bin
}
}
return $null
}
$msys2Outcome = "${{ steps.msys2_duckdb.outcome }}"
$msys2Location = "${{ steps.msys2_duckdb.outputs['msys2-location'] }}"
$candidateBins = @()
if (-not [string]::IsNullOrWhiteSpace($msys2Location)) {
$candidateBins += Join-Path $msys2Location 'ucrt64\bin'
}
$candidateBins += @(
'C:\msys64\ucrt64\bin',
'D:\a\_temp\msys64\ucrt64\bin'
)
$candidateBins = @($candidateBins | Select-Object -Unique)
$mingwBin = Find-MingwBin $candidateBins
if (-not $mingwBin) {
if ($msys2Outcome -ne 'success') {
Write-Warning "⚠️ MSYS2 安装步骤结果为 $msys2Outcome，回退到 UCRT64 本机路径探测"
} else {
Write-Warning "⚠️ MSYS2 已执行,但未找到 UCRT64 gcc/g++,回退到本机路径探测"
}
$mingwBin = Find-MingwBin $candidateBins
}
if (-not $mingwBin) {
Write-Error "❌ 未找到可用的 DuckDB UCRT64 编译器。已检查:$($candidateBins -join ', ')"
exit 1
}
$gcc = (Join-Path $mingwBin 'gcc.exe')
$gxx = (Join-Path $mingwBin 'g++.exe')
if (!(Test-Path $gcc) -or !(Test-Path $gxx)) {
Write-Error "❌ DuckDB 编译器缺失：gcc=$gcc g++=$gxx"
exit 1
}
"$mingwBin" | Out-File -FilePath $env:GITHUB_PATH -Append -Encoding utf8
"CC=$gcc" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"CXX=$gxx" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
Write-Host "✅ 已配置 DuckDB cgo 编译器: gcc=$gcc g++=$gxx"
- name: Verify DuckDB CGO Toolchain (Windows AMD64)
if: ${{ matrix.build_optional_agents && matrix.platform == 'windows/amd64' }}
shell: pwsh
run: |
& "$env:CC" --version
& "$env:CXX" --version
- name: Build
shell: bash
run: |
set -euo pipefail
TAG_ARGS=()
if [ -n "${{ matrix.wails_tags }}" ]; then
TAG_ARGS+=(-tags "${{ matrix.wails_tags }}")
wails build -platform ${{ matrix.platform }} -clean -o ${{ matrix.build_name }} -tags "${{ matrix.wails_tags }}" -ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${{ github.ref_name }}"
else
wails build -platform ${{ matrix.platform }} -clean -o ${{ matrix.build_name }} -ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${{ github.ref_name }}"
fi
wails build -platform ${{ matrix.platform }} -clean -o ${{ matrix.build_name }} "${TAG_ARGS[@]}" -ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${{ github.ref_name }}"
- name: Build Optional Driver Agents
if: ${{ matrix.build_optional_agents }}
@@ -149,12 +225,20 @@ jobs:
TARGET_PLATFORM="${{ matrix.platform }}"
GOOS="${TARGET_PLATFORM%%/*}"
GOARCH="${TARGET_PLATFORM##*/}"
DRIVERS=(mariadb diros sphinx sqlserver sqlite duckdb dameng kingbase highgo vastbase mongodb tdengine)
DRIVERS=(mariadb doris sphinx sqlserver sqlite duckdb dameng kingbase highgo vastbase mongodb tdengine clickhouse)
OUTDIR="drivers/${{ matrix.os_name }}"
mkdir -p "$OUTDIR"
for DRIVER in "${DRIVERS[@]}"; do
TAG="gonavi_${DRIVER}_driver"
BUILD_DRIVER="$DRIVER"
if [ "$DRIVER" = "doris" ]; then
BUILD_DRIVER="diros"
fi
if [ "$DRIVER" = "duckdb" ] && [ "$GOOS" = "windows" ] && [ "$GOARCH" != "amd64" ]; then
echo "⚠️ 跳过 DuckDB driver，当前平台 ${GOOS}/${GOARCH} 不受支持,仅支持 windows/amd64"
continue
fi
TAG="gonavi_${BUILD_DRIVER}_driver"
OUTPUT="${DRIVER}-driver-agent-${GOOS}-${GOARCH}"
if [ "$GOOS" = "windows" ]; then
OUTPUT="${OUTPUT}.exe"
@@ -162,20 +246,12 @@ jobs:
OUTPUT_PATH="${OUTDIR}/${OUTPUT}"
echo "🔧 构建 ${OUTPUT_PATH} (tag=${TAG})"
if [ "$DRIVER" = "duckdb" ]; then
set +e
CGO_ENABLED=1 GOOS="$GOOS" GOARCH="$GOARCH" go build \
-tags "${TAG}" \
-trimpath \
-ldflags "-s -w" \
-o "${OUTPUT_PATH}" \
./cmd/optional-driver-agent
DUCKDB_RC=$?
set -e
if [ "${DUCKDB_RC}" -ne 0 ]; then
echo "⚠️ DuckDB 代理构建失败(平台 ${GOOS}/${GOARCH}),跳过该资产,不阻断发布"
rm -f "${OUTPUT_PATH}"
continue
fi
else
CGO_ENABLED=0 GOOS="$GOOS" GOARCH="$GOARCH" go build \
-tags "${TAG}" \
@@ -365,6 +441,38 @@ jobs:
- name: List Assets
run: ls -R release-assets
- name: Verify Optional Driver Assets
shell: bash
run: |
set -euo pipefail
cd release-assets
REQUIRED_FILES=(
"drivers/Windows/duckdb-driver-agent-windows-amd64.exe"
"drivers/MacOS/duckdb-driver-agent-darwin-amd64"
"drivers/MacOS/duckdb-driver-agent-darwin-arm64"
"drivers/Linux/duckdb-driver-agent-linux-amd64"
"drivers/Windows/clickhouse-driver-agent-windows-amd64.exe"
"drivers/MacOS/clickhouse-driver-agent-darwin-amd64"
"drivers/MacOS/clickhouse-driver-agent-darwin-arm64"
"drivers/Linux/clickhouse-driver-agent-linux-amd64"
)
missing=0
for file in "${REQUIRED_FILES[@]}"; do
if [ ! -f "$file" ]; then
echo "❌ 缺少驱动资产:$file"
missing=1
else
echo "✅ 已找到驱动资产:$file"
fi
done
if [ "$missing" -ne 0 ]; then
echo "❌ 可选驱动资产不完整,终止发布"
exit 1
fi
- name: Package Driver Agents Bundle
shell: bash
run: |
@@ -442,5 +550,6 @@ jobs:
files: release-assets/*
draft: true
make_latest: true
generate_release_notes: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/sync-main-to-dev.yml (new file, +159)

@@ -0,0 +1,159 @@
name: main 回灌 dev
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: write
pull-requests: write
concurrency:
group: sync-main-to-dev
cancel-in-progress: true
jobs:
sync-main-to-dev:
name: 执行回灌同步
runs-on: ubuntu-latest
steps:
- name: 检出代码
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: 检查是否需要同步
id: diff_check
shell: bash
run: |
set -euo pipefail
echo "开始检查 main 与 dev 的分支差异..."
git fetch origin main dev
ahead_count="$(git rev-list --count origin/dev..origin/main)"
echo "ahead_count=${ahead_count}" >> "$GITHUB_OUTPUT"
if [ "${ahead_count}" -eq 0 ]; then
echo "无需同步：dev 已包含 main 的最新提交。"
echo "has_changes=false" >> "$GITHUB_OUTPUT"
else
echo "检测到 ${ahead_count} 个待同步提交,准备创建或复用同步 PR。"
echo "has_changes=true" >> "$GITHUB_OUTPUT"
fi
- name: 创建或复用同步 PR
id: sync_pr
if: steps.diff_check.outputs.has_changes == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
run: |
set -euo pipefail
echo "permission_blocked=false" >> "$GITHUB_OUTPUT"
existing_number="$(gh pr list --base dev --head main --state open --json number --jq '.[0].number // empty')"
if [ -n "${existing_number}" ]; then
pr_number="${existing_number}"
pr_url="$(gh pr view "${pr_number}" --json url --jq '.url')"
echo "复用已有同步 PR：#${pr_number}"
echo "created=false" >> "$GITHUB_OUTPUT"
else
body_file="$(mktemp)"
error_file="$(mktemp)"
{
echo "## 自动回灌:\`main -> dev\`"
echo
echo "- 触发条件:\`main\` 分支出现新提交(含贡献者直接合并到 \`main\` 的 PR)"
echo "- 目标:让 \`dev\` 持续吸收 \`main\` 的更新,避免发布前集中冲突"
echo
echo "### 合并建议"
echo "- 无冲突:直接合并该 PR(建议 \`Merge commit\`)"
echo "- 有冲突:在该 PR 内解决冲突后再合并"
} > "${body_file}"
if pr_url="$(gh pr create \
--base dev \
--head main \
--title "🔁 chore(sync): 回灌 main 到 dev" \
--body-file "${body_file}" 2>"${error_file}")"; then
pr_number="${pr_url##*/}"
echo "已创建同步 PR：#${pr_number}"
echo "created=true" >> "$GITHUB_OUTPUT"
else
error_message="$(tr '\n' ' ' < "${error_file}")"
if printf '%s' "${error_message}" | grep -Fq "GitHub Actions is not permitted to create or approve pull requests"; then
echo "::warning::仓库未开启“Allow GitHub Actions to create and approve pull requests”，已跳过自动创建同步 PR。"
echo "permission_blocked=true" >> "$GITHUB_OUTPUT"
echo "created=false" >> "$GITHUB_OUTPUT"
echo "pr_number=" >> "$GITHUB_OUTPUT"
echo "pr_url=" >> "$GITHUB_OUTPUT"
exit 0
fi
echo "::error::创建同步 PR 失败:${error_message}"
exit 1
fi
fi
echo "pr_number=${pr_number}" >> "$GITHUB_OUTPUT"
echo "pr_url=${pr_url}" >> "$GITHUB_OUTPUT"
- name: 检查合并状态
id: merge_state
if: steps.diff_check.outputs.has_changes == 'true' && steps.sync_pr.outputs.permission_blocked != 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
run: |
set -euo pipefail
pr_number="${{ steps.sync_pr.outputs.pr_number }}"
mergeable="$(gh pr view "${pr_number}" --json mergeable --jq '.mergeable')"
merge_state_status="$(gh pr view "${pr_number}" --json mergeStateStatus --jq '.mergeStateStatus')"
echo "PR #${pr_number} 合并状态：mergeable=${mergeable}, mergeStateStatus=${merge_state_status}"
echo "mergeable=${mergeable}" >> "$GITHUB_OUTPUT"
echo "merge_state_status=${merge_state_status}" >> "$GITHUB_OUTPUT"
- name: 可合并时开启自动合并
id: auto_merge
if: steps.diff_check.outputs.has_changes == 'true' && steps.sync_pr.outputs.permission_blocked != 'true' && steps.merge_state.outputs.mergeable == 'MERGEABLE'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
run: |
set -euo pipefail
pr_number="${{ steps.sync_pr.outputs.pr_number }}"
if gh pr merge "${pr_number}" --merge --auto; then
echo "已为 PR #${pr_number} 开启自动合并。"
echo "result=enabled" >> "$GITHUB_OUTPUT"
else
echo "::warning::自动合并开启失败,请手动处理并合并该 PR。"
echo "result=failed" >> "$GITHUB_OUTPUT"
fi
- name: 写入执行摘要
if: always()
shell: bash
run: |
{
echo "## main 回灌 dev 执行结果"
if [ "${{ steps.diff_check.outputs.has_changes }}" != "true" ]; then
echo "- 状态:无需同步(dev 已包含 main 最新提交)"
exit 0
fi
if [ "${{ steps.sync_pr.outputs.permission_blocked }}" = "true" ]; then
echo "- 状态:已跳过自动创建同步 PR"
echo "- 原因:仓库未开启 GitHub Actions 创建与审批 Pull Request 权限"
echo "- 处理:前往 Settings -> Actions -> General -> Workflow permissions，开启 Allow GitHub Actions to create and approve pull requests"
echo "- 兜底:由维护者手动执行 main 到 dev 合并,或开启该设置后重新运行 workflow"
exit 0
fi
echo "- PR:${{ steps.sync_pr.outputs.pr_url }}"
echo "- 可合并状态:${{ steps.merge_state.outputs.mergeable }}"
echo "- 合并状态详情:${{ steps.merge_state.outputs.merge_state_status }}"
if [ "${{ steps.merge_state.outputs.mergeable }}" = "CONFLICTING" ]; then
echo "- 结论:检测到冲突,需要手动处理后合并"
elif [ "${{ steps.auto_merge.outputs.result }}" = "enabled" ]; then
echo "- 结论:已启用自动合并(满足保护规则后将自动合入 dev)"
else
echo "- 结论PR 已创建/复用,请按分支策略人工合并"
fi
} >> "$GITHUB_STEP_SUMMARY"

.github/workflows/test-macos-build.yml

@@ -0,0 +1,91 @@
name: Test Build macOS (Manual)
on:
workflow_dispatch:
inputs:
build_label:
description: "测试包标识(仅用于文件名)"
required: false
default: "test"
push:
branches:
- feature/kingbase_opt
paths:
- ".github/workflows/test-macos-build.yml"
permissions:
contents: read
jobs:
build-macos:
name: Build macOS ${{ matrix.arch }}
runs-on: macos-latest
strategy:
fail-fast: false
matrix:
include:
- platform: darwin/amd64
arch: amd64
- platform: darwin/arm64
arch: arm64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: "1.24.3"
check-latest: true
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: "20"
- name: Install Wails
run: go install github.com/wailsapp/wails/v2/cmd/wails@v2.11.0
- name: Build App
run: |
set -euo pipefail
OUTPUT_NAME="gonavi-test-${{ matrix.arch }}"
BUILD_LABEL="${{ inputs.build_label }}"
if [ -z "$BUILD_LABEL" ]; then
BUILD_LABEL="test"
fi
APP_VERSION="${BUILD_LABEL}-${GITHUB_RUN_NUMBER}"
wails build \
-platform "${{ matrix.platform }}" \
-clean \
-o "$OUTPUT_NAME" \
-ldflags "-s -w -X GoNavi-Wails/internal/app.AppVersion=${APP_VERSION}"
- name: Package Zip
run: |
set -euo pipefail
APP_PATH="build/bin/gonavi-test-${{ matrix.arch }}.app"
if [ ! -d "$APP_PATH" ]; then
APP_PATH=$(find build/bin -maxdepth 1 -name "*.app" | head -n 1 || true)
fi
if [ -z "$APP_PATH" ] || [ ! -d "$APP_PATH" ]; then
echo "未找到 .app 产物"
ls -la build/bin || true
exit 1
fi
LABEL="${{ inputs.build_label }}"
if [ -z "$LABEL" ]; then
LABEL="test"
fi
ZIP_NAME="GoNavi-${LABEL}-macos-${{ matrix.arch }}-run${GITHUB_RUN_NUMBER}.zip"
mkdir -p artifacts
ditto -c -k --sequesterRsrc --keepParent "$APP_PATH" "artifacts/$ZIP_NAME"
shasum -a 256 "artifacts/$ZIP_NAME" > "artifacts/$ZIP_NAME.sha256"
- name: Upload Artifact
uses: actions/upload-artifact@v4
with:
name: gonavi-macos-${{ matrix.arch }}-run${{ github.run_number }}
path: artifacts/*
if-no-files-found: error

.gitignore

@@ -19,3 +19,6 @@ GoNavi-Wails.exe
.ace-tool/
.claude/
tmpclaude-*
CLAUDE.md
**/CLAUDE.md

CONTRIBUTING.md

@@ -0,0 +1,161 @@
# Contributing Guide
Thank you for contributing to this project.
This repository follows a release-first workflow: `main` is the default public branch, while releases are prepared through `release/*` branches.
---
## Branch Model
- `main`: stable release branch and default branch
- `dev`: day-to-day integration branch for maintainers
- `release/*`: release preparation branches for maintainers
- Recommended branch names for external contributors:
- `fix/*`: bug fixes
- `feature/*`: new features or enhancements
Maintainer release flow:
```text
feature/* / fix/* -> dev -> release/* -> main -> tag(vX.Y.Z)
```
---
## How External Contributors Should Open Pull Requests
Whether your branch is `fix/*` or `feature/*`, external contributors should **open pull requests directly against `main`**.
Reasons:
- `main` is the default branch, so the PR entry point is clearer
- merged contributions are immediately visible on the default branch
- maintainers can handle downstream sync and release preparation in one place
Recommended flow:
1. Fork this repository
2. Create a branch in your fork (`fix/*` or `feature/*` is recommended)
3. Make your changes and perform basic self-checks
4. Push the branch to your fork
5. Open a pull request against the `main` branch of this repository
---
## Pull Request Requirements
Please keep each pull request focused, reviewable, and easy to validate.
Recommended expectations:
- one pull request should address one logical change
- use a clear title that explains the purpose
- include the following in the description:
- background and problem statement
- key changes
- impact scope
- validation method
- include screenshots or recordings for UI changes when helpful
- explicitly mention risk and rollback notes for compatibility, data, or build-chain changes
---
## Merge Strategy for Maintainers
Pull requests merged into `main` should generally use **Squash and merge**.
Reasons:
- keeps `main` history clean and linear
- maps each PR to a single commit on `main`
- reduces release, audit, and rollback complexity
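As a sketch, the effect of GitHub's **Squash and merge** can be reproduced locally with plain `git` in a throwaway repository (all branch, file, and commit names below are illustrative):

```shell
# Illustrative only: what "Squash and merge" collapses a PR into,
# demonstrated with plain git in a temporary repository.
set -euo pipefail
tmp="$(mktemp -d)"
cd "$tmp"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "demo"
echo base > notes.txt
git add notes.txt
git commit -qm "init"
# Simulate a contributor branch with several work-in-progress commits.
git checkout -qb fix/demo
echo a >> notes.txt && git commit -qam "wip 1"
echo b >> notes.txt && git commit -qam "wip 2"
git checkout -q main
# --squash stages the branch's combined diff without creating a merge commit,
# so the whole branch lands as a single commit on main.
git merge --squash fix/demo >/dev/null
git commit -qm "fix(demo): squashed change from fix/demo"
git log --oneline   # exactly two commits: the squashed one and init
```

The history on `main` stays linear: one commit per merged PR, which is what makes release auditing and rollback straightforward.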
---
## Maintainer Sync Rules
Because external pull requests are merged directly into `main`, maintainers must sync `main` back to development and release branches to avoid branch drift.
### 1. Sync `main` -> `dev` (required)
This repository provides automatic sync via GitHub Actions workflow:
- `.github/workflows/sync-main-to-dev.yml`
- Trigger: every push to `main`
- Behavior: create/reuse a PR from `main` to `dev`; if mergeable, it tries to enable auto-merge
- Prerequisite: in `Settings -> Actions -> General -> Workflow permissions`, enable `Allow GitHub Actions to create and approve pull requests`; otherwise the workflow will skip PR creation and only emit a warning summary
Manual fallback (when conflicts or automation is unavailable):
```bash
git checkout dev
git pull
git merge main
git push
```
### 2. Create `release/*` from `dev`
Before a release, create a release branch from `dev`, for example:
```bash
git checkout dev
git pull
git checkout -b release/v0.6.0
git push -u origin release/v0.6.0
```
### 3. Release from `release/*` back to `main`
When release preparation is complete, merge the release branch back into `main` and create a tag:
```bash
git checkout main
git pull
git merge release/v0.6.0
git push
git tag v0.6.0
git push origin v0.6.0
```
### 4. Sync `main` back to `dev` after release
After the release, the same automation still applies. If needed, you can run the workflow manually (`workflow_dispatch`) or execute the fallback commands:
```bash
git checkout dev
git pull
git merge main
git push
```
---
## Commit Message Recommendation
Keep commit messages clear and easy to audit.
Recommended format:
```text
emoji type(scope): concise description
```
Examples:
```text
🔧 fix(ci): fix DuckDB driver toolchain on Windows AMD64
✨ feat(redis): add Stream data browsing support
♻️ refactor(datagrid): optimize large-table horizontal scrolling and rendering
```
---
## Additional Notes
- Please include validation results for documentation, build-chain, or driver compatibility changes
- For larger changes, opening an issue or draft PR first is recommended
- Maintainers may ask contributors to narrow the scope if the change conflicts with the current project direction
Thank you for contributing.

CONTRIBUTING.zh-CN.md

@@ -0,0 +1,161 @@
# 贡献指南
感谢你对本项目的贡献。
本项目采用“发布优先(`main` 为默认分支)+ `release/*` 分支发版”的协作模型。为减少分支漂移与 PR 处理成本,请在提交贡献前先阅读本指南。
---
## 分支模型
- `main`:稳定发布分支,也是仓库默认分支
- `dev`:日常开发集成分支,主要供维护者使用
- `release/*`:发布准备分支,主要供维护者使用
- 外部贡献者建议使用以下分支命名:
- `fix/*`:问题修复
- `feature/*`:功能新增或增强
维护者发布流转如下:
```text
feature/* / fix/* -> dev -> release/* -> main -> tag(vX.Y.Z)
```
---
## 外部贡献者如何提 Pull Request
无论是 `fix/*` 还是 `feature/*`,**外部贡献者统一直接向 `main` 发起 Pull Request**。
这样做的原因:
- `main` 是默认分支PR 入口更直观
- 合并后贡献会直接体现在默认分支
- 便于维护者统一做后续同步与发版整理
建议流程:
1. Fork 本仓库
2. 从你自己的仓库创建分支(建议命名为 `fix/*` 或 `feature/*`)
3. 完成代码修改,并进行必要自检
4. 推送到你的远程分支
5. 向本仓库的 `main` 分支发起 Pull Request
---
## Pull Request 要求
请尽量保证 PR 单一、清晰、可审核。
建议遵循以下要求:
- 一个 PR 只解决一类问题,避免混入无关改动
- 标题清晰说明改动目的
- 描述中说明:
- 背景与问题
- 变更点
- 影响范围
- 验证方式
- 如涉及 UI 调整,建议附截图或录屏
- 如涉及兼容性、数据变更或构建链路调整,请明确说明风险和回滚方式
---
## PR 合并策略(维护者)
`main` 分支上的 PR 建议使用 **Squash and merge**。
原因:
- 保持 `main` 历史干净、线性
- 每个 PR 在 `main` 上对应一个清晰提交
- 降低发布排查与回滚成本
---
## 维护者同步规则
由于外部 PR 会直接合入 `main`,维护者必须及时将 `main` 的变更同步到开发与发布分支,避免分支漂移。
### 1. main → dev 同步(必做)
仓库已提供 GitHub Actions 自动同步机制:
- `.github/workflows/sync-main-to-dev.yml`
- 触发时机:每次 `main` 分支有新的 push
- 行为:自动创建或复用 `main` 到 `dev` 的同步 PR;若可合并则尝试开启自动合并
- 前置条件:需在 `Settings -> Actions -> General -> Workflow permissions` 中开启 `Allow GitHub Actions to create and approve pull requests`,否则 workflow 只会输出告警摘要并跳过建 PR
当出现冲突,或自动化暂不可用时,使用以下手动兜底方式:
```bash
git checkout dev
git pull
git merge main
git push
```
### 2. 发版前从 dev 切 release/*
发布前由维护者基于 `dev` 创建发布分支,例如:
```bash
git checkout dev
git pull
git checkout -b release/v0.6.0
git push -u origin release/v0.6.0
```
### 3. release/* → main 发版
发布准备完成后,将 `release/*` 合并回 `main`,并打标签发布:
```bash
git checkout main
git pull
git merge release/v0.6.0
git push
git tag v0.6.0
git push origin v0.6.0
```
### 4. main 回流到 dev(发版后必做)
发布完成后,仍沿用同一套自动化流程;如有需要,也可以手动触发 `workflow_dispatch`,或执行以下兜底命令,确保开发线与发布线一致:
```bash
git checkout dev
git pull
git merge main
git push
```
---
## 提交建议
建议保持提交信息简洁、明确,便于维护者审查与后续追踪。
推荐格式:
```text
emoji type(scope): 中文描述
```
示例:
```text
🔧 fix(ci): 修复 Windows AMD64 下 DuckDB 驱动构建工具链
✨ feat(redis): 新增 Stream 类型数据浏览支持
♻️ refactor(datagrid): 优化大表横向滚动与渲染结构
```
---
## 其他说明
- 文档、构建链路、驱动兼容性相关改动,请尽量附带验证结果
- 若改动较大,建议先提 Issue 或 Draft PR,先对齐方案再实施
- 如提交内容与项目当前架构方向冲突,维护者可能要求收敛范围后再合并
感谢你的贡献。

README.md

@@ -1,4 +1,4 @@
# GoNavi - 现代化的轻量级数据库管理工具
# GoNavi - A Modern Lightweight Database Client
[![Go Version](https://img.shields.io/github/go-mod/go-version/Syngnat/GoNavi)](https://go.dev/)
[![Wails Version](https://img.shields.io/badge/Wails-v2-red)](https://wails.io)
@@ -6,11 +6,51 @@
[![License](https://img.shields.io/badge/License-Apache%202.0-green.svg)](LICENSE)
[![Build Status](https://img.shields.io/github/actions/workflow/status/Syngnat/GoNavi/release.yml?label=Build)](https://github.com/Syngnat/GoNavi/actions)
**GoNavi** 是一款基于 **Wails (Go)** 和 **React** 构建的现代化、高性能、跨平台数据库管理客户端。它旨在提供如原生应用般流畅的用户体验,同时保持极低的资源占用。
**Language**: English | [简体中文](README.zh-CN.md)
相比于 Electron 应用,GoNavi 的体积更小(~10MB),启动速度更快,内存占用更低。
GoNavi is a modern, high-performance, cross-platform database client built with **Wails (Go)** and **React**.
It delivers native-like responsiveness with low resource usage.
<h2 align="center">📸 项目截图</h2>
Compared with many Electron-based clients, GoNavi is typically much smaller in binary size (on the order of 10 MB), starts faster, and uses less memory.
---
## Project Overview
GoNavi is designed for developers and DBAs who need a unified desktop experience across multiple databases.
- **Native-performance architecture**: Wails (Go + WebView) with lightweight runtime overhead.
- **Large dataset usability**: virtualized rendering and optimized DataGrid workflows for high-volume tables.
- **Unified connectivity**: URI build/parse, SSH tunnel, proxy support, and on-demand driver activation.
- **Production-oriented workflow**: SQL editor, object management, batch export/backup, sync tools, execution logs, and update checks.
## Supported Data Sources
> `Built-in`: available out of the box.
> `Optional driver agent`: install/enable via Driver Manager first.
| Category | Data Source | Driver Mode | Typical Capabilities |
|---|---|---|---|
| Relational | MySQL | Built-in | Schema browsing, SQL query, data editing, export/backup |
| Relational | PostgreSQL | Built-in | Schema browsing, SQL query, data editing, object management |
| Relational | Oracle | Built-in | Query execution, object browsing, data editing |
| Cache | Redis | Built-in | Key browsing, command execution, encoding/view switch |
| Relational | MariaDB | Optional driver agent | Querying, object management, data editing |
| Relational | Doris | Optional driver agent | Querying, object browsing, SQL execution |
| Search | Sphinx | Optional driver agent | SphinxQL querying and object browsing |
| Relational | SQL Server | Optional driver agent | Schema browsing, SQL query, object management |
| File-based | SQLite | Optional driver agent | Local DB browsing, editing, export |
| File-based | DuckDB | Optional driver agent | Large-table query, pagination, file-DB workflow |
| Domestic DB | Dameng | Optional driver agent | Querying, object browsing, data editing |
| Domestic DB | Kingbase | Optional driver agent | Querying, object browsing, data editing |
| Domestic DB | HighGo | Optional driver agent | Querying, object browsing, data editing |
| Domestic DB | Vastbase | Optional driver agent | Querying, object browsing, data editing |
| Document | MongoDB | Optional driver agent | Document query, collection browsing, connection management |
| Time-series | TDengine | Optional driver agent | Time-series schema browsing and querying |
| Columnar Analytics | ClickHouse | Optional driver agent | Analytical query, object browsing, SQL execution |
| Extensibility | Custom Driver/DSN | Custom | Extend to more data sources via Driver + DSN |
<h2 align="center">📸 Screenshots</h2>
<div align="center">
<img width="25%" alt="image" src="https://github.com/user-attachments/assets/341cda98-79a5-4198-90f3-1335131ccde0" />
@@ -24,137 +64,124 @@
---
## ✨ 核心特性
## Key Features
### 🚀 极致性能
- **零卡顿交互**:采用独创的 "幽灵拖拽" (Ghost Resizing) 技术,在包含数万行数据的表格中调整列宽,依然保持 60fps+ 的丝滑体验。
- **虚拟滚动**:轻松处理海量数据展示,拒绝卡顿。
### Performance
- **Smooth interaction under load**: optimized table interaction (including column resize workflow on large datasets).
- **Virtualized rendering**: keeps large result sets responsive.
### 🔌 多数据库支持
- **MySQL**:完整支持,涵盖数据编辑、结构管理与导入导出。
- **PostgreSQL**:数据查看与编辑支持,事务提交能力持续完善。
- **SQLite**:本地文件数据库支持。
- **Oracle**:基础数据访问与编辑支持。
- **Dameng(达梦)**:基础数据访问与编辑支持。
- **Kingbase(人大金仓)**:基础数据访问与编辑支持。
- **TDengine**:时序数据库连接、库表浏览与 SQL 查询支持。
- **Redis**:Key/Value 浏览、命令执行、视图与编码切换。
- **自定义驱动**:支持配置 Driver/DSN 接入更多数据源。
- **SSH 隧道**:内置 SSH 隧道支持,安全连接内网数据库。
### Data Management (DataGrid)
- In-place cell editing.
- Batch insert/update/delete with transaction-oriented submit/rollback.
- Large-field popup editor.
- Context actions (set NULL, copy/export, etc.).
- Smart read/write mode switching based on query context.
- Export formats: CSV, Excel (XLSX), JSON, Markdown.
### 📊 强大的数据管理 (DataGrid)
- **所见即所得编辑**:直接在表格中双击单元格修改数据。
- **批量事务操作**:支持批量新增、修改、删除,一键提交或回滚事务。
- **大字段编辑**:双击大字段自动打开弹窗编辑器,避免卡顿。
- **右键上下文菜单**:快速设置 NULL、复制/导出等操作。
- **智能上下文**:自动识别单表查询,解锁编辑功能;复杂查询自动切换为只读模式。
- **批量导出/备份**:支持表与数据库的批量导出/备份。
- **数据导出**:支持 CSV、Excel (XLSX)、JSON、Markdown 等格式。
### SQL Editor
- Monaco Editor core.
- Context-aware completion for databases/tables/columns.
- Multi-tab query workflow.
### 🧰 批量导出/备份
- **数据库批量导出**:支持结构导出与结构+数据备份。
- **表批量导出**:支持多表一键导出/备份。
- **智能上下文检测**:自动判断目标范围,避免误操作。
### Batch Export / Backup
- Database-level and table-level batch export/backup.
- Scope-aware operation flow to reduce mistakes.
### 🧩 Redis 视图与编码
- **视图模式切换**:自动/原始文本/UTF-8/十六进制多模式显示。
- **智能解码**:针对二进制值进行 UTF-8 质量判定与中文字符识别。
- **命令执行**:内置命令面板快速操作。
### Connectivity
- URI generation/parsing.
- SSH tunnel support.
- Proxy support.
- Config import/export (JSON).
- Optional driver management and activation.
### 🔄 数据同步与导入导出
- **连接配置导入/导出**:支持配置 JSON 导入导出,便于团队共享。
- **数据同步**:内置数据同步面板,支持跨库同步任务配置。
### Redis Tools
- Multi-view value rendering (auto/raw text/UTF-8/hex).
- Built-in command execution panel.
### 🆙 在线更新
- **自动更新**:启动/定时/手动检查更新,自动下载并提示重启完成更新。
### Observability and Update
- SQL execution logs with timing information.
- Startup/scheduled/manual update checks.
### 🧾 可观测性
- **SQL 执行日志**:实时查看 SQL 与执行耗时,便于排障与优化。
### 📝 智能 SQL 编辑器
- **Monaco Editor 内核**:集成 VS Code 同款编辑器,体验极佳。
- **智能补全**:自动感知当前连接上下文,提供数据库、表名、字段名的实时补全。
- **多标签页**:支持多窗口并行操作,像浏览器一样管理你的查询会话。
### 🎨 现代化 UI
- **Ant Design 5**:企业级 UI 设计语言。
- **暗黑模式**:内置深色/浅色主题切换,适应不同光照环境。
- **响应式布局**:灵活的侧边栏与布局调整。
### UI/UX
- Ant Design 5 based interface.
- Light/Dark themes.
- Flexible sidebar and layout behavior.
---
## 🛠️ 技术栈
## Tech Stack
* **后端 (Backend)**: Go 1.24 + Wails v2
* **前端 (Frontend)**: React 18 + TypeScript + Vite
* **UI 框架**: Ant Design 5
* **状态管理**: Zustand
* **编辑器**: Monaco Editor
- **Backend**: Go 1.24 + Wails v2
- **Frontend**: React 18 + TypeScript + Vite
- **UI**: Ant Design 5
- **State Management**: Zustand
- **Editor**: Monaco Editor
---
## 📦 安装与运行
## Installation and Run
### 前置要求
* [Go](https://go.dev/dl/) 1.21+
* [Node.js](https://nodejs.org/) 18+
* [Wails CLI](https://wails.io/docs/gettingstarted/installation): `go install github.com/wailsapp/wails/v2/cmd/wails@latest`
### Prerequisites
- [Go](https://go.dev/dl/) 1.21+
- [Node.js](https://nodejs.org/) 18+
- [Wails CLI](https://wails.io/docs/gettingstarted/installation):
`go install github.com/wailsapp/wails/v2/cmd/wails@latest`
### 开发模式
### Development Mode
```bash
# 克隆项目
# Clone
git clone https://github.com/Syngnat/GoNavi.git
cd GoNavi
# 启动开发服务器 (支持热重载)
# Start development with hot reload
wails dev
```
### 编译构建
### Build
```bash
# 构建当前平台的可执行文件
# Build for current platform
wails build
# 清理并构建 (推荐发布前使用)
# Clean build (recommended before release)
wails build -clean
```
构建产物将位于 `build/bin` 目录下。
Artifacts are generated in `build/bin`.
### 跨平台编译 (GitHub Actions)
### Cross-Platform Release (GitHub Actions)
本项目内置了 GitHub Actions 流水线,Push `v*` 格式的 Tag 即可自动触发构建并发布 Release。
支持构建:
* macOS (AMD64 / ARM64)
* Windows (AMD64)
* Linux (AMD64,提供 WebKitGTK 4.0 与 4.1 变体产物)
The repository includes a release workflow.
Push a `v*` tag to trigger automated build and release.
Release notes are generated automatically from merged pull requests and categorized by `.github/release.yaml`.
Target artifacts include:
- macOS (AMD64 / ARM64)
- Windows (AMD64)
- Linux (AMD64, WebKitGTK 4.0 and 4.1 variants)
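For reference, a `.github/release.yaml` categorization file typically maps PR labels to changelog sections. The snippet below is a minimal illustrative sketch of GitHub's automatically-generated release notes configuration; the label names are assumptions, not necessarily what this repository uses:

```yaml
# Hypothetical example of .github/release.yaml — label names are illustrative.
changelog:
  exclude:
    labels:
      - ignore-for-release
  categories:
    - title: Features
      labels:
        - feature
    - title: Bug Fixes
      labels:
        - fix
    - title: Other Changes
      labels:
        - "*"
```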
---
## ❓ 常见问题 (Troubleshooting)
## Troubleshooting
### macOS 提示 "应用已损坏,无法打开"
### macOS: "App is damaged and can't be opened"
由于本项目尚未购买 Apple 开发者证书进行签名(Notarization),macOS 的 Gatekeeper 安全机制可能会拦截应用的运行。请按照以下步骤解决:
Without Apple notarization, Gatekeeper may block startup.
1. 将下载的 `GoNavi.app` 拖入 **应用程序** 文件夹。
2. 打开 **终端 (Terminal)**
3. 复制并执行以下命令(输入密码时不会显示):
```bash
sudo xattr -rd com.apple.quarantine /Applications/GoNavi.app
```
4. 或者:在 Finder 中右键点击应用图标,按住 `Control` 键选择 **打开**,然后在弹出的窗口中再次点击 **打开**。
1. Move `GoNavi.app` to **Applications**.
2. Open **Terminal**.
3. Run:
### Linux 启动报错缺少 `libwebkit2gtk` / `libjavascriptcoregtk`
```bash
sudo xattr -rd com.apple.quarantine /Applications/GoNavi.app
```
GoNavi 的 Linux 二进制依赖系统 WebKitGTK 运行库。不同发行版默认版本不同:
Alternatively, Control-click the app in Finder, choose **Open**, then confirm **Open** in the dialog.
- Debian 13 / Ubuntu 24.04 及更新版本:通常为 WebKitGTK 4.1
- Ubuntu 22.04 / Debian 12 等:通常为 WebKitGTK 4.0
### Linux: missing `libwebkit2gtk` / `libjavascriptcoregtk`
如果启动时报错(如 `libwebkit2gtk-4.0.so.37: cannot open shared object file`),请按系统安装对应依赖后重试:
GoNavi depends on WebKitGTK runtime libraries.
```bash
# Debian 13 / Ubuntu 24.04+
@@ -166,20 +193,20 @@ sudo apt-get update
sudo apt-get install -y libgtk-3-0 libwebkit2gtk-4.0-37 libjavascriptcoregtk-4.0-18
```
如果你使用的是 Release 中带 `-WebKit41` 后缀的 Linux 产物,请优先在 Debian 13 / Ubuntu 24.04+ 上使用;普通 Linux 产物更适合 WebKitGTK 4.0 运行环境。
If you use Linux artifacts with the `-WebKit41` suffix, prefer Debian 13 / Ubuntu 24.04+.
---
## 🤝 贡献指南
## Contributing
欢迎提交 Issue 和 Pull Request
Issues and pull requests are welcome.
1. Fork 本仓库
2. 创建你的特性分支 (`git checkout -b feature/AmazingFeature`)
3. 提交你的改动 (`git commit -m 'feat: Add some AmazingFeature'`)
4. 推送到分支 (`git push origin feature/AmazingFeature`)
5. 开启一个 Pull Request
For the full workflow, branch model, and maintainer sync rules, see:
## 📄 开源协议
- [CONTRIBUTING.md](CONTRIBUTING.md)
本项目采用 [Apache-2.0 协议](LICENSE) 开源。
External contributors should open pull requests directly against `main`.
## License
Licensed under [Apache-2.0](LICENSE).

README.zh-CN.md

@@ -0,0 +1,195 @@
# GoNavi - 现代化轻量级数据库客户端
[![Go Version](https://img.shields.io/github/go-mod/go-version/Syngnat/GoNavi)](https://go.dev/)
[![Wails Version](https://img.shields.io/badge/Wails-v2-red)](https://wails.io)
[![React Version](https://img.shields.io/badge/React-v18-blue)](https://reactjs.org/)
[![License](https://img.shields.io/badge/License-Apache%202.0-green.svg)](LICENSE)
[![Build Status](https://img.shields.io/github/actions/workflow/status/Syngnat/GoNavi/release.yml?label=Build)](https://github.com/Syngnat/GoNavi/actions)
**语言**: [English](README.md) | 简体中文
GoNavi 是基于 **Wails (Go)** 与 **React** 构建的跨平台数据库管理工具,强调原生性能、低资源占用与多数据源统一工作流。
相比常见 Electron 客户端GoNavi 在体积、启动速度和内存占用上更轻量。
---
## 项目简介
GoNavi 面向开发者与 DBA核心目标是让数据库操作在桌面端做到“快、稳、统一”。
- **原生性能架构**:Wails(Go + WebView),降低运行时开销。
- **大数据可用性**:虚拟滚动 + DataGrid 交互优化,提升大结果集可操作性。
- **统一连接能力**:支持 URI 生成/解析、SSH 隧道、代理、驱动按需安装。
- **工程化能力完整**:覆盖 SQL 编辑、对象管理、批量导出/备份、数据同步、执行日志、在线更新。
## 支持的数据源
> `内置`:主程序开箱即用。
> `可选驱动代理`:需在驱动管理中安装启用后可用。
| 类别 | 数据源 | 驱动模式 | 典型能力 |
|---|---|---|---|
| 关系型 | MySQL | 内置 | 库表浏览、SQL 查询、数据编辑、导出/备份 |
| 关系型 | PostgreSQL | 内置 | 库表浏览、SQL 查询、数据编辑、对象管理 |
| 关系型 | Oracle | 内置 | 连接查询、对象浏览、数据编辑 |
| 缓存 | Redis | 内置 | Key 浏览、命令执行、编码/视图切换 |
| 关系型 | MariaDB | 可选驱动代理 | 连接查询、对象管理、数据编辑 |
| 关系型 | Doris | 可选驱动代理 | 连接查询、对象浏览、SQL 执行 |
| 搜索 | Sphinx | 可选驱动代理 | SphinxQL 查询与对象浏览 |
| 关系型 | SQL Server | 可选驱动代理 | 库表浏览、SQL 查询、对象管理 |
| 文件型 | SQLite | 可选驱动代理 | 本地文件库浏览、编辑、导出 |
| 文件型 | DuckDB | 可选驱动代理 | 大表查询、分页浏览、文件库管理 |
| 国产数据库 | Dameng | 可选驱动代理 | 连接查询、对象浏览、数据编辑 |
| 国产数据库 | Kingbase | 可选驱动代理 | 连接查询、对象浏览、数据编辑 |
| 国产数据库 | HighGo | 可选驱动代理 | 连接查询、对象浏览、数据编辑 |
| 国产数据库 | Vastbase | 可选驱动代理 | 连接查询、对象浏览、数据编辑 |
| 文档型 | MongoDB | 可选驱动代理 | 文档查询、集合浏览、连接管理 |
| 时序 | TDengine | 可选驱动代理 | 时序库表浏览、查询分析 |
| 列式分析 | ClickHouse | 可选驱动代理 | 分析查询、对象浏览、SQL 执行 |
| 扩展接入 | Custom Driver/DSN | 自定义 | 通过 Driver + DSN 接入更多数据源 |
<h2 align="center">📸 项目截图</h2>
<div align="center">
<img width="25%" alt="image" src="https://github.com/user-attachments/assets/341cda98-79a5-4198-90f3-1335131ccde0" />
<img width="25%" alt="image" src="https://github.com/user-attachments/assets/224a74e7-65df-4aef-9710-d8e82e3a70c1" />
<img width="25%" alt="image" src="https://github.com/user-attachments/assets/ec522145-5ceb-4481-ae46-a9251c89bdfc" />
<br />
<img width="25%" alt="image" src="https://github.com/user-attachments/assets/330ce49b-45f1-4919-ae14-75f7d47e5f73" />
<img width="14%" alt="image" src="https://github.com/user-attachments/assets/d15fa9e9-5486-423b-a0e9-53b467e45432" />
<img width="25%" alt="image" src="https://github.com/user-attachments/assets/f0c57590-d987-4ecf-89b2-64efad60b6d7" />
</div>
---
## 核心特性
### 性能与交互
- 大数据场景下保持流畅交互(含 DataGrid 列宽拖拽、批量编辑流程优化)。
- 虚拟滚动渲染,降低大结果集卡顿风险。
### 数据管理DataGrid
- 单元格所见即所得编辑。
- 批量新增/修改/删除,支持事务提交与回滚。
- 大字段弹窗编辑。
- 右键上下文操作(NULL、复制、导出等)。
- 根据查询上下文智能切换读写模式。
- 支持 CSV / XLSX / JSON / Markdown 导出。
### SQL 编辑器
- 基于 Monaco Editor。
- 上下文补全(数据库/表/字段)。
- 多标签查询工作流。
### 连接与驱动
- URI 生成与解析。
- SSH 隧道、代理支持。
- 连接配置 JSON 导入/导出。
- 可选驱动安装与启用管理。
### Redis 工具
- 自动/原始文本/UTF-8/十六进制等视图模式。
- 内置命令执行面板。
### 可观测性与更新
- SQL 执行日志(含耗时)。
- 启动/定时/手动更新检查。
### UI 体验
- Ant Design 5 体系。
- 深色/浅色主题切换。
- 灵活布局与侧边栏行为。
---
## 技术栈
- **后端**: Go 1.24 + Wails v2
- **前端**: React 18 + TypeScript + Vite
- **UI 框架**: Ant Design 5
- **状态管理**: Zustand
- **编辑器**: Monaco Editor
---
## 安装与运行
### 前置要求
- [Go](https://go.dev/dl/) 1.21+
- [Node.js](https://nodejs.org/) 18+
- [Wails CLI](https://wails.io/docs/gettingstarted/installation):
`go install github.com/wailsapp/wails/v2/cmd/wails@latest`
### 开发模式
```bash
# 克隆项目
git clone https://github.com/Syngnat/GoNavi.git
cd GoNavi
# 启动开发(热重载)
wails dev
```
### 编译构建
```bash
# 构建当前平台
wails build
# 清理后构建(发布前推荐)
wails build -clean
```
构建产物位于 `build/bin`
### 跨平台发布GitHub Actions
仓库内置发布流水线,推送 `v*` Tag 可自动构建并发布 Release。
Release 更新说明会基于已合并 Pull Request 自动生成,并按 `.github/release.yaml` 分类。
支持目标:
- macOS (AMD64 / ARM64)
- Windows (AMD64)
- Linux (AMD64,含 WebKitGTK 4.0 / 4.1 变体)
---
## 常见问题
### macOS 提示“应用已损坏,无法打开”
在未进行 Apple Notarization 时,Gatekeeper 可能拦截应用。
```bash
sudo xattr -rd com.apple.quarantine /Applications/GoNavi.app
```
### Linux 缺少 `libwebkit2gtk` / `libjavascriptcoregtk`
```bash
# Debian 13 / Ubuntu 24.04+
sudo apt-get update
sudo apt-get install -y libgtk-3-0 libwebkit2gtk-4.1-0 libjavascriptcoregtk-4.1-0
# Ubuntu 22.04 / Debian 12
sudo apt-get update
sudo apt-get install -y libgtk-3-0 libwebkit2gtk-4.0-37 libjavascriptcoregtk-4.0-18
```
---
## 贡献指南
欢迎提交 Issue 与 Pull Request。
完整流程、分支模型与维护者同步规则请查看:
- [CONTRIBUTING.zh-CN.md](CONTRIBUTING.zh-CN.md)
外部贡献者统一直接向 `main` 发起 Pull Request。
## 开源协议
本项目采用 [Apache-2.0 协议](LICENSE)。


@@ -2,10 +2,13 @@ package main
import (
"bufio"
"context"
"encoding/json"
"fmt"
"os"
"reflect"
"strings"
"time"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
@@ -16,6 +19,7 @@ type agentRequest struct {
Method string `json:"method"`
Config *connection.ConnectionConfig `json:"config,omitempty"`
Query string `json:"query,omitempty"`
TimeoutMs int64 `json:"timeoutMs,omitempty"`
DBName string `json:"dbName,omitempty"`
TableName string `json:"tableName,omitempty"`
Changes *connection.ChangeSet `json:"changes,omitempty"`
@@ -47,6 +51,8 @@ const (
agentMethodApplyChanges = "applyChanges"
)
const legacyClickHouseDefaultTimeout = 2 * time.Hour
var (
agentDriverType string
agentDatabaseFactory func() db.Database
@@ -137,14 +143,14 @@ func handleRequest(inst *db.Database, req agentRequest) agentResponse {
return fail(resp, err.Error())
}
case agentMethodQuery:
data, fields, err := (*inst).Query(req.Query)
data, fields, err := queryWithOptionalTimeout(*inst, req.Query, req.TimeoutMs)
if err != nil {
return fail(resp, err.Error())
}
resp.Data = data
resp.Fields = fields
case agentMethodExec:
affected, err := (*inst).Exec(req.Query)
affected, err := execWithOptionalTimeout(*inst, req.Query, req.TimeoutMs)
if err != nil {
return fail(resp, err.Error())
}
@@ -218,7 +224,11 @@ func handleRequest(inst *db.Database, req agentRequest) agentResponse {
}
func writeResponse(writer *bufio.Writer, resp agentResponse) error {
payload, err := json.Marshal(resp)
// 对响应数据做统一 JSON 安全归一化:
// 将 map[any]any如 duckdb.Map递归转换为 map[string]any避免序列化失败导致代理进程退出。
safeResp := resp
safeResp.Data = normalizeAgentResponseData(resp.Data)
payload, err := json.Marshal(safeResp)
if err != nil {
return err
}
@@ -234,3 +244,87 @@ func fail(resp agentResponse, errText string) agentResponse {
resp.Error = strings.TrimSpace(errText)
return resp
}
func normalizeAgentResponseData(v interface{}) interface{} {
if v == nil {
return nil
}
rv := reflect.ValueOf(v)
switch rv.Kind() {
case reflect.Pointer, reflect.Interface:
if rv.IsNil() {
return nil
}
return normalizeAgentResponseData(rv.Elem().Interface())
case reflect.Map:
if rv.IsNil() {
return nil
}
out := make(map[string]interface{}, rv.Len())
iter := rv.MapRange()
for iter.Next() {
out[fmt.Sprint(iter.Key().Interface())] = normalizeAgentResponseData(iter.Value().Interface())
}
return out
case reflect.Slice:
if rv.IsNil() {
return nil
}
// 保持 []byte 原样,避免改变现有二进制列的 JSON 编码行为base64
if rv.Type().Elem().Kind() == reflect.Uint8 {
return v
}
size := rv.Len()
items := make([]interface{}, size)
for i := 0; i < size; i++ {
items[i] = normalizeAgentResponseData(rv.Index(i).Interface())
}
return items
case reflect.Array:
size := rv.Len()
items := make([]interface{}, size)
for i := 0; i < size; i++ {
items[i] = normalizeAgentResponseData(rv.Index(i).Interface())
}
return items
default:
return v
}
}
func queryWithOptionalTimeout(inst db.Database, query string, timeoutMs int64) ([]map[string]interface{}, []string, error) {
effectiveTimeoutMs := timeoutMs
if effectiveTimeoutMs <= 0 && strings.EqualFold(strings.TrimSpace(agentDriverType), "clickhouse") {
effectiveTimeoutMs = int64(legacyClickHouseDefaultTimeout / time.Millisecond)
}
if effectiveTimeoutMs <= 0 {
return inst.Query(query)
}
if q, ok := inst.(interface {
QueryContext(context.Context, string) ([]map[string]interface{}, []string, error)
}); ok {
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(effectiveTimeoutMs)*time.Millisecond)
defer cancel()
return q.QueryContext(ctx, query)
}
return inst.Query(query)
}
func execWithOptionalTimeout(inst db.Database, query string, timeoutMs int64) (int64, error) {
effectiveTimeoutMs := timeoutMs
if effectiveTimeoutMs <= 0 && strings.EqualFold(strings.TrimSpace(agentDriverType), "clickhouse") {
effectiveTimeoutMs = int64(legacyClickHouseDefaultTimeout / time.Millisecond)
}
if effectiveTimeoutMs <= 0 {
return inst.Exec(query)
}
if e, ok := inst.(interface {
ExecContext(context.Context, string) (int64, error)
}); ok {
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(effectiveTimeoutMs)*time.Millisecond)
defer cancel()
return e.ExecContext(ctx, query)
}
return inst.Exec(query)
}


@@ -0,0 +1,172 @@
package main
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"testing"
"time"
"GoNavi-Wails/internal/connection"
)
type duckMapLike map[any]any
func TestWriteResponse_NormalizesMapAnyAny(t *testing.T) {
resp := agentResponse{
ID: 1,
Success: true,
Data: []map[string]interface{}{
{
"id": int64(7),
"meta": duckMapLike{"k": "v", 2: "two"},
},
},
}
var out bytes.Buffer
writer := bufio.NewWriter(&out)
if err := writeResponse(writer, resp); err != nil {
t.Fatalf("writeResponse 返回错误: %v", err)
}
var decoded struct {
Data []map[string]interface{} `json:"data"`
}
if err := json.Unmarshal(bytes.TrimSpace(out.Bytes()), &decoded); err != nil {
t.Fatalf("解码响应失败: %v", err)
}
if len(decoded.Data) != 1 {
t.Fatalf("期望 1 行数据,实际 %d", len(decoded.Data))
}
meta, ok := decoded.Data[0]["meta"].(map[string]interface{})
if !ok {
t.Fatalf("meta 字段类型异常: %T", decoded.Data[0]["meta"])
}
if meta["k"] != "v" {
t.Fatalf("字符串 key 转换异常: %v", meta["k"])
}
if meta["2"] != "two" {
t.Fatalf("数字 key 未字符串化: %v", meta["2"])
}
}
func TestNormalizeAgentResponseData_KeepByteSlice(t *testing.T) {
raw := []byte{0x61, 0x62, 0x63}
normalized := normalizeAgentResponseData(raw)
out, ok := normalized.([]byte)
if !ok {
t.Fatalf("期望 []byte实际 %T", normalized)
}
if !bytes.Equal(out, raw) {
t.Fatalf("[]byte 内容被意外改写: %v", out)
}
}
type fakeAgentTimeoutDB struct {
queryCalled bool
queryContextCalled bool
execCalled bool
execContextCalled bool
deadlineSet bool
}
func (f *fakeAgentTimeoutDB) Connect(config connection.ConnectionConfig) error { return nil }
func (f *fakeAgentTimeoutDB) Close() error { return nil }
func (f *fakeAgentTimeoutDB) Ping() error { return nil }
func (f *fakeAgentTimeoutDB) Query(query string) ([]map[string]interface{}, []string, error) {
f.queryCalled = true
return nil, nil, errors.New("query should not be called")
}
func (f *fakeAgentTimeoutDB) QueryContext(ctx context.Context, query string) ([]map[string]interface{}, []string, error) {
f.queryContextCalled = true
if _, ok := ctx.Deadline(); ok {
f.deadlineSet = true
}
return []map[string]interface{}{{"ok": 1}}, []string{"ok"}, nil
}
func (f *fakeAgentTimeoutDB) Exec(query string) (int64, error) {
f.execCalled = true
return 0, errors.New("exec should not be called")
}
func (f *fakeAgentTimeoutDB) ExecContext(ctx context.Context, query string) (int64, error) {
f.execContextCalled = true
if _, ok := ctx.Deadline(); ok {
f.deadlineSet = true
}
return 3, nil
}
func (f *fakeAgentTimeoutDB) GetDatabases() ([]string, error) { return nil, nil }
func (f *fakeAgentTimeoutDB) GetTables(dbName string) ([]string, error) {
return nil, nil
}
func (f *fakeAgentTimeoutDB) GetCreateStatement(dbName, tableName string) (string, error) {
return "", nil
}
func (f *fakeAgentTimeoutDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
return nil, nil
}
func (f *fakeAgentTimeoutDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
return nil, nil
}
func (f *fakeAgentTimeoutDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
return nil, nil
}
func (f *fakeAgentTimeoutDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return nil, nil
}
func (f *fakeAgentTimeoutDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return nil, nil
}
func TestQueryWithOptionalTimeout_UsesQueryContext(t *testing.T) {
fake := &fakeAgentTimeoutDB{}
data, fields, err := queryWithOptionalTimeout(fake, "SELECT 1", int64((2 * time.Second).Milliseconds()))
if err != nil {
t.Fatalf("queryWithOptionalTimeout returned an error: %v", err)
}
if !fake.queryContextCalled || fake.queryCalled {
t.Fatalf("unexpected query call path: QueryContext=%v Query=%v", fake.queryContextCalled, fake.queryCalled)
}
if !fake.deadlineSet {
t.Fatal("queryWithOptionalTimeout did not set a deadline")
}
if len(data) != 1 || len(fields) != 1 || fields[0] != "ok" {
t.Fatalf("queryWithOptionalTimeout returned unexpected data: data=%v fields=%v", data, fields)
}
}
func TestExecWithOptionalTimeout_UsesExecContext(t *testing.T) {
fake := &fakeAgentTimeoutDB{}
affected, err := execWithOptionalTimeout(fake, "DELETE FROM t", int64((2 * time.Second).Milliseconds()))
if err != nil {
t.Fatalf("execWithOptionalTimeout returned an error: %v", err)
}
if !fake.execContextCalled || fake.execCalled {
t.Fatalf("unexpected exec call path: ExecContext=%v Exec=%v", fake.execContextCalled, fake.execCalled)
}
if !fake.deadlineSet {
t.Fatal("execWithOptionalTimeout did not set a deadline")
}
if affected != 3 {
t.Fatalf("unexpected affected rows: want=3 got=%d", affected)
}
}
func TestQueryWithOptionalTimeout_ClickHouseLegacyModeUsesQueryContext(t *testing.T) {
old := agentDriverType
agentDriverType = "clickhouse"
defer func() { agentDriverType = old }()
fake := &fakeAgentTimeoutDB{}
_, _, err := queryWithOptionalTimeout(fake, "SELECT 1", 0)
if err != nil {
t.Fatalf("queryWithOptionalTimeout returned an error: %v", err)
}
if !fake.queryContextCalled || fake.queryCalled {
t.Fatalf("unexpected clickhouse legacy query call path: QueryContext=%v Query=%v", fake.queryContextCalled, fake.queryCalled)
}
}

View File

@@ -0,0 +1,12 @@
//go:build gonavi_clickhouse_driver

package main

import "GoNavi-Wails/internal/db"

func init() {
agentDriverType = "clickhouse"
agentDatabaseFactory = func() db.Database {
return &db.ClickHouseDB{}
}
}

View File

@@ -0,0 +1,12 @@
//go:build gonavi_mongodb_driver_v1

package main

import "GoNavi-Wails/internal/db"

func init() {
agentDriverType = "mongodb"
agentDatabaseFactory = func() db.Database {
return &db.MongoDBV1{}
}
}

View File

@@ -7,11 +7,11 @@
"checksumPolicy": "off",
"downloadUrl": "builtin://activate/mariadb"
},
"diros": {
"doris": {
"engine": "go",
"version": "1.9.3",
"checksumPolicy": "off",
"downloadUrl": "builtin://activate/diros"
"downloadUrl": "builtin://activate/doris"
},
"sphinx": {
"engine": "go",
@@ -33,7 +33,7 @@
},
"duckdb": {
"engine": "go",
"version": "2.5.5",
"version": "2.5.6",
"checksumPolicy": "off",
"downloadUrl": "builtin://activate/duckdb"
},
@@ -73,6 +73,12 @@
"checksumPolicy": "off",
"downloadUrl": "builtin://activate/tdengine"
},
"clickhouse": {
"engine": "go",
"version": "2.43.1",
"checksumPolicy": "off",
"downloadUrl": "builtin://activate/clickhouse"
},
"postgres": {
"engine": "go",
"version": "1.11.1",

View File

@@ -1 +1 @@
5b8157374dae5f9340e31b2d0bd2c00e
d0f9366af59a6367ad3c7e2d4185ead4

View File

@@ -57,6 +57,29 @@ body[data-theme='dark'] ::-webkit-scrollbar-thumb:hover {
background: #666;
}
/* Scrollbar styling for light mode (transparent-friendly) */
body[data-theme='light'] ::-webkit-scrollbar {
width: 10px;
height: 10px;
}
body[data-theme='light'] ::-webkit-scrollbar-track {
background: transparent;
}
body[data-theme='light'] ::-webkit-scrollbar-corner {
background: transparent;
}
body[data-theme='light'] ::-webkit-scrollbar-thumb {
background: rgba(0, 0, 0, 0.18);
border-radius: 4px;
border: 2px solid transparent;
background-clip: content-box;
}
body[data-theme='light'] ::-webkit-scrollbar-thumb:hover {
background: rgba(0, 0, 0, 0.30);
border: 2px solid transparent;
background-clip: content-box;
}
/* Ensure body background matches theme to avoid white flashes, but kept transparent for window composition */
body {
transition: color 0.3s;
@@ -67,6 +90,51 @@ body[data-theme='dark'] {
significantly increases GPU load in a transparent-window environment */
}
/* Dark + transparent: improve selected/focus readability; avoid the default blue turning gray on a translucent background */
body[data-theme='dark'] .ant-tree .ant-tree-node-content-wrapper.ant-tree-node-selected,
body[data-theme='dark'] .ant-tree .ant-tree-node-content-wrapper.ant-tree-node-selected:hover {
background: rgba(246, 196, 83, 0.24) !important;
color: rgba(255, 236, 179, 0.98) !important;
}
body[data-theme='dark'] .ant-checkbox-checked .ant-checkbox-inner {
background-color: #f6c453 !important;
border-color: #f6c453 !important;
}
body[data-theme='dark'] .ant-checkbox-indeterminate .ant-checkbox-inner::after {
background-color: #f6c453 !important;
}
body[data-theme='dark'] .ant-checkbox:hover .ant-checkbox-inner,
body[data-theme='dark'] .ant-checkbox-wrapper:hover .ant-checkbox-inner {
border-color: #f6c453 !important;
}
body[data-theme='dark'] .ant-radio-checked .ant-radio-inner {
border-color: #f6c453 !important;
background-color: #f6c453 !important;
}
body[data-theme='dark'] .ant-radio-wrapper:hover .ant-radio-inner,
body[data-theme='dark'] .ant-radio:hover .ant-radio-inner {
border-color: #f6c453 !important;
}
body[data-theme='dark'] .ant-switch.ant-switch-checked {
background: #d8a93b !important;
}
body[data-theme='dark'] .ant-table-tbody > tr.ant-table-row-selected > td,
body[data-theme='dark'] .ant-table-tbody .ant-table-row.ant-table-row-selected > .ant-table-cell {
background: rgba(246, 196, 83, 0.18) !important;
}
body[data-theme='dark'] .ant-table-tbody > tr.ant-table-row-selected:hover > td,
body[data-theme='dark'] .ant-table-tbody .ant-table-row.ant-table-row-selected:hover > .ant-table-cell {
background: rgba(246, 196, 83, 0.26) !important;
}
/* Connection config modal: scroll only inside the modal body; do not use the outer wrap scrollbar */
.connection-modal-wrap {
overflow: hidden !important;
@@ -92,3 +160,53 @@ body[data-theme='dark'] {
background-color: #ff4d4f !important;
color: #fff !important;
}
/* Driver manager: disable antd's sticky horizontal bar everywhere; keep only the custom standalone horizontal bar */
.driver-manager-table .ant-table-sticky-scroll {
display: none !important;
}
/* Hide the table's own horizontal scrollbar only while the standalone bar is active, to avoid double horizontal bars */
.driver-manager-table-wrap.driver-manager-table-wrap-external-active .driver-manager-table .ant-table-content,
.driver-manager-table-wrap.driver-manager-table-wrap-external-active .driver-manager-table .ant-table-body {
overflow-x: auto !important;
-ms-overflow-style: none;
scrollbar-width: none;
}
.driver-manager-table-wrap.driver-manager-table-wrap-external-active .driver-manager-table .ant-table-content::-webkit-scrollbar:horizontal,
.driver-manager-table-wrap.driver-manager-table-wrap-external-active .driver-manager-table .ant-table-body::-webkit-scrollbar:horizontal {
height: 0 !important;
}
.driver-manager-table-wrap {
width: 100%;
max-width: 100%;
overflow-x: hidden;
}
.driver-manager-footer {
width: 100%;
display: flex;
flex-direction: column;
gap: 8px;
}
.driver-manager-footer-actions {
width: 100%;
display: flex;
justify-content: flex-end;
}
.driver-manager-hscroll {
width: 100%;
height: 12px;
overflow-x: auto;
overflow-y: hidden;
scrollbar-gutter: stable;
background: transparent;
}
.driver-manager-hscroll-inner {
height: 1px;
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
import React, { useState, useEffect, useRef } from 'react';
import React, { useState, useEffect, useMemo, useRef } from 'react';
import { Modal, Form, Select, Button, message, Steps, Transfer, Card, Alert, Divider, Typography, Progress, Checkbox, Table, Drawer, Tabs } from 'antd';
import { useStore } from '../store';
import { DBGetDatabases, DBGetTables, DataSync, DataSyncAnalyze, DataSyncPreview } from '../../wailsjs/go/app/App';
@@ -31,6 +31,118 @@ type TableOps = {
selectedDeletePks?: string[];
};
const quoteSqlIdent = (dbType: string, ident: string): string => {
const raw = String(ident || '').trim();
if (!raw) return raw;
const t = String(dbType || '').toLowerCase();
if (t === 'mysql' || t === 'mariadb' || t === 'diros' || t === 'sphinx' || t === 'clickhouse' || t === 'tdengine') {
return `\`${raw.replace(/`/g, '``')}\``;
}
if (t === 'sqlserver') {
return `[${raw.replace(/]/g, ']]')}]`;
}
return `"${raw.replace(/"/g, '""')}"`;
};
const quoteSqlTable = (dbType: string, tableName: string): string => {
const raw = String(tableName || '').trim();
if (!raw) return raw;
if (!raw.includes('.')) return quoteSqlIdent(dbType, raw);
return raw
.split('.')
.map((part) => quoteSqlIdent(dbType, part))
.join('.');
};
const toSqlLiteral = (value: any, dbType: string): string => {
if (value === null || value === undefined) return 'NULL';
if (typeof value === 'number') return Number.isFinite(value) ? String(value) : 'NULL';
if (typeof value === 'bigint') return value.toString();
if (typeof value === 'boolean') {
const t = String(dbType || '').toLowerCase();
if (t === 'sqlserver') return value ? '1' : '0';
return value ? 'TRUE' : 'FALSE';
}
if (value instanceof Date) {
return `'${value.toISOString().replace(/'/g, "''")}'`;
}
if (typeof value === 'object') {
try {
return `'${JSON.stringify(value).replace(/'/g, "''")}'`;
} catch {
return `'${String(value).replace(/'/g, "''")}'`;
}
}
return `'${String(value).replace(/'/g, "''")}'`;
};
const buildSqlPreview = (
previewData: any,
tableName: string,
dbType: string,
ops?: TableOps,
): { sqlText: string; statementCount: number } => {
if (!previewData || !tableName) return { sqlText: '', statementCount: 0 };
const tableExpr = quoteSqlTable(dbType, tableName);
const pkCol = String(previewData.pkColumn || 'id');
const statements: string[] = [];
const insertRows = Array.isArray(previewData.inserts) ? previewData.inserts : [];
const updateRows = Array.isArray(previewData.updates) ? previewData.updates : [];
const deleteRows = Array.isArray(previewData.deletes) ? previewData.deletes : [];
const selectedInsert = new Set((ops?.selectedInsertPks || []).map((v) => String(v)));
const selectedUpdate = new Set((ops?.selectedUpdatePks || []).map((v) => String(v)));
const selectedDelete = new Set((ops?.selectedDeletePks || []).map((v) => String(v)));
if (ops?.insert !== false) {
insertRows.forEach((rowWrap: any) => {
const pk = String(rowWrap?.pk ?? '');
if (selectedInsert.size > 0 && !selectedInsert.has(pk)) return;
const row = rowWrap?.row || {};
const columns = Object.keys(row);
if (columns.length === 0) return;
const colExpr = columns.map((c) => quoteSqlIdent(dbType, c)).join(', ');
const valExpr = columns.map((c) => toSqlLiteral(row[c], dbType)).join(', ');
statements.push(`INSERT INTO ${tableExpr} (${colExpr}) VALUES (${valExpr});`);
});
}
if (ops?.update !== false) {
updateRows.forEach((rowWrap: any) => {
const pk = String(rowWrap?.pk ?? '');
if (selectedUpdate.size > 0 && !selectedUpdate.has(pk)) return;
const source = rowWrap?.source || {};
const changedColumns = Array.isArray(rowWrap?.changedColumns)
? rowWrap.changedColumns
: Object.keys(source).filter((k) => k !== pkCol);
const setCols = changedColumns.filter((c: string) => String(c) !== pkCol);
if (setCols.length === 0) return;
const setExpr = setCols
.map((c: string) => `${quoteSqlIdent(dbType, c)} = ${toSqlLiteral(source[c], dbType)}`)
.join(', ');
statements.push(
`UPDATE ${tableExpr} SET ${setExpr} WHERE ${quoteSqlIdent(dbType, pkCol)} = ${toSqlLiteral(pk, dbType)};`,
);
});
}
if (ops?.delete) {
deleteRows.forEach((rowWrap: any) => {
const pk = String(rowWrap?.pk ?? '');
if (selectedDelete.size > 0 && !selectedDelete.has(pk)) return;
statements.push(
`DELETE FROM ${tableExpr} WHERE ${quoteSqlIdent(dbType, pkCol)} = ${toSqlLiteral(pk, dbType)};`,
);
});
}
return {
sqlText: statements.join('\n'),
statementCount: statements.length,
};
};
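For reference, the dialect dispatch in `quoteSqlIdent` above can be mirrored in Go: backticks for the MySQL family, square brackets for SQL Server, double quotes for everything else, each doubling the quote character inside the identifier. This is an illustrative sketch, not backend code from this repository; note that the frontend list spells the Doris key as `diros`, while this sketch uses `doris` (the driver manifest elsewhere in this diff renames the key the same way).

```go
package main

import (
	"fmt"
	"strings"
)

// quoteSQLIdent mirrors the frontend's quoteSqlIdent quoting rules.
func quoteSQLIdent(dbType, ident string) string {
	raw := strings.TrimSpace(ident)
	if raw == "" {
		return raw
	}
	switch strings.ToLower(dbType) {
	case "mysql", "mariadb", "doris", "sphinx", "clickhouse", "tdengine":
		// Backtick-quoted; escape embedded backticks by doubling them.
		return "`" + strings.ReplaceAll(raw, "`", "``") + "`"
	case "sqlserver":
		// Bracket-quoted; only the closing bracket needs doubling.
		return "[" + strings.ReplaceAll(raw, "]", "]]") + "]"
	default:
		// ANSI double quotes for PostgreSQL, Oracle, DuckDB, Kingbase, etc.
		return `"` + strings.ReplaceAll(raw, `"`, `""`) + `"`
	}
}

func main() {
	fmt.Println(quoteSQLIdent("mysql", "order"))     // `order`
	fmt.Println(quoteSQLIdent("sqlserver", "a]b"))   // [a]]b]
	fmt.Println(quoteSQLIdent("postgres", `say"hi`)) // "say""hi"
}
```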
const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open, onClose }) => {
const connections = useStore((state) => state.connections);
const [currentStep, setCurrentStep] = useState(0);
@@ -152,32 +264,38 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
setSourceConnId(connId);
setSourceDb('');
const conn = connections.find(c => c.id === connId);
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
setSourceDbs((res.data as any[]).map((r: any) => r.Database || r.database || r.username));
}
} catch(e) { message.error("Failed to fetch source databases"); }
setLoading(false);
}
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
const dbRows = Array.isArray(res.data) ? res.data : [];
setSourceDbs(dbRows
.map((r: any) => r?.Database || r?.database || r?.username)
.filter((name: any) => typeof name === 'string' && name.trim() !== ''));
}
} catch(e) { message.error("Failed to fetch source databases"); }
setLoading(false);
}
};
const handleTargetConnChange = async (connId: string) => {
setTargetConnId(connId);
setTargetDb('');
const conn = connections.find(c => c.id === connId);
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
setTargetDbs((res.data as any[]).map((r: any) => r.Database || r.database || r.username));
}
} catch(e) { message.error("Failed to fetch target databases"); }
setLoading(false);
}
if (conn) {
setLoading(true);
try {
const res = await DBGetDatabases(normalizeConnConfig(conn) as any);
if (res.success) {
const dbRows = Array.isArray(res.data) ? res.data : [];
setTargetDbs(dbRows
.map((r: any) => r?.Database || r?.database || r?.username)
.filter((name: any) => typeof name === 'string' && name.trim() !== ''));
}
} catch(e) { message.error("Failed to fetch target databases"); }
setLoading(false);
}
};
const nextToTables = async () => {
@@ -189,14 +307,17 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
try {
const conn = connections.find(c => c.id === sourceConnId);
if (conn) {
const config = normalizeConnConfig(conn, sourceDb);
const res = await DBGetTables(config as any, sourceDb);
if (res.success) {
// DBGetTables returns [{Table: "name"}, ...]
const tables = (res.data as any[]).map((row: any) => row.Table || row.table || row.TABLE_NAME || Object.values(row)[0]);
setAllTables(tables as string[]);
setCurrentStep(1);
} else {
const config = normalizeConnConfig(conn, sourceDb);
const res = await DBGetTables(config as any, sourceDb);
if (res.success) {
// DBGetTables returns [{Table: "name"}, ...]
const tableRows = Array.isArray(res.data) ? res.data : [];
const tables = tableRows
.map((row: any) => row?.Table || row?.table || row?.TABLE_NAME || Object.values(row || {})[0])
.filter((name: any) => typeof name === 'string' && name.trim() !== '');
setAllTables(tables as string[]);
setCurrentStep(1);
} else {
message.error(res.message);
}
}
@@ -402,6 +523,13 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
);
};
const previewSql = useMemo(() => {
if (!previewData || !previewTable) return { sqlText: '', statementCount: 0 };
const targetType = String(connections.find(c => c.id === targetConnId)?.config?.type || '');
const ops = tableOptions[previewTable] || { insert: true, update: true, delete: false };
return buildSqlPreview(previewData, previewTable, targetType, ops);
}, [previewData, previewTable, targetConnId, connections, tableOptions]);
return (
<>
<Modal
@@ -794,6 +922,51 @@ const DataSyncModal: React.FC<{ open: boolean; onClose: () => void }> = ({ open,
/>
</div>
)
},
{
key: 'sql',
label: `SQL(${previewSql.statementCount})`,
children: (
<div>
<Alert
type="info"
showIcon
message="The SQL preview is generated from the currently checked insert/update/delete options and row selection, for review and confirmation."
/>
<div style={{ marginTop: 8, marginBottom: 8, display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
<Text type="secondary">{previewSql.statementCount} statements in total (preview capped at 200 per type)</Text>
<Button
size="small"
disabled={!previewSql.sqlText}
onClick={async () => {
try {
await navigator.clipboard.writeText(previewSql.sqlText || '');
message.success('SQL copied');
} catch {
message.error('Copy failed; please copy manually');
}
}}
>
Copy SQL
</Button>
</div>
<pre
style={{
margin: 0,
padding: 10,
border: '1px solid #f0f0f0',
borderRadius: 6,
background: '#fafafa',
maxHeight: 420,
overflow: 'auto',
whiteSpace: 'pre-wrap',
wordBreak: 'break-word'
}}
>
{previewSql.sqlText || '-- No SQL to preview for the current selection'}
</pre>
</div>
)
}
]}
/>

View File

@@ -1,10 +1,187 @@
import React, { useEffect, useState, useCallback, useRef } from 'react';
import React, { useEffect, useState, useCallback, useRef, useMemo } from 'react';
import { message } from 'antd';
import { TabData, ColumnDefinition } from '../types';
import { useStore } from '../store';
import { DBQuery, DBGetColumns } from '../../wailsjs/go/app/App';
import DataGrid, { GONAVI_ROW_KEY } from './DataGrid';
import { buildOrderBySQL, buildWhereSQL, quoteQualifiedIdent, withSortBufferTuningSQL, type FilterCondition } from '../utils/sql';
import { buildOrderBySQL, buildWhereSQL, quoteIdentPart, quoteQualifiedIdent, withSortBufferTuningSQL, type FilterCondition } from '../utils/sql';
import { buildMongoCountCommand, buildMongoFilter, buildMongoFindCommand, buildMongoSort } from '../utils/mongodb';
import { getDataSourceCapabilities } from '../utils/dataSourceCapabilities';
type ViewerPaginationState = {
current: number;
pageSize: number;
total: number;
totalKnown: boolean;
totalApprox: boolean;
totalCountLoading: boolean;
totalCountCancelled: boolean;
};
const JS_MAX_SAFE_INTEGER_BIGINT = BigInt(Number.MAX_SAFE_INTEGER);
const isIntegerText = (text: string): boolean => /^[+-]?\d+$/.test(text);
const toNonNegativeFiniteNumber = (value: unknown): number | null => {
if (typeof value === 'number') {
return Number.isFinite(value) && value >= 0 && value <= Number.MAX_SAFE_INTEGER ? value : null;
}
if (typeof value === 'bigint') {
return value >= 0n && value <= JS_MAX_SAFE_INTEGER_BIGINT ? Number(value) : null;
}
if (typeof value === 'string') {
const text = value.trim();
if (!text) return null;
if (isIntegerText(text)) {
try {
const parsedBigInt = BigInt(text);
if (parsedBigInt < 0n || parsedBigInt > JS_MAX_SAFE_INTEGER_BIGINT) {
return null;
}
return Number(parsedBigInt);
} catch {
return null;
}
}
const parsed = Number(text);
return Number.isFinite(parsed) && parsed >= 0 && parsed <= Number.MAX_SAFE_INTEGER ? parsed : null;
}
return null;
};
const parseTotalFromCountRow = (row: any): number | null => {
if (!row || typeof row !== 'object') return null;
const entries = Object.entries(row as Record<string, unknown>);
if (entries.length === 0) return null;
for (const [key, raw] of entries) {
const normalized = String(key || '').trim().toLowerCase();
if (normalized === 'total' || normalized === 'count' || normalized.includes('count')) {
const parsed = toNonNegativeFiniteNumber(raw);
if (parsed !== null) return parsed;
}
}
for (const [, raw] of entries) {
const parsed = toNonNegativeFiniteNumber(raw);
if (parsed !== null) return parsed;
}
return null;
};
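`parseTotalFromCountRow` above is deliberately tolerant: different drivers name the COUNT column differently (`total`, `COUNT(*)`, `count(1)`, …), so it first scans for count-like keys and only then falls back to any parseable non-negative value. A Go sketch of the same two-pass strategy (illustrative only, not shared code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTotal prefers keys that look like a count, then falls back to any
// value that parses as a non-negative integer, like parseTotalFromCountRow.
func parseTotal(row map[string]any) (int64, bool) {
	get := func(raw any) (int64, bool) {
		switch v := raw.(type) {
		case int64:
			if v >= 0 {
				return v, true
			}
		case float64:
			if v >= 0 && v == float64(int64(v)) {
				return int64(v), true
			}
		case string:
			if n, err := strconv.ParseInt(strings.TrimSpace(v), 10, 64); err == nil && n >= 0 {
				return n, true
			}
		}
		return 0, false
	}
	// Pass 1: keys that look like a count column.
	for key, raw := range row {
		k := strings.ToLower(strings.TrimSpace(key))
		if k == "total" || strings.Contains(k, "count") {
			if n, ok := get(raw); ok {
				return n, true
			}
		}
	}
	// Pass 2: any parseable non-negative value.
	for _, raw := range row {
		if n, ok := get(raw); ok {
			return n, true
		}
	}
	return 0, false
}

func main() {
	n, ok := parseTotal(map[string]any{"COUNT(*)": "42"})
	fmt.Println(n, ok) // 42 true
}
```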
const parseDuckDBApproxTotalRow = (row: any): number | null => {
if (!row || typeof row !== 'object') return null;
const entries = Object.entries(row as Record<string, unknown>);
if (entries.length === 0) return null;
const preferredKeys = ['approx_total', 'estimated_size', 'estimated_rows', 'row_count', 'count', 'total'];
for (const preferred of preferredKeys) {
for (const [key, raw] of entries) {
if (String(key || '').trim().toLowerCase() !== preferred) continue;
const parsed = toNonNegativeFiniteNumber(raw);
if (parsed !== null) return parsed;
}
}
for (const [key, raw] of entries) {
const normalized = String(key || '').trim().toLowerCase();
if (normalized.includes('estimate') || normalized.includes('row') || normalized.includes('count') || normalized.includes('total')) {
const parsed = toNonNegativeFiniteNumber(raw);
if (parsed !== null) return parsed;
}
}
return null;
};
const normalizeDuckDBIdentifier = (raw: string): string => {
const text = String(raw || '').trim();
if (text.length >= 2) {
const first = text[0];
const last = text[text.length - 1];
if ((first === '"' && last === '"') || (first === '`' && last === '`')) {
return text.slice(1, -1).trim();
}
}
return text;
};
const resolveDuckDBSchemaAndTable = (dbName: string, tableName: string) => {
const rawTable = String(tableName || '').trim();
if (!rawTable) return { schemaName: 'main', pureTableName: '' };
const parts = rawTable.split('.');
if (parts.length >= 2) {
const pureTableName = normalizeDuckDBIdentifier(parts[parts.length - 1]);
const schemaName = normalizeDuckDBIdentifier(parts[parts.length - 2]);
if (schemaName && pureTableName) {
return { schemaName, pureTableName };
}
}
const fallbackSchema = normalizeDuckDBIdentifier(String(dbName || '').trim()) || 'main';
return { schemaName: fallbackSchema, pureTableName: normalizeDuckDBIdentifier(rawTable) };
};
const escapeSQLLiteral = (value: string): string => String(value || '').replace(/'/g, "''");
const isDuckDBUnsupportedTypeError = (msg: string): boolean => /unsupported\s*type:\s*duckdb\./i.test(String(msg || ''));
const isDuckDBComplexColumnType = (columnType?: string): boolean => {
const raw = String(columnType || '').trim().toLowerCase();
if (!raw) return false;
return raw.includes('map') || raw.includes('struct') || raw.includes('union') || raw.includes('array') || raw.includes('list');
};
const reverseOrderBySQL = (orderBySQL: string): string => {
const raw = String(orderBySQL || '').trim();
if (!raw) return '';
const body = raw.replace(/^order\s+by\s+/i, '').trim();
if (!body) return '';
const parts = body
.split(',')
.map((part) => part.trim())
.filter(Boolean)
.map((part) => {
if (/\s+asc$/i.test(part)) return part.replace(/\s+asc$/i, ' DESC');
if (/\s+desc$/i.test(part)) return part.replace(/\s+desc$/i, ' ASC');
return `${part} DESC`;
});
if (parts.length === 0) return '';
return ` ORDER BY ${parts.join(', ')}`;
};
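`reverseOrderBySQL` above flips each sort term so the same page can be fetched from the tail of the result set. The same logic in Go (an illustrative sketch; like the TypeScript version, the naive comma split would mis-handle function expressions containing commas in the ORDER BY):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	orderByPrefix = regexp.MustCompile(`(?i)^order\s+by\s+`)
	ascSuffix     = regexp.MustCompile(`(?i)\s+asc$`)
	descSuffix    = regexp.MustCompile(`(?i)\s+desc$`)
)

// reverseOrderBy flips each sort direction, defaulting unmarked columns to
// DESC, mirroring the frontend's reverseOrderBySQL helper.
func reverseOrderBy(orderBySQL string) string {
	body := orderByPrefix.ReplaceAllString(strings.TrimSpace(orderBySQL), "")
	if body == "" {
		return ""
	}
	var parts []string
	for _, part := range strings.Split(body, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		switch {
		case ascSuffix.MatchString(part):
			part = ascSuffix.ReplaceAllString(part, " DESC")
		case descSuffix.MatchString(part):
			part = descSuffix.ReplaceAllString(part, " ASC")
		default:
			part += " DESC"
		}
		parts = append(parts, part)
	}
	if len(parts) == 0 {
		return ""
	}
	return " ORDER BY " + strings.Join(parts, ", ")
}

func main() {
	fmt.Println(reverseOrderBy("ORDER BY id ASC, name DESC, ts"))
}
```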
type ViewerFilterSnapshot = {
showFilter: boolean;
conditions: FilterCondition[];
};
const viewerFilterSnapshotsByTab = new Map<string, ViewerFilterSnapshot>();
const normalizeViewerFilterConditions = (conditions: FilterCondition[] | undefined): FilterCondition[] => {
if (!Array.isArray(conditions)) return [];
return conditions.map((cond) => ({
id: Number.isFinite(Number(cond?.id)) ? Number(cond?.id) : undefined,
enabled: cond?.enabled !== false,
logic: String(cond?.logic || '').trim().toUpperCase() === 'OR' ? 'OR' : 'AND',
column: String(cond?.column || ''),
op: String(cond?.op || '='),
value: String(cond?.value ?? ''),
value2: String(cond?.value2 ?? ''),
}));
};
const getViewerFilterSnapshot = (tabId: string): ViewerFilterSnapshot => {
const cached = viewerFilterSnapshotsByTab.get(String(tabId || '').trim());
if (!cached) {
return { showFilter: false, conditions: [] };
}
return {
showFilter: cached.showFilter === true,
conditions: normalizeViewerFilterConditions(cached.conditions),
};
};
const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
const [data, setData] = useState<any[]>([]);
@@ -16,30 +193,148 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
const fetchSeqRef = useRef(0);
const countSeqRef = useRef(0);
const countKeyRef = useRef<string>('');
const duckdbApproxSeqRef = useRef(0);
const duckdbApproxKeyRef = useRef<string>('');
const manualCountSeqRef = useRef(0);
const manualCountKeyRef = useRef<string>('');
const pkSeqRef = useRef(0);
const pkKeyRef = useRef<string>('');
const latestConfigRef = useRef<any>(null);
const latestDbTypeRef = useRef<string>('');
const latestDbNameRef = useRef<string>('');
const latestCountSqlRef = useRef<string>('');
const latestCountKeyRef = useRef<string>('');
const [pagination, setPagination] = useState({
const [pagination, setPagination] = useState<ViewerPaginationState>({
current: 1,
pageSize: 100,
total: 0,
totalKnown: false
totalKnown: false,
totalApprox: false,
totalCountLoading: false,
totalCountCancelled: false,
});
const [sortInfo, setSortInfo] = useState<{ columnKey: string, order: string } | null>(null);
const [showFilter, setShowFilter] = useState(false);
const [filterConditions, setFilterConditions] = useState<FilterCondition[]>([]);
const currentConnType = (connections.find(c => c.id === tab.connectionId)?.config?.type || '').toLowerCase();
const forceReadOnly = currentConnType === 'tdengine';
const [showFilter, setShowFilter] = useState<boolean>(() => getViewerFilterSnapshot(tab.id).showFilter);
const [filterConditions, setFilterConditions] = useState<FilterCondition[]>(() => getViewerFilterSnapshot(tab.id).conditions);
const duckdbSafeSelectCacheRef = useRef<Record<string, string>>({});
const currentConnConfig = connections.find(c => c.id === tab.connectionId)?.config;
const currentConnCaps = getDataSourceCapabilities(currentConnConfig);
const currentConnType = currentConnCaps.type;
const forceReadOnly = currentConnCaps.forceReadOnlyQueryResult;
useEffect(() => {
const snapshot = getViewerFilterSnapshot(tab.id);
setShowFilter(snapshot.showFilter);
setFilterConditions(snapshot.conditions);
}, [tab.id]);
useEffect(() => {
viewerFilterSnapshotsByTab.set(tab.id, {
showFilter,
conditions: normalizeViewerFilterConditions(filterConditions),
});
}, [tab.id, showFilter, filterConditions]);
useEffect(() => {
setPkColumns([]);
pkKeyRef.current = '';
countKeyRef.current = '';
setPagination(prev => ({ ...prev, current: 1, total: 0, totalKnown: false }));
duckdbApproxKeyRef.current = '';
manualCountKeyRef.current = '';
duckdbSafeSelectCacheRef.current = {};
latestConfigRef.current = null;
latestDbTypeRef.current = '';
latestDbNameRef.current = '';
latestCountSqlRef.current = '';
latestCountKeyRef.current = '';
setPagination(prev => ({
...prev,
current: 1,
total: 0,
totalKnown: false,
totalApprox: false,
totalCountLoading: false,
totalCountCancelled: false,
}));
}, [tab.connectionId, tab.dbName, tab.tableName]);
const handleDuckDBManualCount = useCallback(async () => {
if (latestDbTypeRef.current !== 'duckdb') {
return;
}
const config = latestConfigRef.current;
const dbName = latestDbNameRef.current;
const countSql = latestCountSqlRef.current;
const countKey = latestCountKeyRef.current;
if (!config || !countSql || !countKey) {
message.warning('The current result set is not ready yet; run a load first');
return;
}
manualCountKeyRef.current = countKey;
const countSeq = ++manualCountSeqRef.current;
const countStart = Date.now();
setPagination(prev => ({ ...prev, totalCountLoading: true, totalCountCancelled: false }));
const countConfig: any = { ...(config as any), timeout: 120 };
try {
const resCount = await DBQuery(countConfig as any, dbName, countSql);
const countDuration = Date.now() - countStart;
addSqlLog({
id: `log-${Date.now()}-duckdb-manual-count`,
timestamp: Date.now(),
sql: countSql,
status: resCount?.success ? 'success' : 'error',
duration: countDuration,
message: resCount?.success ? '' : String(resCount?.message || 'count failed'),
dbName
});
if (manualCountSeqRef.current !== countSeq) return;
if (manualCountKeyRef.current !== countKey) return;
if (!resCount?.success) {
setPagination(prev => ({ ...prev, totalCountLoading: false }));
message.error(String(resCount?.message || 'Failed to count total rows'));
return;
}
if (!Array.isArray(resCount.data) || resCount.data.length === 0) {
setPagination(prev => ({ ...prev, totalCountLoading: false }));
return;
}
const total = parseTotalFromCountRow(resCount.data[0]);
if (total === null) {
setPagination(prev => ({ ...prev, totalCountLoading: false }));
message.error('Failed to parse count result');
return;
}
setPagination(prev => ({
...prev,
total,
totalKnown: true,
totalApprox: false,
totalCountLoading: false,
totalCountCancelled: false,
}));
} catch (e: any) {
if (manualCountSeqRef.current !== countSeq) return;
if (manualCountKeyRef.current !== countKey) return;
setPagination(prev => ({ ...prev, totalCountLoading: false }));
message.error(`Failed to count total rows: ${String(e?.message || e)}`);
}
}, [addSqlLog]);
const handleDuckDBCancelManualCount = useCallback(() => {
manualCountSeqRef.current++;
setPagination(prev => ({ ...prev, totalCountLoading: false, totalCountCancelled: true }));
}, []);
const fetchData = useCallback(async (page = pagination.current, size = pagination.pageSize) => {
const seq = ++fetchSeqRef.current;
setLoading(true);
@@ -65,40 +360,142 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
const dbName = tab.dbName || '';
const tableName = tab.tableName || '';
const isMongoDB = dbTypeLower === 'mongodb';
let mongoFilter: Record<string, unknown> | undefined;
if (isMongoDB) {
try {
mongoFilter = buildMongoFilter(filterConditions);
} catch (e: any) {
message.error(`Invalid Mongo filter: ${String(e?.message || e || 'parse failed')}`);
if (fetchSeqRef.current === seq) setLoading(false);
return;
}
}
const whereSQL = buildWhereSQL(dbType, filterConditions);
const countSql = `SELECT COUNT(*) as total FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
let sql = `SELECT * FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
sql += buildOrderBySQL(dbType, sortInfo, pkColumns);
const offset = (page - 1) * size;
// Large-table performance: do not block table open on COUNT(*); fetch one extra row to detect a next page, and fill in the total asynchronously from a background count.
sql += ` LIMIT ${size + 1} OFFSET ${offset}`;
const whereSQL = isMongoDB
? JSON.stringify(mongoFilter || {})
: buildWhereSQL(dbType, filterConditions);
const countSql = isMongoDB
? buildMongoCountCommand(tableName, mongoFilter || {})
: `SELECT COUNT(*) as total FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
const orderBySQL = isMongoDB ? '' : buildOrderBySQL(dbType, sortInfo, pkColumns);
const totalRows = Number(pagination.total);
const hasFiniteTotal = Number.isFinite(totalRows) && totalRows >= 0;
const totalKnown = pagination.totalKnown && hasFiniteTotal;
const totalPages = hasFiniteTotal ? Math.max(1, Math.ceil(totalRows / size)) : 0;
const currentPage = totalPages > 0 ? Math.min(Math.max(1, page), totalPages) : Math.max(1, page);
const offset = (currentPage - 1) * size;
const isClickHouse = !isMongoDB && dbTypeLower === 'clickhouse';
const reverseOrderSQL = isClickHouse ? reverseOrderBySQL(orderBySQL) : '';
let useClickHouseReversePagination = false;
let clickHouseReverseLimit = 0;
let clickHouseReverseHasMore = false;
let sql = '';
if (isMongoDB) {
const mongoSort = buildMongoSort(sortInfo, pkColumns);
sql = buildMongoFindCommand({
collection: tableName,
filter: mongoFilter || {},
sort: mongoSort,
limit: size + 1,
skip: offset,
});
} else {
const baseSql = `SELECT * FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
sql = `${baseSql}${orderBySQL}`;
// ClickHouse deep pagination tends to time out at very large OFFSETs. When the total is known and an ORDER BY exists,
// switch to a reversed ORDER BY with a small OFFSET whenever the "tail offset" is smaller than the "head offset", then flip the results on the frontend.
if (isClickHouse && totalKnown && offset > 0 && reverseOrderSQL) {
const pageRowCount = Math.max(0, Math.min(size, totalRows - offset));
if (pageRowCount > 0) {
const tailOffset = Math.max(0, totalRows - (offset + pageRowCount));
if (tailOffset < offset) {
sql = `${baseSql}${reverseOrderSQL} LIMIT ${pageRowCount} OFFSET ${tailOffset}`;
useClickHouseReversePagination = true;
clickHouseReverseLimit = pageRowCount;
clickHouseReverseHasMore = currentPage < totalPages;
}
}
}
if (!useClickHouseReversePagination) {
// Large-table performance: do not block table open on COUNT(*); fetch one extra row to detect a next page, and fill in the total asynchronously from a background count.
sql += ` LIMIT ${size + 1} OFFSET ${offset}`;
}
}
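The reverse-pagination branch above boils down to a small arithmetic decision: compute how many rows the requested page actually holds, then compare the offset from the tail against the offset from the head. A self-contained Go sketch of that decision (illustrative; the shipped logic lives in the TypeScript above):

```go
package main

import "fmt"

// tailPagination decides whether fetching a page from the end of a sorted
// result (reverse ORDER BY + small OFFSET) is cheaper than a huge forward
// OFFSET, as the DataViewer does for ClickHouse deep pages.
func tailPagination(totalRows, page, size int) (useReverse bool, limit, tailOffset int) {
	offset := (page - 1) * size
	rows := totalRows - offset // rows actually on this page
	if rows > size {
		rows = size
	}
	if rows <= 0 || offset == 0 {
		return false, 0, 0 // empty page, or first page: forward query is fine
	}
	tail := totalRows - (offset + rows) // distance from the end of the result
	if tail < 0 {
		tail = 0
	}
	if tail < offset {
		return true, rows, tail // reverse query scans fewer rows
	}
	return false, 0, 0
}

func main() {
	// Last full page of a 1,000,000-row table at 100 rows/page:
	fmt.Println(tailPagination(1000000, 10000, 100)) // true 100 0
}
```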
const requestStartTime = Date.now();
let executedSql = sql;
try {
const executeDataQuery = async (querySql: string, attemptLabel: string) => {
const startTime = Date.now();
const result = await DBQuery(config as any, dbName, querySql);
addSqlLog({
id: `log-${Date.now()}-data`,
timestamp: Date.now(),
sql: querySql,
status: result.success ? 'success' : 'error',
duration: Date.now() - startTime,
message: result.success ? '' : `${attemptLabel}: ${result.message}`,
affectedRows: Array.isArray(result.data) ? result.data.length : undefined,
dbName
});
return result;
try {
const result = await DBQuery(config as any, dbName, querySql);
addSqlLog({
id: `log-${Date.now()}-data`,
timestamp: Date.now(),
sql: querySql,
status: result.success ? 'success' : 'error',
duration: Date.now() - startTime,
message: result.success ? '' : `${attemptLabel}: ${result.message}`,
affectedRows: Array.isArray(result.data) ? result.data.length : undefined,
dbName
});
return result;
} catch (e: any) {
const errMessage = String(e?.message || e || 'query failed');
addSqlLog({
id: `log-${Date.now()}-data`,
timestamp: Date.now(),
sql: querySql,
status: 'error',
duration: Date.now() - startTime,
message: `${attemptLabel}: ${errMessage}`,
dbName
});
return { success: false, message: errMessage, data: [], fields: [] };
}
};
const hasSort = !!sortInfo?.columnKey && (sortInfo?.order === 'ascend' || sortInfo?.order === 'descend');
const isSortMemoryErr = (msg: string) => /error\s*1038|out of sort memory/i.test(String(msg || ''));
let resData = await executeDataQuery(sql, '主查询');
if (!resData.success && dbTypeLower === 'duckdb' && isDuckDBUnsupportedTypeError(String(resData.message || ''))) {
const cacheKey = `${tab.connectionId}|${dbName}|${tableName}`;
let safeSelect = duckdbSafeSelectCacheRef.current[cacheKey] || '';
if (!safeSelect) {
try {
const resCols = await DBGetColumns(config as any, dbName, tableName);
if (resCols?.success && Array.isArray(resCols.data)) {
const columnDefs = resCols.data as ColumnDefinition[];
const selectParts = columnDefs.map((col) => {
const colName = String(col?.name || '').trim();
if (!colName) return '';
const quotedCol = quoteIdentPart(dbType, colName);
if (isDuckDBComplexColumnType(col?.type)) {
return `CAST(${quotedCol} AS VARCHAR) AS ${quotedCol}`;
}
return quotedCol;
}).filter(Boolean);
if (selectParts.length > 0) {
safeSelect = selectParts.join(', ');
duckdbSafeSelectCacheRef.current[cacheKey] = safeSelect;
}
}
} catch {
// ignore and keep original error path
}
}
if (safeSelect) {
let fallbackSql = `SELECT ${safeSelect} FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
fallbackSql += buildOrderBySQL(dbType, sortInfo, pkColumns);
fallbackSql += ` LIMIT ${size + 1} OFFSET ${offset}`;
executedSql = fallbackSql;
resData = await executeDataQuery(fallbackSql, '复杂类型降级重试');
}
}
if (!resData.success && isMySQLFamily && hasSort && isSortMemoryErr(resData.message)) {
const retrySql32MB = withSortBufferTuningSQL(dbType, sql, 32 * 1024 * 1024);
if (retrySql32MB !== sql) {
@@ -141,7 +538,12 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
let resultData = resData.data as any[];
if (!Array.isArray(resultData)) resultData = [];
if (useClickHouseReversePagination) {
// After the reversed query, restore the original sort direction so the user still sees the "last page" in forward order.
resultData = resultData.slice(0, clickHouseReverseLimit).reverse();
}
const hasMore = useClickHouseReversePagination ? clickHouseReverseHasMore : resultData.length > size;
if (hasMore) resultData = resultData.slice(0, size);
let fieldNames = resData.fields || [];
@@ -156,26 +558,71 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
setData(resultData);
const countKey = `${tab.connectionId}|${dbName}|${tableName}|${whereSQL}`;
const derivedTotalKnown = !hasMore;
const derivedTotal = derivedTotalKnown ? offset + resultData.length : currentPage * size + 1;
const isDuckDB = dbTypeLower === 'duckdb';
const minExpectedTotal = hasMore ? offset + resultData.length + 1 : offset + resultData.length;
if (derivedTotalKnown) countKeyRef.current = countKey;
latestConfigRef.current = config;
latestDbTypeRef.current = dbTypeLower;
latestDbNameRef.current = dbName;
latestCountSqlRef.current = countSql;
latestCountKeyRef.current = countKey;
setPagination(prev => {
if (derivedTotalKnown) {
return {
...prev,
current: currentPage,
pageSize: size,
total: derivedTotal,
totalKnown: true,
totalApprox: false,
totalCountLoading: false,
totalCountCancelled: false,
};
}
if (prev.totalKnown && countKeyRef.current === countKey) {
if (!isDuckDB) {
return { ...prev, current: currentPage, pageSize: size };
}
// When the current page signals a "next page", any known total must be at least one past the end of the current page.
// If the old total fails that check (e.g. a stale count of 0), downgrade to an unknown total and fall back to derivedTotal.
if (Number.isFinite(prev.total) && prev.total >= minExpectedTotal) {
return { ...prev, current: currentPage, pageSize: size };
}
}
const keepManualCounting = prev.totalCountLoading && manualCountKeyRef.current === countKey;
if (isDuckDB && prev.totalApprox && duckdbApproxKeyRef.current === countKey && Number.isFinite(prev.total) && prev.total >= minExpectedTotal) {
return {
...prev,
current: currentPage,
pageSize: size,
totalKnown: false,
totalApprox: true,
totalCountLoading: keepManualCounting,
totalCountCancelled: false,
};
}
return {
...prev,
current: currentPage,
pageSize: size,
total: derivedTotal,
totalKnown: false,
totalApprox: false,
totalCountLoading: keepManualCounting,
totalCountCancelled: keepManualCounting ? false : prev.totalCountCancelled,
};
});
const shouldRunAsyncCount = !derivedTotalKnown && !isDuckDB;
if (shouldRunAsyncCount) {
if (countKeyRef.current !== countKey) {
countKeyRef.current = countKey;
const countSeq = ++countSeqRef.current;
const countStart = Date.now();
// COUNT(*) on large tables can be very slow and, in some runtimes, degrades responsiveness of later operations;
// give the count request a shorter timeout so the "background count" never holds resources for long.
// For large DuckDB files this count noticeably slows paging, so the background COUNT is disabled there.
const countConfig: any = { ...(config as any), timeout: 5 };
DBQuery(countConfig, dbName, countSql)
@@ -198,10 +645,17 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
if (!resCount.success) return;
if (!Array.isArray(resCount.data) || resCount.data.length === 0) return;
const total = parseTotalFromCountRow(resCount.data[0]);
if (total === null) return;
setPagination(prev => ({
...prev,
total,
totalKnown: true,
totalApprox: false,
totalCountLoading: false,
totalCountCancelled: false,
}));
})
.catch(() => {
if (countSeqRef.current !== countSeq) return;
@@ -210,6 +664,50 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
});
}
}
if (isDuckDB && !derivedTotalKnown && whereSQL.trim() === '' && duckdbApproxKeyRef.current !== countKey) {
duckdbApproxKeyRef.current = countKey;
const approxSeq = ++duckdbApproxSeqRef.current;
const { schemaName, pureTableName } = resolveDuckDBSchemaAndTable(dbName, tableName);
const escapedSchema = escapeSQLLiteral(schemaName);
const escapedTable = escapeSQLLiteral(pureTableName);
const approxConfig: any = { ...(config as any), timeout: 3 };
const approxSqlCandidates = [
`SELECT estimated_size AS approx_total FROM duckdb_tables() WHERE schema_name='${escapedSchema}' AND table_name='${escapedTable}' LIMIT 1`,
`SELECT estimated_size AS approx_total FROM duckdb_tables() WHERE table_name='${escapedTable}' ORDER BY CASE WHEN schema_name='${escapedSchema}' THEN 0 ELSE 1 END LIMIT 1`,
];
(async () => {
for (const approxSql of approxSqlCandidates) {
try {
const approxRes = await DBQuery(approxConfig as any, dbName, approxSql);
if (duckdbApproxSeqRef.current !== approxSeq) return;
if (countKeyRef.current !== countKey) return;
if (!approxRes?.success || !Array.isArray(approxRes.data) || approxRes.data.length === 0) continue;
const approxTotal = parseDuckDBApproxTotalRow(approxRes.data[0]);
if (approxTotal === null) continue;
if (!Number.isFinite(approxTotal) || approxTotal < minExpectedTotal) continue;
setPagination(prev => {
if (countKeyRef.current !== countKey) return prev;
if (prev.totalKnown) return prev;
return {
...prev,
total: approxTotal,
totalKnown: false,
totalApprox: true,
totalCountCancelled: false,
};
});
return;
} catch {
if (duckdbApproxSeqRef.current !== approxSeq) return;
if (countKeyRef.current !== countKey) return;
}
}
})();
}
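The approximate-count SQL built above can be sketched on its own; `escapeLit` is a minimal stand-in for the app's `escapeSQLLiteral` helper (an assumption):

```typescript
// Illustrative rebuild of the DuckDB approximate-count SQL above.
const escapeLit = (s: string): string => s.replace(/'/g, "''");

function buildDuckDBApproxCountSQL(schemaName: string, tableName: string): string {
  // duckdb_tables() exposes an estimated_size per table, which is cheap to read
  // compared with a COUNT(*) over a large file.
  return (
    `SELECT estimated_size AS approx_total FROM duckdb_tables() ` +
    `WHERE schema_name='${escapeLit(schemaName)}' AND table_name='${escapeLit(tableName)}' LIMIT 1`
  );
}
```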
} else {
message.error(String(resData.message || '查询失败'));
}
@@ -227,7 +725,7 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
});
}
if (fetchSeqRef.current === seq) setLoading(false);
}, [connections, tab, sortInfo, filterConditions, pkColumns, pagination.total, pagination.totalKnown]);
// Depends on pkColumns: with no manual sort we can fall back to a stable primary-key order.
// Primary-key info only updates once after the initial load, which avoids query loops.
@@ -248,6 +746,24 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
const handleToggleFilter = useCallback(() => setShowFilter(prev => !prev), []);
const handleApplyFilter = useCallback((conditions: FilterCondition[]) => setFilterConditions(conditions), []);
const exportSqlWithFilter = useMemo(() => {
const tableName = String(tab.tableName || '').trim();
const dbType = String(currentConnConfig?.type || '').trim();
if (!tableName || !dbType) return '';
const whereSQL = buildWhereSQL(dbType, filterConditions);
if (!whereSQL) return '';
let sql = `SELECT * FROM ${quoteQualifiedIdent(dbType, tableName)} ${whereSQL}`;
sql += buildOrderBySQL(dbType, sortInfo, pkColumns);
const normalizedType = dbType.toLowerCase();
const hasExplicitSort = !!sortInfo?.columnKey && (sortInfo?.order === 'ascend' || sortInfo?.order === 'descend');
if (hasExplicitSort && (normalizedType === 'mysql' || normalizedType === 'mariadb')) {
sql = withSortBufferTuningSQL(normalizedType, sql, 32 * 1024 * 1024);
}
return sql;
}, [tab.tableName, currentConnConfig?.type, filterConditions, sortInfo, pkColumns]);
useEffect(() => {
fetchData(1, pagination.pageSize);
}, [tab, sortInfo, filterConditions]); // Initial load and re-load on sort/filter
@@ -259,6 +775,7 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
columnNames={columnNames}
loading={loading}
tableName={tab.tableName}
exportScope="table"
dbName={tab.dbName}
connectionId={tab.connectionId}
pkColumns={pkColumns}
@@ -266,11 +783,15 @@ const DataViewer: React.FC<{ tab: TabData }> = ({ tab }) => {
onSort={handleSort}
onPageChange={handlePageChange}
pagination={pagination}
onRequestTotalCount={currentConnType === 'duckdb' ? handleDuckDBManualCount : undefined}
onCancelTotalCount={currentConnType === 'duckdb' ? handleDuckDBCancelManualCount : undefined}
showFilter={showFilter}
onToggleFilter={handleToggleFilter}
onApplyFilter={handleApplyFilter}
appliedFilterConditions={filterConditions}
readOnly={forceReadOnly}
sortInfoExternal={sortInfo}
exportSqlWithFilter={exportSqlWithFilter || undefined}
/>
</div>
);

File diff suppressed because it is too large


@@ -27,8 +27,9 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
const b = parseInt(hex.substring(4, 6), 16);
return `rgba(${r}, ${g}, ${b}, ${opacity})`;
};
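The hex-to-rgba conversion used by `getBg` above, as a standalone sketch; it assumes a 6-digit hex string without the leading `#`:

```typescript
// Standalone sketch of the hex-to-rgba conversion above.
function hexToRgba(hex: string, opacity: number): string {
  const r = parseInt(hex.substring(0, 2), 16);
  const g = parseInt(hex.substring(2, 4), 16);
  const b = parseInt(hex.substring(4, 6), 16);
  return `rgba(${r}, ${g}, ${b}, ${opacity})`;
}
```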
const bgToolbar = getBg('#2a2a2a');
const bgMain = getBg('#1d1d1d');
const panelDividerColor = darkMode ? 'rgba(255,255,255,0.08)' : 'rgba(0,0,0,0.08)';
const panelMutedTextColor = darkMode ? 'rgba(255,255,255,0.62)' : 'rgba(0,0,0,0.58)';
const logScrollbarThumb = darkMode ? 'rgba(255, 255, 255, 0.34)' : 'rgba(0, 0, 0, 0.26)';
const logScrollbarThumbHover = darkMode ? 'rgba(255, 255, 255, 0.5)' : 'rgba(0, 0, 0, 0.36)';
@@ -37,7 +38,7 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
title: 'Time',
dataIndex: 'timestamp',
width: 80,
render: (ts: number) => <span style={{ color: panelMutedTextColor, fontSize: '12px' }}>{new Date(ts).toLocaleTimeString()}</span>
},
{
title: 'Status',
@@ -62,7 +63,7 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
<div style={{ fontFamily: 'monospace', wordBreak: 'break-all', fontSize: '12px', lineHeight: '1.2' }}>
<div style={{ color: darkMode ? '#a6e22e' : '#005cc5' }}>{text}</div>
{record.message && <div style={{ color: '#ff4d4f', marginTop: 2 }}>{record.message}</div>}
{record.affectedRows !== undefined && <div style={{ color: panelMutedTextColor, marginTop: 1 }}>Affected: {record.affectedRows}</div>}
</div>
)
}
@@ -71,7 +72,7 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
return (
<div style={{
height,
borderTop: `1px solid ${panelDividerColor}`,
background: bgMain,
display: 'flex',
flexDirection: 'column',
@@ -95,7 +96,7 @@ const LogPanel: React.FC<LogPanelProps> = ({ height, onClose, onResizeStart }) =
{/* Toolbar */}
<div style={{
padding: '4px 8px',
borderBottom: `1px solid ${panelDividerColor}`,
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',


@@ -1,12 +1,16 @@
import React, { useState, useEffect, useRef, useMemo } from 'react';
import Editor, { OnMount } from '@monaco-editor/react';
import { Button, message, Modal, Input, Form, Dropdown, MenuProps, Tooltip, Select, Tabs } from 'antd';
import { PlayCircleOutlined, SaveOutlined, FormatPainterOutlined, SettingOutlined, CloseOutlined, StopOutlined } from '@ant-design/icons';
import { format } from 'sql-formatter';
import { v4 as uuidv4 } from 'uuid';
import { TabData, ColumnDefinition } from '../types';
import { useStore } from '../store';
import { DBQuery, DBQueryWithCancel, DBGetTables, DBGetAllColumns, DBGetDatabases, DBGetColumns, CancelQuery, GenerateQueryID } from '../../wailsjs/go/app/App';
import DataGrid, { GONAVI_ROW_KEY } from './DataGrid';
import { getDataSourceCapabilities } from '../utils/dataSourceCapabilities';
import { convertMongoShellToJsonCommand } from '../utils/mongodb';
import { getShortcutDisplay, isEditableElement, isShortcutMatch } from '../utils/shortcuts';
const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const [query, setQuery] = useState(tab.query || 'SELECT * FROM ');
@@ -14,6 +18,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
type ResultSet = {
key: string;
sql: string;
exportSql?: string;
rows: any[];
columns: string[];
tableName?: string;
@@ -28,7 +33,9 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const [activeResultKey, setActiveResultKey] = useState<string>('');
const [loading, setLoading] = useState(false);
const [currentQueryId, setCurrentQueryId] = useState<string>('');
const runSeqRef = useRef(0);
const currentQueryIdRef = useRef('');
const [isSaveModalOpen, setIsSaveModalOpen] = useState(false);
const [saveForm] = Form.useForm();
@@ -41,12 +48,17 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const [editorHeight, setEditorHeight] = useState(300);
const editorRef = useRef<any>(null);
const monacoRef = useRef<any>(null);
const lastExternalQueryRef = useRef<string>(tab.query || '');
const dragRef = useRef<{ startY: number, startHeight: number } | null>(null);
const tablesRef = useRef<{dbName: string, tableName: string}[]>([]); // Store tables for autocomplete (cross-db)
const allColumnsRef = useRef<{dbName: string, tableName: string, name: string, type: string}[]>([]); // Store all columns (cross-db)
const visibleDbsRef = useRef<string[]>([]); // Store visible databases for cross-db intellisense
const connections = useStore(state => state.connections);
const queryCapableConnections = useMemo(
() => connections.filter(c => getDataSourceCapabilities(c.config).supportsQueryEditor),
[connections]
);
const addSqlLog = useStore(state => state.addSqlLog);
const currentConnectionIdRef = useRef(currentConnectionId);
const currentDbRef = useRef(currentDb);
@@ -59,11 +71,23 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const setSqlFormatOptions = useStore(state => state.setSqlFormatOptions);
const queryOptions = useStore(state => state.queryOptions);
const setQueryOptions = useStore(state => state.setQueryOptions);
const shortcutOptions = useStore(state => state.shortcutOptions);
const activeTabId = useStore(state => state.activeTabId);
useEffect(() => {
currentConnectionIdRef.current = currentConnectionId;
}, [currentConnectionId]);
useEffect(() => {
if (!queryCapableConnections.some(c => c.id === currentConnectionId)) {
const fallback = queryCapableConnections[0]?.id || '';
if (fallback && fallback !== currentConnectionId) {
setCurrentConnectionId(fallback);
setCurrentDb('');
}
}
}, [queryCapableConnections, currentConnectionId]);
useEffect(() => {
currentDbRef.current = currentDb;
}, [currentDb]);
@@ -72,10 +96,30 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
connectionsRef.current = connections;
}, [connections]);
const getCurrentQuery = () => {
const val = editorRef.current?.getValue?.();
if (typeof val === 'string') return val;
return query || '';
};
const syncQueryToEditor = (sql: string) => {
const next = sql || '';
setQuery(next);
const editor = editorRef.current;
if (editor && editor.getValue?.() !== next) {
editor.setValue(next);
}
};
// If opening a saved query, load its SQL
useEffect(() => {
const incoming = tab.query || '';
if (incoming === lastExternalQueryRef.current) {
return;
}
lastExternalQueryRef.current = incoming;
syncQueryToEditor(incoming || 'SELECT * FROM ');
}, [tab.id, tab.query]);
// Fetch Database List
useEffect(() => {
@@ -170,6 +214,17 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
fetchMetadata();
}, [currentConnectionId, connections, dbList]); // reload metadata when dbList changes
// Query ID management helpers
const setQueryId = (id: string) => {
currentQueryIdRef.current = id;
setCurrentQueryId(id);
};
const clearQueryId = () => {
currentQueryIdRef.current = '';
setCurrentQueryId('');
};
// Handle Resizing
const handleMouseDown = (e: React.MouseEvent) => {
e.preventDefault();
@@ -238,6 +293,19 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
return parts[parts.length - 1] || raw;
};
const splitSchemaAndTable = (qualified: string): { schema: string; table: string } => {
const raw = normalizeQualifiedName(qualified);
if (!raw) return { schema: '', table: '' };
const parts = raw.split('.').filter(Boolean);
if (parts.length >= 2) {
return {
schema: parts[parts.length - 2] || '',
table: parts[parts.length - 1] || '',
};
}
return { schema: '', table: parts[0] || '' };
};
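The split logic above can be checked in isolation; unlike the component's helper, this minimal version assumes the qualified name is already unquoted (no `normalizeQualifiedName` step):

```typescript
// Minimal standalone version of splitSchemaAndTable above.
function splitSchemaAndTableSketch(qualified: string): { schema: string; table: string } {
  const parts = qualified.split('.').filter(Boolean);
  if (parts.length >= 2) {
    // For db.schema.table, keep only the last two segments.
    return { schema: parts[parts.length - 2] || '', table: parts[parts.length - 1] || '' };
  }
  return { schema: '', table: parts[0] || '' };
}
```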
const buildConnConfig = () => {
const connId = currentConnectionIdRef.current;
const conn = connectionsRef.current.find(c => c.id === connId);
@@ -310,13 +378,14 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
if (qualifierMatch) {
const qualifier = stripQuotes(qualifierMatch[1]);
const prefix = (qualifierMatch[2] || '').toLowerCase();
const qualifierLower = qualifier.toLowerCase();
// First check whether the qualifier is a database name (cross-database table completion)
const visibleDbs = visibleDbsRef.current;
if (visibleDbs.some(db => db.toLowerCase() === qualifierLower)) {
// The qualifier is a database name; suggest that database's tables
const tables = tablesRef.current.filter(t =>
(t.dbName || '').toLowerCase() === qualifierLower
);
const filtered = prefix
? tables.filter(t => (t.tableName || '').toLowerCase().startsWith(prefix))
@@ -333,6 +402,34 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
return { suggestions };
}
// The qualifier is a schema (e.g. dbo/public): complete only the table name, so typing "dbo." never completes to "dbo.dbo.table"
const schemaTables = tablesRef.current
.map(t => {
const parsed = splitSchemaAndTable(t.tableName || '');
return {
dbName: t.dbName || '',
schema: parsed.schema,
table: parsed.table,
};
})
.filter(t => t.schema.toLowerCase() === qualifierLower && !!t.table);
if (schemaTables.length > 0) {
const filtered = prefix
? schemaTables.filter(t => t.table.toLowerCase().startsWith(prefix))
: schemaTables;
const suggestions = filtered.map(t => ({
label: t.table,
kind: monaco.languages.CompletionItemKind.Class,
insertText: t.table,
detail: `Table (${t.dbName}${t.schema ? '.' + t.schema : ''})`,
range,
sortText: '0' + t.table
}));
return { suggestions };
}
// Otherwise check whether it is a table alias or table name, and suggest its columns
const reserved = new Set([
'where', 'on', 'group', 'order', 'limit', 'having',
@@ -481,8 +578,8 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const handleFormat = () => {
try {
const formatted = format(getCurrentQuery(), { language: 'mysql', keywordCase: sqlFormatOptions.keywordCase });
syncQueryToEditor(formatted);
} catch (e) {
message.error("格式化失败: SQL 语法可能有误");
}
@@ -501,6 +598,12 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
icon: sqlFormatOptions.keywordCase === 'lower' ? '✓' : undefined,
onClick: () => setSqlFormatOptions({ keywordCase: 'lower' })
},
{ type: 'divider' },
{
key: 'shortcut-settings',
label: '快捷键管理...',
onClick: () => window.dispatchEvent(new CustomEvent('gonavi:open-shortcut-settings')),
},
];
const splitSQLStatements = (sql: string): string[] => {
@@ -922,7 +1025,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const applyAutoLimit = (sql: string, dbType: string, maxRows: number): { sql: string; applied: boolean; maxRows: number } => {
const normalizedType = (dbType || 'mysql').toLowerCase();
const supportsLimit = normalizedType === 'mysql' || normalizedType === 'mariadb' || normalizedType === 'diros' || normalizedType === 'sphinx' || normalizedType === 'postgres' || normalizedType === 'kingbase' || normalizedType === 'sqlite' || normalizedType === 'duckdb' || normalizedType === 'tdengine' || normalizedType === 'clickhouse' || normalizedType === '';
if (!supportsLimit) return { sql, applied: false, maxRows };
if (!Number.isFinite(maxRows) || maxRows <= 0) return { sql, applied: false, maxRows };
@@ -963,11 +1066,22 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
};
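The body of `applyAutoLimit` is elided by this diff; the sketch below shows only the core probe-limit idea under a simplifying assumption (a statement either contains a LIMIT keyword or gets one appended), not the real implementation:

```typescript
// Hypothetical minimal illustration of the auto-LIMIT probe above.
function appendProbeLimit(sql: string, probeRows: number): string {
  if (/\blimit\b/i.test(sql)) return sql; // already limited: leave untouched
  return `${sql.replace(/;\s*$/, '')} LIMIT ${probeRows}`;
}
```

Running with `maxRows + 1` lets the UI detect truncation: if `probeRows` rows come back, the real result exceeded `maxRows`.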
const handleRun = async () => {
const currentQuery = getCurrentQuery();
if (!currentQuery.trim()) return;
if (!currentDb) {
message.error("请先选择数据库");
return;
}
// If a query is already running, cancel it first
if (currentQueryIdRef.current) {
try {
await CancelQuery(currentQueryIdRef.current);
} catch (error) {
// Ignore cancellation errors; the query may have already finished
}
// Clear the stale query ID
clearQueryId();
}
const runSeq = ++runSeqRef.current;
setLoading(true);
const runStartTime = Date.now();
@@ -977,6 +1091,12 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
if (runSeqRef.current === runSeq) setLoading(false);
return;
}
const connCaps = getDataSourceCapabilities(conn.config);
if (!connCaps.supportsQueryEditor) {
message.error("当前数据源不支持 SQL 查询编辑器,请使用对应专用页面。");
if (runSeqRef.current === runSeq) setLoading(false);
return;
}
const config = {
...conn.config,
@@ -988,8 +1108,16 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
};
try {
const rawSQL = getSelectedSQL() || currentQuery;
const dbType = String((config as any).type || 'mysql');
const normalizedDbType = dbType.trim().toLowerCase();
const normalizedRawSQL = String(rawSQL || '').replace(/；/g, ';'); // normalize full-width semicolons to ASCII
const splitInput = normalizedDbType === 'mongodb'
? normalizedRawSQL
.replace(/^\s*\/\/.*$/gm, '')
.replace(/^\s*#.*$/gm, '')
: normalizedRawSQL;
const statements = splitSQLStatements(splitInput);
if (statements.length === 0) {
message.info('没有可执行的 SQL。');
setResultSets([]);
@@ -999,9 +1127,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const nextResultSets: ResultSet[] = [];
const maxRows = Number(queryOptions?.maxRows) || 0;
const forceReadOnlyResult = connCaps.forceReadOnlyQueryResult;
const wantsLimitProbe = Number.isFinite(maxRows) && maxRows > 0;
const probeLimit = wantsLimitProbe ? (maxRows + 1) : 0;
let anyTruncated = false;
@@ -1014,9 +1140,35 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
const limitApplied = shouldAutoLimit && wantsLimitProbe;
const limited = limitApplied ? applyAutoLimit(rawStatement, dbType, probeLimit) : { sql: rawStatement, applied: false, maxRows: probeLimit };
let executedSql = limited.sql;
if (String(dbType || '').trim().toLowerCase() === 'mongodb') {
const shellConvert = convertMongoShellToJsonCommand(executedSql);
if (shellConvert.recognized) {
if (shellConvert.error) {
const prefix = statements.length > 1 ? `${idx + 1} 条语句执行失败:` : '';
message.error(prefix + shellConvert.error);
setResultSets([]);
setActiveResultKey('');
return;
}
if (shellConvert.command) {
executedSql = shellConvert.command;
}
}
}
const startTime = Date.now();
// Generate query ID for cancellation using backend UUID with fallback
let queryId: string;
try {
queryId = await GenerateQueryID();
} catch (error) {
console.warn('GenerateQueryID failed, using local UUID fallback:', error);
queryId = 'query-' + uuidv4();
}
setQueryId(queryId);
const res = await DBQueryWithCancel(config as any, currentDb, executedSql, queryId);
const duration = Date.now() - startTime;
addSqlLog({
@@ -1031,6 +1183,32 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
});
if (!res.success) {
// Check whether this is a query-cancellation error
const errorMsg = res.message.toLowerCase();
const isCancelledError = errorMsg.includes('context canceled') ||
errorMsg.includes('查询已取消') ||
errorMsg.includes('canceled') ||
errorMsg.includes('cancelled') ||
errorMsg.includes('statement canceled') ||
errorMsg.includes('sql: statement canceled');
// Make sure it is not a timeout error
const isTimeoutError = errorMsg.includes('context deadline exceeded') ||
errorMsg.includes('timeout') ||
errorMsg.includes('超时') ||
errorMsg.includes('deadline exceeded');
if (isCancelledError && !isTimeoutError) {
// The query was cancelled by the user: skip the error message and clean up state
setResultSets([]);
setActiveResultKey('');
// Clear the query ID (consistent with handleCancel)
if (currentQueryIdRef.current) {
clearQueryId();
}
return;
}
const prefix = statements.length > 1 ? `${idx + 1} 条语句执行失败:` : '';
message.error(prefix + res.message);
setResultSets([]);
@@ -1066,6 +1244,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
nextResultSets.push({
key: `result-${idx + 1}`,
sql: rawStatement,
exportSql: limited.applied ? applyAutoLimit(rawStatement, dbType, Math.max(1, Number(maxRows) || 1)).sql : rawStatement,
rows,
columns: cols,
tableName: simpleTableName,
@@ -1082,6 +1261,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
nextResultSets.push({
key: `result-${idx + 1}`,
sql: rawStatement,
exportSql: rawStatement,
rows: [row],
columns: ['affectedRows'],
pkColumns: [],
@@ -1134,16 +1314,82 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
setActiveResultKey('');
} finally {
if (runSeqRef.current === runSeq) setLoading(false);
// Clear query ID after execution completes
clearQueryId();
}
};
const handleCancel = async () => {
if (!currentQueryIdRef.current) {
message.warning('没有正在运行的查询可取消');
return;
}
const queryIdToCancel = currentQueryIdRef.current;
try {
const res = await CancelQuery(queryIdToCancel);
if (res.success) {
message.success('查询已取消');
// Clear query ID after successful cancellation
if (currentQueryIdRef.current === queryIdToCancel) {
clearQueryId()
}
} else {
message.warning(res.message);
}
} catch (error: any) {
message.error('取消查询失败: ' + error.message);
}
};
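The cancelled-vs-timeout classification used when a query fails above can be sketched as a standalone function; timeout is checked first because timeout messages can also contain "canceled"-like substrings:

```typescript
// Standalone sketch of the error classification above.
function classifyQueryError(messageText: string): 'cancelled' | 'timeout' | 'other' {
  const m = messageText.toLowerCase();
  if (m.includes('context deadline exceeded') || m.includes('timeout') || m.includes('deadline exceeded')) {
    return 'timeout';
  }
  if (m.includes('context canceled') || m.includes('canceled') || m.includes('cancelled')) {
    return 'cancelled';
  }
  return 'other';
}
```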
useEffect(() => {
const binding = shortcutOptions.runQuery;
if (!binding?.enabled || !binding.combo) {
return;
}
const handleRunShortcut = (event: KeyboardEvent) => {
if (activeTabId !== tab.id) {
return;
}
if (!isShortcutMatch(event, binding.combo)) {
return;
}
const editorHasFocus = !!editorRef.current?.hasTextFocus?.();
if (!editorHasFocus && !isEditableElement(event.target)) {
return;
}
event.preventDefault();
event.stopPropagation();
void handleRun();
};
window.addEventListener('keydown', handleRunShortcut);
return () => {
window.removeEventListener('keydown', handleRunShortcut);
};
}, [activeTabId, tab.id, shortcutOptions.runQuery, handleRun]);
useEffect(() => {
const handleRunActiveQuery = () => {
if (activeTabId !== tab.id) {
return;
}
void handleRun();
};
window.addEventListener('gonavi:run-active-query', handleRunActiveQuery as EventListener);
return () => {
window.removeEventListener('gonavi:run-active-query', handleRunActiveQuery as EventListener);
};
}, [activeTabId, tab.id, handleRun]);
const handleSave = async () => {
try {
const values = await saveForm.validateFields();
saveQuery({
id: tab.id.startsWith('saved-') ? tab.id : `saved-${Date.now()}`,
name: values.name,
sql: query,
sql: getCurrentQuery(),
connectionId: currentConnectionId,
dbName: currentDb || tab.dbName || '',
createdAt: Date.now()
@@ -1223,7 +1469,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
setCurrentConnectionId(val);
setCurrentDb('');
}}
options={connections.map(c => ({ label: c.name, value: c.id }))}
options={queryCapableConnections.map(c => ({ label: c.name, value: c.id }))}
showSearch
/>
<Select
@@ -1248,9 +1494,24 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
]}
/>
</Tooltip>
<Button.Group>
<Tooltip
title={
shortcutOptions.runQuery?.enabled && shortcutOptions.runQuery?.combo
? `运行(${getShortcutDisplay(shortcutOptions.runQuery.combo)}`
: '运行'
}
>
<Button type="primary" icon={<PlayCircleOutlined />} onClick={handleRun} loading={loading}>
</Button>
</Tooltip>
{loading && (
<Button type="primary" danger icon={<StopOutlined />} onClick={handleCancel}>
</Button>
)}
</Button.Group>
<Button icon={<SaveOutlined />} onClick={() => {
saveForm.setFieldsValue({ name: tab.title.replace('Query (', '').replace(')', '') });
setIsSaveModalOpen(true);
@@ -1273,7 +1534,7 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
height="100%"
defaultLanguage="sql"
theme={darkMode ? "transparent-dark" : "transparent-light"}
defaultValue={query}
onChange={(val) => setQuery(val || '')}
onMount={handleEditorDidMount}
options={{
@@ -1333,6 +1594,8 @@ const QueryEditor: React.FC<{ tab: TabData }> = ({ tab }) => {
columnNames={rs.columns}
loading={loading}
tableName={rs.tableName}
exportScope="queryResult"
resultSql={rs.exportSql || rs.sql}
dbName={currentDb}
connectionId={currentConnectionId}
pkColumns={rs.pkColumns}


@@ -5,6 +5,7 @@ import { useStore } from '../store';
import { RedisKeyInfo, RedisValue, StreamEntry } from '../types';
import Editor from '@monaco-editor/react';
import type { DataNode } from 'antd/es/tree';
import { normalizeOpacityForPlatform } from '../utils/appearance';
const { Search } = Input;
@@ -16,6 +17,10 @@ const REDIS_TREE_KEY_TTL_WIDTH = 92;
const REDIS_TREE_HIDE_TTL_THRESHOLD = 460;
const REDIS_KEY_INITIAL_LOAD_COUNT = 2000;
const REDIS_KEY_LOAD_MORE_COUNT = 2000;
const REDIS_KEY_SEARCH_INITIAL_LOAD_COUNT = 600;
const REDIS_KEY_SEARCH_LOAD_MORE_COUNT = 1000;
const REDIS_LARGE_KEYSPACE_THRESHOLD = 10000;
const REDIS_LARGE_KEYSPACE_MAX_EXPANDED_GROUPS = 200;
interface RedisViewerProps {
connectionId: string;
@@ -214,18 +219,17 @@ const ResizableDivider: React.FC<{
<div
onMouseDown={handleMouseDown}
style={{
width: 5,
cursor: 'col-resize',
background: 'transparent',
flexShrink: 0,
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
zIndex: 10,
}}
title="Drag to adjust width"
>
<div style={{ width: 2, height: 30, background: '#bfbfbf', borderRadius: 1 }} />
</div>
);
};
@@ -241,36 +245,79 @@ type RedisKeyTreeGroup = {
path: string;
children: Map<string, RedisKeyTreeGroup>;
leaves: RedisKeyTreeLeaf[];
leafCount: number;
};
type RedisKeyTreeResult = {
treeData: RedisTreeDataNode[];
groupKeys: string[];
};
type RedisTreeDataNode = DataNode & {
nodeType: 'group' | 'leaf';
groupName?: string;
groupLeafCount?: number;
leafLabel?: string;
rawKey?: string;
keyType?: string;
ttl?: number;
};
const buildLeafNodeKey = (rawKey: string): string => `key:${rawKey}`;
const parseRawKeyFromNodeKey = (nodeKey: React.Key): string | null => {
const keyText = String(nodeKey);
if (!keyText.startsWith('key:')) {
return null;
}
return keyText.slice(4);
};
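The codec above is what lets the tree drop its two lookup `Map`s (`rawKeyByNodeKey` / `leafNodeKeyByRawKey`): a raw Redis key is wrapped as `key:<raw>` so it can never collide with `group:<path>` node keys, and parsing just strips the 4-character prefix. A minimal standalone sketch (with `React.Key` narrowed to `string` for illustration):

```typescript
// Leaf node keys are the raw Redis key behind a fixed "key:" prefix.
const buildLeafNodeKey = (rawKey: string): string => `key:${rawKey}`;

// Inverse: returns null for non-leaf (e.g. "group:...") node keys.
const parseRawKeyFromNodeKey = (nodeKey: string): string | null => {
  if (!nodeKey.startsWith('key:')) {
    return null;
  }
  return nodeKey.slice(4);
};
```

Because `slice(4)` keeps everything after the prefix, Redis keys that themselves contain `:` (e.g. `user:42:session`) round-trip unchanged.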
const getRedisScanLoadCount = (pattern: string, append: boolean): number => {
const normalizedPattern = pattern.trim() || '*';
if (normalizedPattern === '*') {
return append ? REDIS_KEY_LOAD_MORE_COUNT : REDIS_KEY_INITIAL_LOAD_COUNT;
}
return append ? REDIS_KEY_SEARCH_LOAD_MORE_COUNT : REDIS_KEY_SEARCH_INITIAL_LOAD_COUNT;
};
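The four `REDIS_KEY_*` constants feed this helper: a plain `*` scan pages in large steady batches, while a filtered pattern starts with a smaller first page and grows on "load more". A standalone sketch with the constants copied from the diff, to make the pattern/append matrix explicit:

```typescript
const REDIS_KEY_INITIAL_LOAD_COUNT = 2000;
const REDIS_KEY_LOAD_MORE_COUNT = 2000;
const REDIS_KEY_SEARCH_INITIAL_LOAD_COUNT = 600;
const REDIS_KEY_SEARCH_LOAD_MORE_COUNT = 1000;

// Blank patterns are treated as "*"; filtered scans use the smaller counts.
const getRedisScanLoadCount = (pattern: string, append: boolean): number => {
  const normalizedPattern = pattern.trim() || '*';
  if (normalizedPattern === '*') {
    return append ? REDIS_KEY_LOAD_MORE_COUNT : REDIS_KEY_INITIAL_LOAD_COUNT;
  }
  return append ? REDIS_KEY_SEARCH_LOAD_MORE_COUNT : REDIS_KEY_SEARCH_INITIAL_LOAD_COUNT;
};
```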
const normalizeRedisCursor = (value: unknown): string => {
if (typeof value === 'string') {
const trimmed = value.trim();
return trimmed === '' ? '0' : trimmed;
}
if (typeof value === 'number') {
if (!Number.isFinite(value)) {
return '0';
}
return Math.trunc(value).toString();
}
if (typeof value === 'bigint') {
return value.toString();
}
return '0';
};
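This normalizer is why the `cursor` state changes from `number` to `string` later in the diff: Redis SCAN cursors are unsigned 64-bit values, which can exceed `Number.MAX_SAFE_INTEGER`, so the safe representation on the JS side is a string. A standalone copy for reference:

```typescript
// Coerce whatever the backend returns (string/number/bigint/anything)
// into a cursor string, defaulting to "0" (scan finished / invalid).
const normalizeRedisCursor = (value: unknown): string => {
  if (typeof value === 'string') {
    const trimmed = value.trim();
    return trimmed === '' ? '0' : trimmed;
  }
  if (typeof value === 'number') {
    if (!Number.isFinite(value)) {
      return '0';
    }
    return Math.trunc(value).toString();
  }
  if (typeof value === 'bigint') {
    return value.toString();
  }
  return '0';
};
```

Downstream, `hasMore` then becomes the string comparison `nextCursor !== '0'` instead of `nextCursor !== 0`.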
const normalizeKeySegment = (segment: string): string => {
return segment === '' ? EMPTY_SEGMENT_LABEL : segment;
};
const createTreeGroup = (name: string, path: string): RedisKeyTreeGroup => {
return { name, path, children: new Map(), leaves: [] };
return { name, path, children: new Map(), leaves: [], leafCount: 0 };
};
const countGroupLeafNodes = (group: RedisKeyTreeGroup): number => {
const calculateGroupLeafCount = (group: RedisKeyTreeGroup): number => {
let count = group.leaves.length;
group.children.forEach((child) => {
count += countGroupLeafNodes(child);
count += calculateGroupLeafCount(child);
});
group.leafCount = count;
return count;
};
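The rename from `countGroupLeafNodes` to `calculateGroupLeafCount` reflects a real change: instead of recounting a subtree on every render (the old call sat inside the group-node `title`), one post-order pass caches the total in the new `leafCount` field. A simplified sketch, with `leaves` reduced to plain strings (an assumption; the real type holds `RedisKeyTreeLeaf` objects):

```typescript
type Group = {
  name: string;
  path: string;
  children: Map<string, Group>;
  leaves: string[];
  leafCount: number;
};

const createGroup = (name: string, path: string): Group =>
  ({ name, path, children: new Map(), leaves: [], leafCount: 0 });

// Post-order walk: each group's leafCount = own leaves + all descendants'.
const calculateGroupLeafCount = (group: Group): number => {
  let count = group.leaves.length;
  group.children.forEach((child) => {
    count += calculateGroupLeafCount(child);
  });
  group.leafCount = count;
  return count;
};
```

One call on the root after the tree is built makes every group's count an O(1) read in `titleRender`.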
const buildRedisKeyTree = (
keys: RedisKeyInfo[],
formatTTL: (ttl: number) => string,
getTypeColor: (type: string) => string,
showTTL: boolean
sortLeafNodes: boolean
): RedisKeyTreeResult => {
const root = createTreeGroup('__root__', '__root__');
@@ -300,105 +347,41 @@ const buildRedisKeyTree = (
current.leaves.push({ keyInfo, label: leafLabel });
});
calculateGroupLeafCount(root);
const rawKeyByNodeKey = new Map<string, string>();
const leafNodeKeyByRawKey = new Map<string, string>();
const groupKeys: string[] = [];
const toTreeNodes = (group: RedisKeyTreeGroup): DataNode[] => {
const toTreeNodes = (group: RedisKeyTreeGroup): RedisTreeDataNode[] => {
const childGroups = Array.from(group.children.values()).sort((a, b) => a.name.localeCompare(b.name));
const childLeaves = [...group.leaves].sort((a, b) => a.keyInfo.key.localeCompare(b.keyInfo.key));
const childLeaves = sortLeafNodes
? [...group.leaves].sort((a, b) => a.keyInfo.key.localeCompare(b.keyInfo.key))
: group.leaves;
const groupNodes: DataNode[] = childGroups.map((child) => {
const groupNodes: RedisTreeDataNode[] = childGroups.map((child) => {
const groupNodeKey = `group:${child.path}`;
groupKeys.push(groupNodeKey);
return {
key: groupNodeKey,
title: (
<Space size={6}>
<FolderOpenOutlined style={{ color: '#8c8c8c' }} />
<span>{child.name}</span>
<span style={{ fontSize: 12, color: '#999' }}>({countGroupLeafNodes(child)})</span>
</Space>
),
title: child.name,
nodeType: 'group',
groupName: child.name,
groupLeafCount: child.leafCount,
selectable: false,
disableCheckbox: true,
children: toTreeNodes(child),
};
});
const leafNodes: DataNode[] = childLeaves.map((leaf) => {
const nodeKey = `key:${leaf.keyInfo.key}`;
rawKeyByNodeKey.set(nodeKey, leaf.keyInfo.key);
leafNodeKeyByRawKey.set(leaf.keyInfo.key, nodeKey);
const leafNodes: RedisTreeDataNode[] = childLeaves.map((leaf) => {
return {
key: nodeKey,
key: buildLeafNodeKey(leaf.keyInfo.key),
isLeaf: true,
title: (
<div
style={{
display: 'flex',
alignItems: 'center',
gap: 8,
minWidth: 0,
width: '100%',
overflow: 'hidden',
}}
>
<div
style={{
display: 'flex',
alignItems: 'center',
gap: 6,
minWidth: 0,
flex: 1,
overflow: 'hidden',
}}
>
<KeyOutlined style={{ color: '#1677ff', flexShrink: 0 }} />
<Tooltip title={leaf.keyInfo.key}>
<span
style={{
minWidth: 0,
overflow: 'hidden',
textOverflow: 'ellipsis',
whiteSpace: 'nowrap',
display: 'block',
}}
>
{leaf.label}
</span>
</Tooltip>
</div>
<Tag
color={getTypeColor(leaf.keyInfo.type)}
style={{
marginInlineEnd: 0,
width: showTTL ? REDIS_TREE_KEY_TYPE_WIDTH : REDIS_TREE_KEY_TYPE_WIDTH_NARROW,
textAlign: 'center',
flexShrink: 0
}}
>
{leaf.keyInfo.type}
</Tag>
{showTTL && (
<span
style={{
width: REDIS_TREE_KEY_TTL_WIDTH,
fontSize: 12,
color: '#999',
textAlign: 'left',
whiteSpace: 'nowrap',
flexShrink: 0,
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
>
{formatTTL(leaf.keyInfo.ttl)}
</span>
)}
</div>
),
title: leaf.label,
nodeType: 'leaf',
leafLabel: leaf.label,
rawKey: leaf.keyInfo.key,
keyType: leaf.keyInfo.type,
ttl: leaf.keyInfo.ttl,
};
});
@@ -407,20 +390,31 @@ const buildRedisKeyTree = (
return {
treeData: toTreeNodes(root),
rawKeyByNodeKey,
leafNodeKeyByRawKey,
groupKeys,
};
};
const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
const { connections } = useStore();
const connections = useStore(state => state.connections);
const theme = useStore(state => state.theme);
const appearance = useStore(state => state.appearance);
const darkMode = theme === 'dark';
const opacity = normalizeOpacityForPlatform(appearance.opacity);
const connection = connections.find(c => c.id === connectionId);
const keyAccentColor = darkMode ? '#ffd666' : '#1677ff';
const jsonAccentColor = darkMode ? '#f6c453' : '#1890ff';
const valueToolbarBg = darkMode
? `rgba(38, 38, 38, ${opacity})`
: `rgba(245, 245, 245, ${opacity})`;
const valueToolbarBorder = darkMode
? `1px solid rgba(255, 255, 255, ${Math.max(0.12, Math.min(0.24, opacity * 0.22))})`
: `1px solid rgba(0, 0, 0, ${Math.max(0.08, Math.min(0.2, opacity * 0.12))})`;
const valueToolbarText = darkMode ? 'rgba(255, 255, 255, 0.78)' : '#666';
const [keys, setKeys] = useState<RedisKeyInfo[]>([]);
const [loading, setLoading] = useState(false);
const [searchPattern, setSearchPattern] = useState('*');
const [cursor, setCursor] = useState<number>(0);
const [cursor, setCursor] = useState<string>('0');
const [hasMore, setHasMore] = useState(false);
const [selectedKey, setSelectedKey] = useState<string | null>(null);
const [keyValue, setKeyValue] = useState<RedisValue | null>(null);
@@ -445,11 +439,14 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
onSave: (newValue: string) => Promise<void>;
} | null>(null);
const jsonEditValueRef = useRef<string>('');
const latestLoadRequestIdRef = useRef(0);
// Panel width state and refs - defaults to 50% width
const [leftPanelWidth, setLeftPanelWidth] = useState<number | string>('50%');
const leftPanelRef = useRef<HTMLDivElement>(null);
const treeContainerRef = useRef<HTMLDivElement>(null);
const [showTreeKeyTTL, setShowTreeKeyTTL] = useState(true);
const [treeHeight, setTreeHeight] = useState(500);
const [expandedGroupKeys, setExpandedGroupKeys] = useState<string[]>([]);
const getConfig = useCallback(() => {
@@ -466,20 +463,28 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
const loadKeys = useCallback(async (
pattern: string = '*',
fromCursor: number = 0,
fromCursor: string = '0',
append: boolean = false,
targetCount: number = REDIS_KEY_INITIAL_LOAD_COUNT
targetCount?: number
) => {
const config = getConfig();
if (!config) return;
const normalizedPattern = pattern.trim() || '*';
const effectiveTargetCount = targetCount ?? getRedisScanLoadCount(normalizedPattern, append);
const requestId = latestLoadRequestIdRef.current + 1;
latestLoadRequestIdRef.current = requestId;
setLoading(true);
try {
const res = await (window as any).go.app.App.RedisScanKeys(config, pattern, fromCursor, targetCount);
const res = await (window as any).go.app.App.RedisScanKeys(config, normalizedPattern, fromCursor, effectiveTargetCount);
if (requestId !== latestLoadRequestIdRef.current) {
return;
}
if (res.success) {
const result = res.data;
const scannedKeys = Array.isArray(result?.keys) ? result.keys : [];
const nextCursor = Number(result?.cursor || 0);
const nextCursor = normalizeRedisCursor(result?.cursor);
if (append) {
setKeys(prev => {
const keyMap = new Map<string, RedisKeyInfo>();
@@ -491,38 +496,43 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
setKeys(scannedKeys);
}
setCursor(nextCursor);
setHasMore(nextCursor !== 0);
setHasMore(nextCursor !== '0');
} else {
message.error('加载 Key 失败: ' + res.message);
}
} catch (e: any) {
if (requestId !== latestLoadRequestIdRef.current) {
return;
}
message.error('加载 Key 失败: ' + (e?.message || String(e)));
} finally {
setLoading(false);
if (requestId === latestLoadRequestIdRef.current) {
setLoading(false);
}
}
}, [getConfig]);
useEffect(() => {
loadKeys(searchPattern, 0, false, REDIS_KEY_INITIAL_LOAD_COUNT);
loadKeys(searchPattern, '0', false, getRedisScanLoadCount(searchPattern, false));
}, [redisDB]);
const handleSearch = (value: string) => {
const pattern = value.trim() || '*';
setSearchPattern(pattern);
setCursor(0);
loadKeys(pattern, 0, false, REDIS_KEY_INITIAL_LOAD_COUNT);
setCursor('0');
loadKeys(pattern, '0', false, getRedisScanLoadCount(pattern, false));
};
const handleLoadMore = () => {
if (!hasMore || loading) {
return;
}
loadKeys(searchPattern, cursor, true, REDIS_KEY_LOAD_MORE_COUNT);
loadKeys(searchPattern, cursor, true, getRedisScanLoadCount(searchPattern, true));
};
const handleRefresh = () => {
setCursor(0);
loadKeys(searchPattern, 0, false, REDIS_KEY_INITIAL_LOAD_COUNT);
setCursor('0');
loadKeys(searchPattern, '0', false, getRedisScanLoadCount(searchPattern, false));
};
const loadKeyValue = async (key: string) => {
@@ -678,23 +688,51 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
return () => window.removeEventListener('resize', handleWindowResize);
}, []);
useEffect(() => {
const target = treeContainerRef.current;
if (!target) return;
const updateTreeHeight = (nextHeight: number) => {
if (nextHeight <= 0) return;
setTreeHeight((prev) => (prev === nextHeight ? prev : nextHeight));
};
updateTreeHeight(Math.round(target.getBoundingClientRect().height));
if (typeof ResizeObserver !== 'undefined') {
const observer = new ResizeObserver((entries) => {
const nextHeight = Math.round(entries[0]?.contentRect.height || target.getBoundingClientRect().height);
updateTreeHeight(nextHeight);
});
observer.observe(target);
return () => observer.disconnect();
}
const handleWindowResize = () => {
updateTreeHeight(Math.round(target.getBoundingClientRect().height));
};
window.addEventListener('resize', handleWindowResize);
return () => window.removeEventListener('resize', handleWindowResize);
}, []);
const isLargeKeyspace = keys.length >= REDIS_LARGE_KEYSPACE_THRESHOLD;
const keyTree = useMemo(() => {
return buildRedisKeyTree(keys, formatTTL, getTypeColor, showTreeKeyTTL);
}, [keys, showTreeKeyTTL]);
return buildRedisKeyTree(keys, !isLargeKeyspace);
}, [isLargeKeyspace, keys]);
const groupKeySet = useMemo(() => new Set(keyTree.groupKeys), [keyTree.groupKeys]);
const selectedTreeNodeKeys = useMemo(() => {
if (!selectedKey) {
return [] as string[];
}
const nodeKey = keyTree.leafNodeKeyByRawKey.get(selectedKey);
return nodeKey ? [nodeKey] : [];
}, [selectedKey, keyTree]);
return [buildLeafNodeKey(selectedKey)];
}, [selectedKey]);
const checkedTreeNodeKeys = useMemo(() => {
return selectedKeys
.map(rawKey => keyTree.leafNodeKeyByRawKey.get(rawKey))
.filter((nodeKey): nodeKey is string => Boolean(nodeKey));
}, [selectedKeys, keyTree]);
return selectedKeys.map(rawKey => buildLeafNodeKey(rawKey));
}, [selectedKeys]);
useEffect(() => {
const existingKeySet = new Set(keys.map(item => item.key));
@@ -703,16 +741,19 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
useEffect(() => {
setExpandedGroupKeys((prev) => {
const validKeys = prev.filter(nodeKey => keyTree.groupKeys.includes(nodeKey));
return validKeys;
const validKeys = prev.filter(nodeKey => groupKeySet.has(nodeKey));
if (!isLargeKeyspace) {
return validKeys;
}
return validKeys.slice(0, REDIS_LARGE_KEYSPACE_MAX_EXPANDED_GROUPS);
});
}, [keyTree]);
}, [groupKeySet, isLargeKeyspace]);
const handleTreeSelect = (nodeKeys: React.Key[]) => {
if (nodeKeys.length === 0) {
return;
}
const rawKey = keyTree.rawKeyByNodeKey.get(String(nodeKeys[0]));
const rawKey = parseRawKeyFromNodeKey(nodeKeys[0]);
if (!rawKey) {
return;
}
@@ -722,11 +763,119 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
const handleTreeCheck = (checked: React.Key[] | { checked: React.Key[]; halfChecked: React.Key[] }) => {
const checkedNodeKeys = Array.isArray(checked) ? checked : checked.checked;
const rawKeys = checkedNodeKeys
.map(nodeKey => keyTree.rawKeyByNodeKey.get(String(nodeKey)))
.map(nodeKey => parseRawKeyFromNodeKey(nodeKey))
.filter((rawKey): rawKey is string => Boolean(rawKey));
setSelectedKeys(rawKeys);
};
const renderTreeNodeTitle = useCallback((nodeData: DataNode) => {
const treeNode = nodeData as RedisTreeDataNode;
if (treeNode.nodeType === 'group') {
return (
<Space size={6}>
<FolderOpenOutlined style={{ color: '#8c8c8c' }} />
<span>{treeNode.groupName}</span>
<span style={{ fontSize: 12, color: '#999' }}>({treeNode.groupLeafCount ?? 0})</span>
</Space>
);
}
const leafLabel = treeNode.leafLabel ?? '';
const rawKey = treeNode.rawKey ?? parseRawKeyFromNodeKey(treeNode.key ?? '') ?? '';
const keyType = treeNode.keyType ?? 'unknown';
const ttl = typeof treeNode.ttl === 'number' ? treeNode.ttl : -1;
if (isLargeKeyspace) {
return (
<div style={{ minWidth: 0, overflow: 'hidden', textOverflow: 'ellipsis', whiteSpace: 'nowrap' }}>
<span>{leafLabel}</span>
<span style={{ marginLeft: 8, color: '#999', fontSize: 12 }}>[{keyType}]</span>
{showTreeKeyTTL && (
<span style={{ marginLeft: 8, color: '#999', fontSize: 12 }}>{formatTTL(ttl)}</span>
)}
</div>
);
}
return (
<div
style={{
display: 'flex',
alignItems: 'center',
gap: 8,
minWidth: 0,
width: '100%',
overflow: 'hidden',
}}
>
<div
style={{
display: 'flex',
alignItems: 'center',
gap: 6,
minWidth: 0,
flex: 1,
overflow: 'hidden',
}}
>
<KeyOutlined style={{ color: keyAccentColor, flexShrink: 0 }} />
<Tooltip title={rawKey}>
<span
style={{
minWidth: 0,
overflow: 'hidden',
textOverflow: 'ellipsis',
whiteSpace: 'nowrap',
display: 'block',
}}
>
{leafLabel}
</span>
</Tooltip>
</div>
<Tag
color={getTypeColor(keyType)}
style={{
marginInlineEnd: 0,
width: showTreeKeyTTL ? REDIS_TREE_KEY_TYPE_WIDTH : REDIS_TREE_KEY_TYPE_WIDTH_NARROW,
textAlign: 'center',
flexShrink: 0
}}
>
{keyType}
</Tag>
{showTreeKeyTTL && (
<span
style={{
width: REDIS_TREE_KEY_TTL_WIDTH,
fontSize: 12,
color: '#999',
textAlign: 'left',
whiteSpace: 'nowrap',
flexShrink: 0,
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
>
{formatTTL(ttl)}
</span>
)}
</div>
);
}, [formatTTL, getTypeColor, isLargeKeyspace, showTreeKeyTTL]);
const handleTreeExpand = (nextExpandedKeys: React.Key[]) => {
const validGroupKeys = nextExpandedKeys
.map(key => String(key))
.filter(nodeKey => groupKeySet.has(nodeKey));
if (isLargeKeyspace) {
setExpandedGroupKeys(validGroupKeys.slice(0, REDIS_LARGE_KEYSPACE_MAX_EXPANDED_GROUPS));
return;
}
setExpandedGroupKeys(validGroupKeys);
};
const renderValueEditor = () => {
if (!keyValue || !selectedKey) {
return <div style={{ padding: 20, textAlign: 'center', color: '#999' }}>请选择一个 Key</div>;
@@ -766,13 +915,13 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
<div style={{ display: 'flex', flexDirection: 'column', height: '100%' }}>
<div style={{
padding: '4px 8px',
background: '#f5f5f5',
borderBottom: '1px solid #d9d9d9',
background: valueToolbarBg,
borderBottom: valueToolbarBorder,
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center'
}}>
<span style={{ fontSize: 12, color: '#666' }}>
<span style={{ fontSize: 12, color: valueToolbarText }}>
{encoding && `编码: ${encoding}`}
</span>
<Radio.Group size="small" value={viewMode} onChange={(e) => setViewMode(e.target.value)}>
@@ -785,6 +934,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
<Editor
height="calc(100% - 72px)"
language={isJson ? 'json' : 'plaintext'}
theme={darkMode ? 'transparent-dark' : 'transparent-light'}
value={displayValue}
options={{
readOnly: true,
@@ -934,7 +1084,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
return (
<Tooltip title={<pre style={{ maxHeight: 300, overflow: 'auto', margin: 0, fontSize: 12 }}>{tooltipContent}</pre>} styles={{ root: { maxWidth: 600 } }}>
<span style={{
color: record.isBinary ? '#d46b08' : (record.isJson ? '#1890ff' : undefined),
color: record.isBinary ? '#d46b08' : (record.isJson ? jsonAccentColor : undefined),
fontFamily: record.isBinary ? 'monospace' : undefined,
fontSize: record.isBinary ? 11 : undefined
}}>
@@ -1113,7 +1263,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
return (
<Tooltip title={<pre style={{ maxHeight: 300, overflow: 'auto', margin: 0, fontSize: 12 }}>{tooltipContent}</pre>} styles={{ root: { maxWidth: 600 } }}>
<span style={{
color: record.isBinary ? '#d46b08' : (record.isJson ? '#1890ff' : undefined),
color: record.isBinary ? '#d46b08' : (record.isJson ? jsonAccentColor : undefined),
fontFamily: record.isBinary ? 'monospace' : undefined,
fontSize: record.isBinary ? 11 : undefined
}}>
@@ -1268,7 +1418,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
return (
<Tooltip title={<pre style={{ maxHeight: 300, overflow: 'auto', margin: 0, fontSize: 12 }}>{tooltipContent}</pre>} styles={{ root: { maxWidth: 600 } }}>
<span style={{
color: record.isBinary ? '#d46b08' : (record.isJson ? '#1890ff' : undefined),
color: record.isBinary ? '#d46b08' : (record.isJson ? jsonAccentColor : undefined),
fontFamily: record.isBinary ? 'monospace' : undefined,
fontSize: record.isBinary ? 11 : undefined
}}>
@@ -1422,7 +1572,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
return (
<Tooltip title={<pre style={{ maxHeight: 300, overflow: 'auto', margin: 0, fontSize: 12 }}>{tooltipContent}</pre>} styles={{ root: { maxWidth: 600 } }}>
<span style={{
color: record.isBinary ? '#d46b08' : (record.isJson ? '#1890ff' : undefined),
color: record.isBinary ? '#d46b08' : (record.isJson ? jsonAccentColor : undefined),
fontFamily: record.isBinary ? 'monospace' : undefined,
fontSize: record.isBinary ? 11 : undefined
}}>
@@ -1636,7 +1786,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
return (
<Tooltip title={<pre style={{ maxHeight: 300, overflow: 'auto', margin: 0, fontSize: 12 }}>{tooltipContent}</pre>} styles={{ root: { maxWidth: 720 } }}>
<span style={{
color: record.isBinary ? '#d46b08' : (record.isJson ? '#1890ff' : undefined),
color: record.isBinary ? '#d46b08' : (record.isJson ? jsonAccentColor : undefined),
fontFamily: record.isBinary ? 'monospace' : undefined,
fontSize: record.isBinary ? 11 : undefined
}}>
@@ -1769,24 +1919,34 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
</Popconfirm>
</div>
</div>
<div style={{ flex: 1, overflow: 'auto' }}>
<Spin spinning={loading} size="small">
<Tree
blockNode
showIcon={false}
checkable
checkStrictly
selectable
treeData={keyTree.treeData}
selectedKeys={selectedTreeNodeKeys}
checkedKeys={checkedTreeNodeKeys}
expandedKeys={expandedGroupKeys}
onExpand={(nextExpandedKeys) => setExpandedGroupKeys(nextExpandedKeys as string[])}
onSelect={(nodeKeys) => handleTreeSelect(nodeKeys)}
onCheck={(checked) => handleTreeCheck(checked)}
style={{ padding: '8px 6px' }}
/>
</Spin>
<div style={{ flex: 1, minHeight: 0, display: 'flex', flexDirection: 'column' }}>
{isLargeKeyspace && (
<div style={{ padding: '6px 8px', fontSize: 12, color: '#8c8c8c', borderBottom: '1px solid #f0f0f0' }}>
当前 Key 数量较多,最多同时展开 {REDIS_LARGE_KEYSPACE_MAX_EXPANDED_GROUPS} 个分组
</div>
)}
<div ref={treeContainerRef} style={{ flex: 1, minHeight: 0, overflow: 'hidden' }}>
<Spin spinning={loading} size="small" style={{ width: '100%' }}>
<Tree
blockNode
showIcon={false}
checkable
checkStrictly
selectable
virtual
height={Math.max(treeHeight - 8, 220)}
treeData={keyTree.treeData}
titleRender={renderTreeNodeTitle}
selectedKeys={selectedTreeNodeKeys}
checkedKeys={checkedTreeNodeKeys}
expandedKeys={expandedGroupKeys}
onExpand={handleTreeExpand}
onSelect={(nodeKeys) => handleTreeSelect(nodeKeys)}
onCheck={(checked) => handleTreeCheck(checked)}
style={{ padding: '8px 6px' }}
/>
</Spin>
</div>
{hasMore && (
<div style={{ padding: 8, textAlign: 'center' }}>
<Button onClick={handleLoadMore} loading={loading} disabled={!hasMore || loading}>加载更多</Button>
@@ -1819,6 +1979,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
<Editor
height="450px"
language={tryFormatJson(editValue).isJson ? 'json' : 'plaintext'}
theme={darkMode ? 'transparent-dark' : 'transparent-light'}
value={editValue}
onChange={(value) => setEditValue(value || '')}
options={{
@@ -1883,6 +2044,7 @@ const RedisViewer: React.FC<RedisViewerProps> = ({ connectionId, redisDB }) => {
<Editor
height="450px"
language={jsonEditConfig?.isJson ? 'json' : 'plaintext'}
theme={darkMode ? 'transparent-dark' : 'transparent-light'}
defaultValue={jsonEditConfig?.value || ''}
onChange={(value) => { jsonEditValueRef.current = value || ''; }}
onMount={(editor) => { jsonEditValueRef.current = jsonEditConfig?.value || ''; }}


@@ -1,11 +1,12 @@
import React, { useEffect, useState, useMemo, useRef } from 'react';
import { Tree, message, Dropdown, MenuProps, Input, Button, Modal, Form, Badge, Checkbox, Space, Select } from 'antd';
import { Tree, message, Dropdown, MenuProps, Input, Button, Modal, Form, Badge, Checkbox, Space, Select, Popover, Tooltip } from 'antd';
import {
DatabaseOutlined,
TableOutlined,
EyeOutlined,
ConsoleSqlOutlined,
HddOutlined,
FolderOutlined,
FolderOpenOutlined,
FileTextOutlined,
CopyOutlined,
@@ -42,13 +43,14 @@ interface TreeNode {
children?: TreeNode[];
icon?: React.ReactNode;
dataRef?: any;
type?: 'connection' | 'database' | 'table' | 'view' | 'db-trigger' | 'routine' | 'object-group' | 'queries-folder' | 'saved-query' | 'folder-columns' | 'folder-indexes' | 'folder-fks' | 'folder-triggers' | 'redis-db';
type?: 'connection' | 'database' | 'table' | 'view' | 'db-trigger' | 'routine' | 'object-group' | 'queries-folder' | 'saved-query' | 'folder-columns' | 'folder-indexes' | 'folder-fks' | 'folder-triggers' | 'redis-db' | 'tag';
}
type BatchTableExportMode = 'schema' | 'backup' | 'dataOnly';
type BatchObjectType = 'table' | 'view';
type BatchObjectFilterType = 'all' | BatchObjectType;
type BatchSelectionScope = 'filtered' | 'all';
type SearchScope = 'smart' | 'object' | 'database' | 'host' | 'tag';
interface BatchObjectItem {
title: string;
@@ -58,12 +60,32 @@ interface BatchObjectItem {
dataRef: any;
}
const SEARCH_SCOPE_OPTIONS: Array<{ value: SearchScope; label: string }> = [
{ value: 'smart', label: '智能' },
{ value: 'object', label: '表对象' },
{ value: 'database', label: '库' },
{ value: 'host', label: 'Host' },
{ value: 'tag', label: '标签' },
];
const SEARCH_SCOPE_LABEL_MAP: Record<SearchScope, string> = SEARCH_SCOPE_OPTIONS.reduce((acc, option) => {
acc[option.value] = option.label;
return acc;
}, {} as Record<SearchScope, string>);
const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }> = ({ onEditConnection }) => {
const connections = useStore(state => state.connections);
const savedQueries = useStore(state => state.savedQueries);
const addConnection = useStore(state => state.addConnection);
const addTab = useStore(state => state.addTab);
const setActiveContext = useStore(state => state.setActiveContext);
const removeConnection = useStore(state => state.removeConnection);
const connectionTags = useStore(state => state.connectionTags);
const addConnectionTag = useStore(state => state.addConnectionTag);
const updateConnectionTag = useStore(state => state.updateConnectionTag);
const removeConnectionTag = useStore(state => state.removeConnectionTag);
const moveConnectionToTag = useStore(state => state.moveConnectionToTag);
const reorderTags = useStore(state => state.reorderTags);
const closeTabsByConnection = useStore(state => state.closeTabsByConnection);
const closeTabsByDatabase = useStore(state => state.closeTabsByDatabase);
const theme = useStore(state => state.theme);
@@ -87,6 +109,9 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
};
const bgMain = getBg('#141414');
const [searchValue, setSearchValue] = useState('');
const [searchScopes, setSearchScopes] = useState<SearchScope[]>(['smart']);
const [isSearchScopePopoverOpen, setIsSearchScopePopoverOpen] = useState(false);
const searchInputRef = useRef<any>(null);
const [expandedKeys, setExpandedKeys] = useState<React.Key[]>([]);
const [autoExpandParent, setAutoExpandParent] = useState(true);
const [loadedKeys, setLoadedKeys] = useState<React.Key[]>([]);
@@ -109,6 +134,21 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
resizeObserver.observe(treeContainerRef.current);
return () => resizeObserver.disconnect();
}, []);
useEffect(() => {
const handleFocusSidebarSearch = () => {
const inputEl = searchInputRef.current?.input as HTMLInputElement | undefined;
if (!inputEl) {
return;
}
inputEl.focus();
inputEl.select();
};
window.addEventListener('gonavi:focus-sidebar-search', handleFocusSidebarSearch as EventListener);
return () => {
window.removeEventListener('gonavi:focus-sidebar-search', handleFocusSidebarSearch as EventListener);
};
}, []);
// Connection Status State: key -> 'success' | 'error'
const [connectionStates, setConnectionStates] = useState<Record<string, 'success' | 'error'>>({});
@@ -127,6 +167,10 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const [renameViewForm] = Form.useForm();
const [renameViewTarget, setRenameViewTarget] = useState<any>(null);
// Connection Tag Modals
const [isCreateTagModalOpen, setIsCreateTagModalOpen] = useState(false);
const [createTagForm] = Form.useForm();
// Batch Operations Modal
const [isBatchModalOpen, setIsBatchModalOpen] = useState(false);
const [batchTables, setBatchTables] = useState<BatchObjectItem[]>([]);
@@ -208,11 +252,21 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
useEffect(() => {
setTreeData((prev) => {
const prevMap = new Map<string, TreeNode>();
prev.forEach((node) => {
prevMap.set(String(node.key), node);
});
return connections.map((conn) => {
// Recursively collect connection nodes from the previous (possibly tagged)
// tree so an already-expanded connection keeps its loaded children state
const recurseCollect = (nodes: TreeNode[]) => {
nodes.forEach((node) => {
if (node.type === 'tag') {
if (node.children) recurseCollect(node.children);
} else if (node.type === 'connection') {
prevMap.set(String(node.key), node);
}
});
};
recurseCollect(prev);
const buildConnectionNode = (conn: SavedConnection): TreeNode => {
const existing = prevMap.get(conn.id);
return {
title: conn.name,
@@ -223,10 +277,157 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
isLeaf: false,
children: existing?.children,
} as TreeNode;
});
});
}, [connections]);
};
const taggedConnIds = new Set<string>();
const tagNodes: TreeNode[] = connectionTags.map((tag) => {
tag.connectionIds.forEach(id => taggedConnIds.add(id));
return {
title: tag.name,
key: `tag-${tag.id}`,
icon: <FolderOutlined style={{ color: '#faad14' }} />,
type: 'tag',
dataRef: tag,
isLeaf: false,
children: tag.connectionIds
.map(cid => connections.find(c => c.id === cid))
.filter(Boolean)
.map(conn => buildConnectionNode(conn!)),
} as TreeNode;
});
const ungroupedNodes: TreeNode[] = connections
.filter(c => !taggedConnIds.has(c.id))
.map(conn => buildConnectionNode(conn));
return [...tagNodes, ...ungroupedNodes];
});
}, [connections, connectionTags]);
const buildDuplicateConnectionName = (rawName: string): string => {
const baseName = String(rawName || '').trim() || '连接';
const suffix = ' - 副本';
const usedNames = new Set(connections.map(conn => String(conn.name || '').trim()));
let candidate = `${baseName}${suffix}`;
let counter = 2;
while (usedNames.has(candidate)) {
candidate = `${baseName}${suffix} ${counter}`;
counter += 1;
}
return candidate;
};
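The duplicate-name builder above appends a " - 副本" ("copy") suffix and bumps a numeric counter until the candidate is unused. A standalone sketch; here the existing names are passed in as a parameter (an assumption for testability — the component version closes over `connections`):

```typescript
// Build a unique "<base> - 副本[ n]" name against a list of taken names.
const buildDuplicateConnectionName = (rawName: string, existingNames: string[]): string => {
  const baseName = rawName.trim() || '连接';
  const suffix = ' - 副本';
  const usedNames = new Set(existingNames.map((name) => name.trim()));
  let candidate = `${baseName}${suffix}`;
  let counter = 2;
  while (usedNames.has(candidate)) {
    candidate = `${baseName}${suffix} ${counter}`;
    counter += 1;
  }
  return candidate;
};
```

The second duplicate of `prod` thus becomes `prod - 副本 2`, the third `prod - 副本 3`, and so on.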
const cloneConnectionConfig = (config: SavedConnection['config']): SavedConnection['config'] => {
const raw: any = config || {};
let cloned: any = {};
try {
cloned = typeof structuredClone === 'function'
? structuredClone(raw)
: JSON.parse(JSON.stringify(raw));
} catch {
cloned = { ...raw };
}
const readString = (...values: unknown[]): string => {
for (const value of values) {
if (typeof value === 'string') {
return value;
}
}
return '';
};
const readBool = (fallback: boolean, ...values: unknown[]): boolean => {
for (const value of values) {
if (typeof value === 'boolean') {
return value;
}
}
return fallback;
};
const readNumber = (fallback: number, ...values: unknown[]): number => {
for (const value of values) {
const num = Number(value);
if (Number.isFinite(num)) {
return num;
}
}
return fallback;
};
const rawSSH = (cloned.ssh ?? cloned.SSH ?? {}) as Record<string, unknown>;
const normalizedSSH = {
host: readString(rawSSH.host, rawSSH.Host, cloned.sshHost, cloned.SSHHost),
port: readNumber(22, rawSSH.port, rawSSH.Port, cloned.sshPort, cloned.SSHPort),
user: readString(rawSSH.user, rawSSH.User, cloned.sshUser, cloned.SSHUser),
password: readString(rawSSH.password, rawSSH.Password, cloned.sshPassword, cloned.SSHPassword),
keyPath: readString(rawSSH.keyPath, rawSSH.KeyPath, cloned.sshKeyPath, cloned.SSHKeyPath),
};
const hasSSHDetail = Boolean(
normalizedSSH.host
|| normalizedSSH.user
|| normalizedSSH.password
|| normalizedSSH.keyPath
);
const rawProxy = (cloned.proxy ?? cloned.Proxy ?? {}) as Record<string, unknown>;
const proxyTypeRaw = readString(rawProxy.type, rawProxy.Type, cloned.proxyType, cloned.ProxyType).toLowerCase();
const proxyType: 'socks5' | 'http' = proxyTypeRaw === 'http' ? 'http' : 'socks5';
const normalizedProxy = {
type: proxyType,
host: readString(rawProxy.host, rawProxy.Host, cloned.proxyHost, cloned.ProxyHost),
port: readNumber(proxyType === 'http' ? 8080 : 1080, rawProxy.port, rawProxy.Port, cloned.proxyPort, cloned.ProxyPort),
user: readString(rawProxy.user, rawProxy.User, cloned.proxyUser, cloned.ProxyUser),
password: readString(rawProxy.password, rawProxy.Password, cloned.proxyPassword, cloned.ProxyPassword),
};
const hasProxyDetail = Boolean(normalizedProxy.host || normalizedProxy.user || normalizedProxy.password);
const rawHttpTunnel = (cloned.httpTunnel ?? cloned.HTTPTunnel ?? {}) as Record<string, unknown>;
const normalizedHttpTunnel = {
host: readString(rawHttpTunnel.host, rawHttpTunnel.Host, cloned.httpTunnelHost, cloned.HttpTunnelHost),
port: readNumber(8080, rawHttpTunnel.port, rawHttpTunnel.Port, cloned.httpTunnelPort, cloned.HttpTunnelPort),
user: readString(rawHttpTunnel.user, rawHttpTunnel.User, cloned.httpTunnelUser, cloned.HttpTunnelUser),
password: readString(rawHttpTunnel.password, rawHttpTunnel.Password, cloned.httpTunnelPassword, cloned.HttpTunnelPassword),
};
const hasHttpTunnelDetail = Boolean(normalizedHttpTunnel.host || normalizedHttpTunnel.user || normalizedHttpTunnel.password);
const normalizedUseHttpTunnel = readBool(hasHttpTunnelDetail, cloned.useHttpTunnel, cloned.UseHTTPTunnel);
const normalizedUseProxy = !normalizedUseHttpTunnel && readBool(hasProxyDetail, cloned.useProxy, cloned.UseProxy);
const rawHosts = Array.isArray(cloned.hosts)
? cloned.hosts
: (Array.isArray(cloned.Hosts) ? cloned.Hosts : []);
const normalizedHosts = rawHosts
.map((entry: unknown) => String(entry || '').trim())
.filter((entry: string) => !!entry);
return {
...(cloned as SavedConnection['config']),
useSSH: readBool(hasSSHDetail, cloned.useSSH, cloned.UseSSH),
ssh: normalizedSSH,
useProxy: normalizedUseProxy,
proxy: normalizedProxy,
useHttpTunnel: normalizedUseHttpTunnel,
httpTunnel: normalizedHttpTunnel,
hosts: normalizedHosts,
timeout: readNumber(30, cloned.timeout, cloned.Timeout),
};
};
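The `readString` / `readNumber` / `readBool` helpers used throughout this normalizer are defined outside this hunk. A hedged sketch of their apparent contract, inferred from the call sites (first usable candidate wins; the typed variants take the default as the leading argument) — these names and signatures are assumptions, not the real definitions:

```typescript
// Hypothetical reconstructions: the real helpers live outside this diff.
// readString: first non-empty string among the candidates, else ''.
const readString = (...candidates: unknown[]): string => {
  for (const c of candidates) {
    if (typeof c === 'string' && c.trim() !== '') return c;
  }
  return '';
};

// readNumber: first candidate coercible to a finite number, else the fallback.
const readNumber = (fallback: number, ...candidates: unknown[]): number => {
  for (const c of candidates) {
    if (c === undefined || c === null || c === '') continue;
    const n = Number(c);
    if (Number.isFinite(n)) return n;
  }
  return fallback;
};

// readBool: first boolean candidate, else the fallback.
const readBool = (fallback: boolean, ...candidates: unknown[]): boolean => {
  for (const c of candidates) {
    if (typeof c === 'boolean') return c;
  }
  return fallback;
};
```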
const handleDuplicateConnection = (conn: SavedConnection) => {
if (!conn) return;
const duplicatedConnection: SavedConnection = {
...conn,
id: `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`,
name: buildDuplicateConnectionName(conn.name),
config: cloneConnectionConfig(conn.config),
includeDatabases: conn.includeDatabases ? [...conn.includeDatabases] : undefined,
includeRedisDatabases: conn.includeRedisDatabases ? [...conn.includeRedisDatabases] : undefined,
};
addConnection(duplicatedConnection);
message.success(`已复制连接: ${duplicatedConnection.name}`);
};
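`handleDuplicateConnection` builds the copy's id from a millisecond timestamp plus a short base-36 suffix. Isolated here for illustration (the helper name is ours, not the component's):

```typescript
// Same id scheme as the duplicate above: epoch millis, a dash, then up to
// six base-36 characters drawn from Math.random().
const makeDuplicateId = (): string =>
  `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
```

Collisions would require two ids minted in the same millisecond with the same random suffix, which is acceptable for locally stored connections.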
const updateTreeData = (list: TreeNode[], key: React.Key, children: TreeNode[] | undefined): TreeNode[] => {
return list.map(node => {
if (node.key === key) {
@@ -456,10 +657,15 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
}
case 'oracle':
case 'dm':
if (!safeDbName) {
return [{ sql: `SELECT VIEW_NAME AS view_name FROM USER_VIEWS ORDER BY VIEW_NAME` }];
}
return [{ sql: `SELECT OWNER AS schema_name, VIEW_NAME AS view_name FROM ALL_VIEWS WHERE OWNER = '${safeDbName.toUpperCase()}' ORDER BY VIEW_NAME` }];
return normalizeMetadataQuerySpecs([
{ sql: `SELECT VIEW_NAME AS view_name FROM USER_VIEWS ORDER BY VIEW_NAME` },
{ sql: `SELECT OWNER AS schema_name, VIEW_NAME AS view_name FROM ALL_VIEWS WHERE OWNER = USER ORDER BY VIEW_NAME` },
{
sql: safeDbName
? `SELECT OWNER AS schema_name, VIEW_NAME AS view_name FROM ALL_VIEWS WHERE OWNER = '${safeDbName.toUpperCase()}' ORDER BY VIEW_NAME`
: '',
},
]);
case 'sqlite':
return [{ sql: `SELECT name AS view_name FROM sqlite_master WHERE type = 'view' ORDER BY name` }];
case 'duckdb':
@@ -542,10 +748,15 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
}
case 'oracle':
case 'dm':
if (!safeDbName) {
return [{ sql: `SELECT OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM USER_OBJECTS WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` }];
}
return [{ sql: `SELECT OWNER AS schema_name, OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM ALL_OBJECTS WHERE OWNER = '${safeDbName.toUpperCase()}' AND OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` }];
return normalizeMetadataQuerySpecs([
{ sql: `SELECT OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM USER_OBJECTS WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` },
{ sql: `SELECT OWNER AS schema_name, OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM ALL_OBJECTS WHERE OWNER = USER AND OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME` },
{
sql: safeDbName
? `SELECT OWNER AS schema_name, OBJECT_NAME AS routine_name, OBJECT_TYPE AS routine_type FROM ALL_OBJECTS WHERE OWNER = '${safeDbName.toUpperCase()}' AND OBJECT_TYPE IN ('FUNCTION','PROCEDURE') ORDER BY OBJECT_TYPE, OBJECT_NAME`
: '',
},
]);
case 'duckdb':
return [{
sql: `SELECT schema_name, function_name AS routine_name, 'FUNCTION' AS routine_type FROM duckdb_functions() WHERE internal = false AND lower(function_type) = 'macro' AND COALESCE(macro_definition, '') <> '' ORDER BY schema_name, function_name`,
@@ -705,7 +916,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const res = await (window as any).go.app.App.RedisGetDatabases(config);
if (res.success) {
setConnectionStates(prev => ({ ...prev, [conn.id]: 'success' }));
let dbs = (res.data as any[]).map((db: any) => ({
const redisRows: any[] = Array.isArray(res.data) ? res.data : [];
let dbs = redisRows.map((db: any) => ({
title: `db${db.index}${db.keys > 0 ? ` (${db.keys})` : ''}`,
key: `${conn.id}-db${db.index}`,
icon: <DatabaseOutlined style={{ color: '#DC382D' }} />,
@@ -736,7 +948,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const res = await DBGetDatabases(config as any);
if (res.success) {
setConnectionStates(prev => ({ ...prev, [conn.id]: 'success' }));
let dbs = (res.data as any[]).map((row: any) => ({
const dbRows: any[] = Array.isArray(res.data) ? res.data : [];
let dbs = dbRows.map((row: any) => ({
title: row.Database || row.database,
key: `${conn.id}-${row.Database || row.database}`,
icon: <DatabaseOutlined />,
@@ -751,10 +964,13 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
}
setTreeData(origin => updateTreeData(origin, node.key, dbs));
} else {
setConnectionStates(prev => ({ ...prev, [conn.id]: 'error' }));
message.error({ content: res.message, key: `conn-${conn.id}-dbs` });
}
} else {
setConnectionStates(prev => ({ ...prev, [conn.id]: 'error' }));
message.error({ content: res.message, key: `conn-${conn.id}-dbs` });
}
} catch (e: any) {
setConnectionStates(prev => ({ ...prev, [conn.id]: 'error' }));
message.error({ content: '连接失败: ' + (e?.message || String(e)), key: `conn-${conn.id}-dbs` });
} finally {
loadingNodesRef.current.delete(loadKey);
}
@@ -799,7 +1015,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
if (res.success) {
setConnectionStates(prev => ({ ...prev, [key as string]: 'success' }));
const tableEntries = (res.data as any[]).map((row: any) => {
const tableRows: any[] = Array.isArray(res.data) ? res.data : [];
const tableEntries = tableRows.map((row: any) => {
const tableName = Object.values(row)[0] as string;
const parsed = splitQualifiedName(tableName);
return {
@@ -815,7 +1032,11 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
loadFunctions(conn, conn.dbName),
]);
const viewEntries = viewsResult.views.map((viewName) => {
const viewRows: string[] = Array.isArray(viewsResult.views) ? viewsResult.views : [];
const triggerRows: any[] = Array.isArray(triggersResult.triggers) ? triggersResult.triggers : [];
const routineRows: any[] = Array.isArray(routinesResult.routines) ? routinesResult.routines : [];
const viewEntries = viewRows.map((viewName: string) => {
const parsed = splitQualifiedName(viewName);
return {
viewName,
@@ -829,7 +1050,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const triggerSeen = new Set<string>();
const metadataDialect = getMetadataDialect(conn as SavedConnection);
triggersResult.triggers.forEach((trigger) => {
triggerRows.forEach((trigger: any) => {
const triggerParsed = splitQualifiedName(trigger.triggerName);
const tableParsed = splitQualifiedName(trigger.tableName);
const schemaName = tableParsed.schemaName || triggerParsed.schemaName || String(conn.dbName || '').trim();
@@ -854,7 +1075,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return deduped;
})();
const routineEntries = routinesResult.routines.map((routine) => {
const routineEntries = routineRows.map((routine: any) => {
const parsed = splitQualifiedName(routine.routineName);
const typeLabel = routine.routineType === 'PROCEDURE' ? 'P' : 'F';
return {
@@ -1036,12 +1257,16 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
setConnectionStates(prev => ({ ...prev, [key as string]: 'error' }));
message.error({ content: res.message, key: `db-${key}-tables` });
}
} catch (e: any) {
setConnectionStates(prev => ({ ...prev, [key as string]: 'error' }));
message.error({ content: '加载表失败: ' + (e?.message || String(e)), key: `db-${key}-tables` });
} finally {
loadingNodesRef.current.delete(loadKey);
}
};
const onLoadData = async ({ key, children, dataRef, type }: any) => {
if (type === 'tag') return;
if (children) return;
if (type === 'connection') {
@@ -1380,7 +1605,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const res = await DBGetDatabases(config as any);
if (res.success) {
let dbs = (res.data as any[]).map((row: any) => {
const dbRows: any[] = Array.isArray(res.data) ? res.data : [];
let dbs = dbRows.map((row: any) => {
const dbName = row.Database || row.database;
return {
title: dbName,
@@ -1421,9 +1647,11 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return;
}
const viewSet = new Set(viewResult.views.map(view => view.toLowerCase()));
const tableRows: any[] = Array.isArray(res.data) ? res.data : [];
const viewRows: string[] = Array.isArray(viewResult.views) ? viewResult.views : [];
const viewSet = new Set(viewRows.map((view: string) => view.toLowerCase()));
const tableObjects: BatchObjectItem[] = (res.data as any[])
const tableObjects: BatchObjectItem[] = tableRows
.map((row: any) => Object.values(row)[0] as string)
.filter((tableName: string) => !viewSet.has(tableName.toLowerCase()))
.map((tableName: string) => ({
@@ -1434,7 +1662,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
dataRef: { ...conn, tableName, dbName, objectType: 'table' },
}));
const viewObjects: BatchObjectItem[] = viewResult.views.map((viewName: string) => ({
const viewObjects: BatchObjectItem[] = viewRows.map((viewName: string) => ({
title: getSidebarTableDisplayName(conn, viewName),
key: `${conn.id}-${dbName}-view-${viewName}`,
objectName: viewName,
@@ -1600,7 +1828,8 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
const res = await DBGetDatabases(config as any);
if (res.success) {
let dbs = (res.data as any[]).map((row: any) => {
const dbRows: any[] = Array.isArray(res.data) ? res.data : [];
let dbs = dbRows.map((row: any) => {
const dbName = row.Database || row.database;
return {
title: dbName,
@@ -2187,28 +2416,205 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
setSearchValue(value);
};
const loop = (data: TreeNode[]): TreeNode[] => {
const result: TreeNode[] = [];
data.forEach(item => {
const match = item.title.toLowerCase().indexOf(searchValue.toLowerCase()) > -1;
if (item.children) {
const filteredChildren = loop(item.children);
if (filteredChildren.length > 0 || match) {
result.push({ ...item, children: filteredChildren });
}
const toggleSearchScope = (scope: SearchScope) => {
setSearchScopes((prev) => {
if (scope === 'smart') {
return ['smart'];
}
const withoutSmart = prev.filter((item) => item !== 'smart');
if (withoutSmart.includes(scope)) {
const next = withoutSmart.filter((item) => item !== scope);
return next.length > 0 ? next : ['smart'];
}
return [...withoutSmart, scope];
});
};
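The toggle rules above reduce to a small pure function: `smart` is exclusive, and emptying the selection falls back to `['smart']`. A standalone sketch (the `SearchScope` union mirrors the one defined elsewhere in this file):

```typescript
type SearchScope = 'smart' | 'database' | 'tag' | 'host' | 'object';

const toggleScope = (prev: SearchScope[], scope: SearchScope): SearchScope[] => {
  if (scope === 'smart') return ['smart'];       // smart clears everything else
  const withoutSmart = prev.filter((s) => s !== 'smart');
  if (withoutSmart.includes(scope)) {
    const next = withoutSmart.filter((s) => s !== scope);
    return next.length > 0 ? next : ['smart'];   // never allow an empty selection
  }
  return [...withoutSmart, scope];
};
```

Keeping the reducer pure like this makes the invariants (exclusivity, non-empty selection) trivially unit-testable without mounting the component.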
const setSearchScopeChecked = (scope: SearchScope, checked: boolean) => {
if (scope === 'smart') {
if (checked) {
setSearchScopes(['smart']);
} else if (searchScopes.length === 1 && searchScopes[0] === 'smart') {
setSearchScopes(['smart']);
} else {
setSearchScopes((prev) => {
const next = prev.filter((item) => item !== 'smart');
return next.length > 0 ? next : ['smart'];
});
}
return;
}
if (checked) {
setSearchScopes((prev) => {
const withoutSmart = prev.filter((item) => item !== 'smart');
if (withoutSmart.includes(scope)) {
return withoutSmart;
}
return [...withoutSmart, scope];
});
} else {
setSearchScopes((prev) => {
const next = prev.filter((item) => item !== scope && item !== 'smart');
return next.length > 0 ? next : ['smart'];
});
}
};
const searchScopeSummary = useMemo(() => {
if (searchScopes.includes('smart')) {
return '智能';
}
return searchScopes.map((scope) => SEARCH_SCOPE_LABEL_MAP[scope]).join(' + ');
}, [searchScopes]);
const searchScopePopoverContent = useMemo(() => {
const smartSelected = searchScopes.includes('smart');
const scopedOptions = SEARCH_SCOPE_OPTIONS.filter((option) => option.value !== 'smart');
return (
<div style={{ minWidth: 220, display: 'flex', flexDirection: 'column', gap: 8 }}>
<div style={{ fontSize: 12, color: '#8c8c8c' }}></div>
<Checkbox
checked={smartSelected}
onChange={(e) => setSearchScopeChecked('smart', e.target.checked)}
>
</Checkbox>
<div style={{ paddingLeft: 12, display: 'grid', gap: 6 }}>
{scopedOptions.map((option) => (
<Checkbox
key={option.value}
checked={searchScopes.includes(option.value)}
onChange={(e) => setSearchScopeChecked(option.value, e.target.checked)}
>
{option.label}
</Checkbox>
))}
</div>
<div style={{ fontSize: 12, color: '#8c8c8c' }}>
</div>
</div>
);
}, [searchScopes]);
const parseHostOnlyToken = (value: unknown): string[] => {
const raw = String(value || '').trim();
if (!raw) {
return [];
}
let text = raw.replace(/^[a-z][a-z0-9+.-]*:\/\//i, '');
if (text.includes('/')) {
text = text.split('/')[0];
}
if (text.includes('?')) {
text = text.split('?')[0];
}
if (text.includes('@')) {
text = text.split('@').pop() || '';
}
return text
.split(',')
.map((entry) => {
const token = entry.trim();
if (!token) return '';
if (token.startsWith('[')) {
const rightBracketIndex = token.indexOf(']');
if (rightBracketIndex > 0) {
return token.slice(0, rightBracketIndex + 1).toLowerCase();
}
}
const colonIndex = token.lastIndexOf(':');
if (colonIndex > 0) {
return token.slice(0, colonIndex).toLowerCase();
}
return token.toLowerCase();
})
.filter(Boolean);
};
const getConnectionHostSearchText = (node: TreeNode): string => {
if (node.type !== 'connection') return '';
const config = node.dataRef?.config || {};
const hostTokens = [
...parseHostOnlyToken(config.host),
...(Array.isArray(config.hosts) ? config.hosts.flatMap((entry: string) => parseHostOnlyToken(entry)) : []),
...parseHostOnlyToken(config.uri),
];
const uniqueHosts = Array.from(new Set(hostTokens));
return uniqueHosts.join(' ');
};
const getConnectionNameSearchText = (node: TreeNode): string => {
if (node.type !== 'connection') return '';
const name = node.dataRef?.name ?? node.title;
return String(name || '').toLowerCase();
};
const isObjectNode = (node: TreeNode): boolean => {
return node.type === 'table'
|| node.type === 'view'
|| node.type === 'db-trigger'
|| node.type === 'routine'
|| node.type === 'object-group';
};
const matchByScopes = (node: TreeNode, keyword: string, scopes: SearchScope[]): boolean => {
const title = String(node.title || '').toLowerCase();
if (scopes.includes('database') && node.type === 'database' && title.includes(keyword)) {
return true;
}
if (scopes.includes('tag') && node.type === 'tag' && title.includes(keyword)) {
return true;
}
if (scopes.includes('host') && node.type === 'connection' && getConnectionHostSearchText(node).includes(keyword)) {
return true;
}
if (scopes.includes('object') && isObjectNode(node) && title.includes(keyword)) {
return true;
}
return false;
};
const loop = (data: TreeNode[], keyword: string): TreeNode[] => {
const isSmartMode = searchScopes.includes('smart');
const result: TreeNode[] = [];
data.forEach((item) => {
const titleMatch = String(item.title || '').toLowerCase().includes(keyword);
const smartMatch = item.type === 'connection'
? getConnectionNameSearchText(item).includes(keyword) || getConnectionHostSearchText(item).includes(keyword)
: titleMatch;
const scopedMatch = matchByScopes(item, keyword, searchScopes);
const selfMatch = isSmartMode ? smartMatch : scopedMatch;
const filteredChildren = item.children ? loop(item.children, keyword) : [];
if (selfMatch) {
const shouldKeepFullSubtree = isSmartMode
|| item.type === 'connection'
|| item.type === 'database'
|| item.type === 'tag';
if (item.children && shouldKeepFullSubtree) {
result.push(item);
} else if (item.children && filteredChildren.length > 0) {
result.push({ ...item, children: filteredChildren });
} else {
result.push(item);
}
return;
}
if (filteredChildren.length > 0) {
result.push({ ...item, children: filteredChildren });
}
});
return result;
};
const displayTreeData = useMemo(() => {
if (!searchValue) return treeData;
return loop(treeData);
}, [searchValue, treeData]);
const keyword = searchValue.trim().toLowerCase();
if (!keyword) return treeData;
return loop(treeData, keyword);
}, [searchValue, searchScopes, treeData]);
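The filtering core of `displayTreeData` is a classic keep-if-self-or-descendant-matches recursion. A minimal sketch of that shape (the real component additionally keeps whole subtrees for connection/database/tag nodes and in smart mode; this version always prunes):

```typescript
type Node = { title: string; children?: Node[] };

// A node survives when it matches the keyword itself or when any descendant
// survives; non-matching parents keep only their surviving children.
const filterTree = (nodes: Node[], keyword: string): Node[] => {
  const result: Node[] = [];
  for (const node of nodes) {
    const selfMatch = node.title.toLowerCase().includes(keyword);
    const children = node.children ? filterTree(node.children, keyword) : [];
    if (selfMatch) {
      result.push(node.children ? { ...node, children } : node);
    } else if (children.length > 0) {
      result.push({ ...node, children });
    }
  }
  return result;
};
```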
const getNodeMenuItems = (node: any): MenuProps['items'] => {
const conn = node.dataRef as SavedConnection;
@@ -2284,6 +2690,38 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return routineMenu;
}
// Connection Tag Menu — must be BEFORE the connection check
if (node.type === 'tag') {
return [
{
key: 'edit-tag',
label: '编辑标签',
icon: <EditOutlined />,
onClick: () => {
createTagForm.setFieldsValue({ name: node.title, connectionIds: node.dataRef.connectionIds });
setRenameViewTarget(node);
setIsCreateTagModalOpen(true);
}
},
{ type: 'divider' },
{
key: 'delete-tag',
label: '删除标签',
icon: <DeleteOutlined />,
danger: true,
onClick: () => {
Modal.confirm({
title: '确认删除',
content: `确定要删除标签 "${node.title}" 吗?这不会删除里面的连接。`,
onOk: () => {
removeConnectionTag(node.dataRef.id);
}
});
}
}
];
}
if (node.type === 'connection') {
// Redis connection menu
if (isRedis) {
@@ -2318,6 +2756,12 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
if (onEditConnection) onEditConnection(node.dataRef);
}
},
{
key: 'copy-connection',
label: '复制连接',
icon: <CopyOutlined />,
onClick: () => handleDuplicateConnection(node.dataRef as SavedConnection)
},
{
key: 'disconnect',
label: '断开连接',
@@ -2358,6 +2802,22 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
];
}
// Tag submenu for connection
const tagSubMenuItems: MenuProps['items'] = connectionTags.map(tag => ({
key: `move-to-tag-${tag.id}`,
label: tag.name,
icon: <FolderOutlined />,
onClick: () => moveConnectionToTag(node.key, tag.id)
}));
if (connectionTags.length > 0) {
tagSubMenuItems.push({ type: 'divider' });
}
tagSubMenuItems.push({
key: 'move-to-ungrouped',
label: '移出标签',
onClick: () => moveConnectionToTag(node.key, null)
});
// Regular database connection menu
return [
{
@@ -2400,11 +2860,30 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
if (onEditConnection) onEditConnection(node.dataRef);
}
},
{
key: 'copy-connection',
label: '复制连接',
icon: <CopyOutlined />,
onClick: () => handleDuplicateConnection(node.dataRef as SavedConnection)
},
{
key: 'move-to-tag',
label: '移至标签',
icon: <FolderOpenOutlined />,
children: tagSubMenuItems
},
{
key: 'disconnect',
label: '断开连接',
icon: <DisconnectOutlined />,
onClick: () => {
const connId = String(node.key || '');
                        // Force-clear loading markers for this connection so a reconnect
                        // is not short-circuited after the network hangs.
Array.from(loadingNodesRef.current).forEach((loadingKey) => {
if (loadingKey === `dbs-${connId}` || loadingKey.startsWith(`tables-${connId}-`)) {
loadingNodesRef.current.delete(loadingKey);
}
});
// Reset status recursively
setConnectionStates(prev => {
const next = { ...prev };
@@ -2526,6 +3005,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
onClick: () => {
const dbConnId = String(node.dataRef?.id || '');
const dbName = String(node.dataRef?.dbName || node.title || '').trim();
loadingNodesRef.current.delete(`tables-${dbConnId}-${dbName}`);
setConnectionStates(prev => {
const next = { ...prev };
delete next[node.key];
@@ -2707,6 +3187,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
{ key: 'export-xlsx', label: '导出 Excel (XLSX)', onClick: () => handleExport(node, 'xlsx') },
{ key: 'export-json', label: '导出 JSON', onClick: () => handleExport(node, 'json') },
{ key: 'export-md', label: '导出 Markdown', onClick: () => handleExport(node, 'md') },
{ key: 'export-html', label: '导出 HTML', onClick: () => handleExport(node, 'html') },
]
}
];
@@ -2741,6 +3222,72 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return <span title={hoverTitle}>{statusBadge}{displayTitle}</span>;
};
const handleDrop = (info: any) => {
const dropKey = info.node.key;
const dragKey = info.dragNode.key;
const dropPos = info.node.pos.split('-');
const dropPosition = info.dropPosition - Number(dropPos[dropPos.length - 1]);
const dragNode = info.dragNode;
const dropNode = info.node;
// Tag to Tag reordering
if (dragNode.type === 'tag') {
// You can only drop tags onto the root level (before/after other tags or connections at root)
if (dropNode.type === 'tag' || dropNode.type === 'connection') {
// Get current order
const currentTagOrder = connectionTags.map(t => t.id);
const dragTagId = dragNode.dataRef.id;
// Filter out the dragging tag
const newOrder = currentTagOrder.filter(id => id !== dragTagId);
let insertIndex = newOrder.length;
if (dropNode.type === 'tag') {
const dropTagId = dropNode.dataRef.id;
const dropIndex = newOrder.indexOf(dropTagId);
if (dropPosition === -1) {
insertIndex = dropIndex;
} else {
insertIndex = dropIndex + 1;
}
} else {
// Dropped onto a root connection, usually meaning moving to the end of tags
// Since tags are always displayed before ungrouped connections, just put it at the end
insertIndex = newOrder.length;
}
newOrder.splice(insertIndex, 0, dragTagId);
reorderTags(newOrder);
}
return;
}
// Connection moving to tag (any drop position on a tag node counts as "into")
if (dragNode.type === 'connection' && dropNode.type === 'tag') {
moveConnectionToTag(dragNode.key, dropNode.dataRef.id);
return;
}
// Connection moving to another connection inside a tag
if (dragNode.type === 'connection' && dropNode.type === 'connection') {
// Find if drop target is under a tag
const targetTag = connectionTags.find(t => t.connectionIds.includes(dropNode.key));
if (targetTag) {
moveConnectionToTag(dragNode.key, targetTag.id);
return;
}
// Drop target is NOT under a tag (ungrouped) -> move OUT of tag
const sourceTag = connectionTags.find(t => t.connectionIds.includes(dragNode.key));
if (sourceTag) {
moveConnectionToTag(dragNode.key, null);
return;
}
}
};
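The tag-reorder arithmetic inside `handleDrop` is worth isolating: remove the dragged id from the order, then insert before (`dropPosition === -1`) or after the drop target. A pure sketch of that step:

```typescript
// Remove dragId, then splice it back in relative to dropId.
const reorderByDrop = (
  order: string[],
  dragId: string,
  dropId: string,
  dropPosition: -1 | 1,
): string[] => {
  const next = order.filter((id) => id !== dragId);
  const dropIndex = next.indexOf(dropId);
  const insertIndex = dropPosition === -1 ? dropIndex : dropIndex + 1;
  next.splice(insertIndex, 0, dragId);
  return next;
};
```

Filtering first means `dropIndex` is computed against the array the id will actually be inserted into, which avoids the off-by-one that appears when the dragged item sits before the drop target.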
const onRightClick = ({ event, node }: any) => {
const items = getNodeMenuItems(node);
if (items && items.length > 0) {
@@ -2755,16 +3302,49 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
return (
<div style={{ display: 'flex', flexDirection: 'column', height: '100%' }}>
<div style={{ padding: '4px 8px' }}>
<Search placeholder="搜索..." onChange={onSearch} size="small" />
<Space.Compact block size="small">
<Search
ref={searchInputRef}
placeholder="搜索..."
onChange={onSearch}
size="small"
style={{ width: '100%' }}
/>
<Popover
content={searchScopePopoverContent}
trigger="click"
placement="bottomRight"
open={isSearchScopePopoverOpen}
onOpenChange={setIsSearchScopePopoverOpen}
>
<Tooltip title={`搜索范围:${searchScopeSummary}`}>
<Button size="small" icon={<DownOutlined />} style={{ width: 86 }}>
{searchScopes.includes('smart') ? '(智)' : `(${searchScopes.length})`}
</Button>
</Tooltip>
</Popover>
</Space.Compact>
</div>
{/* Toolbar for batch operations - always visible */}
<div style={{ padding: '4px 8px', borderBottom: 'none', display: 'flex', gap: 4 }}>
{/* Toolbar */}
<div style={{ padding: '4px 8px', borderBottom: 'none', display: 'flex', flexWrap: 'wrap', gap: 4 }}>
<Button
size="small"
icon={<FolderOpenOutlined />}
onClick={() => {
setRenameViewTarget(null); // Create mode
createTagForm.resetFields();
setIsCreateTagModalOpen(true);
}}
style={{ flex: '1 1 auto' }}
>
</Button>
<Button
size="small"
icon={<CheckSquareOutlined />}
onClick={() => openBatchOperationModal()}
style={{ flex: 1 }}
style={{ flex: '1 1 auto' }}
>
</Button>
@@ -2772,7 +3352,7 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
size="small"
icon={<CheckSquareOutlined />}
onClick={() => openBatchDatabaseModal()}
style={{ flex: 1 }}
style={{ flex: '1 1 auto' }}
>
</Button>
@@ -2781,6 +3361,11 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
<div ref={treeContainerRef} style={{ flex: 1, overflow: 'hidden', minHeight: 0 }}>
<Tree
showIcon
draggable={{
icon: false,
nodeDraggable: (node: any) => node.type === 'connection' || node.type === 'tag'
}}
onDrop={handleDrop}
loadData={onLoadData}
treeData={displayTreeData}
onDoubleClick={onDoubleClick}
@@ -2809,6 +3394,60 @@ const Sidebar: React.FC<{ onEditConnection?: (conn: SavedConnection) => void }>
</Dropdown>
)}
<Modal
title={renameViewTarget?.type === 'tag' ? "编辑标签" : "新建组"}
open={isCreateTagModalOpen}
onOk={() => {
createTagForm.validateFields().then(values => {
if (renameViewTarget?.type === 'tag') {
// Rename
updateConnectionTag({
...renameViewTarget.dataRef,
name: values.name,
connectionIds: values.connectionIds || []
});
// update cross-connections
const allOtherTagsIds = connectionTags.filter(t => t.id !== renameViewTarget.dataRef.id).flatMap(t => t.connectionIds);
(values.connectionIds || []).forEach((cid: string) => {
if (allOtherTagsIds.includes(cid)) {
moveConnectionToTag(cid, renameViewTarget.dataRef.id);
}
});
} else {
// Create
const tagId = Date.now().toString();
addConnectionTag({
id: tagId,
name: values.name,
connectionIds: values.connectionIds || []
});
(values.connectionIds || []).forEach((cid: string) => {
moveConnectionToTag(cid, tagId);
});
}
setIsCreateTagModalOpen(false);
});
}}
onCancel={() => setIsCreateTagModalOpen(false)}
>
<Form form={createTagForm} layout="vertical">
<Form.Item name="name" label="标签名称" rules={[{ required: true, message: '请输入标签名称' }]}>
<Input />
</Form.Item>
<Form.Item name="connectionIds" label="选择连接">
<Checkbox.Group style={{ width: '100%' }}>
<Space direction="vertical" style={{ width: '100%', maxHeight: '400px', overflowY: 'auto' }}>
{connections.map(conn => (
<Checkbox key={conn.id} value={conn.id}>
{conn.name} {conn.config.host ? `(${conn.config.host})` : ''}
</Checkbox>
))}
</Space>
</Checkbox.Group>
</Form.Item>
</Form>
</Modal>
<Modal
title="新建数据库"
open={isCreateDbModalOpen}


@@ -1,6 +1,11 @@
import React, { useMemo } from 'react';
import React, { useMemo, useRef, useState } from 'react';
import { Tabs, Dropdown } from 'antd';
import type { MenuProps } from 'antd';
import type { MenuProps, TabsProps } from 'antd';
import { DndContext, PointerSensor, closestCenter, useSensor, useSensors } from '@dnd-kit/core';
import type { DragStartEvent, DragEndEvent } from '@dnd-kit/core';
import { SortableContext, useSortable, horizontalListSortingStrategy } from '@dnd-kit/sortable';
import { CSS } from '@dnd-kit/utilities';
import { restrictToHorizontalAxis } from '@dnd-kit/modifiers';
import { useStore } from '../store';
import DataViewer from './DataViewer';
import QueryEditor from './QueryEditor';
@@ -29,9 +34,58 @@ const buildTabDisplayTitle = (tab: TabData, connectionName: string | undefined):
return `[${prefix}] ${tab.title}`;
};
type SortableTabLabelProps = {
displayTitle: string;
menuItems: MenuProps['items'];
};
const SortableTabLabel: React.FC<SortableTabLabelProps> = ({
displayTitle,
menuItems,
}) => {
return (
<Dropdown menu={{ items: menuItems }} trigger={['contextMenu']}>
<span
className="tab-dnd-label"
onContextMenu={(e) => e.preventDefault()}
title="拖拽调整标签顺序"
>
{displayTitle}
</span>
</Dropdown>
);
};
type DraggableTabNodeProps = {
node: React.ReactElement;
};
const DraggableTabNode: React.FC<DraggableTabNodeProps> = ({ node }) => {
const tabId = String(node.key || '').trim();
const { attributes, listeners, setNodeRef, transform, transition, isDragging } = useSortable({ id: tabId });
const style: React.CSSProperties = {
...(node.props.style || {}),
transform: CSS.Transform.toString(transform),
transition: transition || 'transform 180ms cubic-bezier(0.22, 1, 0.36, 1)',
opacity: isDragging ? 0.88 : 1,
cursor: isDragging ? 'grabbing' : 'grab',
touchAction: 'none',
zIndex: isDragging ? 2 : node.props.style?.zIndex,
};
return React.cloneElement(node, {
ref: setNodeRef,
style,
...attributes,
...listeners,
className: `${node.props.className || ''} tab-dnd-node${isDragging ? ' is-dragging' : ''}`,
});
};
const TabManager: React.FC = () => {
const tabs = useStore(state => state.tabs);
const connections = useStore(state => state.connections);
const theme = useStore(state => state.theme);
const activeTabId = useStore(state => state.activeTabId);
const setActiveTab = useStore(state => state.setActiveTab);
const closeTab = useStore(state => state.closeTab);
@@ -39,6 +93,15 @@ const TabManager: React.FC = () => {
const closeTabsToLeft = useStore(state => state.closeTabsToLeft);
const closeTabsToRight = useStore(state => state.closeTabsToRight);
const closeAllTabs = useStore(state => state.closeAllTabs);
const moveTab = useStore(state => state.moveTab);
const tabsNavBorderColor = theme === 'dark' ? 'rgba(255, 255, 255, 0.09)' : 'rgba(0, 0, 0, 0.08)';
const [draggingTabId, setDraggingTabId] = useState<string | null>(null);
const suppressClickUntilRef = useRef<number>(0);
const sensors = useSensors(
useSensor(PointerSensor, {
activationConstraint: { distance: 8 },
})
);
const onChange = (newActiveKey: string) => {
setActiveTab(newActiveKey);
@@ -50,11 +113,43 @@ const TabManager: React.FC = () => {
}
};
const handleDragStart = (event: DragStartEvent) => {
const sourceId = String(event.active.id || '').trim();
setDraggingTabId(sourceId || null);
};
const handleDragEnd = (event: DragEndEvent) => {
const sourceId = String(event.active.id || '').trim();
const targetId = String(event.over?.id || '').trim();
setDraggingTabId(null);
if (!sourceId || !targetId || sourceId === targetId) {
return;
}
suppressClickUntilRef.current = Date.now() + 120;
moveTab(sourceId, targetId);
};
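The `suppressClickUntilRef` pattern above gives releasing a drag a short grace window so the drop does not also fire a tab switch. A self-contained sketch of that guard (names are ours):

```typescript
// onChange events arriving inside the window after a successful drag are
// swallowed, so releasing the pointer does not also activate the tab.
const makeClickGuard = (windowMs = 120) => {
  let suppressUntil = 0;
  return {
    arm: (): void => { suppressUntil = Date.now() + windowMs; },
    suppressed: (): boolean => Date.now() < suppressUntil,
  };
};
```

A time-based window is simpler than tracking pointer state across dnd-kit and antd, at the cost of a magic duration; 120 ms comfortably covers the synthetic click that follows pointer-up.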
const handleDragCancel = () => {
setDraggingTabId(null);
};
const tabIds = useMemo(() => tabs.map((tab) => tab.id), [tabs]);
const renderTabBar: TabsProps['renderTabBar'] = (tabBarProps, DefaultTabBar) => (
<DefaultTabBar {...tabBarProps}>
{(node) => <DraggableTabNode key={node.key} node={node} />}
</DefaultTabBar>
);
const items = useMemo(() => tabs.map((tab, index) => {
const connectionName = connections.find((conn) => conn.id === tab.connectionId)?.name;
const displayTitle = buildTabDisplayTitle(tab, connectionName);
const keepMountedWhenInactive = tab.type === 'query' || tab.type === 'redis-command';
const shouldRenderContent = activeTabId === tab.id || keepMountedWhenInactive;
let content;
if (tab.type === 'query') {
if (!shouldRenderContent) {
content = null;
} else if (tab.type === 'query') {
content = <QueryEditor tab={tab} />;
} else if (tab.type === 'table') {
content = <DataViewer tab={tab} />;
@@ -100,14 +195,15 @@ const TabManager: React.FC = () => {
return {
label: (
<Dropdown menu={{ items: menuItems }} trigger={['contextMenu']}>
<span onContextMenu={(e) => e.preventDefault()}>{displayTitle}</span>
</Dropdown>
<SortableTabLabel
displayTitle={displayTitle}
menuItems={menuItems}
/>
),
key: tab.id,
children: content,
};
}), [tabs, connections, closeOtherTabs, closeTabsToLeft, closeTabsToRight, closeAllTabs]);
}), [tabs, connections, activeTabId, closeOtherTabs, closeTabsToLeft, closeTabsToRight, closeAllTabs]);
return (
<>
@@ -156,18 +252,63 @@ const TabManager: React.FC = () => {
display: none !important;
}
.main-tabs .ant-tabs-nav::before {
border-bottom: none !important;
border-bottom: 1px solid ${tabsNavBorderColor} !important;
}
.main-tabs .ant-tabs-tab {
transition: transform 180ms cubic-bezier(0.22, 1, 0.36, 1), background-color 120ms ease;
}
.main-tabs .tab-dnd-label {
user-select: none;
-webkit-user-select: none;
display: inline-flex;
align-items: center;
max-width: 100%;
}
.main-tabs .tab-dnd-node.is-dragging,
.main-tabs .tab-dnd-node.is-dragging .tab-dnd-label {
cursor: grabbing !important;
}
body[data-theme='dark'] .main-tabs .ant-tabs-tab-btn:focus-visible {
outline: none !important;
border-radius: 6px;
box-shadow: 0 0 0 2px rgba(255, 214, 102, 0.72);
background: rgba(255, 214, 102, 0.16);
}
body[data-theme='light'] .main-tabs .ant-tabs-tab-btn:focus-visible {
outline: none !important;
border-radius: 6px;
box-shadow: 0 0 0 2px rgba(9, 109, 217, 0.32);
background: rgba(9, 109, 217, 0.08);
}
body[data-theme='dark'] .main-tabs .ant-tabs-tab.ant-tabs-tab-active {
background: rgba(255, 214, 102, 0.12) !important;
border-color: rgba(255, 214, 102, 0.4) !important;
}
`}</style>
<Tabs
className="main-tabs"
type="editable-card"
onChange={onChange}
activeKey={activeTabId || undefined}
onEdit={onEdit}
items={items}
hideAdd
/>
<DndContext
sensors={sensors}
collisionDetection={closestCenter}
modifiers={[restrictToHorizontalAxis]}
onDragStart={handleDragStart}
onDragEnd={handleDragEnd}
onDragCancel={handleDragCancel}
>
<SortableContext items={tabIds} strategy={horizontalListSortingStrategy}>
<Tabs
className="main-tabs"
type="editable-card"
onChange={(newActiveKey) => {
if (Date.now() < suppressClickUntilRef.current) return;
onChange(newActiveKey);
}}
activeKey={activeTabId || undefined}
onEdit={onEdit}
items={items}
hideAdd
renderTabBar={renderTabBar}
/>
</SortableContext>
</DndContext>
</>
);
};


@@ -259,10 +259,20 @@ const TableDesigner: React.FC<{ tab: TabData }> = ({ tab }) => {
const connections = useStore(state => state.connections);
const theme = useStore(state => state.theme);
const darkMode = theme === 'dark';
const resizeGuideColor = darkMode ? '#f6c453' : '#1890ff';
const readOnly = !!tab.readOnly;
const panelRadius = 10;
const panelFrameColor = darkMode ? 'rgba(0, 0, 0, 0.18)' : 'rgba(0, 0, 0, 0.12)';
const panelToolbarBorder = darkMode ? 'rgba(255, 255, 255, 0.12)' : 'rgba(0, 0, 0, 0.10)';
const panelToolbarBg = darkMode ? 'rgba(20, 20, 20, 0.35)' : 'rgba(255, 255, 255, 0.72)';
const panelBodyBg = darkMode ? 'rgba(0, 0, 0, 0.24)' : 'rgba(255, 255, 255, 0.82)';
const focusRowBg = darkMode ? 'rgba(246, 196, 83, 0.22)' : 'rgba(24, 144, 255, 0.12)';
const [tableHeight, setTableHeight] = useState(500);
const containerRef = useRef<HTMLDivElement>(null);
const pendingFocusColumnKeyRef = useRef<string | null>(null);
const focusHighlightTimerRef = useRef<number | null>(null);
const [focusColumnKey, setFocusColumnKey] = useState('');
const openCommentEditor = useCallback((record: EditableColumn) => {
if (!record?._key) return;
@@ -345,6 +355,61 @@ const TableDesigner: React.FC<{ tab: TabData }> = ({ tab }) => {
setSelectedColumnRowKeys(prev => prev.filter(key => columns.some(c => c._key === key)));
}, [columns]);
useEffect(() => {
return () => {
if (focusHighlightTimerRef.current !== null) {
window.clearTimeout(focusHighlightTimerRef.current);
}
};
}, []);
const focusColumnRow = useCallback((targetKey: string): boolean => {
if (activeKey !== 'columns') return false;
const tableBody = containerRef.current?.querySelector('.ant-table-body') as HTMLElement | null;
if (!tableBody) return false;
const row = tableBody.querySelector(`tr[data-row-key="${targetKey}"]`) as HTMLTableRowElement | null;
if (!row) return false;
row.scrollIntoView({ behavior: 'smooth', block: 'nearest' });
setFocusColumnKey(targetKey);
if (focusHighlightTimerRef.current !== null) {
window.clearTimeout(focusHighlightTimerRef.current);
}
focusHighlightTimerRef.current = window.setTimeout(() => {
setFocusColumnKey(prev => (prev === targetKey ? '' : prev));
}, 1600);
if (!readOnly) {
const firstInput = row.querySelector('input') as HTMLInputElement | null;
if (firstInput) {
firstInput.focus();
firstInput.select();
}
}
return true;
}, [activeKey, readOnly]);
useEffect(() => {
const pendingKey = pendingFocusColumnKeyRef.current;
if (!pendingKey || activeKey !== 'columns') return;
let cancelled = false;
const tryFocus = () => {
if (cancelled) return;
if (focusColumnRow(pendingKey)) {
pendingFocusColumnKeyRef.current = null;
}
};
const timerA = window.setTimeout(tryFocus, 0);
const timerB = window.setTimeout(tryFocus, 96);
return () => {
cancelled = true;
window.clearTimeout(timerA);
window.clearTimeout(timerB);
};
}, [activeKey, columns, focusColumnRow]);
// Initial Columns Definition
useEffect(() => {
const initialCols = [
@@ -885,21 +950,46 @@ ${selectedTrigger.statement}`;
}));
};
const handleAddColumn = () => {
const newCol: EditableColumn = {
name: isNewTable ? 'new_column' : `new_col_${columns.length + 1}`,
type: 'varchar(255)',
nullable: 'YES',
key: '',
extra: '',
comment: '',
default: '',
_key: `new-${Date.now()}`,
isNew: true,
isAutoIncrement: false
};
setColumns([...columns, newCol]);
};
const createNewColumn = useCallback((indexHint: number): EditableColumn => ({
name: isNewTable ? 'new_column' : `new_col_${indexHint}`,
type: 'varchar(255)',
nullable: 'YES',
key: '',
extra: '',
comment: '',
default: '',
_key: `new-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`,
isNew: true,
isAutoIncrement: false
}), [isNewTable]);
const handleAddColumn = useCallback((insertAfterKey?: string) => {
const newCol = createNewColumn(columns.length + 1);
setColumns(prev => {
const next = [...prev];
if (insertAfterKey) {
const insertIndex = next.findIndex(col => col._key === insertAfterKey);
if (insertIndex >= 0) {
next.splice(insertIndex + 1, 0, newCol);
return next;
}
}
next.push(newCol);
return next;
});
setSelectedColumnRowKeys([newCol._key]);
pendingFocusColumnKeyRef.current = newCol._key;
}, [columns.length, createNewColumn]);
const handleAddColumnAfterSelected = useCallback(() => {
const selectedSet = new Set(selectedColumnRowKeys);
const anchor = columns.find(col => selectedSet.has(col._key));
if (!anchor) {
message.warning('请先选择一个字段,再执行插入。');
return;
}
handleAddColumn(anchor._key);
}, [columns, handleAddColumn, selectedColumnRowKeys]);
const handleDeleteColumn = (key: string) => {
setColumns(prev => prev.filter(c => c._key !== key));
@@ -1919,22 +2009,35 @@ END;`;
}));
const columnsTabContent = (
<div ref={containerRef} className="table-designer-wrapper" style={{ height: '100%', overflow: 'hidden', position: 'relative' }}>
<div
ref={containerRef}
className="table-designer-wrapper"
style={{
height: '100%',
overflow: 'hidden',
position: 'relative',
background: panelBodyBg
}}
>
<style>{`
.table-designer-wrapper .ant-table-body {
max-height: ${tableHeight}px !important;
}
}
.table-designer-wrapper .table-designer-focus-row > .ant-table-cell {
background: ${focusRowBg} !important;
}
`}</style>
{readOnly ? (
<Table
dataSource={columns}
columns={resizableColumns}
rowKey="_key"
rowClassName={(record: EditableColumn) => record._key === focusColumnKey ? 'table-designer-focus-row' : ''}
size="small"
pagination={false}
loading={loading}
scroll={{ y: tableHeight }}
bordered
bordered={false}
components={{
header: {
cell: ResizableTitle,
@@ -1952,11 +2055,12 @@ END;`;
onChange: (nextSelectedRowKeys) => setSelectedColumnRowKeys(nextSelectedRowKeys as string[]),
}}
rowKey="_key"
rowClassName={(record: EditableColumn) => record._key === focusColumnKey ? 'table-designer-focus-row' : ''}
size="small"
pagination={false}
loading={loading}
scroll={{ y: tableHeight }}
bordered
bordered={false}
components={{
body: { row: SortableRow },
header: { cell: ResizableTitle }
@@ -1973,7 +2077,7 @@ END;`;
bottom: 0,
left: 0,
width: '2px',
background: '#1890ff',
background: resizeGuideColor,
zIndex: 9999,
display: 'none',
pointerEvents: 'none',
@@ -1984,8 +2088,63 @@ END;`;
);
return (
<div style={{ display: 'flex', flexDirection: 'column', height: '100%' }}>
<div style={{ padding: '8px', borderBottom: '1px solid #eee', display: 'flex', gap: '8px', alignItems: 'center' }}>
<div className="table-designer-shell" style={{ display: 'flex', flexDirection: 'column', height: '100%', minHeight: 0, padding: '6px 0' }}>
<style>{`
.table-designer-shell .ant-table,
.table-designer-shell .ant-table-wrapper,
.table-designer-shell .ant-table-container {
background: transparent !important;
}
.table-designer-shell .ant-table-wrapper,
.table-designer-shell .ant-table-container {
border: none !important;
overflow: hidden !important;
}
.table-designer-shell .ant-table-thead > tr > th {
background: transparent !important;
border-bottom: 1px solid ${darkMode ? 'rgba(255,255,255,0.06)' : 'rgba(0,0,0,0.06)'} !important;
border-inline-end: 1px solid transparent !important;
}
.table-designer-shell .ant-table-tbody > tr > td,
.table-designer-shell .ant-table-tbody .ant-table-row > .ant-table-cell {
background: transparent !important;
border-bottom: 1px solid ${darkMode ? 'rgba(255,255,255,0.05)' : 'rgba(0,0,0,0.05)'} !important;
border-inline-end: 1px solid transparent !important;
}
.table-designer-shell .ant-table-thead > tr > th::before {
display: none !important;
}
.table-designer-shell .ant-table-tbody > tr:hover > td,
.table-designer-shell .ant-table-tbody .ant-table-row:hover > .ant-table-cell {
background: ${darkMode ? 'rgba(255,255,255,0.06)' : 'rgba(0,0,0,0.02)'} !important;
}
.table-designer-shell .ant-tabs-nav {
margin-bottom: 8px !important;
}
.table-designer-shell .ant-tabs-nav::before {
border-bottom-color: ${darkMode ? 'rgba(255,255,255,0.08)' : 'rgba(0,0,0,0.08)'} !important;
}
.table-designer-shell .ant-tabs-content-holder,
.table-designer-shell .ant-tabs-content,
.table-designer-shell .ant-tabs-tabpane {
height: 100%;
}
`}</style>
<div
style={{
padding: '10px 12px 8px 12px',
borderBottom: `1px solid ${panelToolbarBorder}`,
borderTopLeftRadius: panelRadius,
borderTopRightRadius: panelRadius,
borderLeft: `1px solid ${panelFrameColor}`,
borderRight: `1px solid ${panelFrameColor}`,
borderTop: `1px solid ${panelFrameColor}`,
background: panelToolbarBg,
display: 'flex',
gap: '8px',
alignItems: 'center'
}}
>
{isNewTable && (
<>
<Input
@@ -2013,14 +2172,25 @@ END;`;
/>
</>
)}
{!readOnly && <Button icon={<SaveOutlined />} type="primary" onClick={generateDDL}></Button>}
{!isNewTable && <Button icon={<ReloadOutlined />} onClick={fetchData}></Button>}
{!readOnly && <Button size="small" icon={<SaveOutlined />} type="primary" onClick={generateDDL}></Button>}
{!isNewTable && <Button size="small" icon={<ReloadOutlined />} onClick={fetchData}></Button>}
{!isNewTable && !readOnly && supportsTableCommentOps() && (
<Button icon={<EditOutlined />} onClick={openTableCommentModal}></Button>
<Button size="small" icon={<EditOutlined />} onClick={openTableCommentModal}></Button>
)}
{!readOnly && <Button icon={<PlusOutlined />} onClick={handleAddColumn}></Button>}
{!readOnly && <Button size="small" icon={<PlusOutlined />} onClick={() => handleAddColumn()}></Button>}
{!readOnly && (
<Button
size="small"
icon={<PlusOutlined />}
onClick={handleAddColumnAfterSelected}
disabled={selectedColumnRowKeys.length === 0}
>
</Button>
)}
{!readOnly && (
<Button
size="small"
icon={<CopyOutlined />}
onClick={openCopySelectedColumnsModal}
disabled={selectedColumns.length === 0}
@@ -2033,7 +2203,17 @@ END;`;
<Tabs
activeKey={activeKey}
onChange={setActiveKey}
style={{ flex: 1, padding: '0 10px' }}
style={{
flex: 1,
minHeight: 0,
padding: '8px 10px 10px 10px',
borderBottomLeftRadius: panelRadius,
borderBottomRightRadius: panelRadius,
borderLeft: `1px solid ${panelFrameColor}`,
borderRight: `1px solid ${panelFrameColor}`,
borderBottom: `1px solid ${panelFrameColor}`,
background: panelBodyBg
}}
items={[
{
key: 'columns',
@@ -2275,7 +2455,7 @@ END;`;
label: 'DDL',
icon: <FileTextOutlined />,
children: (
<div style={{ height: 'calc(100vh - 200px)', border: darkMode ? '1px solid #303030' : '1px solid #d9d9d9', borderRadius: 4 }}>
<div style={{ height: 'calc(100vh - 200px)', border: `1px solid ${panelFrameColor}`, borderRadius: panelRadius, background: panelBodyBg }}>
<Editor
height="100%"
language="sql"
@@ -2516,7 +2696,7 @@ END;`;
<span><strong>:</strong> {selectedTrigger.timing}</span>
<span><strong>:</strong> {selectedTrigger.event}</span>
</div>
<div style={{ border: darkMode ? '1px solid #303030' : '1px solid #d9d9d9', borderRadius: 4 }}>
<div style={{ border: `1px solid ${panelFrameColor}`, borderRadius: panelRadius, background: panelBodyBg }}>
<Editor
height="350px"
language="sql"
@@ -2552,7 +2732,7 @@ END;`;
<span></span>
)}
</div>
<div style={{ border: darkMode ? '1px solid #303030' : '1px solid #d9d9d9', borderRadius: 4 }}>
<div style={{ border: `1px solid ${panelFrameColor}`, borderRadius: panelRadius, background: panelBodyBg }}>
<Editor
height="350px"
language="sql"


@@ -1,8 +1,22 @@
import { create } from 'zustand';
import { persist } from 'zustand/middleware';
import { ConnectionConfig, SavedConnection, TabData, SavedQuery } from './types';
import { ConnectionConfig, ProxyConfig, SavedConnection, TabData, SavedQuery, ConnectionTag } from './types';
import {
ShortcutAction,
ShortcutBinding,
ShortcutOptions,
DEFAULT_SHORTCUT_OPTIONS,
cloneShortcutOptions,
sanitizeShortcutOptions,
} from './utils/shortcuts';
const DEFAULT_APPEARANCE = { opacity: 1.0, blur: 0 };
const DEFAULT_UI_SCALE = 1.0;
const MIN_UI_SCALE = 0.8;
const MAX_UI_SCALE = 1.25;
const DEFAULT_FONT_SIZE = 14;
const MIN_FONT_SIZE = 12;
const MAX_FONT_SIZE = 20;
const DEFAULT_STARTUP_FULLSCREEN = false;
const LEGACY_DEFAULT_OPACITY = 0.95;
const OPACITY_EPSILON = 1e-6;
@@ -11,12 +25,23 @@ const MAX_HOST_ENTRY_LENGTH = 512;
const MAX_HOST_ENTRIES = 64;
const DEFAULT_TIMEOUT_SECONDS = 30;
const MAX_TIMEOUT_SECONDS = 3600;
const PERSIST_VERSION = 5;
const DEFAULT_CONNECTION_TYPE = 'mysql';
const DEFAULT_GLOBAL_PROXY: GlobalProxyConfig = {
enabled: false,
type: 'socks5',
host: '',
port: 1080,
user: '',
password: '',
};
const SUPPORTED_CONNECTION_TYPES = new Set([
'mysql',
'mariadb',
'doris',
'diros',
'sphinx',
'clickhouse',
'postgres',
'redis',
'tdengine',
@@ -31,18 +56,38 @@ const SUPPORTED_CONNECTION_TYPES = new Set([
'duckdb',
'custom',
]);
const SSL_SUPPORTED_CONNECTION_TYPES = new Set([
'mysql',
'mariadb',
'diros',
'sphinx',
'dameng',
'clickhouse',
'postgres',
'sqlserver',
'oracle',
'kingbase',
'highgo',
'vastbase',
'mongodb',
'redis',
'tdengine',
]);
const getDefaultPortByType = (type: string): number => {
switch (type) {
case 'mysql':
case 'mariadb':
return 3306;
case 'doris':
case 'diros':
return 9030;
case 'duckdb':
return 0;
case 'sphinx':
return 9306;
case 'clickhouse':
return 9000;
case 'postgres':
case 'vastbase':
return 5432;
@@ -93,6 +138,13 @@ const normalizeIntegerInRange = (value: unknown, fallbackValue: number, min: num
return normalized;
};
const normalizeFloatInRange = (value: unknown, fallbackValue: number, min: number, max: number): number => {
const parsed = Number(value);
if (!Number.isFinite(parsed)) return fallbackValue;
if (parsed < min || parsed > max) return fallbackValue;
return parsed;
};
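The `normalizeFloatInRange` helper above rejects out-of-range values outright rather than clamping them, so a bad persisted value reverts to the default. A standalone copy (renamed `normalizeFloat` purely for illustration) demonstrates that behavior:

```typescript
// Standalone copy of normalizeFloatInRange for illustration: non-numeric
// input and values outside [min, max] both return the fallback, so a
// corrupted persisted setting silently reverts to its default.
const normalizeFloat = (value: unknown, fallbackValue: number, min: number, max: number): number => {
  const parsed = Number(value);
  if (!Number.isFinite(parsed)) return fallbackValue;
  if (parsed < min || parsed > max) return fallbackValue;
  return parsed;
};
```

Note that an out-of-range value is not clamped to the nearest bound; it is discarded entirely, which keeps a tampered `uiScale` of, say, `100` from rendering the UI unusable.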
const isValidHostEntry = (entry: string): boolean => {
if (!entry) return false;
if (entry.length > MAX_HOST_ENTRY_LENGTH) return false;
@@ -138,6 +190,9 @@ const sanitizeAddressList = (value: unknown): string[] => {
const normalizeConnectionType = (value: unknown): string => {
const type = toTrimmedString(value).toLowerCase();
if (type === 'doris') {
return 'diros';
}
return SUPPORTED_CONNECTION_TYPES.has(type) ? type : DEFAULT_CONNECTION_TYPE;
};
@@ -147,6 +202,16 @@ const sanitizeConnectionConfig = (value: unknown): ConnectionConfig => {
const defaultPort = getDefaultPortByType(type);
const savePassword = typeof raw.savePassword === 'boolean' ? raw.savePassword : true;
const mongoSrv = !!raw.mongoSrv;
const sslCapable = SSL_SUPPORTED_CONNECTION_TYPES.has(type);
const sslModeRaw = toTrimmedString(raw.sslMode, 'preferred').toLowerCase();
const sslMode: 'preferred' | 'required' | 'skip-verify' | 'disable' =
sslModeRaw === 'required'
? 'required'
: sslModeRaw === 'skip-verify'
? 'skip-verify'
: sslModeRaw === 'disable'
? 'disable'
: 'preferred';
const sshRaw = (raw.ssh && typeof raw.ssh === 'object') ? raw.ssh as Record<string, unknown> : {};
const ssh = {
@@ -166,6 +231,18 @@ const sanitizeConnectionConfig = (value: unknown): ConnectionConfig => {
user: toTrimmedString(proxyRaw.user),
password: toTrimmedString(proxyRaw.password),
};
const httpTunnelRaw = (raw.httpTunnel && typeof raw.httpTunnel === 'object')
? raw.httpTunnel as Record<string, unknown>
: ((raw.HTTPTunnel && typeof raw.HTTPTunnel === 'object') ? raw.HTTPTunnel as Record<string, unknown> : {});
const httpTunnel = {
host: toTrimmedString(httpTunnelRaw.host ?? raw.httpTunnelHost),
port: normalizePort(httpTunnelRaw.port ?? raw.httpTunnelPort, 8080),
user: toTrimmedString(httpTunnelRaw.user ?? raw.httpTunnelUser),
password: toTrimmedString(httpTunnelRaw.password ?? raw.httpTunnelPassword),
};
const supportsNetworkTunnel = type !== 'sqlite' && type !== 'duckdb';
const useHttpTunnel = supportsNetworkTunnel && (raw.useHttpTunnel === true || raw.UseHTTPTunnel === true);
const useProxy = supportsNetworkTunnel && !!raw.useProxy && !useHttpTunnel;
const safeConfig: ConnectionConfig & Record<string, unknown> = {
...raw,
@@ -176,13 +253,19 @@ const sanitizeConnectionConfig = (value: unknown): ConnectionConfig => {
password: savePassword ? toTrimmedString(raw.password) : '',
savePassword,
database: toTrimmedString(raw.database),
useSSL: sslCapable ? !!raw.useSSL : false,
sslMode: sslCapable ? sslMode : 'disable',
sslCertPath: sslCapable ? toTrimmedString(raw.sslCertPath) : '',
sslKeyPath: sslCapable ? toTrimmedString(raw.sslKeyPath) : '',
useSSH: !!raw.useSSH,
ssh,
useProxy: !!raw.useProxy,
useProxy,
proxy,
useHttpTunnel,
httpTunnel,
uri: toTrimmedString(raw.uri).slice(0, MAX_URI_LENGTH),
hosts: sanitizeAddressList(raw.hosts),
topology: raw.topology === 'replica' ? 'replica' : 'single',
topology: raw.topology === 'replica' ? 'replica' : (raw.topology === 'cluster' ? 'cluster' : 'single'),
mysqlReplicaUser: toTrimmedString(raw.mysqlReplicaUser),
mysqlReplicaPassword: savePassword ? toTrimmedString(raw.mysqlReplicaPassword) : '',
replicaSet: toTrimmedString(raw.replicaSet),
@@ -229,7 +312,8 @@ const sanitizeSavedConnection = (value: unknown, index: number): SavedConnection
const raw = value as Record<string, unknown>;
const config = sanitizeConnectionConfig(resolveConnectionConfigPayload(raw));
const id = toTrimmedString(raw.id, `conn-${index + 1}`) || `conn-${index + 1}`;
const fallbackName = config.host ? `${config.type}-${config.host}` : `连接-${index + 1}`;
const displayType = config.type === 'diros' ? 'doris' : config.type;
const fallbackName = config.host ? `${displayType}-${config.host}` : `连接-${index + 1}`;
const name = toTrimmedString(raw.name, fallbackName) || fallbackName;
const includeDatabases = sanitizeStringArray(raw.includeDatabases, 256);
const includeRedisDatabases = sanitizeNumberArray(raw.includeRedisDatabases, 0, 15);
@@ -262,6 +346,27 @@ const sanitizeConnections = (value: unknown): SavedConnection[] => {
return result;
};
const sanitizeConnectionTags = (value: unknown): ConnectionTag[] => {
if (!Array.isArray(value)) return [];
const result: ConnectionTag[] = [];
const idSet = new Set<string>();
value.forEach((entry, index) => {
if (!entry || typeof entry !== 'object') return;
const raw = entry as Record<string, unknown>;
const id = toTrimmedString(raw.id, `tag-${index + 1}`) || `tag-${index + 1}`;
if (idSet.has(id)) return;
idSet.add(id);
const name = toTrimmedString(raw.name, `标签-${index + 1}`) || `标签-${index + 1}`;
const connectionIds = sanitizeStringArray(raw.connectionIds, 256);
result.push({ id, name, connectionIds });
});
return result;
};
const isLegacyDefaultAppearance = (appearance: Partial<{ opacity: number; blur: number }> | undefined): boolean => {
if (!appearance) {
return true;
@@ -288,17 +393,26 @@ export interface QueryOptions {
showColumnType: boolean;
}
export interface GlobalProxyConfig extends ProxyConfig {
enabled: boolean;
}
interface AppState {
connections: SavedConnection[];
connectionTags: ConnectionTag[];
tabs: TabData[];
activeTabId: string | null;
activeContext: { connectionId: string; dbName: string } | null;
savedQueries: SavedQuery[];
theme: 'light' | 'dark';
appearance: { opacity: number; blur: number };
uiScale: number;
fontSize: number;
startupFullscreen: boolean;
globalProxy: GlobalProxyConfig;
sqlFormatOptions: { keywordCase: 'upper' | 'lower' };
queryOptions: QueryOptions;
shortcutOptions: ShortcutOptions;
sqlLogs: SqlLog[];
tableAccessCount: Record<string, number>;
tableSortPreference: Record<string, 'name' | 'frequency'>;
@@ -307,6 +421,12 @@ interface AppState {
updateConnection: (conn: SavedConnection) => void;
removeConnection: (id: string) => void;
addConnectionTag: (tag: ConnectionTag) => void;
updateConnectionTag: (tag: ConnectionTag) => void;
removeConnectionTag: (id: string) => void;
moveConnectionToTag: (connectionId: string, targetTagId: string | null) => void;
reorderTags: (tagIds: string[]) => void;
addTab: (tab: TabData) => void;
closeTab: (id: string) => void;
closeOtherTabs: (id: string) => void;
@@ -314,6 +434,7 @@ interface AppState {
closeTabsToRight: (id: string) => void;
closeTabsByConnection: (connectionId: string) => void;
closeTabsByDatabase: (connectionId: string, dbName: string) => void;
moveTab: (sourceId: string, targetId: string) => void;
closeAllTabs: () => void;
setActiveTab: (id: string) => void;
setActiveContext: (context: { connectionId: string; dbName: string } | null) => void;
@@ -323,9 +444,14 @@ interface AppState {
setTheme: (theme: 'light' | 'dark') => void;
setAppearance: (appearance: Partial<{ opacity: number; blur: number }>) => void;
setUiScale: (scale: number) => void;
setFontSize: (size: number) => void;
setStartupFullscreen: (enabled: boolean) => void;
setGlobalProxy: (proxy: Partial<GlobalProxyConfig>) => void;
setSqlFormatOptions: (options: { keywordCase: 'upper' | 'lower' }) => void;
setQueryOptions: (options: Partial<QueryOptions>) => void;
updateShortcut: (action: ShortcutAction, binding: Partial<ShortcutBinding>) => void;
resetShortcutOptions: () => void;
addSqlLog: (log: SqlLog) => void;
clearSqlLogs: () => void;
@@ -416,6 +542,29 @@ const sanitizeStartupFullscreen = (value: unknown): boolean => {
return value === true;
};
const sanitizeUiScale = (value: unknown): number => {
return normalizeFloatInRange(value, DEFAULT_UI_SCALE, MIN_UI_SCALE, MAX_UI_SCALE);
};
const sanitizeFontSize = (value: unknown): number => {
return normalizeIntegerInRange(value, DEFAULT_FONT_SIZE, MIN_FONT_SIZE, MAX_FONT_SIZE);
};
const sanitizeGlobalProxy = (value: unknown): GlobalProxyConfig => {
const raw = (value && typeof value === 'object') ? value as Record<string, unknown> : {};
const typeRaw = toTrimmedString(raw.type, DEFAULT_GLOBAL_PROXY.type).toLowerCase();
const type: 'socks5' | 'http' = typeRaw === 'http' ? 'http' : 'socks5';
const fallbackPort = type === 'http' ? 8080 : 1080;
return {
enabled: raw.enabled === true,
type,
host: toTrimmedString(raw.host),
port: normalizePort(raw.port, fallbackPort),
user: toTrimmedString(raw.user),
password: toTrimmedString(raw.password),
};
};
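`sanitizeGlobalProxy` above chooses the port fallback based on the proxy type. A minimal sketch of just that rule, using a hypothetical `pickProxyPort` helper whose validity check assumes `normalizePort` accepts integers in 1–65535:

```typescript
// Simplified port fallback from sanitizeGlobalProxy: an invalid port
// falls back to 8080 for HTTP proxies and 1080 for SOCKS5.
// The 1–65535 integer check is an assumption about normalizePort.
function pickProxyPort(rawPort: unknown, type: 'socks5' | 'http'): number {
  const fallback = type === 'http' ? 8080 : 1080;
  const parsed = Number(rawPort);
  if (!Number.isInteger(parsed) || parsed < 1 || parsed > 65535) return fallback;
  return parsed;
}
```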
const unwrapPersistedAppState = (persistedState: unknown): Record<string, unknown> => {
if (!persistedState || typeof persistedState !== 'object') {
return {};
@@ -431,15 +580,20 @@ export const useStore = create<AppState>()(
persist(
(set) => ({
connections: [],
connectionTags: [],
tabs: [],
activeTabId: null,
activeContext: null,
savedQueries: [],
theme: 'light',
appearance: { ...DEFAULT_APPEARANCE },
uiScale: DEFAULT_UI_SCALE,
fontSize: DEFAULT_FONT_SIZE,
startupFullscreen: DEFAULT_STARTUP_FULLSCREEN,
globalProxy: { ...DEFAULT_GLOBAL_PROXY },
sqlFormatOptions: { keywordCase: 'upper' },
queryOptions: { maxRows: 5000, showColumnComment: true, showColumnType: true },
shortcutOptions: cloneShortcutOptions(DEFAULT_SHORTCUT_OPTIONS),
sqlLogs: [],
tableAccessCount: {},
tableSortPreference: {},
@@ -448,7 +602,46 @@ export const useStore = create<AppState>()(
updateConnection: (conn) => set((state) => ({
connections: state.connections.map(c => c.id === conn.id ? conn : c)
})),
removeConnection: (id) => set((state) => ({ connections: state.connections.filter(c => c.id !== id) })),
removeConnection: (id) => set((state) => ({
connections: state.connections.filter(c => c.id !== id),
connectionTags: state.connectionTags.map(tag => ({
...tag,
connectionIds: tag.connectionIds.filter(cid => cid !== id)
}))
})),
addConnectionTag: (tag) => set((state) => ({ connectionTags: [...state.connectionTags, tag] })),
updateConnectionTag: (tag) => set((state) => ({
connectionTags: state.connectionTags.map(t => t.id === tag.id ? tag : t)
})),
removeConnectionTag: (id) => set((state) => ({
connectionTags: state.connectionTags.filter(t => t.id !== id)
})),
moveConnectionToTag: (connectionId, targetTagId) => set((state) => {
const newTags = state.connectionTags.map(tag => {
// First remove this connection from every tag
const filteredIds = tag.connectionIds.filter(id => id !== connectionId);
if (tag.id === targetTagId) {
return { ...tag, connectionIds: [...filteredIds, connectionId] };
}
return { ...tag, connectionIds: filteredIds };
});
return { connectionTags: newTags };
}),
reorderTags: (tagIds) => set((state) => {
const tagMap = new Map(state.connectionTags.map(t => [t.id, t]));
const newTags: ConnectionTag[] = [];
tagIds.forEach(id => {
const tag = tagMap.get(id);
if (tag) {
newTags.push(tag);
tagMap.delete(id);
}
});
// Append any tags not listed in tagIds, if present
newTags.push(...Array.from(tagMap.values()));
return { connectionTags: newTags };
}),
addTab: (tab) => set((state) => {
const index = state.tabs.findIndex(t => t.id === tab.id);
@@ -531,6 +724,23 @@ export const useStore = create<AppState>()(
};
}),
moveTab: (sourceId, targetId) => set((state) => {
const fromId = String(sourceId || '').trim();
const toId = String(targetId || '').trim();
if (!fromId || !toId || fromId === toId) {
return state;
}
const fromIndex = state.tabs.findIndex((tab) => tab.id === fromId);
const toIndex = state.tabs.findIndex((tab) => tab.id === toId);
if (fromIndex < 0 || toIndex < 0 || fromIndex === toIndex) {
return state;
}
const nextTabs = [...state.tabs];
const [movingTab] = nextTabs.splice(fromIndex, 1);
nextTabs.splice(toIndex, 0, movingTab);
return { tabs: nextTabs };
}),
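The `moveTab` reducer above computes both indices before splicing, then removes the source item and re-inserts it at the target's position without mutating the original array. The same pattern as a generic standalone sketch (hypothetical `moveById` helper, not part of the diff):

```typescript
// Hypothetical helper mirroring the moveTab reducer: look up both
// indices first, then splice the moving item out and back in on a copy,
// returning the original array untouched for any invalid input.
interface HasId { id: string }

function moveById<T extends HasId>(items: T[], sourceId: string, targetId: string): T[] {
  const fromIndex = items.findIndex(t => t.id === sourceId);
  const toIndex = items.findIndex(t => t.id === targetId);
  if (fromIndex < 0 || toIndex < 0 || fromIndex === toIndex) return items;
  const next = [...items];
  const [moving] = next.splice(fromIndex, 1);
  next.splice(toIndex, 0, moving);
  return next;
}
```

Returning the original reference on a no-op matters for a Zustand store: an unchanged reference lets subscribers skip a re-render.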
closeAllTabs: () => set(() => ({ tabs: [], activeTabId: null })),
setActiveTab: (id) => set({ activeTabId: id }),
@@ -549,9 +759,22 @@ export const useStore = create<AppState>()(
setTheme: (theme) => set({ theme }),
setAppearance: (appearance) => set((state) => ({ appearance: { ...state.appearance, ...appearance } })),
setUiScale: (scale) => set({ uiScale: sanitizeUiScale(scale) }),
setFontSize: (size) => set({ fontSize: sanitizeFontSize(size) }),
setStartupFullscreen: (enabled) => set({ startupFullscreen: !!enabled }),
setGlobalProxy: (proxy) => set((state) => ({ globalProxy: sanitizeGlobalProxy({ ...state.globalProxy, ...proxy }) })),
setSqlFormatOptions: (options) => set({ sqlFormatOptions: options }),
setQueryOptions: (options) => set((state) => ({ queryOptions: { ...state.queryOptions, ...options } })),
updateShortcut: (action, binding) => set((state) => ({
shortcutOptions: {
...state.shortcutOptions,
[action]: {
...state.shortcutOptions[action],
...binding,
},
},
})),
resetShortcutOptions: () => set({ shortcutOptions: cloneShortcutOptions(DEFAULT_SHORTCUT_OPTIONS) }),
addSqlLog: (log) => set((state) => ({ sqlLogs: [log, ...state.sqlLogs].slice(0, 1000) })), // Keep last 1000 logs
clearSqlLogs: () => set({ sqlLogs: [] }),
@@ -579,17 +802,26 @@ export const useStore = create<AppState>()(
}),
{
name: 'lite-db-storage', // name of the item in the storage (must be unique)
version: 3,
version: PERSIST_VERSION,
migrate: (persistedState: unknown, version: number) => {
const state = unwrapPersistedAppState(persistedState) as Partial<AppState>;
const nextState: Partial<AppState> = { ...state };
nextState.connections = sanitizeConnections(state.connections);
nextState.connectionTags = sanitizeConnectionTags(state.connectionTags);
nextState.savedQueries = sanitizeSavedQueries(state.savedQueries);
nextState.theme = sanitizeTheme(state.theme);
nextState.appearance = sanitizeAppearance(state.appearance, version);
nextState.uiScale = sanitizeUiScale(state.uiScale);
nextState.fontSize = sanitizeFontSize(state.fontSize);
nextState.startupFullscreen = sanitizeStartupFullscreen(state.startupFullscreen);
nextState.globalProxy = sanitizeGlobalProxy(state.globalProxy);
nextState.sqlFormatOptions = sanitizeSqlFormatOptions(state.sqlFormatOptions);
nextState.queryOptions = sanitizeQueryOptions(state.queryOptions);
nextState.shortcutOptions = sanitizeShortcutOptions(state.shortcutOptions);
nextState.tableAccessCount = sanitizeTableAccessCount(state.tableAccessCount);
nextState.tableSortPreference = sanitizeTableSortPreference(state.tableSortPreference);
return nextState as AppState;
@@ -600,24 +832,34 @@ export const useStore = create<AppState>()(
...currentState,
...state,
connections: sanitizeConnections(state.connections),
connectionTags: sanitizeConnectionTags(state.connectionTags),
savedQueries: sanitizeSavedQueries(state.savedQueries),
theme: sanitizeTheme(state.theme),
appearance: sanitizeAppearance(state.appearance, 3),
appearance: sanitizeAppearance(state.appearance, PERSIST_VERSION),
uiScale: sanitizeUiScale(state.uiScale),
fontSize: sanitizeFontSize(state.fontSize),
startupFullscreen: sanitizeStartupFullscreen(state.startupFullscreen),
globalProxy: sanitizeGlobalProxy(state.globalProxy),
sqlFormatOptions: sanitizeSqlFormatOptions(state.sqlFormatOptions),
queryOptions: sanitizeQueryOptions(state.queryOptions),
shortcutOptions: sanitizeShortcutOptions(state.shortcutOptions),
tableAccessCount: sanitizeTableAccessCount(state.tableAccessCount),
tableSortPreference: sanitizeTableSortPreference(state.tableSortPreference),
};
},
partialize: (state) => ({
connections: state.connections,
connectionTags: state.connectionTags,
savedQueries: state.savedQueries,
theme: state.theme,
appearance: state.appearance,
uiScale: state.uiScale,
fontSize: state.fontSize,
startupFullscreen: state.startupFullscreen,
globalProxy: state.globalProxy,
sqlFormatOptions: state.sqlFormatOptions,
queryOptions: state.queryOptions,
shortcutOptions: state.shortcutOptions,
tableAccessCount: state.tableAccessCount,
tableSortPreference: state.tableSortPreference
}), // Don't persist logs


@@ -14,6 +14,13 @@ export interface ProxyConfig {
password?: string;
}
export interface HTTPTunnelConfig {
host: string;
port: number;
user?: string;
password?: string;
}
export interface ConnectionConfig {
type: string;
host: string;
@@ -22,17 +29,23 @@ export interface ConnectionConfig {
password?: string;
savePassword?: boolean;
database?: string;
useSSL?: boolean;
sslMode?: 'preferred' | 'required' | 'skip-verify' | 'disable';
sslCertPath?: string;
sslKeyPath?: string;
useSSH?: boolean;
ssh?: SSHConfig;
useProxy?: boolean;
proxy?: ProxyConfig;
useHttpTunnel?: boolean;
httpTunnel?: HTTPTunnelConfig;
driver?: string;
dsn?: string;
timeout?: number;
redisDB?: number; // Redis database index (0-15)
uri?: string; // Connection URI for copy/paste
hosts?: string[]; // Multi-host addresses: host:port
topology?: 'single' | 'replica';
topology?: 'single' | 'replica' | 'cluster';
mysqlReplicaUser?: string;
mysqlReplicaPassword?: string;
replicaSet?: string;
@@ -61,6 +74,12 @@ export interface SavedConnection {
includeRedisDatabases?: number[]; // Redis databases to show (0-15)
}
export interface ConnectionTag {
id: string;
name: string;
connectionIds: string[];
}
export interface ColumnDefinition {
name: string;
type: string;
@@ -137,7 +156,7 @@ export interface RedisKeyInfo {
export interface RedisScanResult {
keys: RedisKeyInfo[];
cursor: number;
cursor: string;
}
export interface RedisValue {


@@ -0,0 +1,86 @@
import type { ConnectionConfig } from '../types';
type ConnectionLike = Pick<ConnectionConfig, 'type' | 'driver'> | null | undefined;
const normalizeDataSourceToken = (raw: string): string => {
const normalized = String(raw || '').trim().toLowerCase();
switch (normalized) {
case 'doris':
return 'diros';
case 'postgresql':
return 'postgres';
case 'dm':
return 'dameng';
default:
return normalized;
}
};
export const resolveDataSourceType = (config: ConnectionLike): string => {
if (!config) return '';
const type = normalizeDataSourceToken(String(config.type || ''));
if (type === 'custom') {
const driver = normalizeDataSourceToken(String(config.driver || ''));
return driver || 'custom';
}
return type;
};
const SQL_QUERY_EXPORT_TYPES = new Set([
'mysql',
'mariadb',
'diros',
'sphinx',
'postgres',
'kingbase',
'highgo',
'vastbase',
'sqlserver',
'sqlite',
'duckdb',
'oracle',
'dameng',
'tdengine',
'clickhouse',
]);
const COPY_INSERT_TYPES = new Set([
'mysql',
'mariadb',
'diros',
'sphinx',
'postgres',
'kingbase',
'highgo',
'vastbase',
'sqlserver',
'sqlite',
'duckdb',
'oracle',
'dameng',
'tdengine',
'clickhouse',
]);
const QUERY_EDITOR_DISABLED_TYPES = new Set(['redis']);
const FORCE_READ_ONLY_QUERY_TYPES = new Set(['tdengine', 'clickhouse']);
export type DataSourceCapabilities = {
type: string;
supportsQueryEditor: boolean;
supportsSqlQueryExport: boolean;
supportsCopyInsert: boolean;
forceReadOnlyQueryResult: boolean;
};
export const getDataSourceCapabilities = (config: ConnectionLike): DataSourceCapabilities => {
const type = resolveDataSourceType(config);
return {
type,
supportsQueryEditor: !QUERY_EDITOR_DISABLED_TYPES.has(type),
supportsSqlQueryExport: SQL_QUERY_EXPORT_TYPES.has(type),
supportsCopyInsert: COPY_INSERT_TYPES.has(type),
forceReadOnlyQueryResult: FORCE_READ_ONLY_QUERY_TYPES.has(type),
};
};
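To illustrate the alias folding above: a minimal self-contained sketch, where the local `resolveType` mirrors `resolveDataSourceType` from this diff (all names here are local to the example):

```typescript
// Self-contained sketch mirroring resolveDataSourceType above:
// vendor aliases fold into canonical tokens, and 'custom' connections
// fall back to their driver name.
type ConnLike = { type?: string; driver?: string } | null | undefined;

const resolveType = (config: ConnLike): string => {
  const norm = (raw: string): string => {
    const t = String(raw || '').trim().toLowerCase();
    if (t === 'doris') return 'diros'; // project-internal token
    if (t === 'postgresql') return 'postgres';
    if (t === 'dm') return 'dameng';
    return t;
  };
  if (!config) return '';
  const type = norm(String(config.type || ''));
  if (type === 'custom') {
    const driver = norm(String(config.driver || ''));
    return driver || 'custom';
  }
  return type;
};
```

For example, `resolveType({ type: 'custom', driver: 'DM' })` yields `'dameng'`, so a custom DM connection picks up the same capability flags as a native Dameng one.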

File diff suppressed because it is too large


@@ -0,0 +1,258 @@
import type { KeyboardEvent as ReactKeyboardEvent } from 'react';
export type ShortcutAction =
| 'runQuery'
| 'focusSidebarSearch'
| 'newQueryTab'
| 'toggleLogPanel'
| 'toggleTheme'
| 'openShortcutManager';
export interface ShortcutBinding {
combo: string;
enabled: boolean;
}
export type ShortcutOptions = Record<ShortcutAction, ShortcutBinding>;
export interface ShortcutActionMeta {
label: string;
description: string;
allowInEditable?: boolean;
}
const MODIFIER_ORDER = ['Ctrl', 'Meta', 'Alt', 'Shift'] as const;
const MODIFIER_SET = new Set(MODIFIER_ORDER);
const KEY_ALIASES: Record<string, string> = {
control: 'Ctrl',
ctrl: 'Ctrl',
command: 'Meta',
cmd: 'Meta',
meta: 'Meta',
option: 'Alt',
alt: 'Alt',
shift: 'Shift',
escape: 'Esc',
esc: 'Esc',
return: 'Enter',
enter: 'Enter',
tab: 'Tab',
space: 'Space',
' ': 'Space',
backspace: 'Backspace',
delete: 'Delete',
del: 'Delete',
arrowup: 'Up',
up: 'Up',
arrowdown: 'Down',
down: 'Down',
arrowleft: 'Left',
left: 'Left',
arrowright: 'Right',
right: 'Right',
pagedown: 'PageDown',
pageup: 'PageUp',
home: 'Home',
end: 'End',
insert: 'Insert',
',': ',',
'.': '.',
'/': '/',
';': ';',
"'": "'",
'[': '[',
']': ']',
'\\': '\\',
'-': '-',
'=': '=',
'`': '`',
};
export const SHORTCUT_ACTION_ORDER: ShortcutAction[] = [
'runQuery',
'focusSidebarSearch',
'newQueryTab',
'toggleLogPanel',
'toggleTheme',
'openShortcutManager',
];
export const SHORTCUT_ACTION_META: Record<ShortcutAction, ShortcutActionMeta> = {
runQuery: {
label: 'Run SQL',
description: 'Execute SQL in the current query tab',
},
focusSidebarSearch: {
label: 'Focus sidebar search',
description: 'Jump to the connection tree search box on the left',
allowInEditable: true,
},
newQueryTab: {
label: 'New query tab',
description: 'Create a new SQL query tab',
},
toggleLogPanel: {
label: 'Toggle log panel',
description: 'Open or close the SQL execution log panel',
},
toggleTheme: {
label: 'Toggle theme',
description: 'Switch between light and dark themes',
},
openShortcutManager: {
label: 'Open shortcut manager',
description: 'Open the keyboard shortcut settings panel',
allowInEditable: true,
},
};
export const DEFAULT_SHORTCUT_OPTIONS: ShortcutOptions = {
runQuery: { combo: 'Ctrl+Shift+R', enabled: true },
focusSidebarSearch: { combo: 'Ctrl+F', enabled: true },
newQueryTab: { combo: 'Ctrl+Shift+N', enabled: true },
toggleLogPanel: { combo: 'Ctrl+Shift+L', enabled: true },
toggleTheme: { combo: 'Ctrl+Shift+D', enabled: true },
openShortcutManager: { combo: 'Ctrl+,', enabled: true },
};
const normalizeKeyToken = (value: string): string => {
const token = String(value || '').trim();
if (!token) return '';
const alias = KEY_ALIASES[token.toLowerCase()];
if (alias) return alias;
if (/^f([1-9]|1[0-2])$/i.test(token)) {
return token.toUpperCase();
}
if (token.length === 1) {
return token === '+' ? '+' : token.toUpperCase();
}
return token.length > 1 ? token[0].toUpperCase() + token.slice(1).toLowerCase() : token;
};
export const normalizeShortcutCombo = (combo: string): string => {
const raw = String(combo || '').trim();
if (!raw) return '';
const pieces = raw
.split('+')
.map(part => part.trim())
.filter(Boolean);
const modifiers: string[] = [];
let key = '';
pieces.forEach((part) => {
const normalized = normalizeKeyToken(part);
if (!normalized) return;
if (MODIFIER_SET.has(normalized as typeof MODIFIER_ORDER[number])) {
if (!modifiers.includes(normalized)) {
modifiers.push(normalized);
}
return;
}
key = normalized;
});
modifiers.sort((a, b) => MODIFIER_ORDER.indexOf(a as typeof MODIFIER_ORDER[number]) - MODIFIER_ORDER.indexOf(b as typeof MODIFIER_ORDER[number]));
if (!key) {
return modifiers.join('+');
}
return [...modifiers, key].join('+');
};
const normalizeKeyboardKey = (key: string): string => {
const token = String(key || '').trim();
if (!token) return '';
const alias = KEY_ALIASES[token.toLowerCase()];
if (alias) return alias;
if (token.length === 1) {
if (token === ' ') return 'Space';
return token.toUpperCase();
}
if (/^f([1-9]|1[0-2])$/i.test(token)) {
return token.toUpperCase();
}
return token.length > 1 ? token[0].toUpperCase() + token.slice(1) : token;
};
export const eventToShortcut = (event: KeyboardEvent | ReactKeyboardEvent): string => {
const key = normalizeKeyboardKey(event.key);
if (!key || MODIFIER_SET.has(key as typeof MODIFIER_ORDER[number])) {
return '';
}
const modifiers: string[] = [];
if (event.ctrlKey) modifiers.push('Ctrl');
if (event.metaKey) modifiers.push('Meta');
if (event.altKey) modifiers.push('Alt');
if (event.shiftKey) modifiers.push('Shift');
return normalizeShortcutCombo([...modifiers, key].join('+'));
};
export const isShortcutMatch = (event: KeyboardEvent | ReactKeyboardEvent, combo: string): boolean => {
const expected = normalizeShortcutCombo(combo);
if (!expected) return false;
const actual = eventToShortcut(event);
return actual === expected;
};
export const hasModifierKey = (combo: string): boolean => {
const normalized = normalizeShortcutCombo(combo);
if (!normalized) return false;
return normalized.split('+').some(part => MODIFIER_SET.has(part as typeof MODIFIER_ORDER[number]));
};
export const cloneShortcutOptions = (value: ShortcutOptions): ShortcutOptions => {
return SHORTCUT_ACTION_ORDER.reduce((acc, action) => {
acc[action] = {
combo: normalizeShortcutCombo(value[action]?.combo || DEFAULT_SHORTCUT_OPTIONS[action].combo),
enabled: value[action]?.enabled !== false,
};
return acc;
}, {} as ShortcutOptions);
};
export const sanitizeShortcutOptions = (value: unknown): ShortcutOptions => {
const raw = (value && typeof value === 'object') ? value as Record<string, unknown> : {};
const defaults = cloneShortcutOptions(DEFAULT_SHORTCUT_OPTIONS);
SHORTCUT_ACTION_ORDER.forEach((action) => {
const actionRaw = raw[action];
if (!actionRaw || typeof actionRaw !== 'object') {
return;
}
const binding = actionRaw as Record<string, unknown>;
const combo = normalizeShortcutCombo(String(binding.combo || defaults[action].combo));
defaults[action] = {
combo: combo || defaults[action].combo,
enabled: binding.enabled !== false,
};
});
return defaults;
};
export const isEditableElement = (target: EventTarget | null): boolean => {
if (!(target instanceof HTMLElement)) {
return false;
}
const tag = target.tagName.toLowerCase();
if (target.isContentEditable) {
return true;
}
if (tag === 'input' || tag === 'textarea' || tag === 'select') {
return true;
}
if (target.closest('.monaco-editor, .monaco-inputbox, .ant-select, .ant-picker, .ant-input')) {
return true;
}
return false;
};
export const getShortcutDisplay = (combo: string): string => {
const normalized = normalizeShortcutCombo(combo);
return normalized || '-';
};
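The normalization above canonicalizes modifier order so that user-entered combos compare equal to event-derived ones. A simplified sketch of just that ordering step (modifier aliases plus single-character keys only, unlike the full helper above):

```typescript
// Simplified combo normalizer: resolve modifier aliases, dedupe, sort by the
// fixed Ctrl/Meta/Alt/Shift order, keep the last non-modifier token as key.
const ORDER = ['Ctrl', 'Meta', 'Alt', 'Shift'];
const ALIASES: Record<string, string> = {
  control: 'Ctrl', ctrl: 'Ctrl', command: 'Meta', cmd: 'Meta', meta: 'Meta',
  option: 'Alt', alt: 'Alt', shift: 'Shift',
};

const normalize = (combo: string): string => {
  const mods: string[] = [];
  let key = '';
  for (const part of combo.split('+').map(p => p.trim()).filter(Boolean)) {
    const alias = ALIASES[part.toLowerCase()];
    if (alias) {
      if (!mods.includes(alias)) mods.push(alias);
    } else {
      key = part.length === 1 ? part.toUpperCase() : part;
    }
  }
  mods.sort((a, b) => ORDER.indexOf(a) - ORDER.indexOf(b));
  return key ? [...mods, key].join('+') : mods.join('+');
};
```

So `'shift+ctrl+r'` and a KeyboardEvent with `ctrlKey`/`shiftKey` and key `'r'` both normalize to `'Ctrl+Shift+R'`, which is the invariant `isShortcutMatch` relies on.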


@@ -1,6 +1,7 @@
export type FilterCondition = {
id?: number;
enabled?: boolean;
logic?: 'AND' | 'OR';
column?: string;
op?: string;
value?: string;
@@ -36,7 +37,7 @@ export const quoteIdentPart = (dbType: string, ident: string) => {
if (!raw) return raw;
const dbTypeLower = (dbType || '').toLowerCase();
if (dbTypeLower === 'mysql' || dbTypeLower === 'mariadb' || dbTypeLower === 'diros' || dbTypeLower === 'sphinx' || dbTypeLower === 'tdengine' || dbTypeLower === 'clickhouse') {
return `\`${raw.replace(/`/g, '``')}\``;
}
@@ -142,8 +143,12 @@ export const parseListValues = (val: string) => {
.filter(Boolean);
};
const normalizeConditionLogic = (logic: unknown): 'AND' | 'OR' => {
return String(logic || '').trim().toUpperCase() === 'OR' ? 'OR' : 'AND';
};
export const buildWhereSQL = (dbType: string, conditions: FilterCondition[]) => {
const whereParts: Array<{ expr: string; logic: 'AND' | 'OR' }> = [];
(conditions || []).forEach((cond) => {
if (cond?.enabled === false) return;
@@ -152,10 +157,17 @@ export const buildWhereSQL = (dbType: string, conditions: FilterCondition[]) =>
const column = (cond?.column || '').trim();
const value = (cond?.value ?? '').toString();
const value2 = (cond?.value2 ?? '').toString();
const logic = normalizeConditionLogic(cond?.logic);
const appendWherePart = (expr: string) => {
const normalizedExpr = String(expr || '').trim();
if (!normalizedExpr) return;
whereParts.push({ expr: normalizedExpr, logic });
};
if (op === 'CUSTOM') {
const expr = value.trim();
if (expr) appendWherePart(`(${expr})`);
return;
}
@@ -165,80 +177,80 @@ export const buildWhereSQL = (dbType: string, conditions: FilterCondition[]) =>
switch (op) {
case 'IS_NULL':
appendWherePart(`${col} IS NULL`);
return;
case 'IS_NOT_NULL':
appendWherePart(`${col} IS NOT NULL`);
return;
case 'IS_EMPTY':
// Compatibility: "empty" is commonly understood as NULL or an empty string
appendWherePart(`(${col} IS NULL OR ${col} = '')`);
return;
case 'IS_NOT_EMPTY':
appendWherePart(`(${col} IS NOT NULL AND ${col} <> '')`);
return;
case 'BETWEEN': {
const v1 = value.trim();
const v2 = value2.trim();
if (!v1 || !v2) return;
appendWherePart(`${col} BETWEEN '${escapeLiteral(v1)}' AND '${escapeLiteral(v2)}'`);
return;
}
case 'NOT_BETWEEN': {
const v1 = value.trim();
const v2 = value2.trim();
if (!v1 || !v2) return;
appendWherePart(`${col} NOT BETWEEN '${escapeLiteral(v1)}' AND '${escapeLiteral(v2)}'`);
return;
}
case 'IN': {
const items = parseListValues(value);
if (items.length === 0) return;
const list = items.map(v => `'${escapeLiteral(v)}'`).join(', ');
appendWherePart(`${col} IN (${list})`);
return;
}
case 'NOT_IN': {
const items = parseListValues(value);
if (items.length === 0) return;
const list = items.map(v => `'${escapeLiteral(v)}'`).join(', ');
appendWherePart(`${col} NOT IN (${list})`);
return;
}
case 'CONTAINS': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} LIKE '%${escapeLiteral(v)}%'`);
return;
}
case 'NOT_CONTAINS': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} NOT LIKE '%${escapeLiteral(v)}%'`);
return;
}
case 'STARTS_WITH': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} LIKE '${escapeLiteral(v)}%'`);
return;
}
case 'NOT_STARTS_WITH': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} NOT LIKE '${escapeLiteral(v)}%'`);
return;
}
case 'ENDS_WITH': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} LIKE '%${escapeLiteral(v)}'`);
return;
}
case 'NOT_ENDS_WITH': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} NOT LIKE '%${escapeLiteral(v)}'`);
return;
}
case '=':
@@ -249,7 +261,7 @@ export const buildWhereSQL = (dbType: string, conditions: FilterCondition[]) =>
case '>=': {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} ${op} '${escapeLiteral(v)}'`);
return;
}
default: {
@@ -257,16 +269,23 @@ export const buildWhereSQL = (dbType: string, conditions: FilterCondition[]) =>
if (op.toUpperCase() === 'LIKE') {
const v = value.trim();
if (!v) return;
appendWherePart(`${col} LIKE '%${escapeLiteral(v)}%'`);
return;
}
const v = value.trim();
if (!v) return;
appendWherePart(`${col} ${op} '${escapeLiteral(v)}'`);
}
}
});
if (whereParts.length === 0) return '';
let whereExpr = `(${whereParts[0].expr})`;
for (let i = 1; i < whereParts.length; i++) {
const part = whereParts[i];
whereExpr = `(${whereExpr} ${part.logic} (${part.expr}))`;
}
return `WHERE ${whereExpr}`;
};
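The final fold above nests each condition left-associatively, so a condition's logic flag joins it to everything accumulated before it. A self-contained sketch of just that fold (types local to the example):

```typescript
// Left-associative fold: each part's logic joins it to the accumulated
// expression, and every step is parenthesized to keep precedence explicit.
type Part = { expr: string; logic: 'AND' | 'OR' };

const combineWhere = (parts: Part[]): string => {
  if (parts.length === 0) return '';
  let out = `(${parts[0].expr})`;
  for (let i = 1; i < parts.length; i++) {
    out = `(${out} ${parts[i].logic} (${parts[i].expr}))`;
  }
  return `WHERE ${out}`;
};
```

Three conditions A, OR B, AND C therefore render as `WHERE (((A) OR (B)) AND (C))`, rather than relying on SQL's native AND/OR precedence.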


@@ -1,17 +1,24 @@
// This file is automatically generated. DO NOT EDIT
import {connection} from '../models';
import {time} from '../models';
import {sync} from '../models';
import {redis} from '../models';
export function ApplyChanges(arg1:connection.ConnectionConfig,arg2:string,arg3:string,arg4:connection.ChangeSet):Promise<connection.QueryResult>;
export function CancelQuery(arg1:string):Promise<connection.QueryResult>;
export function CheckDriverNetworkStatus():Promise<connection.QueryResult>;
export function CheckForUpdates():Promise<connection.QueryResult>;
export function CleanupStaleQueries(arg1:time.Duration):Promise<void>;
export function ConfigureDriverRuntimeDirectory(arg1:string):Promise<connection.QueryResult>;
export function ConfigureGlobalProxy(arg1:boolean,arg2:connection.ProxyConfig):Promise<connection.QueryResult>;
export function CreateDatabase(arg1:connection.ConnectionConfig,arg2:string):Promise<connection.QueryResult>;
export function DBConnect(arg1:connection.ConnectionConfig):Promise<connection.QueryResult>;
@@ -32,6 +39,10 @@ export function DBGetTriggers(arg1:connection.ConnectionConfig,arg2:string,arg3:
export function DBQuery(arg1:connection.ConnectionConfig,arg2:string,arg3:string):Promise<connection.QueryResult>;
export function DBQueryIsolated(arg1:connection.ConnectionConfig,arg2:string,arg3:string):Promise<connection.QueryResult>;
export function DBQueryWithCancel(arg1:connection.ConnectionConfig,arg2:string,arg3:string,arg4:string):Promise<connection.QueryResult>;
export function DBShowCreateTable(arg1:connection.ConnectionConfig,arg2:string,arg3:string):Promise<connection.QueryResult>;
export function DataSync(arg1:sync.SyncConfig):Promise<sync.SyncResult>;
@@ -64,6 +75,8 @@ export function ExportTablesDataSQL(arg1:connection.ConnectionConfig,arg2:string
export function ExportTablesSQL(arg1:connection.ConnectionConfig,arg2:string,arg3:Array<string>,arg4:boolean):Promise<connection.QueryResult>;
export function GenerateQueryID():Promise<string>;
export function GetAppInfo():Promise<connection.QueryResult>;
export function GetDriverStatusList(arg1:string,arg2:string):Promise<connection.QueryResult>;
@@ -72,6 +85,8 @@ export function GetDriverVersionList(arg1:string,arg2:string):Promise<connection
export function GetDriverVersionPackageSize(arg1:string,arg2:string):Promise<connection.QueryResult>;
export function GetGlobalProxyConfig():Promise<connection.QueryResult>;
export function ImportConfigFile():Promise<connection.QueryResult>;
export function ImportData(arg1:connection.ConnectionConfig,arg2:string,arg3:string):Promise<connection.QueryResult>;
@@ -94,6 +109,8 @@ export function MySQLQuery(arg1:connection.ConnectionConfig,arg2:string,arg3:str
export function MySQLShowCreateTable(arg1:connection.ConnectionConfig,arg2:string,arg3:string):Promise<connection.QueryResult>;
export function OpenDownloadedUpdateDirectory():Promise<connection.QueryResult>;
export function OpenSQLFile():Promise<connection.QueryResult>;
export function PreviewImportFile(arg1:string):Promise<connection.QueryResult>;
@@ -120,7 +137,7 @@ export function RedisListSet(arg1:connection.ConnectionConfig,arg2:string,arg3:n
export function RedisRenameKey(arg1:connection.ConnectionConfig,arg2:string,arg3:string):Promise<connection.QueryResult>;
export function RedisScanKeys(arg1:connection.ConnectionConfig,arg2:string,arg3:any,arg4:number):Promise<connection.QueryResult>;
export function RedisSelectDB(arg1:connection.ConnectionConfig,arg2:number):Promise<connection.QueryResult>;
@@ -158,8 +175,12 @@ export function ResolveDriverPackageDownloadURL(arg1:string,arg2:string):Promise
export function ResolveDriverRepositoryURL(arg1:string):Promise<connection.QueryResult>;
export function SelectDatabaseFile(arg1:string,arg2:string):Promise<connection.QueryResult>;
export function SelectDriverDownloadDirectory(arg1:string):Promise<connection.QueryResult>;
export function SelectDriverPackageDirectory(arg1:string):Promise<connection.QueryResult>;
export function SelectDriverPackageFile(arg1:string):Promise<connection.QueryResult>;
export function SelectSSHKeyFile(arg1:string):Promise<connection.QueryResult>;


@@ -6,6 +6,10 @@ export function ApplyChanges(arg1, arg2, arg3, arg4) {
return window['go']['app']['App']['ApplyChanges'](arg1, arg2, arg3, arg4);
}
export function CancelQuery(arg1) {
return window['go']['app']['App']['CancelQuery'](arg1);
}
export function CheckDriverNetworkStatus() {
return window['go']['app']['App']['CheckDriverNetworkStatus']();
}
@@ -14,10 +18,18 @@ export function CheckForUpdates() {
return window['go']['app']['App']['CheckForUpdates']();
}
export function CleanupStaleQueries(arg1) {
return window['go']['app']['App']['CleanupStaleQueries'](arg1);
}
export function ConfigureDriverRuntimeDirectory(arg1) {
return window['go']['app']['App']['ConfigureDriverRuntimeDirectory'](arg1);
}
export function ConfigureGlobalProxy(arg1, arg2) {
return window['go']['app']['App']['ConfigureGlobalProxy'](arg1, arg2);
}
export function CreateDatabase(arg1, arg2) {
return window['go']['app']['App']['CreateDatabase'](arg1, arg2);
}
@@ -58,6 +70,14 @@ export function DBQuery(arg1, arg2, arg3) {
return window['go']['app']['App']['DBQuery'](arg1, arg2, arg3);
}
export function DBQueryIsolated(arg1, arg2, arg3) {
return window['go']['app']['App']['DBQueryIsolated'](arg1, arg2, arg3);
}
export function DBQueryWithCancel(arg1, arg2, arg3, arg4) {
return window['go']['app']['App']['DBQueryWithCancel'](arg1, arg2, arg3, arg4);
}
export function DBShowCreateTable(arg1, arg2, arg3) {
return window['go']['app']['App']['DBShowCreateTable'](arg1, arg2, arg3);
}
@@ -122,6 +142,10 @@ export function ExportTablesSQL(arg1, arg2, arg3, arg4) {
return window['go']['app']['App']['ExportTablesSQL'](arg1, arg2, arg3, arg4);
}
export function GenerateQueryID() {
return window['go']['app']['App']['GenerateQueryID']();
}
export function GetAppInfo() {
return window['go']['app']['App']['GetAppInfo']();
}
@@ -138,6 +162,10 @@ export function GetDriverVersionPackageSize(arg1, arg2) {
return window['go']['app']['App']['GetDriverVersionPackageSize'](arg1, arg2);
}
export function GetGlobalProxyConfig() {
return window['go']['app']['App']['GetGlobalProxyConfig']();
}
export function ImportConfigFile() {
return window['go']['app']['App']['ImportConfigFile']();
}
@@ -182,6 +210,10 @@ export function MySQLShowCreateTable(arg1, arg2, arg3) {
return window['go']['app']['App']['MySQLShowCreateTable'](arg1, arg2, arg3);
}
export function OpenDownloadedUpdateDirectory() {
return window['go']['app']['App']['OpenDownloadedUpdateDirectory']();
}
export function OpenSQLFile() {
return window['go']['app']['App']['OpenSQLFile']();
}
@@ -310,10 +342,18 @@ export function ResolveDriverRepositoryURL(arg1) {
return window['go']['app']['App']['ResolveDriverRepositoryURL'](arg1);
}
export function SelectDatabaseFile(arg1, arg2) {
return window['go']['app']['App']['SelectDatabaseFile'](arg1, arg2);
}
export function SelectDriverDownloadDirectory(arg1) {
return window['go']['app']['App']['SelectDriverDownloadDirectory'](arg1);
}
export function SelectDriverPackageDirectory(arg1) {
return window['go']['app']['App']['SelectDriverPackageDirectory'](arg1);
}
export function SelectDriverPackageFile(arg1) {
return window['go']['app']['App']['SelectDriverPackageFile'](arg1);
}


@@ -48,6 +48,24 @@ export namespace connection {
return a;
}
}
export class HTTPTunnelConfig {
host: string;
port: number;
user?: string;
password?: string;
static createFrom(source: any = {}) {
return new HTTPTunnelConfig(source);
}
constructor(source: any = {}) {
if ('string' === typeof source) source = JSON.parse(source);
this.host = source["host"];
this.port = source["port"];
this.user = source["user"];
this.password = source["password"];
}
}
export class ProxyConfig {
type: string;
host: string;
@@ -96,10 +114,16 @@ export namespace connection {
password: string;
savePassword?: boolean;
database: string;
useSSL?: boolean;
sslMode?: string;
sslCertPath?: string;
sslKeyPath?: string;
useSSH: boolean;
ssh: SSHConfig;
useProxy?: boolean;
proxy?: ProxyConfig;
useHttpTunnel?: boolean;
httpTunnel?: HTTPTunnelConfig;
driver?: string;
dsn?: string;
timeout?: number;
@@ -130,10 +154,16 @@ export namespace connection {
this.password = source["password"];
this.savePassword = source["savePassword"];
this.database = source["database"];
this.useSSL = source["useSSL"];
this.sslMode = source["sslMode"];
this.sslCertPath = source["sslCertPath"];
this.sslKeyPath = source["sslKeyPath"];
this.useSSH = source["useSSH"];
this.ssh = this.convertValues(source["ssh"], SSHConfig);
this.useProxy = source["useProxy"];
this.proxy = this.convertValues(source["proxy"], ProxyConfig);
this.useHttpTunnel = source["useHttpTunnel"];
this.httpTunnel = this.convertValues(source["httpTunnel"], HTTPTunnelConfig);
this.driver = source["driver"];
this.dsn = source["dsn"];
this.timeout = source["timeout"];
@@ -171,11 +201,13 @@ export namespace connection {
}
}
export class QueryResult {
success: boolean;
message: string;
data: any;
fields?: string[];
queryId?: string;
static createFrom(source: any = {}) {
return new QueryResult(source);
@@ -187,6 +219,7 @@ export namespace connection {
this.message = source["message"];
this.data = source["data"];
this.fields = source["fields"];
this.queryId = source["queryId"];
}
}

go.mod

@@ -5,8 +5,10 @@ go 1.24.3
require (
gitea.com/kingbase/gokb v0.0.0-20201021123113-29bd62a876c3
gitee.com/chunanyong/dm v1.8.22
github.com/ClickHouse/clickhouse-go/v2 v2.43.0
github.com/duckdb/duckdb-go/v2 v2.5.5
github.com/go-sql-driver/mysql v1.9.3
github.com/google/uuid v1.6.0
github.com/highgo/pq-sm3 v0.0.0
github.com/lib/pq v1.11.1
github.com/microsoft/go-mssqldb v1.9.6
@@ -15,6 +17,7 @@ require (
github.com/taosdata/driver-go/v3 v3.7.8
github.com/wailsapp/wails/v2 v2.11.0
github.com/xuri/excelize/v2 v2.10.0
go.mongodb.org/mongo-driver v1.17.9
go.mongodb.org/mongo-driver/v2 v2.5.0
golang.org/x/crypto v0.47.0
golang.org/x/mod v0.32.0
@@ -25,6 +28,8 @@ require (
require (
filippo.io/edwards25519 v1.1.0 // indirect
github.com/ClickHouse/ch-go v0.71.0 // indirect
github.com/andybalholm/brotli v1.2.0 // indirect
github.com/apache/arrow-go/v18 v18.5.1 // indirect
github.com/bep/debounce v1.2.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
@@ -36,6 +41,8 @@ require (
github.com/duckdb/duckdb-go-bindings/lib/linux-arm64 v0.3.3 // indirect
github.com/duckdb/duckdb-go-bindings/lib/windows-amd64 v0.3.3 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/go-faster/city v1.0.1 // indirect
github.com/go-faster/errors v0.7.1 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
@@ -44,9 +51,8 @@ require (
github.com/golang-sql/sqlexp v0.1.0 // indirect
github.com/golang/snappy v1.0.0 // indirect
github.com/google/flatbuffers v25.12.19+incompatible // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/hashicorp/go-version v1.8.0 // indirect
github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.3 // indirect
@@ -61,7 +67,9 @@ require (
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/paulmach/orb v0.12.0 // indirect
github.com/pierrec/lz4/v4 v4.1.25 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pkg/errors v0.9.1 // indirect
@@ -70,6 +78,7 @@ require (
github.com/richardlehane/msoleps v1.0.4 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/samber/lo v1.49.1 // indirect
github.com/segmentio/asm v1.2.1 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/tiendc/go-deepcopy v1.7.1 // indirect
github.com/tkrajina/go-reflector v0.5.8 // indirect
@@ -84,6 +93,9 @@ require (
github.com/xuri/nfp v0.0.2-0.20250530014748-2ddeb826f9a9 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
github.com/zeebo/xxh3 v1.1.0 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.40.0 // indirect

go.sum

@@ -16,6 +16,10 @@ github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.1.1 h1:bFWuo
github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.1.1/go.mod h1:Vih/3yc6yac2JzU4hzpaDupBJP0Flaia9rXXrU8xyww=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/ClickHouse/ch-go v0.71.0 h1:bUdZ/EZj/LcVHsMqaRUP2holqygrPWQKeMjc6nZoyRM=
github.com/ClickHouse/ch-go v0.71.0/go.mod h1:NwbNc+7jaqfY58dmdDUbG4Jl22vThgx1cYjBw0vtgXw=
github.com/ClickHouse/clickhouse-go/v2 v2.43.0 h1:fUR05TrF1GyvLDa/mAQjkx7KbgwdLRffs2n9O3WobtE=
github.com/ClickHouse/clickhouse-go/v2 v2.43.0/go.mod h1:o6jf7JM/zveWC/PP277BLxjHy5KjnGX/jfljhM4s34g=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/apache/arrow-go/v18 v18.5.1 h1:yaQ6zxMGgf9YCYw4/oaeOU3AULySDlAYDOcnr4LdHdI=
@@ -52,6 +56,10 @@ github.com/duckdb/duckdb-go/v2 v2.5.5 h1:TlK8ipnzoKW2aNrjGqRkFWLCDpJDxR/VwH8ezEc
github.com/duckdb/duckdb-go/v2 v2.5.5/go.mod h1:6uIbC3gz36NCEygECzboygOo/Z9TeVwox/puG+ohWV0=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=
github.com/go-faster/city v1.0.1/go.mod h1:jKcUJId49qdW3L1qKHH/3wPeUstCVpVSXTM6vO3VcTw=
github.com/go-faster/errors v0.7.1 h1:MkJTnDoEdi9pDabt1dpWf7AA8/BaSYZqibYyhZ20AYg=
github.com/go-faster/errors v0.7.1/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
@@ -62,19 +70,23 @@ github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 h1:au07oEsX2xN0ktxqI+Sida1w446QrXBRJ0nee3SNZlA=
github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang-sql/sqlexp v0.1.0 h1:ZCD6MBpcuOVfGVqsEmY5/4FtYiKz6tSyUv9LPEDei6A=
github.com/golang-sql/sqlexp v0.1.0/go.mod h1:J4ad9Vo8ZCWQ2GMrC4UCQy1JpCbwU9m3EOqtpKwwwHI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/flatbuffers v25.12.19+incompatible h1:haMV2JRRJCe1998HeW/p0X9UaMTK6SDo0ffLn2+DbLs=
github.com/google/flatbuffers v25.12.19+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
@@ -83,20 +95,29 @@ github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/go-version v1.8.0 h1:KAkNb1HAiZd1ukkxDFGmokVZe1Xy9HG6NUp+bPle2i4=
github.com/hashicorp/go-version v1.8.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e h1:Q3+PugElBCf4PFpxhErSzU3/PY5sFL5Z6rfv4AbGAck=
github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e/go.mod h1:alcuEEnZsY1WQsagKhZDsoPCRoOijYqhZvPwLG0kzVs=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/asmfmt v1.3.2 h1:4Ri7ox3EwapiOjCki+hw14RyKk201CN4rzyCJRFLpK4=
github.com/klauspost/asmfmt v1.3.2/go.mod h1:AG8TuvYojzulgDAMCnYn50l/5QV3Bs/tp6j0HLHbNSE=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.18.3 h1:9PJRvfbmTabkOX8moIpXPbMMbYN60bWImDDU7L+/6zw=
github.com/klauspost/compress v1.18.3/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/labstack/echo/v4 v4.13.3 h1:pwhpCPrTl5qry5HRdM5FwdXnhXSLSY+WE+YQSeCaafY=
@@ -134,8 +155,16 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/paulmach/orb v0.12.0 h1:z+zOwjmG3MyEEqzv92UN49Lg1JFYx0L9GpGKNVDKk1s=
github.com/paulmach/orb v0.12.0/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
github.com/paulmach/protoscan v0.2.1/go.mod h1:SpcSwydNLrxUGSDvXvO0P7g7AuhJ7lcKfDlhJCDw2gY=
github.com/pierrec/lz4/v4 v4.1.25 h1:kocOqRffaIbU5djlIBr7Wh+cx82C0vtFb0fOurZHqD0=
github.com/pierrec/lz4/v4 v4.1.25/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
@@ -159,6 +188,8 @@ github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/samber/lo v1.49.1 h1:4BIFyVfuQSEpluc7Fua+j1NolZHiEHEpaSEKdsH0tew=
github.com/samber/lo v1.49.1/go.mod h1:dO6KHFzUKXgP8LDhU0oI8d2hekjXnGOu0DB8Jecxd6o=
github.com/segmentio/asm v1.2.1 h1:DTNbBqs57ioxAD4PrArqftgypG4/qNpXoJx8TVXxPR0=
github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/sijms/go-ora/v2 v2.9.0 h1:+iQbUeTeCOFMb5BsOMgUhV8KWyrv9yjKpcK4x7+MFrg=
@@ -169,6 +200,7 @@ github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpE
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
@@ -176,6 +208,7 @@ github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/taosdata/driver-go/v3 v3.7.8 h1:N2H6HLLZH2ve2ipcoFgG9BJS+yW0XksqNYwEdSmHaJk=
github.com/taosdata/driver-go/v3 v3.7.8/go.mod h1:gSxBEPOueMg0rTmMO1Ug6aeD7AwGdDGvUtLrsDTTpYc=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tiendc/go-deepcopy v1.7.1 h1:LnubftI6nYaaMOcaz0LphzwraqN8jiWTwm416sitff4=
github.com/tiendc/go-deepcopy v1.7.1/go.mod h1:4bKjNC2r7boYOkD2IOuZpYjmlDdzjbpTRyCx+goBCJQ=
github.com/tkrajina/go-reflector v0.5.8 h1:yPADHrwmUbMq4RGEyaOUpz2H90sRsETNVpjzo3DLVQQ=
@@ -192,8 +225,10 @@ github.com/wailsapp/wails/v2 v2.11.0 h1:seLacV8pqupq32IjS4Y7V8ucab0WZwtK6VvUVxSB
github.com/wailsapp/wails/v2 v2.11.0/go.mod h1:jrf0ZaM6+GBc1wRmXsM8cIvzlg0karYin3erahI4+0k=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
github.com/xdg-go/scram v1.2.0 h1:bYKF2AEwG5rqd1BumT4gAnvwU/M9nBp2pTSxeZw7Wvs=
github.com/xdg-go/scram v1.2.0/go.mod h1:3dlrS0iBaWKYVt2ZfA4cj48umJZ+cAEbR6/SjLA88I8=
github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/xuri/efp v0.0.1 h1:fws5Rv3myXyYni8uwj2qKjVaRP30PdjeYe2Y6FDsCL8=
@@ -202,38 +237,66 @@ github.com/xuri/excelize/v2 v2.10.0 h1:8aKsP7JD39iKLc6dH5Tw3dgV3sPRh8uRVXu/fMstf
github.com/xuri/excelize/v2 v2.10.0/go.mod h1:SC5TzhQkaOsTWpANfm+7bJCldzcnU/jrhqkTi/iBHBU=
github.com/xuri/nfp v0.0.2-0.20250530014748-2ddeb826f9a9 h1:+C0TIdyyYmzadGaL/HBLbf3WdLgC29pgyhTjAT/0nuE=
github.com/xuri/nfp v0.0.2-0.20250530014748-2ddeb826f9a9/go.mod h1:WwHg+CVyzlv/TX9xqBFXEZAuxOPxn2k1GNHwG41IIUQ=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=
github.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/xxh3 v1.1.0 h1:s7DLGDK45Dyfg7++yxI0khrfwq9661w9EN78eP/UZVs=
github.com/zeebo/xxh3 v1.1.0/go.mod h1:IisAie1LELR4xhVinxWS5+zf1lA4p0MW4T+w+W07F5s=
go.mongodb.org/mongo-driver v1.11.4/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5quethTUn9WM+2g=
go.mongodb.org/mongo-driver v1.17.9 h1:IexDdCuuNJ3BHrELgBlyaH9p60JXAvdzWR128q+U5tU=
go.mongodb.org/mongo-driver v1.17.9/go.mod h1:LlOhpH5NUEfhxcAwG0UEkMqwYcc4JU18gtCdGudk/tQ=
go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF3xaE=
go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzybRWdyYUs8K/0=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/image v0.25.0 h1:Y6uW6rH1y5y/LK1J8BPWZtr6yZ7hrsy6hFrXjgsc2fQ=
golang.org/x/image v0.25.0/go.mod h1:tCAmOEGthTtkalusGp1g3xa2gke8J6c2N565dTyl9Rs=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200810151505-1b9f1253b3ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -260,15 +323,25 @@ golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhSt0ABwskkZKjD3bXGnZGpNY=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -16,6 +16,7 @@ import (
"GoNavi-Wails/internal/db"
"GoNavi-Wails/internal/logger"
proxytunnel "GoNavi-Wails/internal/proxy"
"github.com/google/uuid"
)
const dbCachePingInterval = 30 * time.Second
@@ -25,19 +26,27 @@ type cachedDatabase struct {
lastPing time.Time
}
type queryContext struct {
cancel context.CancelFunc
started time.Time
}
// App struct
type App struct {
ctx context.Context
dbCache map[string]cachedDatabase // Cache for DB connections
mu sync.RWMutex // Mutex for cache access
updateMu sync.Mutex
updateState updateState
ctx context.Context
dbCache map[string]cachedDatabase // Cache for DB connections
mu sync.RWMutex // Mutex for cache access
updateMu sync.Mutex
updateState updateState
queryMu sync.RWMutex
runningQueries map[string]queryContext // queryID -> cancelFunc and start time
}
// NewApp creates a new App application struct
func NewApp() *App {
return &App{
dbCache: make(map[string]cachedDatabase),
dbCache: make(map[string]cachedDatabase),
runningQueries: make(map[string]queryContext),
}
}
@@ -74,20 +83,137 @@ func (a *App) Shutdown(ctx context.Context) {
logger.Close()
}
// Helper: Generate a unique key for the connection config
func getCacheKey(config connection.ConnectionConfig) string {
if !config.UseSSH {
config.SSH = connection.SSHConfig{}
func normalizeCacheKeyConfig(config connection.ConnectionConfig) connection.ConnectionConfig {
normalized := config
normalized.Type = strings.ToLower(strings.TrimSpace(normalized.Type))
// Timeout only governs Query/Ping behavior and must not be part of the physical-connection reuse key.
normalized.Timeout = 0
normalized.SavePassword = false
if !normalized.UseSSH {
normalized.SSH = connection.SSHConfig{}
}
if !config.UseProxy {
config.Proxy = connection.ProxyConfig{}
if !normalized.UseProxy {
normalized.Proxy = connection.ProxyConfig{}
}
if !normalized.UseHTTPTunnel {
normalized.HTTPTunnel = connection.HTTPTunnelConfig{}
}
b, _ := json.Marshal(config)
if isFileDatabaseType(normalized.Type) {
dsn := resolveFileDatabaseDSN(normalized)
// DuckDB/SQLite connections are identified solely by their file source; the other network fields do not take part in key computation.
normalized.Host = dsn
normalized.Database = ""
normalized.Port = 0
normalized.User = ""
normalized.Password = ""
normalized.URI = ""
normalized.Hosts = nil
normalized.Topology = ""
normalized.MySQLReplicaUser = ""
normalized.MySQLReplicaPassword = ""
normalized.ReplicaSet = ""
normalized.AuthSource = ""
normalized.ReadPreference = ""
normalized.MongoSRV = false
normalized.MongoAuthMechanism = ""
normalized.MongoReplicaUser = ""
normalized.MongoReplicaPassword = ""
normalized.UseHTTPTunnel = false
normalized.HTTPTunnel = connection.HTTPTunnelConfig{}
}
return normalized
}
func resolveFileDatabaseDSN(config connection.ConnectionConfig) string {
dsn := strings.TrimSpace(config.Host)
if dsn == "" {
dsn = strings.TrimSpace(config.Database)
}
if dsn == "" {
dsn = ":memory:"
}
return dsn
}
// Helper: Generate a unique key for the connection config
func getCacheKey(config connection.ConnectionConfig) string {
normalized := normalizeCacheKeyConfig(config)
b, _ := json.Marshal(normalized)
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:])
}
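As a minimal standalone sketch of the hashing scheme in getCacheKey (the struct here is an illustrative stand-in for connection.ConnectionConfig, not the app's type): zeroing volatile fields before JSON-marshalling makes two configs that differ only in Timeout hash to the same key.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// conf is a pared-down stand-in for connection.ConnectionConfig.
type conf struct {
	Type    string
	Host    string
	Timeout int
}

// cacheKey zeroes fields that must not affect connection reuse,
// then hashes the canonical JSON form, mirroring getCacheKey.
func cacheKey(c conf) string {
	c.Timeout = 0 // query timeout does not identify a physical connection
	b, _ := json.Marshal(c)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	a := conf{Type: "duckdb", Host: "songs.duckdb", Timeout: 30}
	b := a
	b.Timeout = 120
	fmt.Println(cacheKey(a) == cacheKey(b)) // true: only Timeout differs
}
```

This is also what TestGetCacheKey_IgnoreTimeout below asserts against the real implementation.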
func shortCacheKey(cacheKey string) string {
shortKey := cacheKey
if len(shortKey) > 12 {
shortKey = shortKey[:12]
}
return shortKey
}
func shouldRefreshCachedConnection(err error) bool {
if err == nil {
return false
}
normalized := strings.ToLower(normalizeErrorMessage(err))
if normalized == "" {
return false
}
patterns := []string{
"invalid connection",
"bad connection",
"database is closed",
"connection is already closed",
"use of closed network connection",
"broken pipe",
"connection reset by peer",
"server has gone away",
"eof",
}
for _, pattern := range patterns {
if strings.Contains(normalized, pattern) {
return true
}
}
return false
}
func (a *App) invalidateCachedDatabase(config connection.ConnectionConfig, reason error) bool {
effectiveConfig := applyGlobalProxyToConnection(config)
key := getCacheKey(effectiveConfig)
shortKey := shortCacheKey(key)
a.mu.Lock()
defer a.mu.Unlock()
entry, exists := a.dbCache[key]
if !exists || entry.inst == nil {
return false
}
if closeErr := entry.inst.Close(); closeErr != nil {
logger.Error(closeErr, "关闭失效缓存连接失败:缓存Key=%s", shortKey)
}
delete(a.dbCache, key)
if reason != nil {
logger.Errorf("检测到连接失效,已清理缓存连接:%s 缓存Key=%s 原因=%s", formatConnSummary(effectiveConfig), shortKey, normalizeErrorMessage(reason))
} else {
logger.Infof("已清理缓存连接:%s 缓存Key=%s", formatConnSummary(effectiveConfig), shortKey)
}
return true
}
func wrapConnectError(config connection.ConnectionConfig, err error) error {
if err == nil {
return nil
@@ -182,6 +308,12 @@ func formatConnSummary(config connection.ConnectionConfig) string {
b.WriteString(" 代理认证=已配置")
}
}
if config.UseHTTPTunnel {
b.WriteString(fmt.Sprintf(" HTTP隧道=%s:%d", strings.TrimSpace(config.HTTPTunnel.Host), config.HTTPTunnel.Port))
if strings.TrimSpace(config.HTTPTunnel.User) != "" {
b.WriteString(" HTTP隧道认证=已配置")
}
}
if config.Type == "custom" {
driver := strings.TrimSpace(config.Driver)
@@ -207,16 +339,51 @@ func (a *App) getDatabase(config connection.ConnectionConfig) (db.Database, erro
return a.getDatabaseWithPing(config, false)
}
func (a *App) openDatabaseIsolated(config connection.ConnectionConfig) (db.Database, error) {
effectiveConfig := applyGlobalProxyToConnection(config)
if supported, reason := db.DriverRuntimeSupportStatus(effectiveConfig.Type); !supported {
if strings.TrimSpace(reason) == "" {
reason = fmt.Sprintf("%s 驱动未启用,请先在驱动管理中安装启用", strings.TrimSpace(effectiveConfig.Type))
}
return nil, withLogHint{err: fmt.Errorf("%s", reason), logPath: logger.Path()}
}
dbInst, err := db.NewDatabase(effectiveConfig.Type)
if err != nil {
return nil, err
}
connectConfig, proxyErr := resolveDialConfigWithProxy(effectiveConfig)
if proxyErr != nil {
_ = dbInst.Close()
return nil, wrapConnectError(effectiveConfig, proxyErr)
}
if err := dbInst.Connect(connectConfig); err != nil {
_ = dbInst.Close()
return nil, wrapConnectError(effectiveConfig, err)
}
return dbInst, nil
}
func (a *App) getDatabaseWithPing(config connection.ConnectionConfig, forcePing bool) (db.Database, error) {
key := getCacheKey(config)
effectiveConfig := applyGlobalProxyToConnection(config)
isFileDB := isFileDatabaseType(effectiveConfig.Type)
key := getCacheKey(effectiveConfig)
shortKey := shortCacheKey(key)
if isFileDB {
rawDSN := resolveFileDatabaseDSN(effectiveConfig)
normalizedDSN := resolveFileDatabaseDSN(normalizeCacheKeyConfig(effectiveConfig))
logger.Infof("文件库连接缓存探测:类型=%s 原始DSN=%s 归一化DSN=%s timeout=%ds forcePing=%t 缓存Key=%s",
strings.TrimSpace(effectiveConfig.Type), rawDSN, normalizedDSN, effectiveConfig.Timeout, forcePing, shortKey)
}
if supported, reason := db.DriverRuntimeSupportStatus(config.Type); !supported {
if supported, reason := db.DriverRuntimeSupportStatus(effectiveConfig.Type); !supported {
if strings.TrimSpace(reason) == "" {
reason = fmt.Sprintf("%s 驱动未启用,请先在驱动管理中安装启用", strings.TrimSpace(config.Type))
reason = fmt.Sprintf("%s 驱动未启用,请先在驱动管理中安装启用", strings.TrimSpace(effectiveConfig.Type))
}
// Best-effort cleanup: if cached instance exists for this exact config, close it.
a.mu.Lock()
@@ -232,6 +399,9 @@ func (a *App) getDatabaseWithPing(config connection.ConnectionConfig, forcePing
entry, ok := a.dbCache[key]
a.mu.RUnlock()
if ok {
if isFileDB {
logger.Infof("命中文件库连接缓存:类型=%s 缓存Key=%s", strings.TrimSpace(effectiveConfig.Type), shortKey)
}
needPing := forcePing
if !needPing {
lastPing := entry.lastPing
@@ -241,6 +411,9 @@ func (a *App) getDatabaseWithPing(config connection.ConnectionConfig, forcePing
}
if !needPing {
if isFileDB {
logger.Infof("复用文件库连接缓存(免 Ping):类型=%s 缓存Key=%s", strings.TrimSpace(effectiveConfig.Type), shortKey)
}
return entry.inst, nil
}
@@ -252,9 +425,12 @@ func (a *App) getDatabaseWithPing(config connection.ConnectionConfig, forcePing
a.dbCache[key] = cur
}
a.mu.Unlock()
if isFileDB {
logger.Infof("复用文件库连接缓存(Ping 成功):类型=%s 缓存Key=%s", strings.TrimSpace(effectiveConfig.Type), shortKey)
}
return entry.inst, nil
} else {
logger.Error(err, "缓存连接不可用,准备重建:%s 缓存Key=%s", formatConnSummary(config), shortKey)
logger.Error(err, "缓存连接不可用,准备重建:%s 缓存Key=%s", formatConnSummary(effectiveConfig), shortKey)
}
// Ping failed: remove cached instance (best effort)
@@ -266,26 +442,32 @@ func (a *App) getDatabaseWithPing(config connection.ConnectionConfig, forcePing
delete(a.dbCache, key)
}
a.mu.Unlock()
if isFileDB {
logger.Infof("文件库缓存连接已剔除,准备新建连接:类型=%s 缓存Key=%s", strings.TrimSpace(effectiveConfig.Type), shortKey)
}
}
if isFileDB {
logger.Infof("未命中文件库连接缓存,开始创建连接:类型=%s 缓存Key=%s", strings.TrimSpace(effectiveConfig.Type), shortKey)
}
logger.Infof("获取数据库连接:%s 缓存Key=%s", formatConnSummary(config), shortKey)
logger.Infof("创建数据库驱动实例:类型=%s 缓存Key=%s", config.Type, shortKey)
dbInst, err := db.NewDatabase(config.Type)
logger.Infof("获取数据库连接:%s 缓存Key=%s", formatConnSummary(effectiveConfig), shortKey)
logger.Infof("创建数据库驱动实例:类型=%s 缓存Key=%s", effectiveConfig.Type, shortKey)
dbInst, err := db.NewDatabase(effectiveConfig.Type)
if err != nil {
logger.Error(err, "创建数据库驱动实例失败:类型=%s 缓存Key=%s", config.Type, shortKey)
logger.Error(err, "创建数据库驱动实例失败:类型=%s 缓存Key=%s", effectiveConfig.Type, shortKey)
return nil, err
}
connectConfig, proxyErr := resolveDialConfigWithProxy(config)
connectConfig, proxyErr := resolveDialConfigWithProxy(effectiveConfig)
if proxyErr != nil {
wrapped := wrapConnectError(config, proxyErr)
logger.Error(wrapped, "连接代理准备失败:%s 缓存Key=%s", formatConnSummary(config), shortKey)
wrapped := wrapConnectError(effectiveConfig, proxyErr)
logger.Error(wrapped, "连接代理准备失败:%s 缓存Key=%s", formatConnSummary(effectiveConfig), shortKey)
return nil, wrapped
}
if err := dbInst.Connect(connectConfig); err != nil {
wrapped := wrapConnectError(config, err)
logger.Error(wrapped, "建立数据库连接失败:%s 缓存Key=%s", formatConnSummary(config), shortKey)
wrapped := wrapConnectError(effectiveConfig, err)
logger.Error(wrapped, "建立数据库连接失败:%s 缓存Key=%s", formatConnSummary(effectiveConfig), shortKey)
return nil, wrapped
}
@@ -296,11 +478,54 @@ func (a *App) getDatabaseWithPing(config connection.ConnectionConfig, forcePing
a.mu.Unlock()
// Prefer the existing cached connection to avoid duplicate entries created by a cache race.
_ = dbInst.Close()
if isFileDB {
logger.Infof("并发创建命中已存在文件库连接,关闭新建连接并复用缓存:类型=%s 缓存Key=%s", strings.TrimSpace(effectiveConfig.Type), shortKey)
}
return existing.inst, nil
}
a.dbCache[key] = cachedDatabase{inst: dbInst, lastPing: now}
a.mu.Unlock()
logger.Infof("数据库连接成功并写入缓存:%s 缓存Key=%s", formatConnSummary(config), shortKey)
logger.Infof("数据库连接成功并写入缓存:%s 缓存Key=%s", formatConnSummary(effectiveConfig), shortKey)
return dbInst, nil
}
// generateQueryID generates a unique ID for a query using UUID v4
func generateQueryID() string {
return "query-" + uuid.New().String()
}
// CancelQuery cancels a running query by its ID
func (a *App) CancelQuery(queryID string) connection.QueryResult {
a.queryMu.Lock()
defer a.queryMu.Unlock()
if ctx, exists := a.runningQueries[queryID]; exists {
ctx.cancel()
delete(a.runningQueries, queryID)
logger.Infof("查询已取消:queryID=%s", queryID)
return connection.QueryResult{Success: true, Message: "查询已取消"}
}
logger.Warnf("取消查询失败:queryID=%s 不存在或已完成", queryID)
return connection.QueryResult{Success: false, Message: "查询不存在或已完成"}
}
// CleanupStaleQueries removes queries older than maxAge
func (a *App) CleanupStaleQueries(maxAge time.Duration) {
a.queryMu.Lock()
defer a.queryMu.Unlock()
now := time.Now()
for id, ctx := range a.runningQueries {
if now.Sub(ctx.started) > maxAge {
// The query has likely finished or is stuck; release its context and stop tracking it.
ctx.cancel()
delete(a.runningQueries, id)
}
}
}
// GenerateQueryID generates a unique query ID for cancellation tracking
func (a *App) GenerateQueryID() string {
return generateQueryID()
}
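The cancellation bookkeeping above (runningQueries plus CancelQuery) follows a common Go pattern: a mutex-guarded map from query ID to context.CancelFunc. A minimal standalone sketch, with illustrative names rather than the app's API:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// registry tracks one cancel function per query ID, mirroring runningQueries.
type registry struct {
	mu      sync.Mutex
	running map[string]context.CancelFunc
}

func newRegistry() *registry {
	return &registry{running: make(map[string]context.CancelFunc)}
}

// start registers a cancellable context under id and returns it;
// the query goroutine should watch ctx.Done().
func (r *registry) start(id string) context.Context {
	ctx, cancel := context.WithCancel(context.Background())
	r.mu.Lock()
	r.running[id] = cancel
	r.mu.Unlock()
	return ctx
}

// cancel cancels and forgets the query; it reports whether id was known.
func (r *registry) cancel(id string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if c, ok := r.running[id]; ok {
		c()
		delete(r.running, id)
		return true
	}
	return false
}

func main() {
	r := newRegistry()
	ctx := r.start("query-1")
	fmt.Println(r.cancel("query-1")) // true: the context is now done
	<-ctx.Done()
	fmt.Println(r.cancel("query-1")) // false: already removed
}
```

Deleting the entry inside cancel keeps the map from growing unboundedly; CleanupStaleQueries exists for entries whose owners never called back.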

View File

@@ -0,0 +1,63 @@
package app
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestGetCacheKey_IgnoreTimeout(t *testing.T) {
base := connection.ConnectionConfig{
Type: "duckdb",
Host: `C:\data\songs.duckdb`,
Timeout: 30,
UseProxy: false,
UseSSH: false,
}
modified := base
modified.Timeout = 120
left := getCacheKey(base)
right := getCacheKey(modified)
if left != right {
t.Fatalf("expected same cache key when only timeout differs, got %s vs %s", left, right)
}
}
func TestGetCacheKey_DuckDBHostAndDatabaseEquivalent(t *testing.T) {
withHost := connection.ConnectionConfig{
Type: "duckdb",
Host: `D:\music\songs.duckdb`,
}
withDatabase := connection.ConnectionConfig{
Type: "duckdb",
Database: `D:\music\songs.duckdb`,
}
left := getCacheKey(withHost)
right := getCacheKey(withDatabase)
if left != right {
t.Fatalf("expected same cache key for duckdb host/database path, got %s vs %s", left, right)
}
}
func TestGetCacheKey_KeepDatabaseIsolation(t *testing.T) {
a := connection.ConnectionConfig{
Type: "mysql",
Host: "127.0.0.1",
Port: 3306,
User: "root",
Password: "root",
Database: "db_a",
Timeout: 30,
}
b := a
b.Database = "db_b"
b.Timeout = 5
left := getCacheKey(a)
right := getCacheKey(b)
if left == right {
t.Fatalf("expected different cache key for different database targets")
}
}

View File

@@ -14,7 +14,7 @@ func normalizeRunConfig(config connection.ConnectionConfig, dbName string) conne
}
switch strings.ToLower(strings.TrimSpace(config.Type)) {
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "highgo", "vastbase", "sqlserver", "mongodb", "tdengine":
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "highgo", "vastbase", "sqlserver", "mongodb", "tdengine", "clickhouse":
// For these types, dbName denotes a "database" and must be written into the connection config to select the target database.
runConfig.Database = name
case "dameng":
@@ -36,6 +36,17 @@ func normalizeSchemaAndTable(config connection.ConnectionConfig, dbName string,
return rawDB, rawTable
}
dbType := strings.ToLower(strings.TrimSpace(config.Type))
if dbType == "sqlserver" {
// The SQL Server DB interface takes the database name as its first argument; the schema is parsed out of tableName (e.g. dbo.users) by the implementation itself.
// Passing the schema (dbo) as the first argument would produce invalid object names such as dbo.sys.columns.
targetDB := rawDB
if targetDB == "" {
targetDB = strings.TrimSpace(config.Database)
}
return targetDB, rawTable
}
if parts := strings.SplitN(rawTable, ".", 2); len(parts) == 2 {
schema := strings.TrimSpace(parts[0])
table := strings.TrimSpace(parts[1])
@@ -44,13 +55,10 @@ func normalizeSchemaAndTable(config connection.ConnectionConfig, dbName string,
}
}
switch strings.ToLower(strings.TrimSpace(config.Type)) {
switch dbType {
case "postgres", "kingbase", "highgo", "vastbase":
// Postgres/Kingbase/HighGo/Vastbase: in the UI, dbName is the "database"; the schema comes from tableName or defaults to public.
return "public", rawTable
case "sqlserver":
// SQL Server: dbName denotes the database; the schema defaults to dbo
return "dbo", rawTable
default:
// MySQL: dbName denotes the database; Oracle/Dameng: dbName denotes the schema/owner.
return rawDB, rawTable

View File

@@ -0,0 +1,51 @@
package app
import (
"testing"
"GoNavi-Wails/internal/connection"
)
func TestNormalizeSchemaAndTable_SQLServerKeepsDatabaseAndQualifiedTable(t *testing.T) {
t.Parallel()
schemaOrDb, table := normalizeSchemaAndTable(connection.ConnectionConfig{
Type: "sqlserver",
Database: "master",
}, "biz_db", "dbo.users")
if schemaOrDb != "biz_db" {
t.Fatalf("expected sqlserver first return value as database name, got %q", schemaOrDb)
}
if table != "dbo.users" {
t.Fatalf("expected sqlserver table name keep qualified form, got %q", table)
}
}
func TestNormalizeSchemaAndTable_SQLServerFallbackToConfigDatabase(t *testing.T) {
t.Parallel()
schemaOrDb, table := normalizeSchemaAndTable(connection.ConnectionConfig{
Type: "sqlserver",
Database: "biz_db",
}, "", "dbo.users")
if schemaOrDb != "biz_db" {
t.Fatalf("expected sqlserver fallback database from config, got %q", schemaOrDb)
}
if table != "dbo.users" {
t.Fatalf("expected sqlserver table name keep qualified form, got %q", table)
}
}
func TestNormalizeSchemaAndTable_PostgresStillSplitsQualifiedName(t *testing.T) {
t.Parallel()
schema, table := normalizeSchemaAndTable(connection.ConnectionConfig{
Type: "postgres",
}, "demo_db", "public.orders")
if schema != "public" || table != "orders" {
t.Fatalf("expected postgres qualified split to public.orders, got %q.%q", schema, table)
}
}

View File

@@ -12,8 +12,35 @@ import (
func resolveDialConfigWithProxy(raw connection.ConnectionConfig) (connection.ConnectionConfig, error) {
config := raw
if config.UseHTTPTunnel {
if config.UseProxy {
return connection.ConnectionConfig{}, fmt.Errorf("HTTP 隧道与普通代理不能同时启用")
}
tunnelHost := strings.TrimSpace(config.HTTPTunnel.Host)
if tunnelHost == "" {
return connection.ConnectionConfig{}, fmt.Errorf("HTTP 隧道主机不能为空")
}
tunnelPort := config.HTTPTunnel.Port
if tunnelPort <= 0 {
tunnelPort = 8080
}
if tunnelPort > 65535 {
return connection.ConnectionConfig{}, fmt.Errorf("HTTP 隧道端口无效:%d", config.HTTPTunnel.Port)
}
config.UseProxy = true
config.Proxy = connection.ProxyConfig{
Type: "http",
Host: tunnelHost,
Port: tunnelPort,
User: strings.TrimSpace(config.HTTPTunnel.User),
Password: config.HTTPTunnel.Password,
}
}
if !config.UseProxy {
config.Proxy = connection.ProxyConfig{}
config.UseHTTPTunnel = false
config.HTTPTunnel = connection.HTTPTunnelConfig{}
return config, nil
}
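The tunnel branch above rewrites an HTTP tunnel into an ordinary HTTP proxy config, defaulting the port to 8080 and rejecting out-of-range values. A standalone sketch with simplified stand-in types (not the app's connection package):

```go
package main

import (
	"fmt"
	"strings"
)

// tunnel and proxy are illustrative stand-ins for the HTTPTunnel/Proxy configs.
type tunnel struct {
	Host string
	Port int
	User string
}

type proxy struct {
	Type string
	Host string
	Port int
	User string
}

// toProxy validates the tunnel endpoint and rewrites it as an HTTP proxy,
// defaulting the port to 8080 when unset, as resolveDialConfigWithProxy does.
func toProxy(t tunnel) (proxy, error) {
	host := strings.TrimSpace(t.Host)
	if host == "" {
		return proxy{}, fmt.Errorf("HTTP tunnel host must not be empty")
	}
	port := t.Port
	if port <= 0 {
		port = 8080
	}
	if port > 65535 {
		return proxy{}, fmt.Errorf("invalid HTTP tunnel port: %d", t.Port)
	}
	return proxy{Type: "http", Host: host, Port: port, User: strings.TrimSpace(t.User)}, nil
}

func main() {
	p, err := toProxy(tunnel{Host: "gw.example.com"})
	fmt.Println(p.Port, err) // 8080 <nil>
}
```

Collapsing the tunnel into the existing proxy path means the rest of the dial code only ever has to reason about one proxy mechanism.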
@@ -22,6 +49,8 @@ func resolveDialConfigWithProxy(raw connection.ConnectionConfig) (connection.Con
return connection.ConnectionConfig{}, err
}
config.Proxy = normalizedProxy
config.UseHTTPTunnel = false
config.HTTPTunnel = connection.HTTPTunnelConfig{}
if config.UseSSH {
sshPort := config.SSH.Port
@@ -194,6 +223,8 @@ func defaultPortByType(driverType string) int {
return 1433
case "mongodb":
return 27017
case "clickhouse":
return 9000
case "highgo":
return 5866
default:

View File

@@ -0,0 +1,291 @@
package app
import (
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/logger"
proxytunnel "GoNavi-Wails/internal/proxy"
)
type globalProxySnapshot struct {
Enabled bool `json:"enabled"`
Proxy connection.ProxyConfig `json:"proxy"`
}
var globalProxyRuntime = struct {
mu sync.RWMutex
enabled bool
proxy connection.ProxyConfig
}{}
type localProxyTLSFallbackTransport struct {
primary *http.Transport
fallback *http.Transport
proxyEndpoint string
}
func currentGlobalProxyConfig() globalProxySnapshot {
globalProxyRuntime.mu.RLock()
defer globalProxyRuntime.mu.RUnlock()
if !globalProxyRuntime.enabled {
return globalProxySnapshot{
Enabled: false,
Proxy: connection.ProxyConfig{},
}
}
return globalProxySnapshot{
Enabled: true,
Proxy: globalProxyRuntime.proxy,
}
}
func setGlobalProxyConfig(enabled bool, proxyConfig connection.ProxyConfig) (globalProxySnapshot, error) {
if !enabled {
globalProxyRuntime.mu.Lock()
globalProxyRuntime.enabled = false
globalProxyRuntime.proxy = connection.ProxyConfig{}
globalProxyRuntime.mu.Unlock()
return currentGlobalProxyConfig(), nil
}
normalizedProxy, err := proxytunnel.NormalizeConfig(proxyConfig)
if err != nil {
return globalProxySnapshot{}, err
}
globalProxyRuntime.mu.Lock()
globalProxyRuntime.enabled = true
globalProxyRuntime.proxy = normalizedProxy
globalProxyRuntime.mu.Unlock()
return currentGlobalProxyConfig(), nil
}
func (a *App) ConfigureGlobalProxy(enabled bool, proxyConfig connection.ProxyConfig) connection.QueryResult {
snapshot, err := setGlobalProxyConfig(enabled, proxyConfig)
if err != nil {
return connection.QueryResult{Success: false, Message: err.Error()}
}
if snapshot.Enabled {
authState := ""
if strings.TrimSpace(snapshot.Proxy.User) != "" {
authState = "(认证:已配置)"
}
logger.Infof(
"全局代理已启用:%s://%s:%d%s",
strings.ToLower(strings.TrimSpace(snapshot.Proxy.Type)),
strings.TrimSpace(snapshot.Proxy.Host),
snapshot.Proxy.Port,
authState,
)
} else {
logger.Infof("全局代理已关闭")
}
return connection.QueryResult{
Success: true,
Message: "全局代理配置已生效",
Data: snapshot,
}
}
func (a *App) GetGlobalProxyConfig() connection.QueryResult {
return connection.QueryResult{
Success: true,
Message: "OK",
Data: currentGlobalProxyConfig(),
}
}
func applyGlobalProxyToConnection(config connection.ConnectionConfig) connection.ConnectionConfig {
effective := config
if effective.UseProxy || effective.UseHTTPTunnel {
return effective
}
if isFileDatabaseType(effective.Type) {
effective.Proxy = connection.ProxyConfig{}
return effective
}
snapshot := currentGlobalProxyConfig()
if !snapshot.Enabled {
effective.Proxy = connection.ProxyConfig{}
return effective
}
effective.UseProxy = true
effective.Proxy = snapshot.Proxy
return effective
}
func isFileDatabaseType(driverType string) bool {
switch strings.ToLower(strings.TrimSpace(driverType)) {
case "sqlite", "duckdb":
return true
default:
return false
}
}
func newHTTPClientWithGlobalProxy(timeout time.Duration) *http.Client {
client := &http.Client{
Timeout: timeout,
}
if transport := buildHTTPTransportWithGlobalProxy(); transport != nil {
client.Transport = transport
}
return client
}
func buildHTTPTransportWithGlobalProxy() http.RoundTripper {
baseTransport, ok := http.DefaultTransport.(*http.Transport)
if !ok || baseTransport == nil {
return nil
}
transport := baseTransport.Clone()
snapshot := currentGlobalProxyConfig()
if !snapshot.Enabled {
transport.Proxy = http.ProxyFromEnvironment
return transport
}
proxyURL, err := buildProxyURLFromConfig(snapshot.Proxy)
if err != nil {
logger.Warnf("全局代理配置无效,回退系统代理:%v", err)
transport.Proxy = http.ProxyFromEnvironment
return transport
}
transport.Proxy = http.ProxyURL(proxyURL)
if !isLoopbackProxyHost(snapshot.Proxy.Host) {
return transport
}
fallbackTransport := transport.Clone()
fallbackTransport.TLSClientConfig = cloneTLSConfigWithInsecureSkipVerify(fallbackTransport.TLSClientConfig)
return &localProxyTLSFallbackTransport{
primary: transport,
fallback: fallbackTransport,
proxyEndpoint: proxyURL.Redacted(),
}
}
func (t *localProxyTLSFallbackTransport) RoundTrip(req *http.Request) (*http.Response, error) {
resp, err := t.primary.RoundTrip(req)
if err == nil {
return resp, nil
}
if !isTLSFallbackCandidate(req.Method, err) {
return nil, err
}
retryReq, cloneErr := cloneRequestForRetry(req)
if cloneErr != nil {
return nil, err
}
logger.Warnf("检测到本地代理 TLS 证书不受信任,启用兼容回退:代理=%s 目标=%s 错误=%v", t.proxyEndpoint, req.URL.String(), err)
return t.fallback.RoundTrip(retryReq)
}
func isTLSFallbackCandidate(method string, err error) bool {
if !isIdempotentRequestMethod(method) {
return false
}
return isUnknownAuthorityError(err)
}
func isIdempotentRequestMethod(method string) bool {
switch strings.ToUpper(strings.TrimSpace(method)) {
case http.MethodGet, http.MethodHead:
return true
default:
return false
}
}
func cloneRequestForRetry(req *http.Request) (*http.Request, error) {
cloned := req.Clone(req.Context())
if req.Body == nil || req.Body == http.NoBody {
return cloned, nil
}
if req.GetBody == nil {
return nil, fmt.Errorf("request body not replayable")
}
body, err := req.GetBody()
if err != nil {
return nil, err
}
cloned.Body = body
return cloned, nil
}
func isUnknownAuthorityError(err error) bool {
var unknownErr x509.UnknownAuthorityError
if errors.As(err, &unknownErr) {
return true
}
return strings.Contains(strings.ToLower(err.Error()), "x509: certificate signed by unknown authority")
}
func cloneTLSConfigWithInsecureSkipVerify(base *tls.Config) *tls.Config {
if base == nil {
return &tls.Config{InsecureSkipVerify: true}
}
cloned := base.Clone()
cloned.InsecureSkipVerify = true
return cloned
}
func isLoopbackProxyHost(host string) bool {
trimmed := strings.TrimSpace(host)
if trimmed == "" {
return false
}
if strings.EqualFold(trimmed, "localhost") {
return true
}
ip := net.ParseIP(trimmed)
if ip == nil {
return false
}
return ip.IsLoopback()
}
func buildProxyURLFromConfig(proxyConfig connection.ProxyConfig) (*url.URL, error) {
normalizedProxy, err := proxytunnel.NormalizeConfig(proxyConfig)
if err != nil {
return nil, err
}
proxyType := strings.ToLower(strings.TrimSpace(normalizedProxy.Type))
if proxyType != "http" && proxyType != "socks5" {
return nil, fmt.Errorf("不支持的代理类型:%s", normalizedProxy.Type)
}
if strings.TrimSpace(normalizedProxy.Host) == "" {
return nil, fmt.Errorf("代理地址不能为空")
}
if normalizedProxy.Port <= 0 || normalizedProxy.Port > 65535 {
return nil, fmt.Errorf("代理端口无效:%d", normalizedProxy.Port)
}
proxyURL := &url.URL{
Scheme: proxyType,
Host: net.JoinHostPort(strings.TrimSpace(normalizedProxy.Host), strconv.Itoa(normalizedProxy.Port)),
}
if strings.TrimSpace(normalizedProxy.User) != "" {
proxyURL.User = url.UserPassword(strings.TrimSpace(normalizedProxy.User), normalizedProxy.Password)
}
return proxyURL, nil
}
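The proxy-URL construction above can be exercised in isolation. The sketch below mirrors the shape of `buildProxyURLFromConfig` (scheme plus `host:port`, credentials attached only when a user is set); the helper name `buildProxyURL` and its flat parameters are illustrative, not from the repo:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strconv"
)

// buildProxyURL is an illustrative reduction of buildProxyURLFromConfig:
// validate the port, join host and port, and attach credentials if present.
func buildProxyURL(scheme, host string, port int, user, password string) (*url.URL, error) {
	if port <= 0 || port > 65535 {
		return nil, fmt.Errorf("invalid proxy port: %d", port)
	}
	u := &url.URL{
		Scheme: scheme,
		Host:   net.JoinHostPort(host, strconv.Itoa(port)),
	}
	if user != "" {
		u.User = url.UserPassword(user, password)
	}
	return u, nil
}

func main() {
	u, err := buildProxyURL("http", "127.0.0.1", 8080, "alice", "s3cret")
	if err != nil {
		panic(err)
	}
	// Redacted() masks the password, which is why the fallback transport
	// above logs proxyURL.Redacted() rather than the raw URL.
	fmt.Println(u.Redacted()) // http://alice:xxxxx@127.0.0.1:8080
}
```

Note that `url.URL.Redacted` replaces any password with `xxxxx`, so credentials never reach the log line emitted by `localProxyTLSFallbackTransport`.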

View File

@@ -7,6 +7,7 @@ import (
"time"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"GoNavi-Wails/internal/logger"
"GoNavi-Wails/internal/utils"
)
@@ -88,6 +89,8 @@ func (a *App) CreateDatabase(config connection.ConnectionConfig, dbName string)
query = fmt.Sprintf("CREATE DATABASE \"%s\"", escapedDbName)
} else if dbType == "tdengine" {
query = fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", quoteIdentByType(dbType, dbName))
} else if dbType == "clickhouse" {
query = fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", quoteIdentByType(dbType, dbName))
} else if dbType == "mariadb" || dbType == "diros" {
// MariaDB uses same syntax as MySQL
} else if dbType == "sphinx" {
@@ -110,16 +113,39 @@ func resolveDDLDBType(config connection.ConnectionConfig) string {
driver := strings.ToLower(strings.TrimSpace(config.Driver))
switch driver {
case "postgresql":
case "postgresql", "postgres", "pg", "pq", "pgx":
return "postgres"
case "dm":
case "dm", "dameng", "dm8":
return "dameng"
case "sqlite3":
case "sqlite3", "sqlite":
return "sqlite"
case "sphinxql":
return "sphinx"
case "diros", "doris":
return "diros"
case "kingbase", "kingbase8", "kingbasees", "kingbasev8":
return "kingbase"
case "highgo":
return "highgo"
case "vastbase":
return "vastbase"
}
switch {
case strings.Contains(driver, "postgres"):
return "postgres"
case strings.Contains(driver, "kingbase"):
return "kingbase"
case strings.Contains(driver, "highgo"):
return "highgo"
case strings.Contains(driver, "vastbase"):
return "vastbase"
case strings.Contains(driver, "sqlite"):
return "sqlite"
case strings.Contains(driver, "sphinx"):
return "sphinx"
case strings.Contains(driver, "diros"), strings.Contains(driver, "doris"):
return "diros"
default:
return driver
}
@@ -162,7 +188,7 @@ func buildRunConfigForDDL(config connection.ConnectionConfig, dbType string, dbN
if strings.EqualFold(strings.TrimSpace(config.Type), "custom") {
// For custom connections the meaning of dbName depends on the driver; align with the built-in type behavior for common drivers where possible.
switch dbType {
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "vastbase", "dameng":
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "vastbase", "dameng", "clickhouse":
if strings.TrimSpace(dbName) != "" {
runConfig.Database = strings.TrimSpace(dbName)
}
@@ -184,7 +210,7 @@ func (a *App) RenameDatabase(config connection.ConnectionConfig, oldName string,
dbType := resolveDDLDBType(config)
switch dbType {
case "mysql", "mariadb", "diros", "sphinx":
return connection.QueryResult{Success: false, Message: "MySQL/MariaDB/Diros/Sphinx 不支持直接重命名数据库,请新建库后迁移数据"}
return connection.QueryResult{Success: false, Message: "MySQL/MariaDB/Doris/Sphinx 不支持直接重命名数据库,请新建库后迁移数据"}
case "postgres", "kingbase", "highgo", "vastbase":
if strings.EqualFold(strings.TrimSpace(config.Database), oldName) {
return connection.QueryResult{Success: false, Message: "当前连接正在使用目标数据库,请先连接到其他数据库后再重命名"}
@@ -216,7 +242,7 @@ func (a *App) DropDatabase(config connection.ConnectionConfig, dbName string) co
sql string
)
switch dbType {
case "mysql", "mariadb", "diros", "tdengine":
case "mysql", "mariadb", "diros", "tdengine", "clickhouse":
runConfig = config
runConfig.Database = ""
sql = fmt.Sprintf("DROP DATABASE %s", quoteIdentByType(dbType, dbName))
@@ -255,7 +281,7 @@ func (a *App) RenameTable(config connection.ConnectionConfig, dbName string, old
dbType := resolveDDLDBType(config)
switch dbType {
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "sqlite", "duckdb", "oracle", "dameng", "highgo", "vastbase", "sqlserver":
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "sqlite", "duckdb", "oracle", "dameng", "highgo", "vastbase", "sqlserver", "clickhouse":
default:
return connection.QueryResult{Success: false, Message: fmt.Sprintf("当前数据源(%s)暂不支持重命名表", dbType)}
}
@@ -269,7 +295,7 @@ func (a *App) RenameTable(config connection.ConnectionConfig, dbName string, old
var sql string
switch dbType {
case "mysql", "mariadb", "diros", "sphinx":
case "mysql", "mariadb", "diros", "sphinx", "clickhouse":
newQualifiedTable := quoteTableIdentByType(dbType, schemaName, newTableName)
sql = fmt.Sprintf("RENAME TABLE %s TO %s", oldQualifiedTable, newQualifiedTable)
case "sqlserver":
@@ -301,7 +327,7 @@ func (a *App) DropTable(config connection.ConnectionConfig, dbName string, table
dbType := resolveDDLDBType(config)
switch dbType {
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "sqlite", "duckdb", "oracle", "dameng", "highgo", "vastbase", "sqlserver", "tdengine":
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "sqlite", "duckdb", "oracle", "dameng", "highgo", "vastbase", "sqlserver", "tdengine", "clickhouse":
default:
return connection.QueryResult{Success: false, Message: fmt.Sprintf("当前数据源(%s)暂不支持删除表", dbType)}
}
@@ -350,13 +376,121 @@ func (a *App) MySQLShowCreateTable(config connection.ConnectionConfig, dbName st
}
func (a *App) DBQuery(config connection.ConnectionConfig, dbName string, query string) connection.QueryResult {
return a.DBQueryWithCancel(config, dbName, query, "")
}
func (a *App) DBQueryWithCancel(config connection.ConnectionConfig, dbName string, query string, queryID string) connection.QueryResult {
runConfig := normalizeRunConfig(config, dbName)
// Generate query ID if not provided
if queryID == "" {
queryID = generateQueryID()
}
dbInst, err := a.getDatabase(runConfig)
if err != nil {
logger.Error(err, "DBQuery 获取连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error(), QueryID: queryID}
}
query = sanitizeSQLForPgLike(runConfig.Type, query)
timeoutSeconds := runConfig.Timeout
if timeoutSeconds <= 0 {
timeoutSeconds = 30
}
ctx, cancel := utils.ContextWithTimeout(time.Duration(timeoutSeconds) * time.Second)
defer cancel()
// Store cancel function for potential manual cancellation
a.queryMu.Lock()
a.runningQueries[queryID] = queryContext{
cancel: cancel,
started: time.Now(),
}
a.queryMu.Unlock()
// Ensure query is removed from tracking when done
defer func() {
a.queryMu.Lock()
delete(a.runningQueries, queryID)
a.queryMu.Unlock()
}()
lowerQuery := strings.TrimSpace(strings.ToLower(query))
isReadQuery := strings.HasPrefix(lowerQuery, "select") || strings.HasPrefix(lowerQuery, "show") || strings.HasPrefix(lowerQuery, "describe") || strings.HasPrefix(lowerQuery, "explain")
// find/count/aggregate inside MongoDB JSON commands also count as read queries
if !isReadQuery && strings.ToLower(strings.TrimSpace(runConfig.Type)) == "mongodb" && strings.HasPrefix(strings.TrimSpace(query), "{") {
isReadQuery = true
}
runReadQuery := func(inst db.Database) ([]map[string]interface{}, []string, error) {
if q, ok := inst.(interface {
QueryContext(context.Context, string) ([]map[string]interface{}, []string, error)
}); ok {
return q.QueryContext(ctx, query)
}
return inst.Query(query)
}
runExecQuery := func(inst db.Database) (int64, error) {
if e, ok := inst.(interface {
ExecContext(context.Context, string) (int64, error)
}); ok {
return e.ExecContext(ctx, query)
}
return inst.Exec(query)
}
if isReadQuery {
data, columns, err := runReadQuery(dbInst)
if err != nil && shouldRefreshCachedConnection(err) {
if a.invalidateCachedDatabase(runConfig, err) {
retryInst, retryErr := a.getDatabaseForcePing(runConfig)
if retryErr != nil {
logger.Error(retryErr, "DBQuery 重建连接失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: retryErr.Error()}
}
data, columns, err = runReadQuery(retryInst)
}
}
if err != nil {
logger.Error(err, "DBQuery 查询失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: err.Error(), QueryID: queryID}
}
return connection.QueryResult{Success: true, Data: data, Fields: columns, QueryID: queryID}
} else {
affected, err := runExecQuery(dbInst)
if err != nil && shouldRefreshCachedConnection(err) {
if a.invalidateCachedDatabase(runConfig, err) {
retryInst, retryErr := a.getDatabaseForcePing(runConfig)
if retryErr != nil {
logger.Error(retryErr, "DBQuery 重建连接失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: retryErr.Error()}
}
affected, err = runExecQuery(retryInst)
}
}
if err != nil {
logger.Error(err, "DBQuery 执行失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: err.Error(), QueryID: queryID}
}
return connection.QueryResult{Success: true, Data: map[string]int64{"affectedRows": affected}, QueryID: queryID}
}
}
func (a *App) DBQueryIsolated(config connection.ConnectionConfig, dbName string, query string) connection.QueryResult {
runConfig := normalizeRunConfig(config, dbName)
dbInst, err := a.openDatabaseIsolated(runConfig)
if err != nil {
logger.Error(err, "DBQueryIsolated 获取连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
defer func() {
if closeErr := dbInst.Close(); closeErr != nil {
logger.Error(closeErr, "DBQueryIsolated 关闭临时连接失败:%s", formatConnSummary(runConfig))
}
}()
query = sanitizeSQLForPgLike(runConfig.Type, query)
timeoutSeconds := runConfig.Timeout
@@ -368,10 +502,10 @@ func (a *App) DBQuery(config connection.ConnectionConfig, dbName string, query s
lowerQuery := strings.TrimSpace(strings.ToLower(query))
isReadQuery := strings.HasPrefix(lowerQuery, "select") || strings.HasPrefix(lowerQuery, "show") || strings.HasPrefix(lowerQuery, "describe") || strings.HasPrefix(lowerQuery, "explain")
// find/count/aggregate inside MongoDB JSON commands also count as read queries
if !isReadQuery && strings.ToLower(strings.TrimSpace(runConfig.Type)) == "mongodb" && strings.HasPrefix(strings.TrimSpace(query), "{") {
isReadQuery = true
}
if isReadQuery {
var data []map[string]interface{}
var columns []string
@@ -383,25 +517,25 @@ func (a *App) DBQuery(config connection.ConnectionConfig, dbName string, query s
data, columns, err = dbInst.Query(query)
}
if err != nil {
logger.Error(err, "DBQuery 查询失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
logger.Error(err, "DBQueryIsolated 查询失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: data, Fields: columns}
} else {
var affected int64
if e, ok := dbInst.(interface {
ExecContext(context.Context, string) (int64, error)
}); ok {
affected, err = e.ExecContext(ctx, query)
} else {
affected, err = dbInst.Exec(query)
}
if err != nil {
logger.Error(err, "DBQuery 执行失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: map[string]int64{"affectedRows": affected}}
}
var affected int64
if e, ok := dbInst.(interface {
ExecContext(context.Context, string) (int64, error)
}); ok {
affected, err = e.ExecContext(ctx, query)
} else {
affected, err = dbInst.Exec(query)
}
if err != nil {
logger.Error(err, "DBQueryIsolated 执行失败:%s SQL片段=%q", formatConnSummary(runConfig), sqlSnippet(query))
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: map[string]int64{"affectedRows": affected}}
}
func sqlSnippet(query string) string {
@@ -413,20 +547,38 @@ func sqlSnippet(query string) string {
return q[:max] + "..."
}
func ensureNonNilSlice[T any](items []T) []T {
if items == nil {
return make([]T, 0)
}
return items
}
func (a *App) DBGetDatabases(config connection.ConnectionConfig) connection.QueryResult {
dbInst, err := a.getDatabase(config)
runConfig := normalizeRunConfig(config, "")
dbInst, err := a.getDatabase(runConfig)
if err != nil {
logger.Error(err, "DBGetDatabases 获取连接失败:%s", formatConnSummary(config))
logger.Error(err, "DBGetDatabases 获取连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
dbs, err := dbInst.GetDatabases()
if err != nil && shouldRefreshCachedConnection(err) {
if a.invalidateCachedDatabase(runConfig, err) {
retryInst, retryErr := a.getDatabaseForcePing(runConfig)
if retryErr != nil {
logger.Error(retryErr, "DBGetDatabases 重建连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: retryErr.Error()}
}
dbs, err = retryInst.GetDatabases()
}
}
if err != nil {
logger.Error(err, "DBGetDatabases 获取数据库列表失败:%s", formatConnSummary(config))
logger.Error(err, "DBGetDatabases 获取数据库列表失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
var resData []map[string]string
resData := make([]map[string]string, 0, len(dbs))
for _, name := range dbs {
resData = append(resData, map[string]string{"Database": name})
}
@@ -444,12 +596,22 @@ func (a *App) DBGetTables(config connection.ConnectionConfig, dbName string) con
}
tables, err := dbInst.GetTables(dbName)
if err != nil && shouldRefreshCachedConnection(err) {
if a.invalidateCachedDatabase(runConfig, err) {
retryInst, retryErr := a.getDatabaseForcePing(runConfig)
if retryErr != nil {
logger.Error(retryErr, "DBGetTables 重建连接失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: retryErr.Error()}
}
tables, err = retryInst.GetTables(dbName)
}
}
if err != nil {
logger.Error(err, "DBGetTables 获取表列表失败:%s", formatConnSummary(runConfig))
return connection.QueryResult{Success: false, Message: err.Error()}
}
var resData []map[string]string
resData := make([]map[string]string, 0, len(tables))
for _, name := range tables {
resData = append(resData, map[string]string{"Table": name})
}
@@ -458,8 +620,8 @@ func (a *App) DBGetTables(config connection.ConnectionConfig, dbName string) con
}
func (a *App) DBShowCreateTable(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
runConfig := normalizeRunConfig(config, dbName)
dbType := resolveDDLDBType(config)
runConfig := buildRunConfigForDDL(config, dbType, dbName)
dbInst, err := a.getDatabase(runConfig)
if err != nil {
@@ -467,35 +629,65 @@ func (a *App) DBShowCreateTable(config connection.ConnectionConfig, dbName strin
return connection.QueryResult{Success: false, Message: err.Error()}
}
schemaName, pureTableName := normalizeSchemaAndTable(config, dbName, tableName)
sqlStr, err := dbInst.GetCreateStatement(schemaName, pureTableName)
sqlStr, err := resolveCreateStatementWithFallback(dbInst, config, dbName, tableName)
if err != nil {
logger.Error(err, "DBShowCreateTable 获取建表语句失败:%s 表=%s", formatConnSummary(runConfig), tableName)
return connection.QueryResult{Success: false, Message: err.Error()}
}
if shouldFallbackCreateStatement(dbType, sqlStr) {
columns, colErr := dbInst.GetColumns(schemaName, pureTableName)
if colErr != nil {
logger.Error(colErr, "DBShowCreateTable 兜底加载字段失败:%s 表=%s", formatConnSummary(runConfig), tableName)
return connection.QueryResult{Success: false, Message: colErr.Error()}
}
fallbackDDL, buildErr := buildFallbackCreateStatement(dbType, schemaName, pureTableName, columns)
if buildErr != nil {
logger.Error(buildErr, "DBShowCreateTable 兜底生成 DDL 失败:%s 表=%s", formatConnSummary(runConfig), tableName)
return connection.QueryResult{Success: false, Message: buildErr.Error()}
}
sqlStr = fallbackDDL
}
return connection.QueryResult{Success: true, Data: sqlStr}
}
func shouldFallbackCreateStatement(dbType string, ddl string) bool {
func resolveCreateStatementWithFallback(dbInst db.Database, config connection.ConnectionConfig, dbName string, tableName string) (string, error) {
dbType := resolveDDLDBType(config)
schemaName, pureTableName := normalizeSchemaAndTableByType(dbType, dbName, tableName)
if pureTableName == "" {
return "", fmt.Errorf("表名不能为空")
}
sqlStr, sourceErr := dbInst.GetCreateStatement(schemaName, pureTableName)
if sourceErr == nil && !shouldFallbackCreateStatement(dbType, sqlStr) {
return sqlStr, nil
}
if !supportsCreateStatementFallback(dbType) {
if sourceErr != nil {
return "", sourceErr
}
return sqlStr, nil
}
columns, colErr := dbInst.GetColumns(schemaName, pureTableName)
if colErr != nil {
if sourceErr != nil {
return "", sourceErr
}
return "", colErr
}
fallbackDDL, buildErr := buildFallbackCreateStatement(dbType, schemaName, pureTableName, columns)
if buildErr != nil {
if sourceErr != nil {
return "", sourceErr
}
return "", buildErr
}
return fallbackDDL, nil
}
func supportsCreateStatementFallback(dbType string) bool {
switch dbType {
case "postgres", "kingbase", "highgo", "vastbase":
return true
default:
return false
}
}
func shouldFallbackCreateStatement(dbType string, ddl string) bool {
if !supportsCreateStatementFallback(dbType) {
return false
}
trimmed := strings.TrimSpace(ddl)
if trimmed == "" {
@@ -601,7 +793,7 @@ func (a *App) DBGetColumns(config connection.ConnectionConfig, dbName string, ta
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: columns}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(columns)}
}
func (a *App) DBGetIndexes(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
@@ -618,7 +810,7 @@ func (a *App) DBGetIndexes(config connection.ConnectionConfig, dbName string, ta
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: indexes}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(indexes)}
}
func (a *App) DBGetForeignKeys(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
@@ -635,7 +827,7 @@ func (a *App) DBGetForeignKeys(config connection.ConnectionConfig, dbName string
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: fks}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(fks)}
}
func (a *App) DBGetTriggers(config connection.ConnectionConfig, dbName string, tableName string) connection.QueryResult {
@@ -652,7 +844,7 @@ func (a *App) DBGetTriggers(config connection.ConnectionConfig, dbName string, t
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: triggers}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(triggers)}
}
func (a *App) DropView(config connection.ConnectionConfig, dbName string, viewName string) connection.QueryResult {
@@ -663,7 +855,7 @@ func (a *App) DropView(config connection.ConnectionConfig, dbName string, viewNa
dbType := resolveDDLDBType(config)
switch dbType {
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "sqlite", "duckdb", "oracle", "dameng", "highgo", "vastbase", "sqlserver":
case "mysql", "mariadb", "diros", "sphinx", "postgres", "kingbase", "sqlite", "duckdb", "oracle", "dameng", "highgo", "vastbase", "sqlserver", "clickhouse":
default:
return connection.QueryResult{Success: false, Message: fmt.Sprintf("当前数据源(%s)暂不支持删除视图", dbType)}
}
@@ -752,7 +944,7 @@ func (a *App) RenameView(config connection.ConnectionConfig, dbName string, oldN
var sql string
switch dbType {
case "mysql", "mariadb", "diros", "sphinx":
case "mysql", "mariadb", "diros", "sphinx", "clickhouse":
newQualified := quoteTableIdentByType(dbType, schemaName, newName)
sql = fmt.Sprintf("RENAME TABLE %s TO %s", oldQualified, newQualified)
case "postgres", "kingbase", "highgo", "vastbase":
@@ -790,5 +982,5 @@ func (a *App) DBGetAllColumns(config connection.ConnectionConfig, dbName string)
return connection.QueryResult{Success: false, Message: err.Error()}
}
return connection.QueryResult{Success: true, Data: cols}
return connection.QueryResult{Success: true, Data: ensureNonNilSlice(cols)}
}

View File

@@ -0,0 +1,149 @@
package app
import (
"context"
"strings"
"testing"
"time"
"GoNavi-Wails/internal/connection"
)
func TestGenerateQueryID(t *testing.T) {
app := NewApp()
id := app.GenerateQueryID()
if id == "" {
t.Fatal("GenerateQueryID returned empty string")
}
// Should start with "query-"
if !strings.HasPrefix(id, "query-") {
t.Fatalf("Expected query ID to start with 'query-', got: %s", id)
}
// Should be reasonably unique (not equal to another generated ID)
id2 := app.GenerateQueryID()
if id == id2 {
t.Fatal("Two consecutive GenerateQueryID calls returned identical IDs")
}
}
func TestCancelQuery_NonExistent(t *testing.T) {
app := NewApp()
res := app.CancelQuery("non-existent-query-id")
if res.Success {
t.Fatal("CancelQuery should fail for non-existent query ID")
}
if !strings.Contains(res.Message, "不存在") && !strings.Contains(res.Message, "not exist") {
t.Fatalf("Expected error message about query not existing, got: %s", res.Message)
}
}
func TestCancelQuery_ValidQuery(t *testing.T) {
app := NewApp()
// First, generate a query ID and simulate a running query
queryID := app.GenerateQueryID()
// Store a cancel function in runningQueries map
_, cancel := context.WithCancel(context.Background())
app.queryMu.Lock()
app.runningQueries[queryID] = queryContext{
cancel: cancel,
started: time.Now(),
}
app.queryMu.Unlock()
// Ensure cleanup after test
defer func() {
app.queryMu.Lock()
delete(app.runningQueries, queryID)
app.queryMu.Unlock()
}()
// Cancel the query
res := app.CancelQuery(queryID)
if !res.Success {
t.Fatalf("CancelQuery should succeed for valid query ID, got: %s", res.Message)
}
// Verify query removed from map
app.queryMu.Lock()
_, exists := app.runningQueries[queryID]
app.queryMu.Unlock()
if exists {
t.Fatal("Query should be removed from runningQueries after cancellation")
}
}
func TestCleanupStaleQueries(t *testing.T) {
app := NewApp()
// Add a stale query (started 2 hours ago)
queryID := app.GenerateQueryID()
_, cancel := context.WithCancel(context.Background())
app.queryMu.Lock()
app.runningQueries[queryID] = queryContext{
cancel: cancel,
started: time.Now().Add(-2 * time.Hour),
}
app.queryMu.Unlock()
// Cleanup queries older than 1 hour
app.CleanupStaleQueries(1 * time.Hour)
// Verify stale query was removed
app.queryMu.Lock()
_, exists := app.runningQueries[queryID]
app.queryMu.Unlock()
if exists {
t.Fatal("Stale query should be removed by CleanupStaleQueries")
}
// Add a fresh query (started 30 minutes ago)
freshID := app.GenerateQueryID()
_, cancel2 := context.WithCancel(context.Background())
app.queryMu.Lock()
app.runningQueries[freshID] = queryContext{
cancel: cancel2,
started: time.Now().Add(-30 * time.Minute),
}
app.queryMu.Unlock()
defer cancel2()
// Cleanup queries older than 1 hour
app.CleanupStaleQueries(1 * time.Hour)
// Verify fresh query still exists
app.queryMu.Lock()
_, exists = app.runningQueries[freshID]
app.queryMu.Unlock()
if !exists {
t.Fatal("Fresh query should not be removed by CleanupStaleQueries")
}
// Clean up
app.queryMu.Lock()
delete(app.runningQueries, freshID)
app.queryMu.Unlock()
}
func TestDBQueryWithCancel_QueryIDPropagation(t *testing.T) {
// This test verifies that query ID is properly propagated in QueryResult
// Since we can't easily mock database connections, we'll test the integration
// by checking that DBQueryWithCancel returns a QueryResult with QueryID field
app := NewApp()
// Create a minimal config for a database type that doesn't require actual connection
config := connection.ConnectionConfig{
Type: "duckdb",
Host: ":memory:", // In-memory duckdb for testing
}
// This will fail because we can't actually connect, but we can test the error path
result := app.DBQueryWithCancel(config, "", "SELECT 1", "test-query-id")
// The query should fail (no actual database), but QueryID should be present
if result.QueryID != "test-query-id" {
t.Fatalf("Expected QueryID 'test-query-id' in result, got: %s", result.QueryID)
}
}

View File

@@ -0,0 +1,174 @@
package app
import (
"errors"
"strings"
"testing"
"GoNavi-Wails/internal/connection"
)
type fakeCreateStatementDB struct {
createSQL string
createErr error
columns []connection.ColumnDefinition
columnsErr error
createSchema string
createTable string
colsSchema string
colsTable string
}
func (f *fakeCreateStatementDB) Connect(config connection.ConnectionConfig) error { return nil }
func (f *fakeCreateStatementDB) Close() error { return nil }
func (f *fakeCreateStatementDB) Ping() error { return nil }
func (f *fakeCreateStatementDB) Query(query string) ([]map[string]interface{}, []string, error) {
return nil, nil, nil
}
func (f *fakeCreateStatementDB) Exec(query string) (int64, error) { return 0, nil }
func (f *fakeCreateStatementDB) GetDatabases() ([]string, error) { return nil, nil }
func (f *fakeCreateStatementDB) GetTables(dbName string) ([]string, error) { return nil, nil }
func (f *fakeCreateStatementDB) GetCreateStatement(dbName, tableName string) (string, error) {
f.createSchema = dbName
f.createTable = tableName
return f.createSQL, f.createErr
}
func (f *fakeCreateStatementDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
f.colsSchema = dbName
f.colsTable = tableName
return f.columns, f.columnsErr
}
func (f *fakeCreateStatementDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
return nil, nil
}
func (f *fakeCreateStatementDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
return nil, nil
}
func (f *fakeCreateStatementDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return nil, nil
}
func (f *fakeCreateStatementDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return nil, nil
}
func TestResolveDDLDBType_CustomDriverAlias(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
driver string
want string
}{
{name: "postgresql alias", driver: "postgresql", want: "postgres"},
{name: "pgx alias", driver: "pgx", want: "postgres"},
{name: "kingbase8 alias", driver: "kingbase8", want: "kingbase"},
{name: "kingbase contains alias", driver: "kingbasees", want: "kingbase"},
{name: "dm alias", driver: "dm8", want: "dameng"},
{name: "sqlite alias", driver: "sqlite3", want: "sqlite"},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
cfg := connection.ConnectionConfig{Type: "custom", Driver: tc.driver}
if got := resolveDDLDBType(cfg); got != tc.want {
t.Fatalf("resolveDDLDBType() mismatch, want=%q got=%q", tc.want, got)
}
})
}
}
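The alias table above implies a normalization step inside `resolveDDLDBType`. A standalone sketch of that mapping, inferred only from the test cases (`normalizeDriverAlias` and its exact rules are assumptions, not the project's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeDriverAlias collapses driver aliases to canonical type names,
// mirroring what the test table above expects. Hypothetical sketch.
func normalizeDriverAlias(driver string) string {
	d := strings.ToLower(strings.TrimSpace(driver))
	switch {
	case d == "postgresql", d == "pgx":
		return "postgres"
	case strings.Contains(d, "kingbase"): // kingbase8, kingbasees, ...
		return "kingbase"
	case strings.HasPrefix(d, "dm"): // dm, dm8
		return "dameng"
	case strings.HasPrefix(d, "sqlite"): // sqlite, sqlite3
		return "sqlite"
	default:
		return d
	}
}

func main() {
	for _, d := range []string{"pgx", "kingbasees", "dm8", "sqlite3"} {
		fmt.Println(d, "->", normalizeDriverAlias(d))
	}
}
```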
func TestResolveCreateStatementWithFallback_CustomKingbaseUsesPublicSchema(t *testing.T) {
t.Parallel()
dbInst := &fakeCreateStatementDB{
createSQL: "SHOW CREATE TABLE not directly supported in Kingbase/Postgres via SQL",
columns: []connection.ColumnDefinition{
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
},
}
ddl, err := resolveCreateStatementWithFallback(dbInst, connection.ConnectionConfig{
Type: "custom",
Driver: "kingbase8",
}, "demo_db", "orders")
if err != nil {
t.Fatalf("resolveCreateStatementWithFallback() unexpected error: %v", err)
}
if dbInst.createSchema != "public" || dbInst.colsSchema != "public" {
t.Fatalf("expected fallback schema public, got create=%q columns=%q", dbInst.createSchema, dbInst.colsSchema)
}
if !strings.Contains(ddl, `CREATE TABLE "public"."orders"`) {
t.Fatalf("expected fallback DDL with public schema, got: %s", ddl)
}
}
func TestResolveCreateStatementWithFallback_KeepQualifiedSchema(t *testing.T) {
t.Parallel()
dbInst := &fakeCreateStatementDB{
createSQL: "-- SHOW CREATE TABLE not fully supported for PostgreSQL in this MVP.",
columns: []connection.ColumnDefinition{
{Name: "id", Type: "integer", Nullable: "NO", Key: "PRI"},
},
}
ddl, err := resolveCreateStatementWithFallback(dbInst, connection.ConnectionConfig{
Type: "custom",
Driver: "postgresql",
}, "demo_db", "sales.orders")
if err != nil {
t.Fatalf("resolveCreateStatementWithFallback() unexpected error: %v", err)
}
if dbInst.createSchema != "sales" || dbInst.colsSchema != "sales" {
t.Fatalf("expected schema sales, got create=%q columns=%q", dbInst.createSchema, dbInst.colsSchema)
}
if !strings.Contains(ddl, `CREATE TABLE "sales"."orders"`) {
t.Fatalf("expected fallback DDL with sales schema, got: %s", ddl)
}
}
func TestResolveCreateStatementWithFallback_NoFallbackForMySQL(t *testing.T) {
t.Parallel()
dbInst := &fakeCreateStatementDB{
createSQL: "SHOW CREATE TABLE not directly supported in Kingbase/Postgres via SQL",
columnsErr: errors.New("should not be called"),
}
ddl, err := resolveCreateStatementWithFallback(dbInst, connection.ConnectionConfig{
Type: "mysql",
}, "demo_db", "orders")
if err != nil {
t.Fatalf("resolveCreateStatementWithFallback() unexpected error: %v", err)
}
if ddl != dbInst.createSQL {
t.Fatalf("expected original ddl for mysql, got: %s", ddl)
}
if dbInst.colsTable != "" {
t.Fatalf("mysql path should not call GetColumns, got table=%q", dbInst.colsTable)
}
}
func TestResolveCreateStatementWithFallback_FallbackWhenCreateStatementError(t *testing.T) {
t.Parallel()
dbInst := &fakeCreateStatementDB{
createErr: errors.New("statement unsupported"),
columns: []connection.ColumnDefinition{
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
},
}
ddl, err := resolveCreateStatementWithFallback(dbInst, connection.ConnectionConfig{
Type: "postgres",
}, "demo_db", "orders")
if err != nil {
t.Fatalf("resolveCreateStatementWithFallback() unexpected error: %v", err)
}
if !strings.Contains(ddl, `CREATE TABLE "public"."orders"`) {
t.Fatalf("expected fallback DDL for postgres error path, got: %s", ddl)
}
}

File diff suppressed because it is too large

View File

@@ -2,12 +2,15 @@ package app
import (
"bufio"
"context"
"encoding/csv"
"encoding/json"
"fmt"
"html"
"math"
"os"
"path/filepath"
"reflect"
"sort"
"strconv"
"strings"
@@ -15,11 +18,16 @@ import (
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/db"
"GoNavi-Wails/internal/logger"
"GoNavi-Wails/internal/utils"
"github.com/wailsapp/wails/v2/pkg/runtime"
"github.com/xuri/excelize/v2"
)
const minExportQueryTimeout = 5 * time.Minute
const minClickHouseExportQueryTimeout = 2 * time.Hour
func (a *App) OpenSQLFile() connection.QueryResult {
selection, err := runtime.OpenFileDialog(a.ctx, runtime.OpenDialogOptions{
Title: "Select SQL File",
@@ -120,6 +128,78 @@ func (a *App) SelectSSHKeyFile(currentPath string) connection.QueryResult {
return connection.QueryResult{Success: true, Data: map[string]interface{}{"path": selection}}
}
func (a *App) SelectDatabaseFile(currentPath string, driverType string) connection.QueryResult {
defaultDir := strings.TrimSpace(currentPath)
if defaultDir == "" {
if home, err := os.UserHomeDir(); err == nil {
defaultDir = home
}
}
if filepath.Ext(defaultDir) != "" {
defaultDir = filepath.Dir(defaultDir)
}
if defaultDir != "" && !filepath.IsAbs(defaultDir) {
if abs, err := filepath.Abs(defaultDir); err == nil {
defaultDir = abs
}
}
normalizedType := strings.ToLower(strings.TrimSpace(driverType))
filters := []runtime.FileFilter{
{
DisplayName: "数据库文件",
Pattern: "*.db;*.sqlite;*.sqlite3;*.db3;*.duckdb;*.ddb",
},
{
DisplayName: "所有文件",
Pattern: "*",
},
}
title := "选择数据库文件"
switch normalizedType {
case "sqlite":
title = "选择 SQLite 数据文件"
filters = []runtime.FileFilter{
{
DisplayName: "SQLite 文件",
Pattern: "*.db;*.sqlite;*.sqlite3;*.db3",
},
{
DisplayName: "所有文件",
Pattern: "*",
},
}
case "duckdb":
title = "选择 DuckDB 数据文件"
filters = []runtime.FileFilter{
{
DisplayName: "DuckDB 文件",
Pattern: "*.duckdb;*.ddb;*.db",
},
{
DisplayName: "所有文件",
Pattern: "*",
},
}
}
selection, err := runtime.OpenFileDialog(a.ctx, runtime.OpenDialogOptions{
Title: title,
DefaultDirectory: defaultDir,
Filters: filters,
})
if err != nil {
return connection.QueryResult{Success: false, Message: err.Error()}
}
if strings.TrimSpace(selection) == "" {
return connection.QueryResult{Success: false, Message: "Cancelled"}
}
if abs, err := filepath.Abs(selection); err == nil {
selection = abs
}
return connection.QueryResult{Success: true, Data: map[string]interface{}{"path": selection}}
}
// PreviewImportFile parses the import file and returns its field list, total row count, and the first 5 rows as preview data
func (a *App) PreviewImportFile(filePath string) connection.QueryResult {
if filePath == "" {
@@ -541,7 +621,7 @@ func (a *App) ExportTable(config connection.ConnectionConfig, dbName string, tab
query := fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(runConfig.Type, tableName))
data, columns, err := dbInst.Query(query)
data, columns, err := queryDataForExport(dbInst, runConfig, query)
if err != nil {
return connection.QueryResult{Success: false, Message: err.Error()}
}
@@ -701,7 +781,7 @@ func quoteIdentByType(dbType string, ident string) string {
}
switch dbType {
case "mysql", "mariadb", "diros", "sphinx", "tdengine":
case "mysql", "mariadb", "diros", "sphinx", "tdengine", "clickhouse":
return "`" + strings.ReplaceAll(ident, "`", "``") + "`"
case "sqlserver":
escaped := strings.ReplaceAll(ident, "]", "]]")
@@ -872,7 +952,7 @@ func listViewNameLookup(dbInst db.Database, config connection.ConnectionConfig,
if strings.TrimSpace(query) == "" {
continue
}
rows, _, err := dbInst.Query(query)
rows, _, err := queryDataForExport(dbInst, config, query)
if err != nil {
continue
}
@@ -950,6 +1030,15 @@ func buildListViewQueries(config connection.ConnectionConfig, dbName string) []s
return []string{
`SELECT table_schema AS schema_name, table_name AS object_name FROM information_schema.views WHERE table_schema NOT IN ('information_schema', 'pg_catalog') ORDER BY table_schema, table_name`,
}
case "clickhouse":
if strings.TrimSpace(dbName) == "" {
return []string{
`SELECT database AS schema_name, name AS object_name FROM system.tables WHERE engine LIKE '%View%' ORDER BY database, name`,
}
}
return []string{
fmt.Sprintf(`SELECT database AS schema_name, name AS object_name FROM system.tables WHERE engine LIKE '%%View%%' AND database='%s' ORDER BY name`, escapedDbName),
}
default:
if strings.TrimSpace(dbName) == "" {
return []string{
@@ -974,7 +1063,7 @@ func tryGetViewCreateStatement(
if strings.TrimSpace(query) == "" {
continue
}
rows, _, err := dbInst.Query(query)
rows, _, err := queryDataForExport(dbInst, config, query)
if err != nil || len(rows) == 0 {
continue
}
@@ -1070,6 +1159,18 @@ WHERE s.name = '%s' AND v.name = '%s'`,
fmt.Sprintf("SELECT sql AS ddl FROM duckdb_views() WHERE view_name = '%s' AND schema_name = '%s' LIMIT 1", escapedView, escapedSchema),
fmt.Sprintf("SELECT view_definition AS ddl FROM information_schema.views WHERE table_name = '%s' AND table_schema = '%s' LIMIT 1", escapedView, escapedSchema),
}
case "clickhouse":
if safeSchema == "" {
safeSchema = strings.TrimSpace(dbName)
}
if safeSchema != "" {
return []string{
fmt.Sprintf("SHOW CREATE TABLE %s.%s", quoteIdentByType("clickhouse", safeSchema), quoteIdentByType("clickhouse", safeView)),
}
}
return []string{
fmt.Sprintf("SHOW CREATE TABLE %s", quoteIdentByType("clickhouse", safeView)),
}
default:
if safeSchema != "" {
return []string{
@@ -1270,7 +1371,7 @@ func dumpTableSQL(
createSQL = ddl
}
} else {
ddl, err := dbInst.GetCreateStatement(schemaName, pureTableName)
ddl, err := resolveCreateStatementWithFallback(dbInst, config, dbName, tableName)
if err != nil {
if viewDDL, ok := tryGetViewCreateStatement(dbInst, config, dbName, schemaName, pureTableName); ok {
createSQL = viewDDL
@@ -1327,7 +1428,7 @@ func dumpTableSQL(
qualified := qualifyTable(schemaName, pureTableName)
selectSQL := fmt.Sprintf("SELECT * FROM %s", quoteQualifiedIdentByType(config.Type, qualified))
data, columns, err := dbInst.Query(selectSQL)
data, columns, err := queryDataForExport(dbInst, config, selectSQL)
if err != nil {
return err
}
@@ -1362,14 +1463,17 @@ func (a *App) ExportData(data []map[string]interface{}, columns []string, defaul
if defaultName == "" {
defaultName = "export"
}
logger.Infof("ExportData 开始:rows=%d cols=%d format=%s defaultName=%s", len(data), len(columns), strings.ToLower(strings.TrimSpace(format)), strings.TrimSpace(defaultName))
filename, err := runtime.SaveFileDialog(a.ctx, runtime.SaveDialogOptions{
Title: "Export Data",
DefaultFilename: fmt.Sprintf("%s.%s", defaultName, strings.ToLower(format)),
})
if err != nil || filename == "" {
logger.Infof("ExportData 已取消或未选择文件:err=%v", err)
return connection.QueryResult{Success: false, Message: "Cancelled"}
}
logger.Infof("ExportData 选定文件:%s", filename)
f, err := os.Create(filename)
if err != nil {
@@ -1377,9 +1481,11 @@ func (a *App) ExportData(data []map[string]interface{}, columns []string, defaul
}
defer f.Close()
if err := writeRowsToFile(f, data, columns, format); err != nil {
logger.Warnf("ExportData 写入失败:file=%s err=%v", filename, err)
return connection.QueryResult{Success: false, Message: "Write error: " + err.Error()}
}
logger.Infof("ExportData 完成:file=%s rows=%d", filename, len(data))
return connection.QueryResult{Success: true, Message: "Export successful"}
}
@@ -1400,8 +1506,10 @@ func (a *App) ExportQuery(config connection.ConnectionConfig, dbName string, que
DefaultFilename: fmt.Sprintf("%s.%s", defaultName, strings.ToLower(format)),
})
if err != nil || filename == "" {
logger.Infof("ExportQuery 已取消或未选择文件:err=%v", err)
return connection.QueryResult{Success: false, Message: "Cancelled"}
}
logger.Infof("ExportQuery 开始:type=%s db=%s format=%s file=%s sql=%q", strings.TrimSpace(config.Type), strings.TrimSpace(dbName), strings.ToLower(strings.TrimSpace(format)), filename, sqlSnippet(query))
runConfig := normalizeRunConfig(config, dbName)
dbInst, err := a.getDatabase(runConfig)
@@ -1415,8 +1523,9 @@ func (a *App) ExportQuery(config connection.ConnectionConfig, dbName string, que
return connection.QueryResult{Success: false, Message: "Only SELECT/WITH queries are supported"}
}
data, columns, err := dbInst.Query(query)
data, columns, err := queryDataForExport(dbInst, runConfig, query)
if err != nil {
logger.Warnf("ExportQuery 查询失败:type=%s db=%s err=%v sql=%q", strings.TrimSpace(config.Type), strings.TrimSpace(dbName), err, sqlSnippet(query))
return connection.QueryResult{Success: false, Message: err.Error()}
}
@@ -1427,12 +1536,55 @@ func (a *App) ExportQuery(config connection.ConnectionConfig, dbName string, que
defer f.Close()
if err := writeRowsToFile(f, data, columns, format); err != nil {
logger.Warnf("ExportQuery 写入失败:file=%s err=%v", filename, err)
return connection.QueryResult{Success: false, Message: "Write error: " + err.Error()}
}
logger.Infof("ExportQuery 完成:file=%s rows=%d cols=%d", filename, len(data), len(columns))
return connection.QueryResult{Success: true, Message: "Export successful"}
}
func queryDataForExport(dbInst db.Database, config connection.ConnectionConfig, query string) ([]map[string]interface{}, []string, error) {
timeout := getExportQueryTimeout(config)
dbType := resolveDDLDBType(config)
if dbType == "clickhouse" {
logger.Infof("ClickHouse 导出查询开始:timeout=%s SQL片段=%q", timeout, sqlSnippet(query))
}
if q, ok := dbInst.(interface {
QueryContext(context.Context, string) ([]map[string]interface{}, []string, error)
}); ok {
ctx, cancel := utils.ContextWithTimeout(timeout)
defer cancel()
data, columns, err := q.QueryContext(ctx, query)
if err != nil && dbType == "clickhouse" {
logger.Warnf("ClickHouse 导出查询失败:timeout=%s SQL片段=%q err=%v", timeout, sqlSnippet(query), err)
}
return data, columns, err
}
data, columns, err := dbInst.Query(query)
if err != nil && dbType == "clickhouse" {
logger.Warnf("ClickHouse 导出查询失败(无 QueryContext):timeout=%s SQL片段=%q err=%v", timeout, sqlSnippet(query), err)
}
return data, columns, err
}
func getExportQueryTimeout(config connection.ConnectionConfig) time.Duration {
timeout := time.Duration(config.Timeout) * time.Second
if timeout <= 0 {
timeout = minExportQueryTimeout
}
if resolveDDLDBType(config) == "clickhouse" {
if timeout < minClickHouseExportQueryTimeout {
timeout = minClickHouseExportQueryTimeout
}
return timeout
}
if timeout < minExportQueryTimeout {
timeout = minExportQueryTimeout
}
return timeout
}
func writeRowsToFile(f *os.File, data []map[string]interface{}, columns []string, format string) error {
format = strings.ToLower(strings.TrimSpace(format))
if f == nil {
@@ -1444,6 +1596,26 @@ func writeRowsToFile(f *os.File, data []map[string]interface{}, columns []string
return writeRowsToXlsx(f.Name(), data, columns)
}
// html: output a self-contained page with inline CSS that previews directly in a browser
if format == "html" {
return writeRowsToHTML(f, data, columns)
}
// If no column names were supplied but data is non-empty, collect keys from every data row
if len(columns) == 0 && len(data) > 0 {
keySet := make(map[string]bool)
for _, row := range data {
for key := range row {
keySet[key] = true
}
}
// Sort to keep the output deterministic
for key := range keySet {
columns = append(columns, key)
}
sort.Strings(columns)
}
var csvWriter *csv.Writer
var jsonEncoder *json.Encoder
isJsonFirstRow := true
@@ -1506,7 +1678,11 @@ func writeRowsToFile(f *os.File, data []map[string]interface{}, columns []string
return err
}
}
if err := jsonEncoder.Encode(rowMap); err != nil {
exportedRow := make(map[string]interface{}, len(columns))
for _, col := range columns {
exportedRow[col] = normalizeExportJSONValue(rowMap[col])
}
if err := jsonEncoder.Encode(exportedRow); err != nil {
return err
}
isJsonFirstRow = false
@@ -1533,6 +1709,188 @@ func writeRowsToFile(f *os.File, data []map[string]interface{}, columns []string
return nil
}
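The column-inference fallback near the top of `writeRowsToFile` (derive columns from row keys when the caller passes none) works as below; `inferColumns` is an illustrative name for the inlined logic:

```go
package main

import (
	"fmt"
	"sort"
)

// inferColumns collects every key that appears in any row and sorts the
// result, so exports without an explicit column list are still stable.
func inferColumns(rows []map[string]interface{}) []string {
	keySet := make(map[string]bool)
	for _, row := range rows {
		for k := range row {
			keySet[k] = true
		}
	}
	cols := make([]string, 0, len(keySet))
	for k := range keySet {
		cols = append(cols, k)
	}
	sort.Strings(cols) // deterministic output regardless of map order
	return cols
}

func main() {
	rows := []map[string]interface{}{{"b": 1}, {"a": 2, "c": 3}}
	fmt.Println(inferColumns(rows)) // [a b c]
}
```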
func formatExportHTMLCell(val interface{}) string {
text := formatExportCellText(val)
escaped := html.EscapeString(text)
escaped = strings.ReplaceAll(escaped, "\r\n", "\n")
escaped = strings.ReplaceAll(escaped, "\r", "\n")
return strings.ReplaceAll(escaped, "\n", "<br>")
}
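Note the order of operations in `formatExportHTMLCell`: escape first, then rewrite line endings, so the inserted `<br>` tags are not themselves escaped. The same sequence in isolation (`htmlCell` is an illustrative name):

```go
package main

import (
	"fmt"
	"html"
	"strings"
)

// htmlCell escapes user data for HTML, then normalizes CRLF/CR to LF and
// converts newlines to <br> so multi-line cell text renders as such.
func htmlCell(text string) string {
	escaped := html.EscapeString(text) // neutralize <, >, &, quotes
	escaped = strings.ReplaceAll(escaped, "\r\n", "\n")
	escaped = strings.ReplaceAll(escaped, "\r", "\n")
	return strings.ReplaceAll(escaped, "\n", "<br>")
}

func main() {
	fmt.Println(htmlCell("<script>x</script>\r\nline2"))
	// &lt;script&gt;x&lt;/script&gt;<br>line2
}
```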
func writeRowsToHTML(f *os.File, data []map[string]interface{}, columns []string) error {
w := bufio.NewWriterSize(f, 1024*256)
if _, err := w.WriteString(`<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GoNavi Export</title>
<style>
:root {
color-scheme: light;
--bg: #f8f9fa;
--card: #ffffff;
--line: #dee2e6;
--text: #212529;
--muted: #6c757d;
--hover: #f1f3f5;
--zebra: #f8f9fa;
--head: #ffffff;
}
* { box-sizing: border-box; }
body {
margin: 0;
padding: 24px;
background: var(--bg);
color: var(--text);
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", "PingFang SC", "Microsoft YaHei", sans-serif;
line-height: 1.6;
}
.export-wrap {
max-width: 100%;
margin: 0 auto;
background: var(--card);
border: 1px solid var(--line);
border-radius: 8px;
overflow: hidden;
}
.export-head {
padding: 16px 20px;
background: var(--head);
border-bottom: 2px solid var(--line);
}
.export-head h1 {
margin: 0;
font-size: 16px;
font-weight: 600;
color: var(--text);
}
.export-meta {
margin-top: 6px;
color: var(--muted);
font-size: 13px;
}
.table-wrap {
width: 100%;
overflow: auto;
padding: 16px;
}
table {
border-collapse: collapse;
width: auto;
font-size: 13px;
}
thead th {
position: sticky;
top: 0;
z-index: 2;
background: var(--head);
text-align: left;
font-weight: 600;
white-space: nowrap;
border-bottom: 2px solid var(--line);
color: var(--text);
padding: 12px 16px;
}
td {
padding: 10px 16px;
border-bottom: 1px solid var(--line);
vertical-align: top;
white-space: pre-wrap;
word-wrap: break-word;
overflow-wrap: anywhere;
max-width: 500px;
color: var(--text);
}
tbody tr:nth-child(even) {
background: var(--zebra);
}
tbody tr:hover {
background: var(--hover);
}
td.empty {
text-align: center;
color: var(--muted);
font-style: italic;
}
@media (max-width: 768px) {
body { padding: 16px; }
.export-head { padding: 12px 16px; }
.table-wrap { padding: 12px; }
th, td { padding: 8px 12px; font-size: 12px; }
}
@media print {
body { background: white; padding: 0; }
.export-wrap { border: none; }
}
</style>
</head>
<body>
<div class="export-wrap">
<div class="export-head">
<h1>GoNavi Data Export</h1>
<div class="export-meta">`); err != nil {
return err
}
if _, err := fmt.Fprintf(w, "Rows: %d · Columns: %d · Generated: %s", len(data), len(columns), time.Now().Format("2006-01-02 15:04:05")); err != nil {
return err
}
if _, err := w.WriteString(`</div>
</div>
<div class="table-wrap">
<table>
<thead><tr>`); err != nil {
return err
}
for _, col := range columns {
if _, err := fmt.Fprintf(w, "<th>%s</th>", html.EscapeString(col)); err != nil {
return err
}
}
if _, err := w.WriteString(`</tr></thead><tbody>`); err != nil {
return err
}
if len(data) == 0 {
colspan := len(columns)
if colspan <= 0 {
colspan = 1
}
if _, err := fmt.Fprintf(w, `<tr><td class="empty" colspan="%d">(0 rows)</td></tr>`, colspan); err != nil {
return err
}
} else {
for _, rowMap := range data {
if _, err := w.WriteString("<tr>"); err != nil {
return err
}
for _, col := range columns {
if _, err := fmt.Fprintf(w, "<td>%s</td>", formatExportHTMLCell(rowMap[col])); err != nil {
return err
}
}
if _, err := w.WriteString("</tr>"); err != nil {
return err
}
}
}
if _, err := w.WriteString(`</tbody></table>
</div>
</div>
</body>
</html>`); err != nil {
return err
}
return w.Flush()
}
func formatExportCellText(val interface{}) string {
if val == nil {
return "NULL"
@@ -1546,11 +1904,102 @@ func formatExportCellText(val interface{}) string {
return "NULL"
}
return v.Format("2006-01-02 15:04:05")
case float32:
f := float64(v)
if math.IsNaN(f) || math.IsInf(f, 0) {
return "NULL"
}
return strconv.FormatFloat(f, 'f', -1, 32)
case float64:
if math.IsNaN(v) || math.IsInf(v, 0) {
return "NULL"
}
return strconv.FormatFloat(v, 'f', -1, 64)
case json.Number:
text := strings.TrimSpace(v.String())
if text == "" {
return "NULL"
}
return text
default:
return fmt.Sprintf("%v", val)
}
}
func normalizeExportJSONValue(val interface{}) interface{} {
if val == nil {
return nil
}
switch v := val.(type) {
case float32:
f := float64(v)
if math.IsNaN(f) || math.IsInf(f, 0) {
return nil
}
return json.Number(strconv.FormatFloat(f, 'f', -1, 32))
case float64:
if math.IsNaN(v) || math.IsInf(v, 0) {
return nil
}
return json.Number(strconv.FormatFloat(v, 'f', -1, 64))
case json.Number:
text := strings.TrimSpace(v.String())
if text == "" {
return nil
}
return json.Number(text)
case map[string]interface{}:
out := make(map[string]interface{}, len(v))
for key, item := range v {
out[key] = normalizeExportJSONValue(item)
}
return out
case []interface{}:
items := make([]interface{}, len(v))
for i, item := range v {
items[i] = normalizeExportJSONValue(item)
}
return items
}
rv := reflect.ValueOf(val)
switch rv.Kind() {
case reflect.Pointer, reflect.Interface:
if rv.IsNil() {
return nil
}
return normalizeExportJSONValue(rv.Elem().Interface())
case reflect.Map:
if rv.IsNil() {
return nil
}
out := make(map[string]interface{}, rv.Len())
iter := rv.MapRange()
for iter.Next() {
out[fmt.Sprint(iter.Key().Interface())] = normalizeExportJSONValue(iter.Value().Interface())
}
return out
case reflect.Slice:
if rv.IsNil() {
return nil
}
if rv.Type().Elem().Kind() == reflect.Uint8 {
return val
}
fallthrough
case reflect.Array:
size := rv.Len()
items := make([]interface{}, size)
for i := 0; i < size; i++ {
items[i] = normalizeExportJSONValue(rv.Index(i).Interface())
}
return items
default:
return val
}
}
// writeRowsToXlsx writes a genuine xlsx-format file using excelize
func writeRowsToXlsx(filename string, data []map[string]interface{}, columns []string) error {
xlsx := excelize.NewFile()

View File

@@ -0,0 +1,275 @@
package app
import (
"bytes"
"context"
"encoding/json"
"os"
"strings"
"testing"
"time"
"GoNavi-Wails/internal/connection"
)
type fakeExportQueryDB struct {
data []map[string]interface{}
cols []string
err error
lastQuery string
lastContextTimeout time.Duration
hasContextDeadline bool
}
func (f *fakeExportQueryDB) Connect(config connection.ConnectionConfig) error { return nil }
func (f *fakeExportQueryDB) Close() error { return nil }
func (f *fakeExportQueryDB) Ping() error { return nil }
func (f *fakeExportQueryDB) Query(query string) ([]map[string]interface{}, []string, error) {
f.lastQuery = query
return f.data, f.cols, f.err
}
func (f *fakeExportQueryDB) QueryContext(ctx context.Context, query string) ([]map[string]interface{}, []string, error) {
f.lastQuery = query
if deadline, ok := ctx.Deadline(); ok {
f.hasContextDeadline = true
f.lastContextTimeout = time.Until(deadline)
}
return f.data, f.cols, f.err
}
func (f *fakeExportQueryDB) Exec(query string) (int64, error) { return 0, nil }
func (f *fakeExportQueryDB) GetDatabases() ([]string, error) { return nil, nil }
func (f *fakeExportQueryDB) GetTables(dbName string) ([]string, error) {
return nil, nil
}
func (f *fakeExportQueryDB) GetCreateStatement(dbName, tableName string) (string, error) {
return "", nil
}
func (f *fakeExportQueryDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
return nil, nil
}
func (f *fakeExportQueryDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
return nil, nil
}
func (f *fakeExportQueryDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
return nil, nil
}
func (f *fakeExportQueryDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return nil, nil
}
func (f *fakeExportQueryDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return nil, nil
}
func TestFormatExportCellText_FloatNoScientificNotation(t *testing.T) {
got := formatExportCellText(1.445663e+06)
if strings.Contains(strings.ToLower(got), "e+") || strings.Contains(strings.ToLower(got), "e-") {
t.Fatalf("不应输出科学计数法:got=%q", got)
}
if got != "1445663" {
t.Fatalf("浮点整值导出异常:want=%q got=%q", "1445663", got)
}
}
func TestWriteRowsToFile_Markdown_NumberKeepPlainText(t *testing.T) {
f, err := os.CreateTemp("", "gonavi-export-*.md")
if err != nil {
t.Fatalf("创建临时文件失败: %v", err)
}
defer os.Remove(f.Name())
defer f.Close()
data := []map[string]interface{}{
{"id": 1.445663e+06},
}
columns := []string{"id"}
if err := writeRowsToFile(f, data, columns, "md"); err != nil {
t.Fatalf("写入 md 失败: %v", err)
}
contentBytes, err := os.ReadFile(f.Name())
if err != nil {
t.Fatalf("读取 md 失败: %v", err)
}
content := string(contentBytes)
if strings.Contains(strings.ToLower(content), "e+") || strings.Contains(strings.ToLower(content), "e-") {
t.Fatalf("md 导出包含科学计数法: %s", content)
}
if !strings.Contains(content, "| 1445663 |") {
t.Fatalf("md 导出未保留整数字面量:content=%s", content)
}
}
func TestWriteRowsToFile_JSON_NumberKeepPlainText(t *testing.T) {
f, err := os.CreateTemp("", "gonavi-export-*.json")
if err != nil {
t.Fatalf("创建临时文件失败: %v", err)
}
defer os.Remove(f.Name())
defer f.Close()
data := []map[string]interface{}{
{"id": 1.445663e+06},
}
columns := []string{"id"}
if err := writeRowsToFile(f, data, columns, "json"); err != nil {
t.Fatalf("写入 json 失败: %v", err)
}
contentBytes, err := os.ReadFile(f.Name())
if err != nil {
t.Fatalf("读取 json 失败: %v", err)
}
content := string(contentBytes)
if strings.Contains(strings.ToLower(content), "e+") || strings.Contains(strings.ToLower(content), "e-") {
t.Fatalf("json 导出包含科学计数法: %s", content)
}
var decoded []map[string]json.Number
decoder := json.NewDecoder(bytes.NewReader(contentBytes))
decoder.UseNumber()
if err := decoder.Decode(&decoded); err != nil {
t.Fatalf("解析导出 json 失败: %v", err)
}
if len(decoded) != 1 {
t.Fatalf("导出行数异常:got=%d", len(decoded))
}
if decoded[0]["id"].String() != "1445663" {
t.Fatalf("json 数值格式异常:want=1445663 got=%s", decoded[0]["id"].String())
}
}
func TestQueryDataForExport_UsesMinimumTimeout(t *testing.T) {
fake := &fakeExportQueryDB{
data: []map[string]interface{}{{"v": 1}},
cols: []string{"v"},
}
_, _, err := queryDataForExport(fake, connection.ConnectionConfig{Timeout: 10}, "SELECT 1")
if err != nil {
t.Fatalf("queryDataForExport 返回错误: %v", err)
}
if !fake.hasContextDeadline {
t.Fatal("queryDataForExport 应设置 context deadline")
}
if fake.lastQuery != "SELECT 1" {
t.Fatalf("queryDataForExport 查询语句异常:want=%q got=%q", "SELECT 1", fake.lastQuery)
}
lowerBound := minExportQueryTimeout - 5*time.Second
upperBound := minExportQueryTimeout + 5*time.Second
if fake.lastContextTimeout < lowerBound || fake.lastContextTimeout > upperBound {
t.Fatalf("导出最小超时异常:want≈%s got=%s", minExportQueryTimeout, fake.lastContextTimeout)
}
}
func TestQueryDataForExport_UsesLargerConfiguredTimeout(t *testing.T) {
fake := &fakeExportQueryDB{
data: []map[string]interface{}{{"v": 1}},
cols: []string{"v"},
}
_, _, err := queryDataForExport(fake, connection.ConnectionConfig{Timeout: 900}, "SELECT 1")
if err != nil {
t.Fatalf("queryDataForExport 返回错误: %v", err)
}
if !fake.hasContextDeadline {
t.Fatal("queryDataForExport 应设置 context deadline")
}
expected := 900 * time.Second
lowerBound := expected - 5*time.Second
upperBound := expected + 5*time.Second
if fake.lastContextTimeout < lowerBound || fake.lastContextTimeout > upperBound {
t.Fatalf("导出配置超时异常:want≈%s got=%s", expected, fake.lastContextTimeout)
}
}
func TestGetExportQueryTimeout_ClickHouseUsesLongerMinimum(t *testing.T) {
timeout := getExportQueryTimeout(connection.ConnectionConfig{
Type: "clickhouse",
Timeout: 30,
})
if timeout != minClickHouseExportQueryTimeout {
t.Fatalf("clickhouse 导出超时下限异常:want=%s got=%s", minClickHouseExportQueryTimeout, timeout)
}
}
func TestGetExportQueryTimeout_CustomClickHouseUsesLongerMinimum(t *testing.T) {
timeout := getExportQueryTimeout(connection.ConnectionConfig{
Type: "custom",
Driver: "clickhouse",
Timeout: 30,
})
if timeout != minClickHouseExportQueryTimeout {
t.Fatalf("custom clickhouse 导出超时下限异常:want=%s got=%s", minClickHouseExportQueryTimeout, timeout)
}
}
func TestWriteRowsToFile_HTML_EscapeAndStyle(t *testing.T) {
f, err := os.CreateTemp("", "gonavi-export-*.html")
if err != nil {
t.Fatalf("创建临时文件失败: %v", err)
}
defer os.Remove(f.Name())
defer f.Close()
data := []map[string]interface{}{
{
"name": "<script>alert(1)</script>",
"note": "line1\nline2",
"nullable": nil,
},
}
columns := []string{"name", "note", "nullable"}
if err := writeRowsToFile(f, data, columns, "html"); err != nil {
t.Fatalf("写入 html 失败: %v", err)
}
contentBytes, err := os.ReadFile(f.Name())
if err != nil {
t.Fatalf("读取 html 失败: %v", err)
}
content := string(contentBytes)
if !strings.Contains(content, "<!DOCTYPE html>") {
t.Fatalf("html 导出缺少 doctype: %s", content)
}
if !strings.Contains(content, "position: sticky") {
t.Fatalf("html 导出缺少表头吸顶样式: %s", content)
}
if !strings.Contains(content, "tbody tr:nth-child(even)") {
t.Fatalf("html 导出缺少斑马纹样式: %s", content)
}
if !strings.Contains(content, "&lt;script&gt;alert(1)&lt;/script&gt;") {
t.Fatalf("html 导出未进行 XSS 转义: %s", content)
}
if strings.Contains(content, "<script>alert(1)</script>") {
t.Fatalf("html 导出包含未转义脚本: %s", content)
}
if !strings.Contains(content, "line1<br>line2") {
t.Fatalf("html 导出换行未转为 <br>: %s", content)
}
if !strings.Contains(content, "<td>NULL</td>") {
t.Fatalf("html 导出空值显示异常: %s", content)
}
}
func TestWriteRowsToFile_HTML_EscapeHeader(t *testing.T) {
f, err := os.CreateTemp("", "gonavi-export-*.html")
if err != nil {
t.Fatalf("创建临时文件失败: %v", err)
}
defer os.Remove(f.Name())
defer f.Close()
columnName := "<b>name</b>"
data := []map[string]interface{}{{columnName: "ok"}}
if err := writeRowsToFile(f, data, []string{columnName}, "html"); err != nil {
t.Fatalf("写入 html 失败: %v", err)
}
contentBytes, _ := os.ReadFile(f.Name())
content := string(contentBytes)
if !strings.Contains(content, "<th>&lt;b&gt;name&lt;/b&gt;</th>") || strings.Contains(content, "<th><b>name</b></th>") {
t.Fatalf("html 表头未正确转义: %s", content)
}
}

View File

@@ -4,6 +4,9 @@ import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"math"
"strconv"
"strings"
"sync"
@@ -20,12 +23,20 @@ var (
// getRedisClient gets or creates a Redis client from cache
func (a *App) getRedisClient(config connection.ConnectionConfig) (redis.RedisClient, error) {
key := getRedisClientCacheKey(config)
effectiveConfig := applyGlobalProxyToConnection(config)
connectConfig, proxyErr := resolveDialConfigWithProxy(effectiveConfig)
if proxyErr != nil {
wrapped := wrapConnectError(effectiveConfig, proxyErr)
logger.Error(wrapped, "Redis 代理准备失败:%s", formatRedisConnSummary(effectiveConfig))
return nil, wrapped
}
key := getRedisClientCacheKey(connectConfig)
shortKey := key
if len(shortKey) > 12 {
shortKey = shortKey[:12]
}
logger.Infof("获取 Redis 连接:%s 缓存Key=%s", formatRedisConnSummary(config), shortKey)
logger.Infof("获取 Redis 连接:%s 缓存Key=%s", formatRedisConnSummary(effectiveConfig), shortKey)
redisCacheMu.Lock()
defer redisCacheMu.Unlock()
@@ -44,47 +55,69 @@ func (a *App) getRedisClient(config connection.ConnectionConfig) (redis.RedisCli
logger.Infof("创建 Redis 客户端实例缓存Key=%s", shortKey)
client := redis.NewRedisClient()
if err := client.Connect(config); err != nil {
logger.Error(err, "Redis 连接失败:%s 缓存Key=%s", formatRedisConnSummary(config), shortKey)
return nil, err
if err := client.Connect(connectConfig); err != nil {
wrapped := wrapConnectError(effectiveConfig, err)
logger.Error(wrapped, "Redis 连接失败:%s 缓存Key=%s", formatRedisConnSummary(effectiveConfig), shortKey)
return nil, wrapped
}
redisCache[key] = client
logger.Infof("Redis 连接成功并写入缓存:%s 缓存Key=%s", formatRedisConnSummary(config), shortKey)
logger.Infof("Redis 连接成功并写入缓存:%s 缓存Key=%s", formatRedisConnSummary(effectiveConfig), shortKey)
return client, nil
}
func getRedisClientCacheKey(config connection.ConnectionConfig) string {
if !config.UseSSH {
config.SSH = connection.SSHConfig{}
}
b, _ := json.Marshal(config)
normalized := normalizeCacheKeyConfig(config)
b, _ := json.Marshal(normalized)
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:])
}
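The cache-key change above marshals a normalized copy of the config before hashing (`normalizeCacheKeyConfig` is defined elsewhere in the package). The idea, sketched below with a hypothetical stand-in struct, is that fields irrelevant to the physical connection are zeroed first so logically identical configs map to the same SHA-256 key:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// demoConfig is a stand-in for connection.ConnectionConfig.
type demoConfig struct {
	Host   string `json:"host"`
	Port   int    `json:"port"`
	UseSSH bool   `json:"useSSH"`
	SSHKey string `json:"sshKey,omitempty"`
}

// cacheKey mirrors the pattern in getRedisClientCacheKey: zero out
// fields that do not affect the physical connection, then hash the
// canonical JSON encoding.
func cacheKey(c demoConfig) string {
	if !c.UseSSH {
		c.SSHKey = "" // normalization: ignore SSH detail when SSH is off
	}
	b, _ := json.Marshal(c)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	a := demoConfig{Host: "127.0.0.1", Port: 6379, SSHKey: "stale"}
	b := demoConfig{Host: "127.0.0.1", Port: 6379}
	fmt.Println(cacheKey(a) == cacheKey(b)) // true: same key after normalization
}
```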
func formatRedisConnSummary(config connection.ConnectionConfig) string {
timeoutSeconds := config.Timeout
if timeoutSeconds <= 0 {
timeoutSeconds = 30
}
var b strings.Builder
b.WriteString("类型=redis 地址=")
b.WriteString(config.Host)
b.WriteString(":")
b.WriteString(string(rune(config.Port + '0')))
b.WriteString(strconv.Itoa(config.Port))
if topology := strings.TrimSpace(config.Topology); topology != "" {
b.WriteString(" 模式=")
b.WriteString(topology)
}
if len(config.Hosts) > 0 {
b.WriteString(" 节点数=")
b.WriteString(strconv.Itoa(len(config.Hosts)))
}
b.WriteString(" DB=")
b.WriteString(string(rune(config.RedisDB + '0')))
b.WriteString(strconv.Itoa(config.RedisDB))
if config.UseSSH {
b.WriteString(" SSH=")
b.WriteString(config.SSH.Host)
b.WriteString(":")
b.WriteString(string(rune(config.SSH.Port + '0')))
b.WriteString(strconv.Itoa(config.SSH.Port))
b.WriteString(" 用户=")
b.WriteString(config.SSH.User)
}
if config.UseProxy {
b.WriteString(" 代理=")
b.WriteString(strings.ToLower(strings.TrimSpace(config.Proxy.Type)))
b.WriteString("://")
b.WriteString(config.Proxy.Host)
b.WriteString(":")
b.WriteString(strconv.Itoa(config.Proxy.Port))
if strings.TrimSpace(config.Proxy.User) != "" {
b.WriteString(" 代理认证=已配置")
}
}
if config.UseHTTPTunnel {
b.WriteString(" HTTP隧道=")
b.WriteString(strings.TrimSpace(config.HTTPTunnel.Host))
b.WriteString(":")
b.WriteString(strconv.Itoa(config.HTTPTunnel.Port))
if strings.TrimSpace(config.HTTPTunnel.User) != "" {
b.WriteString(" HTTP隧道认证=已配置")
}
}
return b.String()
}
@@ -107,14 +140,20 @@ func (a *App) RedisTestConnection(config connection.ConnectionConfig) connection
}
// RedisScanKeys scans keys matching a pattern
func (a *App) RedisScanKeys(config connection.ConnectionConfig, pattern string, cursor uint64, count int64) connection.QueryResult {
func (a *App) RedisScanKeys(config connection.ConnectionConfig, pattern string, cursor any, count int64) connection.QueryResult {
config.Type = "redis"
client, err := a.getRedisClient(config)
if err != nil {
return connection.QueryResult{Success: false, Message: err.Error()}
}
result, err := client.ScanKeys(pattern, cursor, count)
parsedCursor, err := parseRedisScanCursor(cursor)
if err != nil {
logger.Warnf("RedisScanKeys 游标解析失败,已回退到起始游标:cursor=%v err=%v", cursor, err)
parsedCursor = 0
}
result, err := client.ScanKeys(pattern, parsedCursor, count)
if err != nil {
logger.Error(err, "RedisScanKeys 扫描失败:pattern=%s", pattern)
return connection.QueryResult{Success: false, Message: err.Error()}
@@ -123,6 +162,82 @@ func (a *App) RedisScanKeys(config connection.ConnectionConfig, pattern string,
return connection.QueryResult{Success: true, Data: result}
}
func parseRedisScanCursor(cursor any) (uint64, error) {
switch v := cursor.(type) {
case nil:
return 0, nil
case uint64:
return v, nil
case uint32:
return uint64(v), nil
case uint16:
return uint64(v), nil
case uint8:
return uint64(v), nil
case uint:
return uint64(v), nil
case int64:
if v < 0 {
return 0, fmt.Errorf("游标不能为负数: %d", v)
}
return uint64(v), nil
case int32:
if v < 0 {
return 0, fmt.Errorf("游标不能为负数: %d", v)
}
return uint64(v), nil
case int16:
if v < 0 {
return 0, fmt.Errorf("游标不能为负数: %d", v)
}
return uint64(v), nil
case int8:
if v < 0 {
return 0, fmt.Errorf("游标不能为负数: %d", v)
}
return uint64(v), nil
case int:
if v < 0 {
return 0, fmt.Errorf("游标不能为负数: %d", v)
}
return uint64(v), nil
case float64:
return parseRedisScanCursorFromFloat(v)
case float32:
return parseRedisScanCursorFromFloat(float64(v))
case json.Number:
return parseRedisScanCursor(strings.TrimSpace(v.String()))
case string:
trimmed := strings.TrimSpace(v)
if trimmed == "" {
return 0, nil
}
parsed, err := strconv.ParseUint(trimmed, 10, 64)
if err != nil {
return 0, fmt.Errorf("无效游标: %q", v)
}
return parsed, nil
default:
return 0, fmt.Errorf("不支持的游标类型: %T", cursor)
}
}
func parseRedisScanCursorFromFloat(value float64) (uint64, error) {
if math.IsNaN(value) || math.IsInf(value, 0) {
return 0, fmt.Errorf("无效浮点游标: %v", value)
}
if value < 0 {
return 0, fmt.Errorf("游标不能为负数: %v", value)
}
if math.Trunc(value) != value {
return 0, fmt.Errorf("游标必须为整数: %v", value)
}
if value > float64(math.MaxUint64) {
return 0, fmt.Errorf("游标超出范围: %v", value)
}
return uint64(value), nil
}
// RedisGetValue gets the value of a key
func (a *App) RedisGetValue(config connection.ConnectionConfig, key string) connection.QueryResult {
config.Type = "redis"


@@ -0,0 +1,50 @@
package app
import (
"encoding/json"
"testing"
)
func TestParseRedisScanCursor(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
input any
want uint64
wantErr bool
}{
{name: "nil defaults to zero", input: nil, want: 0},
{name: "empty string defaults to zero", input: " ", want: 0},
{name: "string cursor", input: "123", want: 123},
{name: "uint64 cursor", input: uint64(456), want: 456},
{name: "int cursor", input: int(789), want: 789},
{name: "float cursor", input: float64(42), want: 42},
{name: "json number cursor", input: json.Number("88"), want: 88},
{name: "negative int rejected", input: -1, wantErr: true},
{name: "fraction float rejected", input: float64(1.5), wantErr: true},
{name: "invalid string rejected", input: "abc", wantErr: true},
{name: "unsupported type rejected", input: true, wantErr: true},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
got, err := parseRedisScanCursor(tc.input)
if tc.wantErr {
if err == nil {
t.Fatalf("expected error, got nil (value=%d)", got)
}
return
}
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if got != tc.want {
t.Fatalf("parseRedisScanCursor() mismatch, want=%d got=%d", tc.want, got)
}
})
}
}


@@ -51,12 +51,13 @@ type UpdateInfo struct {
}
type AppInfo struct {
Version string `json:"version"`
Author string `json:"author"`
RepoURL string `json:"repoUrl,omitempty"`
IssueURL string `json:"issueUrl,omitempty"`
ReleaseURL string `json:"releaseUrl,omitempty"`
BuildTime string `json:"buildTime,omitempty"`
Version string `json:"version"`
Author string `json:"author"`
RepoURL string `json:"repoUrl,omitempty"`
IssueURL string `json:"issueUrl,omitempty"`
ReleaseURL string `json:"releaseUrl,omitempty"`
CommunityURL string `json:"communityUrl,omitempty"`
BuildTime string `json:"buildTime,omitempty"`
}
type updateDownloadResult struct {
@@ -137,12 +138,13 @@ func (a *App) CheckForUpdates() connection.QueryResult {
func (a *App) GetAppInfo() connection.QueryResult {
info := AppInfo{
Version: getCurrentVersion(),
Author: getCurrentAuthor(),
RepoURL: "https://github.com/" + updateRepo,
IssueURL: "https://github.com/" + updateRepo + "/issues",
ReleaseURL: "https://github.com/" + updateRepo + "/releases",
BuildTime: strings.TrimSpace(AppBuildTime),
Version: getCurrentVersion(),
Author: getCurrentAuthor(),
RepoURL: "https://github.com/" + updateRepo,
IssueURL: "https://github.com/" + updateRepo + "/issues",
ReleaseURL: "https://github.com/" + updateRepo + "/releases",
CommunityURL: "https://aibook.ren",
BuildTime: strings.TrimSpace(AppBuildTime),
}
return connection.QueryResult{Success: true, Message: "OK", Data: info}
}
@@ -233,6 +235,49 @@ func (a *App) InstallUpdateAndRestart() connection.QueryResult {
}
}
func (a *App) OpenDownloadedUpdateDirectory() connection.QueryResult {
a.updateMu.Lock()
staged := a.updateState.staged
a.updateMu.Unlock()
if staged == nil {
return connection.QueryResult{Success: false, Message: "未找到已下载的更新包"}
}
assetPath := strings.TrimSpace(staged.FilePath)
if assetPath == "" {
return connection.QueryResult{Success: false, Message: "更新包路径为空"}
}
dirPath := strings.TrimSpace(filepath.Dir(assetPath))
if dirPath == "" || dirPath == "." {
return connection.QueryResult{Success: false, Message: "无法解析更新目录"}
}
if stat, err := os.Stat(dirPath); err != nil || !stat.IsDir() {
return connection.QueryResult{Success: false, Message: "更新目录不存在或不可访问"}
}
var cmd *exec.Cmd
switch stdRuntime.GOOS {
case "darwin":
cmd = exec.Command("open", dirPath)
case "windows":
cmd = exec.Command("explorer", dirPath)
case "linux":
cmd = exec.Command("xdg-open", dirPath)
default:
return connection.QueryResult{Success: false, Message: fmt.Sprintf("当前平台暂不支持打开目录:%s", stdRuntime.GOOS)}
}
if err := cmd.Start(); err != nil {
logger.Error(err, "打开更新目录失败")
return connection.QueryResult{Success: false, Message: fmt.Sprintf("打开更新目录失败:%v", err)}
}
return connection.QueryResult{
Success: true,
Message: fmt.Sprintf("已打开安装目录:%s", dirPath),
Data: map[string]any{
"path": dirPath,
},
}
}
func (a *App) downloadAndStageUpdate(info UpdateInfo) connection.QueryResult {
workspaceDir := strings.TrimSpace(resolveUpdateWorkspaceDir(info.LatestVersion))
if workspaceDir == "" {
@@ -374,7 +419,7 @@ func getCurrentAuthor() string {
}
func fetchLatestRelease() (*githubRelease, error) {
client := &http.Client{Timeout: 15 * time.Second}
client := newHTTPClientWithGlobalProxy(15 * time.Second)
req, err := http.NewRequest(http.MethodGet, updateAPIURL, nil)
if err != nil {
return nil, err
@@ -451,7 +496,7 @@ func fetchReleaseSHA256(assets []githubAsset) (map[string]string, error) {
return nil, errors.New("Release 未提供 SHA256SUMS")
}
client := &http.Client{Timeout: 15 * time.Second}
client := newHTTPClientWithGlobalProxy(15 * time.Second)
req, err := http.NewRequest(http.MethodGet, checksumURL, nil)
if err != nil {
return nil, err
@@ -522,7 +567,7 @@ func (w *downloadProgressWriter) Write(p []byte) (int, error) {
}
func downloadFileWithHash(url, filePath string, onProgress func(downloaded, total int64)) (string, error) {
client := &http.Client{Timeout: 10 * time.Minute}
client := newHTTPClientWithGlobalProxy(10 * time.Minute)
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return "", err


@@ -18,35 +18,49 @@ type ProxyConfig struct {
Password string `json:"password,omitempty"`
}
// HTTPTunnelConfig holds independent HTTP CONNECT tunnel details
type HTTPTunnelConfig struct {
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user,omitempty"`
Password string `json:"password,omitempty"`
}
// ConnectionConfig holds database connection details including SSH
type ConnectionConfig struct {
Type string `json:"type"`
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user"`
Password string `json:"password"`
SavePassword bool `json:"savePassword,omitempty"` // Persist password in saved connection
Database string `json:"database"`
UseSSH bool `json:"useSSH"`
SSH SSHConfig `json:"ssh"`
UseProxy bool `json:"useProxy,omitempty"`
Proxy ProxyConfig `json:"proxy,omitempty"`
Driver string `json:"driver,omitempty"` // For custom connection
DSN string `json:"dsn,omitempty"` // For custom connection
Timeout int `json:"timeout,omitempty"` // Connection timeout in seconds (default: 30)
RedisDB int `json:"redisDB,omitempty"` // Redis database index (0-15)
URI string `json:"uri,omitempty"` // Connection URI for copy/paste
Hosts []string `json:"hosts,omitempty"` // Multi-host addresses: host:port
Topology string `json:"topology,omitempty"` // single | replica
MySQLReplicaUser string `json:"mysqlReplicaUser,omitempty"` // MySQL replica auth user
MySQLReplicaPassword string `json:"mysqlReplicaPassword,omitempty"` // MySQL replica auth password
ReplicaSet string `json:"replicaSet,omitempty"` // MongoDB replica set name
AuthSource string `json:"authSource,omitempty"` // MongoDB authSource
ReadPreference string `json:"readPreference,omitempty"` // MongoDB readPreference
MongoSRV bool `json:"mongoSrv,omitempty"` // MongoDB use mongodb+srv URI scheme
MongoAuthMechanism string `json:"mongoAuthMechanism,omitempty"` // MongoDB authMechanism
MongoReplicaUser string `json:"mongoReplicaUser,omitempty"` // MongoDB replica auth user
MongoReplicaPassword string `json:"mongoReplicaPassword,omitempty"` // MongoDB replica auth password
Type string `json:"type"`
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user"`
Password string `json:"password"`
SavePassword bool `json:"savePassword,omitempty"` // Persist password in saved connection
Database string `json:"database"`
UseSSL bool `json:"useSSL,omitempty"` // MySQL-like SSL/TLS switch
SSLMode string `json:"sslMode,omitempty"` // preferred | required | skip-verify | disable
SSLCertPath string `json:"sslCertPath,omitempty"` // TLS client certificate path (e.g., Dameng)
SSLKeyPath string `json:"sslKeyPath,omitempty"` // TLS client private key path (e.g., Dameng)
UseSSH bool `json:"useSSH"`
SSH SSHConfig `json:"ssh"`
UseProxy bool `json:"useProxy,omitempty"`
Proxy ProxyConfig `json:"proxy,omitempty"`
UseHTTPTunnel bool `json:"useHttpTunnel,omitempty"`
HTTPTunnel HTTPTunnelConfig `json:"httpTunnel,omitempty"`
Driver string `json:"driver,omitempty"` // For custom connection
DSN string `json:"dsn,omitempty"` // For custom connection
Timeout int `json:"timeout,omitempty"` // Connection timeout in seconds (default: 30)
RedisDB int `json:"redisDB,omitempty"` // Redis database index (0-15)
URI string `json:"uri,omitempty"` // Connection URI for copy/paste
Hosts []string `json:"hosts,omitempty"` // Multi-host addresses: host:port
Topology string `json:"topology,omitempty"` // single | replica | cluster
MySQLReplicaUser string `json:"mysqlReplicaUser,omitempty"` // MySQL replica auth user
MySQLReplicaPassword string `json:"mysqlReplicaPassword,omitempty"` // MySQL replica auth password
ReplicaSet string `json:"replicaSet,omitempty"` // MongoDB replica set name
AuthSource string `json:"authSource,omitempty"` // MongoDB authSource
ReadPreference string `json:"readPreference,omitempty"` // MongoDB readPreference
MongoSRV bool `json:"mongoSrv,omitempty"` // MongoDB use mongodb+srv URI scheme
MongoAuthMechanism string `json:"mongoAuthMechanism,omitempty"` // MongoDB authMechanism
MongoReplicaUser string `json:"mongoReplicaUser,omitempty"` // MongoDB replica auth user
MongoReplicaPassword string `json:"mongoReplicaPassword,omitempty"` // MongoDB replica auth password
}
// QueryResult is the standard response format for Wails methods
@@ -55,6 +69,7 @@ type QueryResult struct {
Message string `json:"message"`
Data interface{} `json:"data"`
Fields []string `json:"fields,omitempty"`
QueryID string `json:"queryId,omitempty"` // Unique ID for query cancellation
}
// ColumnDefinition represents a table column
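The new `UseHTTPTunnel`/`HTTPTunnel` fields describe an independent HTTP CONNECT tunnel. As an illustration only (this is the standard CONNECT handshake, not the project's actual tunnel code), a dialer using those fields might look like:

```go
package main

import (
	"bufio"
	"encoding/base64"
	"fmt"
	"net"
	"net/http"
	"time"
)

// dialViaHTTPTunnel opens a TCP connection to the tunnel host, issues an
// HTTP CONNECT for the final target, and returns the raw connection for
// the database driver to use. Illustration of the handshake only.
func dialViaHTTPTunnel(tunnelAddr, targetAddr, user, pass string) (net.Conn, error) {
	conn, err := net.DialTimeout("tcp", tunnelAddr, 10*time.Second)
	if err != nil {
		return nil, err
	}
	req := fmt.Sprintf("CONNECT %s HTTP/1.1\r\nHost: %s\r\n", targetAddr, targetAddr)
	if user != "" {
		cred := base64.StdEncoding.EncodeToString([]byte(user + ":" + pass))
		req += "Proxy-Authorization: Basic " + cred + "\r\n"
	}
	req += "\r\n"
	if _, err := conn.Write([]byte(req)); err != nil {
		conn.Close()
		return nil, err
	}
	resp, err := http.ReadResponse(bufio.NewReader(conn), nil)
	if err != nil {
		conn.Close()
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		conn.Close()
		return nil, fmt.Errorf("tunnel refused CONNECT: %s", resp.Status)
	}
	return conn, nil
}

func main() {
	// Requires a live CONNECT-capable proxy; shown for illustration.
	_, _ = dialViaHTTPTunnel("127.0.0.1:8080", "db.internal:9000", "", "")
}
```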


@@ -0,0 +1,680 @@
//go:build gonavi_full_drivers || gonavi_clickhouse_driver
package db
import (
"context"
"database/sql"
"fmt"
"net"
"net/url"
"strconv"
"strings"
"time"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/logger"
"GoNavi-Wails/internal/ssh"
"GoNavi-Wails/internal/utils"
clickhouse "github.com/ClickHouse/clickhouse-go/v2"
)
const (
defaultClickHousePort = 9000
defaultClickHouseUser = "default"
defaultClickHouseDatabase = "default"
minClickHouseReadTimeout = 5 * time.Minute
)
type ClickHouseDB struct {
conn *sql.DB
pingTimeout time.Duration
forwarder *ssh.LocalForwarder
database string
}
func normalizeClickHouseConfig(config connection.ConnectionConfig) connection.ConnectionConfig {
normalized := applyClickHouseURI(config)
if strings.TrimSpace(normalized.Host) == "" {
normalized.Host = "localhost"
}
if normalized.Port <= 0 {
normalized.Port = defaultClickHousePort
}
if strings.TrimSpace(normalized.User) == "" {
normalized.User = defaultClickHouseUser
}
if strings.TrimSpace(normalized.Database) == "" {
normalized.Database = defaultClickHouseDatabase
}
return normalized
}
func applyClickHouseURI(config connection.ConnectionConfig) connection.ConnectionConfig {
uriText := strings.TrimSpace(config.URI)
if uriText == "" {
return config
}
lowerURI := strings.ToLower(uriText)
if !strings.HasPrefix(lowerURI, "clickhouse://") {
return config
}
parsed, err := url.Parse(uriText)
if err != nil {
return config
}
if parsed.User != nil {
if strings.TrimSpace(config.User) == "" {
config.User = parsed.User.Username()
}
if pass, ok := parsed.User.Password(); ok && config.Password == "" {
config.Password = pass
}
}
if dbName := strings.TrimPrefix(strings.TrimSpace(parsed.Path), "/"); dbName != "" && strings.TrimSpace(config.Database) == "" {
config.Database = dbName
}
if strings.TrimSpace(config.Database) == "" {
if dbName := strings.TrimSpace(parsed.Query().Get("database")); dbName != "" {
config.Database = dbName
}
}
defaultPort := config.Port
if defaultPort <= 0 {
defaultPort = defaultClickHousePort
}
if strings.TrimSpace(config.Host) == "" {
host, port, ok := parseHostPortWithDefault(parsed.Host, defaultPort)
if ok {
config.Host = host
config.Port = port
}
}
if config.Port <= 0 {
config.Port = defaultPort
}
return config
}
func (c *ClickHouseDB) buildClickHouseOptions(config connection.ConnectionConfig) *clickhouse.Options {
connectTimeout := getConnectTimeout(config)
readTimeout := connectTimeout
if readTimeout < minClickHouseReadTimeout {
readTimeout = minClickHouseReadTimeout
}
protocol := detectClickHouseProtocol(config)
opts := &clickhouse.Options{
Protocol: protocol,
Addr: []string{
net.JoinHostPort(config.Host, strconv.Itoa(config.Port)),
},
Auth: clickhouse.Auth{
Database: strings.TrimSpace(config.Database),
Username: strings.TrimSpace(config.User),
Password: config.Password,
},
DialTimeout: connectTimeout,
ReadTimeout: readTimeout,
}
if tlsConfig := resolveGenericTLSConfig(config); tlsConfig != nil {
opts.TLS = tlsConfig
}
return opts
}
func detectClickHouseProtocol(config connection.ConnectionConfig) clickhouse.Protocol {
uriText := strings.ToLower(strings.TrimSpace(config.URI))
if strings.HasPrefix(uriText, "http://") || strings.HasPrefix(uriText, "https://") {
return clickhouse.HTTP
}
if config.Port == 8123 || config.Port == 8443 {
return clickhouse.HTTP
}
return clickhouse.Native
}
func isClickHouseProtocolMismatch(err error) bool {
if err == nil {
return false
}
text := strings.ToLower(strings.TrimSpace(err.Error()))
if text == "" {
return false
}
return strings.Contains(text, "unexpected packet [72]") ||
(strings.Contains(text, "unexpected packet") && strings.Contains(text, "handshake")) ||
strings.Contains(text, "http response to https client") ||
strings.Contains(text, "malformed http response")
}
func withClickHouseProtocol(config connection.ConnectionConfig, protocol clickhouse.Protocol) connection.ConnectionConfig {
next := config
switch protocol {
case clickhouse.HTTP:
if next.Port == 0 {
next.Port = 8123
}
default:
if next.Port == 0 {
next.Port = defaultClickHousePort
}
}
return next
}
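The protocol selection above can be exercised in isolation. A dependency-free restatement of the heuristic in `detectClickHouseProtocol` (returning strings instead of `clickhouse.Protocol` values): an `http(s)://` URI or a well-known HTTP port (8123 plain, 8443 TLS) selects HTTP; everything else defaults to the native TCP protocol (9000/9440).

```go
package main

import (
	"fmt"
	"strings"
)

// guessProtocol restates the port/URI heuristic without the driver types.
func guessProtocol(uri string, port int) string {
	lower := strings.ToLower(strings.TrimSpace(uri))
	if strings.HasPrefix(lower, "http://") || strings.HasPrefix(lower, "https://") {
		return "http"
	}
	if port == 8123 || port == 8443 {
		return "http"
	}
	return "native"
}

func main() {
	fmt.Println(guessProtocol("", 9000))                       // native
	fmt.Println(guessProtocol("", 8123))                       // http
	fmt.Println(guessProtocol("https://ch.example.com", 9000)) // http
}
```

When the first attempt fails with a mismatch signature (`isClickHouseProtocolMismatch`), the Connect loop retries with the other protocol; otherwise it stops early.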
func (c *ClickHouseDB) Connect(config connection.ConnectionConfig) error {
if supported, reason := DriverRuntimeSupportStatus("clickhouse"); !supported {
if strings.TrimSpace(reason) == "" {
reason = "ClickHouse 纯 Go 驱动未启用,请先在驱动管理中安装启用"
}
return fmt.Errorf("%s", reason)
}
if c.forwarder != nil {
_ = c.forwarder.Close()
c.forwarder = nil
}
if c.conn != nil {
_ = c.conn.Close()
c.conn = nil
}
runConfig := normalizeClickHouseConfig(config)
c.pingTimeout = getConnectTimeout(runConfig)
c.database = runConfig.Database
if runConfig.UseSSH {
logger.Infof("ClickHouse 使用 SSH 连接:地址=%s:%d 用户=%s", runConfig.Host, runConfig.Port, runConfig.User)
forwarder, err := ssh.GetOrCreateLocalForwarder(runConfig.SSH, runConfig.Host, runConfig.Port)
if err != nil {
return fmt.Errorf("创建 SSH 隧道失败:%w", err)
}
c.forwarder = forwarder
host, portText, err := net.SplitHostPort(forwarder.LocalAddr)
if err != nil {
return fmt.Errorf("解析本地转发地址失败:%w", err)
}
port, err := strconv.Atoi(portText)
if err != nil {
return fmt.Errorf("解析本地端口失败:%w", err)
}
runConfig.Host = host
runConfig.Port = port
runConfig.UseSSH = false
logger.Infof("ClickHouse 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
}
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
var failures []string
for idx, attempt := range attempts {
primaryProtocol := detectClickHouseProtocol(attempt)
protocols := []clickhouse.Protocol{primaryProtocol}
if primaryProtocol == clickhouse.Native {
protocols = append(protocols, clickhouse.HTTP)
} else {
protocols = append(protocols, clickhouse.Native)
}
for pIdx, protocol := range protocols {
protocolConfig := withClickHouseProtocol(attempt, protocol)
c.conn = clickhouse.OpenDB(c.buildClickHouseOptions(protocolConfig))
if err := c.Ping(); err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接验证失败(protocol=%s): %v", idx+1, protocol.String(), err))
if c.conn != nil {
_ = c.conn.Close()
c.conn = nil
}
if pIdx == 0 && !isClickHouseProtocolMismatch(err) {
// 首次尝试的失败不具备协议误配特征时,避免无谓重试备用协议。
break
}
continue
}
if idx > 0 {
logger.Warnf("ClickHouse SSL 优先连接失败,已回退至明文连接")
}
if pIdx > 0 {
logger.Warnf("ClickHouse 已自动切换连接协议为 %s(常见于 8123/8443 HTTP 端口)", protocol.String())
}
return nil
}
}
_ = c.Close()
return fmt.Errorf("连接建立后验证失败(可检查 ClickHouse 端口与协议是否匹配:Native=9000/9440,HTTP=8123/8443):%s", strings.Join(failures, ";"))
}
func (c *ClickHouseDB) Close() error {
if c.forwarder != nil {
if err := c.forwarder.Close(); err != nil {
logger.Warnf("关闭 ClickHouse SSH 端口转发失败:%v", err)
}
c.forwarder = nil
}
if c.conn != nil {
return c.conn.Close()
}
return nil
}
func (c *ClickHouseDB) Ping() error {
if c.conn == nil {
return fmt.Errorf("connection not open")
}
timeout := c.pingTimeout
if timeout <= 0 {
timeout = 5 * time.Second
}
ctx, cancel := utils.ContextWithTimeout(timeout)
defer cancel()
return c.conn.PingContext(ctx)
}
func (c *ClickHouseDB) QueryContext(ctx context.Context, query string) ([]map[string]interface{}, []string, error) {
if c.conn == nil {
return nil, nil, fmt.Errorf("connection not open")
}
rows, err := c.conn.QueryContext(ctx, query)
if err != nil {
return nil, nil, err
}
defer rows.Close()
return scanRows(rows)
}
func (c *ClickHouseDB) Query(query string) ([]map[string]interface{}, []string, error) {
if c.conn == nil {
return nil, nil, fmt.Errorf("connection not open")
}
rows, err := c.conn.Query(query)
if err != nil {
return nil, nil, err
}
defer rows.Close()
return scanRows(rows)
}
func (c *ClickHouseDB) ExecContext(ctx context.Context, query string) (int64, error) {
if c.conn == nil {
return 0, fmt.Errorf("connection not open")
}
res, err := c.conn.ExecContext(ctx, query)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
func (c *ClickHouseDB) Exec(query string) (int64, error) {
if c.conn == nil {
return 0, fmt.Errorf("connection not open")
}
res, err := c.conn.Exec(query)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
func (c *ClickHouseDB) GetDatabases() ([]string, error) {
data, _, err := c.Query("SELECT name FROM system.databases ORDER BY name")
if err != nil {
return nil, err
}
result := make([]string, 0, len(data))
for _, row := range data {
if val, ok := getClickHouseValueFromRow(row, "name", "database"); ok {
result = append(result, fmt.Sprintf("%v", val))
continue
}
for _, value := range row {
result = append(result, fmt.Sprintf("%v", value))
break
}
}
return result, nil
}
func (c *ClickHouseDB) GetTables(dbName string) ([]string, error) {
targetDB := strings.TrimSpace(dbName)
if targetDB == "" {
targetDB = strings.TrimSpace(c.database)
}
var query string
if targetDB != "" {
query = fmt.Sprintf(
"SELECT name FROM system.tables WHERE database = '%s' ORDER BY name",
escapeClickHouseSQLLiteral(targetDB),
)
} else {
query = "SELECT database, name FROM system.tables ORDER BY database, name"
}
data, _, err := c.Query(query)
if err != nil {
return nil, err
}
result := make([]string, 0, len(data))
for _, row := range data {
if targetDB != "" {
if val, ok := getClickHouseValueFromRow(row, "name", "table", "table_name"); ok {
result = append(result, fmt.Sprintf("%v", val))
continue
}
} else {
databaseValue, hasDB := getClickHouseValueFromRow(row, "database", "schema_name")
tableValue, hasTable := getClickHouseValueFromRow(row, "name", "table", "table_name")
if hasDB && hasTable {
result = append(result, fmt.Sprintf("%v.%v", databaseValue, tableValue))
continue
}
}
for _, value := range row {
result = append(result, fmt.Sprintf("%v", value))
break
}
}
return result, nil
}
func (c *ClickHouseDB) GetCreateStatement(dbName, tableName string) (string, error) {
database, table, err := c.resolveDatabaseAndTable(dbName, tableName)
if err != nil {
return "", err
}
query := fmt.Sprintf("SHOW CREATE TABLE %s.%s", quoteClickHouseIdentifier(database), quoteClickHouseIdentifier(table))
data, _, err := c.Query(query)
if err != nil {
return "", err
}
if len(data) == 0 {
return "", fmt.Errorf("create statement not found")
}
row := data[0]
if val, ok := getClickHouseValueFromRow(row, "statement", "create_statement", "sql", "query"); ok {
text := strings.TrimSpace(fmt.Sprintf("%v", val))
if text != "" {
return text, nil
}
}
longest := ""
for _, value := range row {
text := strings.TrimSpace(fmt.Sprintf("%v", value))
if text == "" {
continue
}
if strings.Contains(strings.ToUpper(text), "CREATE ") && len(text) > len(longest) {
longest = text
}
}
if longest != "" {
return longest, nil
}
return "", fmt.Errorf("create statement not found")
}
func (c *ClickHouseDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
database, table, err := c.resolveDatabaseAndTable(dbName, tableName)
if err != nil {
return nil, err
}
query := fmt.Sprintf(`
SELECT
name,
type,
default_kind,
default_expression,
is_in_primary_key,
is_in_sorting_key,
comment
FROM system.columns
WHERE database = '%s' AND table = '%s'
ORDER BY position`,
escapeClickHouseSQLLiteral(database),
escapeClickHouseSQLLiteral(table),
)
data, _, err := c.Query(query)
if err != nil {
return nil, err
}
columns := make([]connection.ColumnDefinition, 0, len(data))
for _, row := range data {
nameValue, _ := getClickHouseValueFromRow(row, "name", "column_name")
typeValue, _ := getClickHouseValueFromRow(row, "type", "data_type")
defaultKind, _ := getClickHouseValueFromRow(row, "default_kind")
defaultExpr, hasDefault := getClickHouseValueFromRow(row, "default_expression", "column_default")
commentValue, _ := getClickHouseValueFromRow(row, "comment")
inPrimary, _ := getClickHouseValueFromRow(row, "is_in_primary_key")
inSorting, _ := getClickHouseValueFromRow(row, "is_in_sorting_key")
colType := strings.TrimSpace(fmt.Sprintf("%v", typeValue))
nullable := "NO"
if strings.HasPrefix(strings.ToLower(colType), "nullable(") {
nullable = "YES"
}
key := ""
if isClickHouseTruthy(inPrimary) {
key = "PRI"
} else if isClickHouseTruthy(inSorting) {
key = "MUL"
}
extra := ""
kindText := strings.ToUpper(strings.TrimSpace(fmt.Sprintf("%v", defaultKind)))
if kindText != "" && kindText != "DEFAULT" {
extra = kindText
}
col := connection.ColumnDefinition{
Name: strings.TrimSpace(fmt.Sprintf("%v", nameValue)),
Type: colType,
Nullable: nullable,
Key: key,
Extra: extra,
Comment: strings.TrimSpace(fmt.Sprintf("%v", commentValue)),
}
if hasDefault && defaultExpr != nil {
text := strings.TrimSpace(fmt.Sprintf("%v", defaultExpr))
if text != "" {
col.Default = &text
}
}
columns = append(columns, col)
}
return columns, nil
}
func (c *ClickHouseDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
targetDB := strings.TrimSpace(dbName)
if targetDB == "" {
targetDB = strings.TrimSpace(c.database)
}
var query string
if targetDB != "" {
query = fmt.Sprintf(`
SELECT
database,
table,
name,
type
FROM system.columns
WHERE database = '%s'
ORDER BY table, position`,
escapeClickHouseSQLLiteral(targetDB),
)
} else {
query = `
SELECT
database,
table,
name,
type
FROM system.columns
WHERE database NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA')
ORDER BY database, table, position`
}
data, _, err := c.Query(query)
if err != nil {
return nil, err
}
result := make([]connection.ColumnDefinitionWithTable, 0, len(data))
for _, row := range data {
databaseValue, _ := getClickHouseValueFromRow(row, "database")
tableValue, hasTable := getClickHouseValueFromRow(row, "table", "table_name")
nameValue, hasName := getClickHouseValueFromRow(row, "name", "column_name")
typeValue, _ := getClickHouseValueFromRow(row, "type", "data_type")
if !hasTable || !hasName {
continue
}
tableName := strings.TrimSpace(fmt.Sprintf("%v", tableValue))
if targetDB == "" {
dbText := strings.TrimSpace(fmt.Sprintf("%v", databaseValue))
if dbText != "" {
tableName = dbText + "." + tableName
}
}
result = append(result, connection.ColumnDefinitionWithTable{
TableName: tableName,
Name: strings.TrimSpace(fmt.Sprintf("%v", nameValue)),
Type: strings.TrimSpace(fmt.Sprintf("%v", typeValue)),
})
}
return result, nil
}
func (c *ClickHouseDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
return []connection.IndexDefinition{}, nil
}
func (c *ClickHouseDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
return []connection.ForeignKeyDefinition{}, nil
}
func (c *ClickHouseDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
return []connection.TriggerDefinition{}, nil
}
func (c *ClickHouseDB) resolveDatabaseAndTable(dbName, tableName string) (string, string, error) {
rawTable := strings.TrimSpace(tableName)
if rawTable == "" {
return "", "", fmt.Errorf("table name required")
}
resolvedDB := strings.TrimSpace(dbName)
resolvedTable := rawTable
if parts := strings.SplitN(rawTable, ".", 2); len(parts) == 2 {
if dbPart := normalizeClickHouseIdentifierPart(parts[0]); dbPart != "" {
resolvedDB = dbPart
}
resolvedTable = normalizeClickHouseIdentifierPart(parts[1])
} else {
resolvedTable = normalizeClickHouseIdentifierPart(rawTable)
}
if resolvedDB == "" {
resolvedDB = strings.TrimSpace(c.database)
}
if resolvedDB == "" {
resolvedDB = defaultClickHouseDatabase
}
if resolvedTable == "" {
return "", "", fmt.Errorf("table name required")
}
return resolvedDB, resolvedTable, nil
}
func normalizeClickHouseIdentifierPart(raw string) string {
text := strings.TrimSpace(raw)
if len(text) >= 2 {
first := text[0]
last := text[len(text)-1]
if (first == '`' && last == '`') || (first == '"' && last == '"') {
text = text[1 : len(text)-1]
}
}
return strings.TrimSpace(text)
}
func quoteClickHouseIdentifier(raw string) string {
return "`" + strings.ReplaceAll(strings.TrimSpace(raw), "`", "``") + "`"
}
func escapeClickHouseSQLLiteral(raw string) string {
return strings.ReplaceAll(strings.TrimSpace(raw), "'", "''")
}
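The two helpers above follow ClickHouse's quoting rules: identifiers are backtick-quoted with embedded backticks doubled, and string literals double any embedded single quote. A self-contained copy of the same logic with its effect shown:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent applies the same rule as quoteClickHouseIdentifier.
func quoteIdent(raw string) string {
	return "`" + strings.ReplaceAll(strings.TrimSpace(raw), "`", "``") + "`"
}

// escapeLiteral applies the same rule as escapeClickHouseSQLLiteral.
func escapeLiteral(raw string) string {
	return strings.ReplaceAll(strings.TrimSpace(raw), "'", "''")
}

func main() {
	fmt.Println(quoteIdent("my`table")) // `my``table`
	fmt.Println(escapeLiteral("it's"))  // it''s
	fmt.Printf("WHERE database = '%s'\n", escapeLiteral("o'brien"))
}
```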
func getClickHouseValueFromRow(row map[string]interface{}, keys ...string) (interface{}, bool) {
if len(row) == 0 {
return nil, false
}
for _, key := range keys {
if value, ok := row[key]; ok {
return value, true
}
}
for existingKey, value := range row {
for _, key := range keys {
if strings.EqualFold(existingKey, key) {
return value, true
}
}
}
return nil, false
}
func isClickHouseTruthy(value interface{}) bool {
switch val := value.(type) {
case bool:
return val
case int:
return val != 0
case int8:
return val != 0
case int16:
return val != 0
case int32:
return val != 0
case int64:
return val != 0
case uint:
return val != 0
case uint8:
return val != 0
case uint16:
return val != 0
case uint32:
return val != 0
case uint64:
return val != 0
case string:
normalized := strings.ToLower(strings.TrimSpace(val))
return normalized == "1" || normalized == "true" || normalized == "yes" || normalized == "y"
default:
normalized := strings.ToLower(strings.TrimSpace(fmt.Sprintf("%v", value)))
return normalized == "1" || normalized == "true" || normalized == "yes" || normalized == "y"
}
}


@@ -8,6 +8,7 @@ import (
"fmt"
"net"
"net/url"
"sort"
"strconv"
"strings"
"time"
@@ -36,6 +37,14 @@ func (d *DamengDB) getDSN(config connection.ConnectionConfig) string {
if config.Database != "" {
q.Set("schema", config.Database)
}
if config.UseSSL {
if certPath := strings.TrimSpace(config.SSLCertPath); certPath != "" {
q.Set("SSL_CERT_PATH", certPath)
}
if keyPath := strings.TrimSpace(config.SSLKeyPath); keyPath != "" {
q.Set("SSL_KEY_PATH", keyPath)
}
}
if escapedPassword != config.Password {
// 达梦驱动要求:密码包含特殊字符时,password 需 PathEscape,并添加 escapeProcess=true 让驱动解码。
q.Set("escapeProcess", "true")
@@ -50,8 +59,12 @@ func (d *DamengDB) getDSN(config connection.ConnectionConfig) string {
}
func (d *DamengDB) Connect(config connection.ConnectionConfig) error {
var dsn string
var err error
runConfig := config
if runConfig.UseSSL {
if strings.TrimSpace(runConfig.SSLCertPath) == "" || strings.TrimSpace(runConfig.SSLKeyPath) == "" {
return fmt.Errorf("达梦启用 SSL 需要同时配置证书路径(sslCertPath)与私钥路径(sslKeyPath)")
}
}
if config.UseSSH {
// Create SSH tunnel with local port forwarding
@@ -80,22 +93,37 @@ func (d *DamengDB) Connect(config connection.ConnectionConfig) error {
localConfig.Port = port
localConfig.UseSSH = false
dsn = d.getDSN(localConfig)
runConfig = localConfig
logger.Infof("达梦数据库通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
} else {
dsn = d.getDSN(config)
}
db, err := sql.Open("dm", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
d.conn = db
d.pingTimeout = getConnectTimeout(config)
if err := d.Ping(); err != nil {
return fmt.Errorf("连接建立后验证失败:%w", err)
var failures []string
for idx, attempt := range attempts {
dsn := d.getDSN(attempt)
db, err := sql.Open("dm", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接打开失败: %v", idx+1, err))
continue
}
d.conn = db
d.pingTimeout = getConnectTimeout(attempt)
if err := d.Ping(); err != nil {
_ = db.Close()
d.conn = nil
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
continue
}
if idx > 0 {
logger.Warnf("达梦 SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, ";"))
}
func (d *DamengDB) Close() error {
@@ -177,24 +205,82 @@ func (d *DamengDB) Exec(query string) (int64, error) {
}
func (d *DamengDB) GetDatabases() ([]string, error) {
// DM: List Users/Schemas
data, _, err := d.Query("SELECT username FROM dba_users")
if err != nil {
// Fallback if dba_users not accessible
data, _, err = d.Query("SELECT username FROM all_users")
// DM exposes users/schemas as the database list, and what is visible differs by privilege.
// Aggregate several catalog queries here so that relying on a single view does not under-report databases.
queries := []string{
"SELECT USERNAME AS DATABASE_NAME FROM SYS.DBA_USERS ORDER BY USERNAME",
"SELECT USERNAME AS DATABASE_NAME FROM DBA_USERS ORDER BY USERNAME",
"SELECT USERNAME AS DATABASE_NAME FROM ALL_USERS ORDER BY USERNAME",
"SELECT USERNAME AS DATABASE_NAME FROM USER_USERS",
"SELECT DISTINCT OWNER AS DATABASE_NAME FROM ALL_TABLES ORDER BY OWNER",
}
seen := make(map[string]struct{})
dbs := make([]string, 0, 64)
var lastErr error
success := false
for _, q := range queries {
data, _, err := d.Query(q)
if err != nil {
return nil, err
lastErr = err
continue
}
success = true
for _, row := range data {
name := getDamengRowString(row, "DATABASE_NAME", "USERNAME", "OWNER", "SCHEMA_NAME")
if name == "" {
// Fall back to the first column to tolerate driver differences in returned column names.
for _, v := range row {
text := strings.TrimSpace(fmt.Sprintf("%v", v))
if text == "" || strings.EqualFold(text, "<nil>") {
continue
}
name = text
break
}
}
if name == "" {
continue
}
key := strings.ToUpper(name)
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
dbs = append(dbs, name)
}
}
var dbs []string
for _, row := range data {
if val, ok := row["USERNAME"]; ok {
dbs = append(dbs, fmt.Sprintf("%v", val))
}
if !success && lastErr != nil {
return nil, lastErr
}
sort.Slice(dbs, func(i, j int) bool {
return strings.ToUpper(dbs[i]) < strings.ToUpper(dbs[j])
})
return dbs, nil
}
func getDamengRowString(row map[string]interface{}, keys ...string) string {
if len(row) == 0 {
return ""
}
for _, key := range keys {
for k, v := range row {
if !strings.EqualFold(strings.TrimSpace(k), strings.TrimSpace(key)) {
continue
}
text := strings.TrimSpace(fmt.Sprintf("%v", v))
if text == "" || strings.EqualFold(text, "<nil>") {
return ""
}
return text
}
}
return ""
}
func (d *DamengDB) GetTables(dbName string) ([]string, error) {
query := fmt.Sprintf("SELECT owner, table_name FROM all_tables WHERE owner = '%s' ORDER BY table_name", strings.ToUpper(dbName))
if dbName == "" {

View File

@@ -15,4 +15,5 @@ func registerOptionalDatabaseFactories() {
registerDatabaseFactory(newOptionalDriverAgentDatabase("vastbase"), "vastbase")
registerDatabaseFactory(newOptionalDriverAgentDatabase("mongodb"), "mongodb")
registerDatabaseFactory(newOptionalDriverAgentDatabase("tdengine"), "tdengine")
registerDatabaseFactory(newOptionalDriverAgentDatabase("clickhouse"), "clickhouse")
}

View File

@@ -15,4 +15,5 @@ func registerOptionalDatabaseFactories() {
registerDatabaseFactory(newOptionalDriverAgentDatabase("vastbase"), "vastbase")
registerDatabaseFactory(newOptionalDriverAgentDatabase("mongodb"), "mongodb")
registerDatabaseFactory(newOptionalDriverAgentDatabase("tdengine"), "tdengine")
registerDatabaseFactory(newOptionalDriverAgentDatabase("clickhouse"), "clickhouse")
}

View File

@@ -21,7 +21,7 @@ const (
defaultDirosPort = 9030
)
// DirosDB uses a dedicated driver name (diros); the underlying protocol is MySQL-compatible.
// DirosDB uses a dedicated driver name (diros); the underlying protocol is MySQL-compatible (displayed to users as Doris).
type DirosDB struct {
MySQLDB
}
@@ -146,14 +146,15 @@ func (d *DirosDB) getDSN(config connection.ConnectionConfig) string {
protocol = netName
address = normalizeMySQLAddress(config.Host, config.Port)
} else {
logger.Warnf("注册 Diros SSH 网络失败,将尝试直连:地址=%s:%d 用户=%s,原因:%v", config.Host, config.Port, config.User, err)
logger.Warnf("注册 Doris SSH 网络失败,将尝试直连:地址=%s:%d 用户=%s,原因:%v", config.Host, config.Port, config.User, err)
}
}
timeout := getConnectTimeoutSeconds(config)
tlsMode := resolveMySQLTLSMode(config)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds",
config.User, config.Password, protocol, address, database, timeout)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
}
func resolveDirosCredential(config connection.ConnectionConfig, addressIndex int) (string, string) {
@@ -177,7 +178,7 @@ func (d *DirosDB) Connect(config connection.ConnectionConfig) error {
runConfig := applyDirosURI(config)
addresses := collectDirosAddresses(runConfig)
if len(addresses) == 0 {
return fmt.Errorf("连接建立后验证失败:未找到可用的 Diros 地址")
return fmt.Errorf("连接建立后验证失败:未找到可用的 Doris 地址")
}
var errorDetails []string
@@ -214,7 +215,7 @@ func (d *DirosDB) Connect(config connection.ConnectionConfig) error {
}
if len(errorDetails) == 0 {
return fmt.Errorf("连接建立后验证失败:未找到可用的 Diros 地址")
return fmt.Errorf("连接建立后验证失败:未找到可用的 Doris 地址")
}
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(errorDetails, ";"))
}

View File

@@ -0,0 +1,74 @@
package db
import (
"debug/pe"
"fmt"
"runtime"
"strings"
)
const (
peMachineI386 uint16 = 0x014c
peMachineAmd64 uint16 = 0x8664
peMachineArm64 uint16 = 0xaa64
)
func windowsMachineLabel(machine uint16) string {
switch machine {
case peMachineI386:
return "windows-386"
case peMachineAmd64:
return "windows-amd64"
case peMachineArm64:
return "windows-arm64"
default:
return fmt.Sprintf("windows-unknown(0x%04x)", machine)
}
}
func expectedWindowsMachineForGoArch(goarch string) (uint16, string, bool) {
switch strings.ToLower(strings.TrimSpace(goarch)) {
case "386":
return peMachineI386, "windows-386", true
case "amd64":
return peMachineAmd64, "windows-amd64", true
case "arm64":
return peMachineArm64, "windows-arm64", true
default:
return 0, "", false
}
}
func validateWindowsExecutableMachine(pathText string) error {
file, err := pe.Open(pathText)
if err != nil {
return fmt.Errorf("无法识别为有效的 Windows 可执行文件:%w", err)
}
defer file.Close()
expectedMachine, expectedLabel, ok := expectedWindowsMachineForGoArch(runtime.GOARCH)
if !ok {
return nil
}
actualMachine := file.FileHeader.Machine
if actualMachine != expectedMachine {
return fmt.Errorf("可执行文件架构不兼容(文件=%s,当前进程=%s)", windowsMachineLabel(actualMachine), expectedLabel)
}
return nil
}
// ValidateOptionalDriverAgentExecutable checks whether an optional driver-agent binary is executable in the current process.
// Currently this is mainly a PE architecture check on Windows, preventing reuse of an old agent built for the wrong architecture after an upgrade.
func ValidateOptionalDriverAgentExecutable(driverType string, executablePath string) error {
pathText := strings.TrimSpace(executablePath)
if pathText == "" {
return fmt.Errorf("%s 驱动代理路径为空", driverDisplayName(driverType))
}
if runtime.GOOS != "windows" {
return nil
}
if err := validateWindowsExecutableMachine(pathText); err != nil {
return fmt.Errorf("%s 驱动代理不可用:%w", driverDisplayName(driverType), err)
}
return nil
}

View File

@@ -18,18 +18,19 @@ var coreBuiltinDrivers = map[string]struct{}{
// optionalGoDrivers lists the pure-Go drivers that must be installed and enabled by the user before use.
// Note: this is runtime gating (the installed.json marker) and does not reduce the size of the main binary.
var optionalGoDrivers = map[string]struct{}{
"mariadb": {},
"diros": {},
"sphinx": {},
"sqlserver": {},
"sqlite": {},
"duckdb": {},
"dameng": {},
"kingbase": {},
"highgo": {},
"vastbase": {},
"mongodb": {},
"tdengine": {},
"mariadb": {},
"diros": {},
"sphinx": {},
"sqlserver": {},
"sqlite": {},
"duckdb": {},
"dameng": {},
"kingbase": {},
"highgo": {},
"vastbase": {},
"mongodb": {},
"tdengine": {},
"clickhouse": {},
}
var (
@@ -60,7 +61,7 @@ func driverDisplayName(driverType string) string {
case "mariadb":
return "MariaDB"
case "diros":
return "Diros"
return "Doris"
case "sphinx":
return "Sphinx"
case "postgres":
@@ -83,6 +84,8 @@ func driverDisplayName(driverType string) string {
return "MongoDB"
case "tdengine":
return "TDengine"
case "clickhouse":
return "ClickHouse"
default:
return strings.ToUpper(strings.TrimSpace(driverType))
}
@@ -191,6 +194,9 @@ func optionalGoDriverRuntimeReady(driverType string) (bool, string) {
if statErr != nil || info.IsDir() {
return false, fmt.Sprintf("%s 驱动代理缺失,请在驱动管理中重新安装启用", driverDisplayName(normalized))
}
if validateErr := ValidateOptionalDriverAgentExecutable(normalized, executablePath); validateErr != nil {
return false, fmt.Sprintf("%s,请在驱动管理中重新安装启用", validateErr.Error())
}
return true, ""
}

View File

@@ -65,11 +65,22 @@ func TestManagedDriverRequiresInstallMarker(t *testing.T) {
if err != nil {
t.Fatalf("解析 mariadb 代理路径失败: %v", err)
}
if err := os.WriteFile(executablePath, []byte("placeholder"), 0o755); err != nil {
t.Fatalf("写入 mariadb 代理占位文件失败: %v", err)
}
if runtime.GOOS == "windows" {
_ = os.Chmod(executablePath, 0o644)
selfPath, selfErr := os.Executable()
if selfErr != nil {
t.Fatalf("获取测试进程路径失败: %v", selfErr)
}
content, readErr := os.ReadFile(selfPath)
if readErr != nil {
t.Fatalf("读取测试进程失败: %v", readErr)
}
if err := os.WriteFile(executablePath, content, 0o755); err != nil {
t.Fatalf("写入 mariadb 代理占位可执行文件失败: %v", err)
}
} else {
if err := os.WriteFile(executablePath, []byte("placeholder"), 0o755); err != nil {
t.Fatalf("写入 mariadb 代理占位文件失败: %v", err)
}
}
supported, reason := DriverRuntimeSupportStatus("mariadb")

View File

@@ -5,6 +5,7 @@ package db
import (
"strings"
"testing"
"time"
"GoNavi-Wails/internal/connection"
)
@@ -32,6 +33,44 @@ func TestPostgresDSN_EscapesPassword(t *testing.T) {
}
}
func TestPostgresDSN_SSLModeRequireWhenEnabled(t *testing.T) {
p := &PostgresDB{}
cfg := connection.ConnectionConfig{
Type: "postgres",
Host: "127.0.0.1",
Port: 5432,
User: "user",
Password: "pass",
Database: "db",
UseSSL: true,
SSLMode: "required",
}
dsn := p.getDSN(cfg)
if !strings.Contains(dsn, "sslmode=require") {
t.Fatalf("dsn 缺少 sslmode=require 参数:%s", dsn)
}
}
func TestMySQLDSN_UsesTLSParamWhenSSLEnabled(t *testing.T) {
m := &MySQLDB{}
cfg := connection.ConnectionConfig{
Type: "mysql",
Host: "127.0.0.1",
Port: 3306,
User: "root",
Password: "pass",
Database: "db",
UseSSL: true,
SSLMode: "required",
}
dsn := m.getDSN(cfg)
if !strings.Contains(dsn, "tls=true") {
t.Fatalf("dsn 缺少 tls=true 参数:%s", dsn)
}
}
func TestOracleDSN_EscapesUserAndPassword(t *testing.T) {
o := &OracleDB{}
cfg := connection.ConnectionConfig{
@@ -81,6 +120,30 @@ func TestDamengDSN_EscapesPasswordAndEnablesEscapeProcess(t *testing.T) {
}
}
func TestDamengDSN_AppendsSSLCertAndKeyParams(t *testing.T) {
d := &DamengDB{}
cfg := connection.ConnectionConfig{
Type: "dameng",
Host: "127.0.0.1",
Port: 5236,
User: "SYSDBA",
Password: "pass",
Database: "DBName",
UseSSL: true,
SSLMode: "required",
SSLCertPath: "C:\\certs\\client-cert.pem",
SSLKeyPath: "C:\\certs\\client-key.pem",
}
dsn := d.getDSN(cfg)
if !strings.Contains(dsn, "SSL_CERT_PATH=") {
t.Fatalf("dsn 缺少 SSL_CERT_PATH 参数:%s", dsn)
}
if !strings.Contains(dsn, "SSL_KEY_PATH=") {
t.Fatalf("dsn 缺少 SSL_KEY_PATH 参数:%s", dsn)
}
}
func TestKingbaseDSN_QuotesPasswordWithSpaces(t *testing.T) {
k := &KingbaseDB{}
cfg := connection.ConnectionConfig{
@@ -114,3 +177,113 @@ func TestTDengineDSN_UsesWebSocketFormat(t *testing.T) {
t.Fatalf("tdengine dsn 格式不正确:%s", dsn)
}
}
func TestTDengineDSN_UsesSecureWebSocketWhenSSLEnabled(t *testing.T) {
td := &TDengineDB{}
cfg := connection.ConnectionConfig{
Type: "tdengine",
Host: "127.0.0.1",
Port: 6041,
User: "root",
Password: "taosdata",
Database: "power",
UseSSL: true,
SSLMode: "required",
}
dsn := td.getDSN(cfg)
if !strings.HasPrefix(dsn, "root:taosdata@wss(127.0.0.1:6041)/power") {
t.Fatalf("tdengine ssl dsn 格式不正确:%s", dsn)
}
}
func TestSQLServerDSN_EncryptMapping(t *testing.T) {
s := &SqlServerDB{}
cfg := connection.ConnectionConfig{
Type: "sqlserver",
Host: "127.0.0.1",
Port: 1433,
User: "sa",
Password: "pass",
Database: "master",
UseSSL: true,
SSLMode: "required",
}
dsn := s.getDSN(cfg)
if !strings.Contains(strings.ToLower(dsn), "encrypt=true") {
t.Fatalf("sqlserver dsn 缺少 encrypt=true:%s", dsn)
}
if !strings.Contains(strings.ToLower(dsn), "trustservercertificate=false") {
t.Fatalf("sqlserver dsn 缺少 TrustServerCertificate=false:%s", dsn)
}
}
func TestClickHouseOptions_UsesStructuredTimeoutAndAuth(t *testing.T) {
c := &ClickHouseDB{}
cfg := normalizeClickHouseConfig(connection.ConnectionConfig{
Type: "clickhouse",
Host: "127.0.0.1",
Port: 9000,
User: "default",
Password: "p@ss:wo/rd",
Database: "analytics",
Timeout: 15,
})
opts := c.buildClickHouseOptions(cfg)
if opts == nil {
t.Fatal("options 为空")
}
if len(opts.Addr) != 1 || opts.Addr[0] != "127.0.0.1:9000" {
t.Fatalf("addr 不符合预期:%v", opts.Addr)
}
if opts.Auth.Username != "default" {
t.Fatalf("username 不符合预期:%s", opts.Auth.Username)
}
if opts.Auth.Password != cfg.Password {
t.Fatalf("password 不符合预期:%s", opts.Auth.Password)
}
if opts.Auth.Database != "analytics" {
t.Fatalf("database 不符合预期:%s", opts.Auth.Database)
}
if opts.DialTimeout != 15*time.Second {
t.Fatalf("dial timeout 不符合预期:%s", opts.DialTimeout)
}
if opts.ReadTimeout != minClickHouseReadTimeout {
t.Fatalf("read timeout 不符合预期:%s", opts.ReadTimeout)
}
if _, ok := opts.Settings["write_timeout"]; ok {
t.Fatalf("options 不应包含 write_timeout 设置:%v", opts.Settings)
}
if _, ok := opts.Settings["read_timeout"]; ok {
t.Fatalf("options 不应通过 settings 传递 read_timeout:%v", opts.Settings)
}
if _, ok := opts.Settings["dial_timeout"]; ok {
t.Fatalf("options 不应通过 settings 传递 dial_timeout:%v", opts.Settings)
}
}
func TestClickHouseOptions_ReadTimeoutUsesLargerConfiguredTimeout(t *testing.T) {
c := &ClickHouseDB{}
cfg := normalizeClickHouseConfig(connection.ConnectionConfig{
Type: "clickhouse",
Host: "127.0.0.1",
Port: 9000,
User: "default",
Password: "secret",
Database: "analytics",
Timeout: 900,
})
opts := c.buildClickHouseOptions(cfg)
if opts == nil {
t.Fatal("options 为空")
}
if opts.DialTimeout != 900*time.Second {
t.Fatalf("dial timeout 不符合预期:%s", opts.DialTimeout)
}
if opts.ReadTimeout != 900*time.Second {
t.Fatalf("read timeout 不符合预期:%s", opts.ReadTimeout)
}
}

View File

@@ -42,7 +42,7 @@ func (h *HighGoDB) getDSN(config connection.ConnectionConfig) string {
}
u.User = url.UserPassword(config.User, config.Password)
q := url.Values{}
q.Set("sslmode", "disable")
q.Set("sslmode", resolvePostgresSSLMode(config))
q.Set("connect_timeout", strconv.Itoa(getConnectTimeoutSeconds(config)))
u.RawQuery = q.Encode()
@@ -50,7 +50,7 @@ func (h *HighGoDB) getDSN(config connection.ConnectionConfig) string {
}
func (h *HighGoDB) Connect(config connection.ConnectionConfig) error {
var dsn string
runConfig := config
if config.UseSSH {
logger.Infof("HighGo 使用 SSH 连接:地址=%s:%d 用户=%s", config.Host, config.Port, config.User)
@@ -76,23 +76,37 @@ func (h *HighGoDB) Connect(config connection.ConnectionConfig) error {
localConfig.Port = port
localConfig.UseSSH = false
dsn = h.getDSN(localConfig)
runConfig = localConfig
logger.Infof("HighGo 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
} else {
dsn = h.getDSN(config)
}
db, err := sql.Open("highgo", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
h.conn = db
h.pingTimeout = getConnectTimeout(config)
if err := h.Ping(); err != nil {
return fmt.Errorf("连接建立后验证失败:%w", err)
var failures []string
for idx, attempt := range attempts {
dsn := h.getDSN(attempt)
db, err := sql.Open("highgo", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接打开失败: %v", idx+1, err))
continue
}
h.conn = db
h.pingTimeout = getConnectTimeout(attempt)
if err := h.Ping(); err != nil {
_ = db.Close()
h.conn = nil
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
continue
}
if idx > 0 {
logger.Warnf("HighGo SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, ";"))
}
func (h *HighGoDB) Close() error {

View File

@@ -0,0 +1,53 @@
package db
import (
"bytes"
"encoding/json"
)
func decodeJSONWithUseNumber(data []byte, out interface{}) error {
if out == nil {
return nil
}
decoder := json.NewDecoder(bytes.NewReader(data))
decoder.UseNumber()
if err := decoder.Decode(out); err != nil {
return err
}
normalizeDecodedJSONNumbers(out)
return nil
}
func normalizeDecodedJSONNumbers(out interface{}) {
switch typed := out.(type) {
case *[]map[string]interface{}:
if typed == nil {
return
}
for i := range *typed {
row := (*typed)[i]
for key, value := range row {
row[key] = normalizeQueryValue(value)
}
}
case *map[string]interface{}:
if typed == nil || *typed == nil {
return
}
for key, value := range *typed {
(*typed)[key] = normalizeQueryValue(value)
}
case *[]interface{}:
if typed == nil {
return
}
for i, item := range *typed {
(*typed)[i] = normalizeQueryValue(item)
}
case *interface{}:
if typed == nil {
return
}
*typed = normalizeQueryValue(*typed)
}
}

View File

@@ -0,0 +1,58 @@
package db
import "testing"
func TestDecodeJSONWithUseNumber_QueryRowsPreserveUnsafeInteger(t *testing.T) {
raw := []byte(`[{"id":9007199254740993,"safe":123,"nested":{"n":9007199254740992},"arr":[9007199254740992,1],"decimal":1.25}]`)
var out []map[string]interface{}
if err := decodeJSONWithUseNumber(raw, &out); err != nil {
t.Fatalf("解码失败: %v", err)
}
if len(out) != 1 {
t.Fatalf("期望 1 行,实际 %d", len(out))
}
row := out[0]
if got, ok := row["id"].(string); !ok || got != "9007199254740993" {
t.Fatalf("id 应为 string 且保持精度,实际=%v(%T)", row["id"], row["id"])
}
if got, ok := row["safe"].(int64); !ok || got != 123 {
t.Fatalf("safe 应为 int64(123),实际=%v(%T)", row["safe"], row["safe"])
}
nested, ok := row["nested"].(map[string]interface{})
if !ok {
t.Fatalf("nested 类型异常:%T", row["nested"])
}
if got, ok := nested["n"].(string); !ok || got != "9007199254740992" {
t.Fatalf("nested.n 应为 string 且保持精度,实际=%v(%T)", nested["n"], nested["n"])
}
arr, ok := row["arr"].([]interface{})
if !ok || len(arr) != 2 {
t.Fatalf("arr 类型异常:%v(%T)", row["arr"], row["arr"])
}
if got, ok := arr[0].(string); !ok || got != "9007199254740992" {
t.Fatalf("arr[0] 应为 string 且保持精度,实际=%v(%T)", arr[0], arr[0])
}
if got, ok := arr[1].(int64); !ok || got != 1 {
t.Fatalf("arr[1] 应为 int64(1),实际=%v(%T)", arr[1], arr[1])
}
if got, ok := row["decimal"].(float64); !ok || got != 1.25 {
t.Fatalf("decimal 应为 float64(1.25),实际=%v(%T)", row["decimal"], row["decimal"])
}
}
func TestDecodeJSONWithUseNumber_TypedStruct(t *testing.T) {
type item struct {
ID int64 `json:"id"`
Name string `json:"name"`
}
var out []item
if err := decodeJSONWithUseNumber([]byte(`[{"id":7,"name":"ok"}]`), &out); err != nil {
t.Fatalf("解码失败: %v", err)
}
if len(out) != 1 || out[0].ID != 7 || out[0].Name != "ok" {
t.Fatalf("结构体解码结果异常:%+v", out)
}
}

View File

@@ -65,12 +65,13 @@ func (k *KingbaseDB) getDSN(config connection.ConnectionConfig) string {
port := config.Port
// Construct DSN
dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable connect_timeout=%d",
dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=%s connect_timeout=%d",
quoteConnValue(address),
port,
quoteConnValue(config.User),
quoteConnValue(config.Password),
quoteConnValue(config.Database),
quoteConnValue(resolvePostgresSSLMode(config)),
getConnectTimeoutSeconds(config),
)
@@ -78,8 +79,7 @@ func (k *KingbaseDB) getDSN(config connection.ConnectionConfig) string {
}
func (k *KingbaseDB) Connect(config connection.ConnectionConfig) error {
var dsn string
var err error
runConfig := config
if config.UseSSH {
// Create SSH tunnel with local port forwarding
@@ -108,23 +108,37 @@ func (k *KingbaseDB) Connect(config connection.ConnectionConfig) error {
localConfig.Port = port
localConfig.UseSSH = false
dsn = k.getDSN(localConfig)
runConfig = localConfig
logger.Infof("人大金仓通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
} else {
dsn = k.getDSN(config)
}
// Open using "kingbase" driver
db, err := sql.Open("kingbase", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
k.conn = db
k.pingTimeout = getConnectTimeout(config)
if err := k.Ping(); err != nil {
return fmt.Errorf("连接建立后验证失败:%w", err)
var failures []string
for idx, attempt := range attempts {
dsn := k.getDSN(attempt)
db, err := sql.Open("kingbase", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接打开失败: %v", idx+1, err))
continue
}
k.conn = db
k.pingTimeout = getConnectTimeout(attempt)
if err := k.Ping(); err != nil {
_ = db.Close()
k.conn = nil
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
continue
}
if idx > 0 {
logger.Warnf("人大金仓 SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, ";"))
}
func (k *KingbaseDB) Close() error {
@@ -609,28 +623,16 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
}
defer tx.Rollback()
quoteIdent := func(name string) string {
n := strings.TrimSpace(name)
n = strings.Trim(n, "\"")
n = strings.ReplaceAll(n, "\"", "\"\"")
if n == "" {
return "\"\""
}
return `"` + n + `"`
}
schema := ""
table := strings.TrimSpace(tableName)
if parts := strings.SplitN(table, ".", 2); len(parts) == 2 {
schema = strings.TrimSpace(parts[0])
table = strings.TrimSpace(parts[1])
schema, table := splitKingbaseQualifiedTable(tableName)
if table == "" {
return fmt.Errorf("table name required")
}
qualifiedTable := ""
if schema != "" {
qualifiedTable = fmt.Sprintf("%s.%s", quoteIdent(schema), quoteIdent(table))
qualifiedTable = fmt.Sprintf("%s.%s", quoteKingbaseIdent(schema), quoteKingbaseIdent(table))
} else {
qualifiedTable = quoteIdent(table)
qualifiedTable = quoteKingbaseIdent(table)
}
// 1. Deletes
@@ -640,7 +642,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
idx := 0
for k, v := range pk {
idx++
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteIdent(k), idx))
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteKingbaseIdent(k), idx))
args = append(args, v)
}
if len(wheres) == 0 {
@@ -660,7 +662,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
for k, v := range update.Values {
idx++
sets = append(sets, fmt.Sprintf("%s = $%d", quoteIdent(k), idx))
sets = append(sets, fmt.Sprintf("%s = $%d", quoteKingbaseIdent(k), idx))
args = append(args, v)
}
@@ -671,7 +673,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
var wheres []string
for k, v := range update.Keys {
idx++
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteIdent(k), idx))
wheres = append(wheres, fmt.Sprintf("%s = $%d", quoteKingbaseIdent(k), idx))
args = append(args, v)
}
@@ -694,7 +696,7 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
for k, v := range row {
idx++
cols = append(cols, quoteIdent(k))
cols = append(cols, quoteKingbaseIdent(k))
placeholders = append(placeholders, fmt.Sprintf("$%d", idx))
args = append(args, v)
}
@@ -712,6 +714,67 @@ func (k *KingbaseDB) ApplyChanges(tableName string, changes connection.ChangeSet
return tx.Commit()
}
func normalizeKingbaseIdentifier(raw string) string {
value := strings.TrimSpace(raw)
if value == "" {
return ""
}
// Tolerate identifiers passed in JSON/string-escaped form: \"schema\" -> "schema"
value = strings.ReplaceAll(value, `\"`, `"`)
value = strings.TrimSpace(value)
// Tolerate abnormal multiply-wrapped quotes (e.g. ""schema"", """"schema"""").
// strings.Trim strips consecutive quotes at both ends; a few iterations converge to the bare identifier.
for i := 0; i < 4; i++ {
next := strings.TrimSpace(strings.Trim(value, `"`))
if next == value {
break
}
value = next
}
// Tolerate quoting forms possibly left over from other dialects
if len(value) >= 2 && strings.HasPrefix(value, "`") && strings.HasSuffix(value, "`") {
value = strings.TrimSpace(strings.Trim(value, "`"))
}
if len(value) >= 2 && strings.HasPrefix(value, "[") && strings.HasSuffix(value, "]") {
value = strings.TrimSpace(value[1 : len(value)-1])
}
return value
}
func quoteKingbaseIdent(name string) string {
n := normalizeKingbaseIdentifier(name)
n = strings.ReplaceAll(n, `"`, `""`)
if n == "" {
return "\"\""
}
return `"` + n + `"`
}
func splitKingbaseQualifiedTable(tableName string) (schema string, table string) {
raw := strings.TrimSpace(tableName)
if raw == "" {
return "", ""
}
if parts := strings.SplitN(raw, ".", 2); len(parts) == 2 {
schema = normalizeKingbaseIdentifier(parts[0])
table = normalizeKingbaseIdentifier(parts[1])
if table == "" {
return "", normalizeKingbaseIdentifier(raw)
}
if schema == "" {
return "", table
}
return schema, table
}
return "", normalizeKingbaseIdentifier(raw)
}
func (k *KingbaseDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
// In this project dbName means the "database"; the schema is determined by table_schema. Return columns from all user schemas for query hints.
query := `

View File

@@ -0,0 +1,74 @@
//go:build gonavi_full_drivers || gonavi_kingbase_driver
package db
import "testing"
func TestNormalizeKingbaseIdentifier(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
{name: "plain", in: "ldf_server", want: "ldf_server"},
{name: "quoted", in: `"ldf_server"`, want: "ldf_server"},
{name: "double quoted", in: `""ldf_server""`, want: "ldf_server"},
{name: "quad quoted", in: `""""ldf_server""""`, want: "ldf_server"},
{name: "escaped quoted", in: `\"ldf_server\"`, want: "ldf_server"},
{name: "backtick quoted", in: "`ldf_server`", want: "ldf_server"},
{name: "bracket quoted", in: "[ldf_server]", want: "ldf_server"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := normalizeKingbaseIdentifier(tt.in); got != tt.want {
t.Fatalf("normalizeKingbaseIdentifier(%q) = %q, want %q", tt.in, got, tt.want)
}
})
}
}
func TestQuoteKingbaseIdent(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
{name: "plain", in: "ldf_server", want: `"ldf_server"`},
{name: "double quoted", in: `""ldf_server""`, want: `"ldf_server"`},
{name: "escaped quoted", in: `\"ldf_server\"`, want: `"ldf_server"`},
{name: "with embedded quote", in: `ab"cd`, want: `"ab""cd"`},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := quoteKingbaseIdent(tt.in); got != tt.want {
t.Fatalf("quoteKingbaseIdent(%q) = %q, want %q", tt.in, got, tt.want)
}
})
}
}
func TestSplitKingbaseQualifiedTable(t *testing.T) {
tests := []struct {
name string
in string
wantSchema string
wantTable string
}{
{name: "plain qualified", in: "ldf_server.t_user", wantSchema: "ldf_server", wantTable: "t_user"},
{name: "double quoted qualified", in: `""ldf_server"".""t_user""`, wantSchema: "ldf_server", wantTable: "t_user"},
{name: "escaped qualified", in: `\"ldf_server\".\"t_user\"`, wantSchema: "ldf_server", wantTable: "t_user"},
{name: "bracket qualified", in: "[ldf_server].[t_user]", wantSchema: "ldf_server", wantTable: "t_user"},
{name: "table only", in: `""t_user""`, wantSchema: "", wantTable: "t_user"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotSchema, gotTable := splitKingbaseQualifiedTable(tt.in)
if gotSchema != tt.wantSchema || gotTable != tt.wantTable {
t.Fatalf("splitKingbaseQualifiedTable(%q) = (%q, %q), want (%q, %q)", tt.in, gotSchema, gotTable, tt.wantSchema, tt.wantTable)
}
})
}
}

View File

@@ -6,6 +6,7 @@ import (
"context"
"database/sql"
"fmt"
"net/url"
"strings"
"time"
@@ -40,9 +41,10 @@ func (m *MariaDB) getDSN(config connection.ConnectionConfig) string {
}
timeout := getConnectTimeoutSeconds(config)
tlsMode := resolveMySQLTLSMode(config)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds",
config.User, config.Password, protocol, address, database, timeout)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
}
func (m *MariaDB) Connect(config connection.ConnectionConfig) error {

View File

@@ -4,6 +4,7 @@ package db
import (
"context"
"crypto/tls"
"fmt"
"net"
"net/url"
@@ -327,35 +328,60 @@ func (m *MongoDB) Connect(config connection.ConnectionConfig) error {
m.database = "admin"
}
attemptConfigs := buildMongoAuthAttempts(connectConfig)
sslAttempts := []connection.ConnectionConfig{connectConfig}
if shouldTrySSLPreferredFallback(connectConfig) {
sslAttempts = append(sslAttempts, withSSLDisabled(connectConfig))
}
var errorDetails []string
for index, attemptConfig := range attemptConfigs {
authLabel := "主库凭据"
if index > 0 {
authLabel = "从库凭据"
for sslIndex, sslConfig := range sslAttempts {
sslLabel := "SSL"
if sslIndex > 0 {
sslLabel = "明文回退"
}
uri := m.getURI(attemptConfig)
clientOpts := options.Client().ApplyURI(uri)
if attemptConfig.UseProxy {
clientOpts.SetDialer(&mongoProxyDialer{proxyConfig: attemptConfig.Proxy})
}
client, err := mongo.Connect(clientOpts)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s连接失败: %v", authLabel, err))
continue
}
attemptConfigs := buildMongoAuthAttempts(sslConfig)
for index, attemptConfig := range attemptConfigs {
authLabel := "主库凭据"
if index > 0 {
authLabel = "从库凭据"
}
m.client = client
if err := m.Ping(); err != nil {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
_ = client.Disconnect(ctx)
cancel()
m.client = nil
errorDetails = append(errorDetails, fmt.Sprintf("%s验证失败: %v", authLabel, err))
continue
if sslIndex > 0 {
attemptConfig.URI = ""
}
uri := m.getURI(attemptConfig)
clientOpts := options.Client().ApplyURI(uri)
tlsEnabled, tlsInsecure := resolveMongoTLSSettings(attemptConfig)
if tlsEnabled {
clientOpts.SetTLSConfig(&tls.Config{
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: tlsInsecure,
})
}
if attemptConfig.UseProxy {
clientOpts.SetDialer(&mongoProxyDialer{proxyConfig: attemptConfig.Proxy})
}
client, err := mongo.Connect(clientOpts)
if err != nil {
errorDetails = append(errorDetails, fmt.Sprintf("%s %s连接失败: %v", sslLabel, authLabel, err))
continue
}
m.client = client
if err := m.Ping(); err != nil {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
_ = client.Disconnect(ctx)
cancel()
m.client = nil
errorDetails = append(errorDetails, fmt.Sprintf("%s %s验证失败: %v", sslLabel, authLabel, err))
continue
}
if sslIndex > 0 {
logger.Warnf("MongoDB SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
}
if len(errorDetails) > 0 {

File diff suppressed because it is too large.

View File

@@ -174,7 +174,7 @@ func (c *mysqlAgentClient) call(req mysqlAgentRequest, out interface{}, fields *
*rowsAffected = resp.RowsAffected
}
if out != nil && len(resp.Data) > 0 {
if err := json.Unmarshal(resp.Data, out); err != nil {
if err := decodeJSONWithUseNumber(resp.Data, out); err != nil {
return fmt.Errorf("解析 MySQL 驱动代理数据失败:%w", err)
}
}

View File

@@ -35,6 +35,32 @@ func ResolveOptionalDriverAgentExecutablePath(downloadDir string, driverType str
return filepath.Join(root, normalized, optionalDriverAgentExecutableName(normalized)), nil
}
func ResolveOptionalDriverAgentExecutablePathForVersion(downloadDir string, driverType string, version string) (string, error) {
normalized := normalizeRuntimeDriverType(driverType)
if strings.TrimSpace(normalized) == "" {
return "", fmt.Errorf("驱动类型为空")
}
root, err := resolveExternalDriverRoot(downloadDir)
if err != nil {
return "", err
}
if normalized != "mongodb" {
return filepath.Join(root, normalized, optionalDriverAgentExecutableName(normalized)), nil
}
baseName := optionalDriverAgentExecutableName(normalized)
ext := filepath.Ext(baseName)
stem := strings.TrimSuffix(baseName, ext)
major := 2
trimmed := strings.TrimSpace(version)
trimmed = strings.TrimPrefix(trimmed, "v")
if strings.HasPrefix(trimmed, "1.") || trimmed == "1" {
major = 1
}
versionedName := fmt.Sprintf("%s-v%d%s", stem, major, ext)
return filepath.Join(root, normalized, versionedName), nil
}
func ResolveMySQLAgentExecutablePath(downloadDir string) (string, error) {
return ResolveOptionalDriverAgentExecutablePath(downloadDir, "mysql")
}
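`ResolveOptionalDriverAgentExecutablePathForVersion` above derives a per-major-version executable name only for mongodb, by splitting the base name into stem and extension. A standalone sketch of that naming scheme (the file name below is a made-up example; the real base name comes from `optionalDriverAgentExecutableName`):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// versionedName inserts "-v<major>" before the extension, mirroring the
// naming scheme used for the mongodb driver agent in the diff above.
func versionedName(baseName string, major int) string {
	ext := filepath.Ext(baseName)         // e.g. ".exe", or "" when there is none
	stem := strings.TrimSuffix(baseName, ext)
	return fmt.Sprintf("%s-v%d%s", stem, major, ext)
}

func main() {
	fmt.Println(versionedName("mongodb-agent.exe", 2)) // mongodb-agent-v2.exe
	fmt.Println(versionedName("mongodb-agent", 1))     // mongodb-agent-v1
}
```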

View File

@@ -184,9 +184,10 @@ func (m *MySQLDB) getDSN(config connection.ConnectionConfig) string {
}
timeout := getConnectTimeoutSeconds(config)
tlsMode := resolveMySQLTLSMode(config)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds",
config.User, config.Password, protocol, address, database, timeout)
return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
}
func resolveMySQLCredential(config connection.ConnectionConfig, addressIndex int) (string, string) {

View File

@@ -9,10 +9,14 @@ import (
"io"
"os"
"os/exec"
"runtime"
"strings"
"sync"
"syscall"
"time"
"GoNavi-Wails/internal/connection"
"GoNavi-Wails/internal/logger"
)
const (
@@ -38,6 +42,7 @@ type optionalAgentRequest struct {
Method string `json:"method"`
Config *connection.ConnectionConfig `json:"config,omitempty"`
Query string `json:"query,omitempty"`
TimeoutMs int64 `json:"timeoutMs,omitempty"`
DBName string `json:"dbName,omitempty"`
TableName string `json:"tableName,omitempty"`
Changes *connection.ChangeSet `json:"changes,omitempty"`
@@ -91,6 +96,9 @@ func newOptionalDriverAgentClient(driverType string, executablePath string) (*op
return nil, fmt.Errorf("创建 %s 驱动代理 stderr 失败:%w", driverDisplayName(driverType), err)
}
if err := cmd.Start(); err != nil {
if isWindowsExecutableMachineMismatch(err) {
return nil, fmt.Errorf("启动 %s 驱动代理失败:%w,检测到驱动代理与当前系统架构不兼容,请在驱动管理中重新安装启用", driverDisplayName(driverType), err)
}
return nil, fmt.Errorf("启动 %s 驱动代理失败:%w", driverDisplayName(driverType), err)
}
@@ -104,6 +112,30 @@ func newOptionalDriverAgentClient(driverType string, executablePath string) (*op
return client, nil
}
func isWindowsExecutableMachineMismatch(err error) bool {
if err == nil || runtime.GOOS != "windows" {
return false
}
var errno syscall.Errno
if errors.As(err, &errno) && errno == syscall.Errno(216) {
return true
}
text := strings.ToLower(strings.TrimSpace(err.Error()))
if text == "" {
return false
}
if strings.Contains(text, "not compatible with the version of windows") {
return true
}
if strings.Contains(text, "win32") && strings.Contains(text, "compatible") {
return true
}
if strings.Contains(text, "不是有效的win32应用程序") || strings.Contains(text, "无法在win32模式下运行") {
return true
}
return false
}
func (c *optionalDriverAgentClient) captureStderr(stderr io.Reader) {
scanner := bufio.NewScanner(stderr)
buffer := make([]byte, 0, 8<<10)
@@ -176,7 +208,7 @@ func (c *optionalDriverAgentClient) call(req optionalAgentRequest, out interface
*rowsAffected = resp.RowsAffected
}
if out != nil && len(resp.Data) > 0 {
if err := json.Unmarshal(resp.Data, out); err != nil {
if err := decodeJSONWithUseNumber(resp.Data, out); err != nil {
return fmt.Errorf("解析 %s 驱动代理数据失败:%w", driverDisplayName(c.driver), err)
}
}
@@ -223,6 +255,7 @@ func (d *OptionalDriverAgentDB) Connect(config connection.ConnectionConfig) erro
if err != nil {
return err
}
logger.Infof("%s 驱动代理路径:%s", driverDisplayName(d.driverType), executablePath)
client, err := newOptionalDriverAgentClient(d.driverType, executablePath)
if err != nil {
return err
@@ -260,7 +293,20 @@ func (d *OptionalDriverAgentDB) QueryContext(ctx context.Context, query string)
if err := ctx.Err(); err != nil {
return nil, nil, err
}
return d.Query(query)
client, err := d.requireClient()
if err != nil {
return nil, nil, err
}
var data []map[string]interface{}
var fields []string
if err := client.call(optionalAgentRequest{
Method: optionalAgentMethodQuery,
Query: query,
TimeoutMs: timeoutMsFromContext(ctx),
}, &data, &fields, nil); err != nil {
return nil, nil, err
}
return data, fields, nil
}
func (d *OptionalDriverAgentDB) Query(query string) ([]map[string]interface{}, []string, error) {
@@ -283,7 +329,19 @@ func (d *OptionalDriverAgentDB) ExecContext(ctx context.Context, query string) (
if err := ctx.Err(); err != nil {
return 0, err
}
return d.Exec(query)
client, err := d.requireClient()
if err != nil {
return 0, err
}
var affected int64
if err := client.call(optionalAgentRequest{
Method: optionalAgentMethodExec,
Query: query,
TimeoutMs: timeoutMsFromContext(ctx),
}, nil, nil, &affected); err != nil {
return 0, err
}
return affected, nil
}
func (d *OptionalDriverAgentDB) Exec(query string) (int64, error) {
@@ -443,3 +501,15 @@ func (d *OptionalDriverAgentDB) requireClient() (*optionalDriverAgentClient, err
}
return d.client, nil
}
func timeoutMsFromContext(ctx context.Context) int64 {
deadline, ok := ctx.Deadline()
if !ok {
return 0
}
remaining := time.Until(deadline).Milliseconds()
if remaining <= 0 {
return 1
}
return remaining
}

View File

@@ -0,0 +1,32 @@
package db
import (
"context"
"testing"
"time"
)
func TestTimeoutMsFromContext_NoDeadline(t *testing.T) {
if got := timeoutMsFromContext(context.Background()); got != 0 {
t.Fatalf("无 deadline 时应返回 0,got=%d", got)
}
}
func TestTimeoutMsFromContext_WithDeadline(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
got := timeoutMsFromContext(ctx)
if got <= 0 {
t.Fatalf("有 deadline 时应返回正值,got=%d", got)
}
}
func TestTimeoutMsFromContext_ExpiredDeadline(t *testing.T) {
ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
defer cancel()
if got := timeoutMsFromContext(ctx); got != 1 {
t.Fatalf("过期 deadline 应返回 1,got=%d", got)
}
}

View File

@@ -26,10 +26,7 @@ type OracleDB struct {
func (o *OracleDB) getDSN(config connection.ConnectionConfig) string {
// oracle://user:pass@host:port/service_name
database := config.Database
if database == "" {
database = config.User // Default to user service/schema if empty?
}
database := strings.TrimSpace(config.Database)
u := &url.URL{
Scheme: "oracle",
@@ -38,12 +35,27 @@ func (o *OracleDB) getDSN(config connection.ConnectionConfig) string {
}
u.User = url.UserPassword(config.User, config.Password)
u.RawPath = "/" + url.PathEscape(database)
q := url.Values{}
switch normalizedSSLMode(config) {
case sslModeRequired:
q.Set("SSL", "TRUE")
q.Set("SSL VERIFY", "TRUE")
case sslModeSkipVerify, sslModePreferred:
q.Set("SSL", "TRUE")
q.Set("SSL VERIFY", "FALSE")
}
if encoded := q.Encode(); encoded != "" {
u.RawQuery = encoded
}
return u.String()
}
func (o *OracleDB) Connect(config connection.ConnectionConfig) error {
var dsn string
var err error
runConfig := config
serviceName := strings.TrimSpace(config.Database)
if serviceName == "" {
return fmt.Errorf("Oracle 连接缺少服务名(Service Name),请在连接配置中填写,例如 ORCLPDB1")
}
if config.UseSSH {
// Create SSH tunnel with local port forwarding
@@ -72,22 +84,37 @@ func (o *OracleDB) Connect(config connection.ConnectionConfig) error {
localConfig.Port = port
localConfig.UseSSH = false
dsn = o.getDSN(localConfig)
runConfig = localConfig
logger.Infof("Oracle 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
} else {
dsn = o.getDSN(config)
}
db, err := sql.Open("oracle", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
o.conn = db
o.pingTimeout = getConnectTimeout(config)
if err := o.Ping(); err != nil {
return fmt.Errorf("连接建立后验证失败:%w", err)
var failures []string
for idx, attempt := range attempts {
dsn := o.getDSN(attempt)
db, err := sql.Open("oracle", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接打开失败: %v", idx+1, err))
continue
}
o.conn = db
o.pingTimeout = getConnectTimeout(attempt)
if err := o.Ping(); err != nil {
_ = db.Close()
o.conn = nil
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
continue
}
if idx > 0 {
logger.Warnf("Oracle SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, "; "))
}
func (o *OracleDB) Close() error {

View File

@@ -62,7 +62,7 @@ func (p *PostgresDB) getDSN(config connection.ConnectionConfig) string {
}
u.User = url.UserPassword(config.User, config.Password)
q := url.Values{}
q.Set("sslmode", "disable")
q.Set("sslmode", resolvePostgresSSLMode(config))
q.Set("connect_timeout", strconv.Itoa(getConnectTimeoutSeconds(config)))
u.RawQuery = q.Encode()
@@ -126,34 +126,49 @@ func (p *PostgresDB) Connect(config connection.ConnectionConfig) error {
logger.Infof("PostgreSQL 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
}
attemptDBs := resolvePostgresConnectDatabases(runConfig)
sslAttempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
sslAttempts = append(sslAttempts, withSSLDisabled(runConfig))
}
var failures []string
for _, dbName := range attemptDBs {
attemptConfig := runConfig
attemptConfig.Database = dbName
dsn := p.getDSN(attemptConfig)
dbConn, err := sql.Open("postgres", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("数据库=%s 打开连接失败: %v", dbName, err))
continue
}
p.conn = dbConn
// Force verification
if err := p.Ping(); err != nil {
failures = append(failures, fmt.Sprintf("数据库=%s 验证失败: %v", dbName, err))
_ = dbConn.Close()
p.conn = nil
continue
for sslIndex, sslConfig := range sslAttempts {
sslLabel := "SSL"
if sslIndex > 0 {
sslLabel = "明文回退"
}
if strings.TrimSpace(config.Database) == "" && !strings.EqualFold(dbName, "postgres") {
logger.Infof("PostgreSQL 自动选择连接数据库:%s", dbName)
}
attemptDBs := resolvePostgresConnectDatabases(sslConfig)
for _, dbName := range attemptDBs {
attemptConfig := sslConfig
attemptConfig.Database = dbName
dsn := p.getDSN(attemptConfig)
cleanupOnFailure = false
return nil
dbConn, err := sql.Open("postgres", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("%s 数据库=%s 打开连接失败: %v", sslLabel, dbName, err))
continue
}
p.conn = dbConn
// Force verification
if err := p.Ping(); err != nil {
failures = append(failures, fmt.Sprintf("%s 数据库=%s 验证失败: %v", sslLabel, dbName, err))
_ = dbConn.Close()
p.conn = nil
continue
}
if sslIndex > 0 {
logger.Warnf("PostgreSQL SSL 优先连接失败,已回退至明文连接")
}
if strings.TrimSpace(config.Database) == "" && !strings.EqualFold(dbName, "postgres") {
logger.Infof("PostgreSQL 自动选择连接数据库:%s", dbName)
}
cleanupOnFailure = false
return nil
}
}
if len(failures) == 0 {

View File

@@ -2,12 +2,28 @@ package db
import (
"encoding/hex"
"encoding/json"
"fmt"
"math/big"
"reflect"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
const (
jsMaxSafeInteger int64 = 9007199254740991
jsMinSafeInteger int64 = -9007199254740991
jsMaxSafeUint uint64 = 9007199254740991
)
var (
jsMaxSafeBigInt = big.NewInt(jsMaxSafeInteger)
jsMinSafeBigInt = big.NewInt(jsMinSafeInteger)
)
// normalizeQueryValue normalizes driver-returned values for UI/JSON transport.
// 当前主要处理 []byte:如果是可读文本则转为 string,否则转为十六进制字符串,避免前端出现“空白值”。
func normalizeQueryValue(v interface{}) interface{} {
@@ -18,7 +34,124 @@ func normalizeQueryValueWithDBType(v interface{}, databaseTypeName string) inter
if b, ok := v.([]byte); ok {
return bytesToDisplayValue(b, databaseTypeName)
}
return v
return normalizeCompositeQueryValue(v)
}
func normalizeCompositeQueryValue(v interface{}) interface{} {
if v == nil {
return nil
}
switch typed := v.(type) {
case []interface{}:
items := make([]interface{}, len(typed))
for i, item := range typed {
items[i] = normalizeQueryValue(item)
}
return items
case map[string]interface{}:
out := make(map[string]interface{}, len(typed))
for key, value := range typed {
out[key] = normalizeQueryValue(value)
}
return out
case json.Number:
return normalizeJSONNumberForJS(typed)
}
rv := reflect.ValueOf(v)
switch rv.Kind() {
case reflect.Pointer:
if rv.IsNil() {
return nil
}
return normalizeQueryValue(rv.Elem().Interface())
case reflect.Map:
if rv.IsNil() {
return nil
}
out := make(map[string]interface{}, rv.Len())
iter := rv.MapRange()
for iter.Next() {
out[mapKeyToString(iter.Key().Interface())] = normalizeQueryValue(iter.Value().Interface())
}
return out
case reflect.Slice, reflect.Array:
// []byte 在上层已单独处理,这里保留对其它切片/数组的递归规整。
if rv.Kind() == reflect.Slice && rv.IsNil() {
return nil
}
size := rv.Len()
items := make([]interface{}, size)
for i := 0; i < size; i++ {
items[i] = normalizeQueryValue(rv.Index(i).Interface())
}
return items
case reflect.Struct:
// 部分驱动(如 Kingbase)会返回复杂结构体值,直接透传会导致前端渲染和比较开销激增。
// 统一降级为可读字符串,避免对象深层序列化触发 UI 卡顿。
if tm, ok := v.(time.Time); ok {
return tm.Format(time.RFC3339Nano)
}
if stringer, ok := v.(fmt.Stringer); ok {
return stringer.String()
}
return fmt.Sprintf("%v", v)
default:
return normalizeUnsafeIntegerForJS(rv, v)
}
}
func normalizeJSONNumberForJS(n json.Number) interface{} {
text := strings.TrimSpace(n.String())
if text == "" {
return ""
}
if integer, ok := parseJSONInteger(text); ok {
if integer.Cmp(jsMaxSafeBigInt) > 0 || integer.Cmp(jsMinSafeBigInt) < 0 {
return text
}
return integer.Int64()
}
if f, err := n.Float64(); err == nil {
return f
}
return text
}
func parseJSONInteger(text string) (*big.Int, bool) {
if text == "" {
return nil, false
}
start := 0
if text[0] == '+' || text[0] == '-' {
if len(text) == 1 {
return nil, false
}
start = 1
}
for i := start; i < len(text); i++ {
if text[i] < '0' || text[i] > '9' {
return nil, false
}
}
value, ok := new(big.Int).SetString(text, 10)
if !ok {
return nil, false
}
return value, true
}
func mapKeyToString(key interface{}) string {
if key == nil {
return "null"
}
if s, ok := key.(string); ok {
return s
}
return fmt.Sprintf("%v", key)
}
func bytesToDisplayValue(b []byte, databaseTypeName string) interface{} {
@@ -33,8 +166,7 @@ func bytesToDisplayValue(b []byte, databaseTypeName string) interface{} {
if isBitLikeDBType(dbType) {
if u, ok := bytesToUint64(b); ok {
// JS number precision is limited; keep large bitmasks as string.
const maxSafeInteger = 9007199254740991 // 2^53 - 1
if u <= maxSafeInteger {
if u <= jsMaxSafeUint {
return int64(u)
}
return fmt.Sprintf("%d", u)
@@ -89,6 +221,25 @@ func bytesToUint64(b []byte) (uint64, bool) {
return u, true
}
func normalizeUnsafeIntegerForJS(rv reflect.Value, original interface{}) interface{} {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
n := rv.Int()
if n > jsMaxSafeInteger || n < jsMinSafeInteger {
return strconv.FormatInt(n, 10)
}
return original
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
u := rv.Uint()
if u > jsMaxSafeUint {
return strconv.FormatUint(u, 10)
}
return original
default:
return original
}
}
func isMostlyPrintable(s string) bool {
if s == "" {
return true

View File

@@ -1,6 +1,13 @@
package db
import "testing"
import (
"encoding/json"
"fmt"
"testing"
"time"
)
type duckMapLike map[any]any
func TestNormalizeQueryValueWithDBType_BitBytes(t *testing.T) {
v := normalizeQueryValueWithDBType([]byte{0x00}, "BIT")
@@ -42,3 +49,149 @@ func TestNormalizeQueryValueWithDBType_ByteFallbacks(t *testing.T) {
t.Fatalf("未知类型 0xff 期望返回 0xff,实际=%v(%T)", v, v)
}
}
func TestNormalizeQueryValueWithDBType_MapAnyAnyForJSON(t *testing.T) {
input := duckMapLike{
"id": int64(1),
1: "one",
true: []interface{}{duckMapLike{2: "two"}},
"bytes": []byte("ok"),
}
v := normalizeQueryValueWithDBType(input, "")
root, ok := v.(map[string]interface{})
if !ok {
t.Fatalf("期望转换为 map[string]interface{},实际=%T", v)
}
if root["id"] != int64(1) {
t.Fatalf("id 字段异常,实际=%v(%T)", root["id"], root["id"])
}
if root["1"] != "one" {
t.Fatalf("数字 key 未被字符串化,实际=%v(%T)", root["1"], root["1"])
}
if root["bytes"] != "ok" {
t.Fatalf("嵌套 []byte 未被转换,实际=%v(%T)", root["bytes"], root["bytes"])
}
arr, ok := root["true"].([]interface{})
if !ok || len(arr) != 1 {
t.Fatalf("bool key 下的数组结构异常,实际=%v(%T)", root["true"], root["true"])
}
nested, ok := arr[0].(map[string]interface{})
if !ok {
t.Fatalf("嵌套 map 未被转换,实际=%v(%T)", arr[0], arr[0])
}
if nested["2"] != "two" {
t.Fatalf("嵌套 map 数字 key 未转换,实际=%v(%T)", nested["2"], nested["2"])
}
}
func TestNormalizeQueryValueWithDBType_UnsafeIntegersAsString(t *testing.T) {
cases := []struct {
name string
input interface{}
want string
}{
{name: "int64 overflow", input: int64(9007199254740992), want: "9007199254740992"},
{name: "int64 underflow", input: int64(-9007199254740992), want: "-9007199254740992"},
{name: "uint64 overflow", input: uint64(9007199254740992), want: "9007199254740992"},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := normalizeQueryValueWithDBType(tc.input, "")
if got != tc.want {
t.Fatalf("期望=%q,实际=%v(%T)", tc.want, got, got)
}
})
}
}
func TestNormalizeQueryValueWithDBType_SafeIntegersKeepType(t *testing.T) {
got := normalizeQueryValueWithDBType(int64(9007199254740991), "")
if _, ok := got.(int64); !ok {
t.Fatalf("安全范围 int64 应保持数字类型,实际=%v(%T)", got, got)
}
got = normalizeQueryValueWithDBType(uint64(9007199254740991), "")
if _, ok := got.(uint64); !ok {
t.Fatalf("安全范围 uint64 应保持数字类型,实际=%v(%T)", got, got)
}
}
func TestNormalizeQueryValueWithDBType_JSONNumber(t *testing.T) {
cases := []struct {
name string
input json.Number
wantType string
wantValue string
}{
{name: "safe integer", input: json.Number("9007199254740991"), wantType: "int64", wantValue: "9007199254740991"},
{name: "unsafe integer", input: json.Number("9007199254740992"), wantType: "string", wantValue: "9007199254740992"},
{name: "unsafe negative integer", input: json.Number("-9007199254740992"), wantType: "string", wantValue: "-9007199254740992"},
{name: "decimal", input: json.Number("12.5"), wantType: "float64", wantValue: "12.5"},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := normalizeQueryValueWithDBType(tc.input, "")
switch tc.wantType {
case "int64":
v, ok := got.(int64)
if !ok {
t.Fatalf("期望 int64,实际=%T", got)
}
if v != 9007199254740991 {
t.Fatalf("期望值=%s,实际=%d", tc.wantValue, v)
}
case "string":
v, ok := got.(string)
if !ok {
t.Fatalf("期望 string,实际=%T", got)
}
if v != tc.wantValue {
t.Fatalf("期望值=%s,实际=%s", tc.wantValue, v)
}
case "float64":
v, ok := got.(float64)
if !ok {
t.Fatalf("期望 float64,实际=%T", got)
}
if v != 12.5 {
t.Fatalf("期望值=%s,实际=%v", tc.wantValue, v)
}
default:
t.Fatalf("未知断言类型:%s", tc.wantType)
}
})
}
}
type customStructValue struct {
Name string
Age int
}
func (v customStructValue) String() string {
return fmt.Sprintf("%s-%d", v.Name, v.Age)
}
func TestNormalizeQueryValueWithDBType_StructToString(t *testing.T) {
got := normalizeQueryValueWithDBType(customStructValue{Name: "alice", Age: 18}, "")
if got != "alice-18" {
t.Fatalf("结构体应降级为可读字符串,实际=%v(%T)", got, got)
}
}
func TestNormalizeQueryValueWithDBType_TimeStructToRFC3339(t *testing.T) {
input := time.Date(2026, 3, 5, 18, 30, 15, 123456789, time.UTC)
got := normalizeQueryValueWithDBType(input, "")
text, ok := got.(string)
if !ok {
t.Fatalf("time.Time 应转为字符串,实际=%v(%T)", got, got)
}
if text != "2026-03-05T18:30:15.123456789Z" {
t.Fatalf("time.Time 规整值异常,实际=%s", text)
}
}

View File

@@ -47,8 +47,9 @@ func (s *SqlServerDB) getDSN(config connection.ConnectionConfig) string {
q := url.Values{}
q.Set("database", dbname)
q.Set("connection timeout", strconv.Itoa(getConnectTimeoutSeconds(config)))
q.Set("encrypt", "disable")
q.Set("TrustServerCertificate", "true")
encrypt, trustServerCertificate := resolveSQLServerTLSSettings(config)
q.Set("encrypt", encrypt)
q.Set("TrustServerCertificate", trustServerCertificate)
u.RawQuery = q.Encode()
return u.String()

internal/db/ssl_mode.go Normal file
View File

@@ -0,0 +1,122 @@
package db
import (
"crypto/tls"
"strings"
"GoNavi-Wails/internal/connection"
)
const (
sslModeDisable = "disable"
sslModePreferred = "preferred"
sslModeRequired = "required"
sslModeSkipVerify = "skip-verify"
)
func normalizeSSLModeValue(raw string) string {
mode := strings.ToLower(strings.TrimSpace(raw))
switch mode {
case "", sslModePreferred, "prefer":
return sslModePreferred
case sslModeRequired, "require", "on", "true", "mandatory", "strict":
return sslModeRequired
case sslModeSkipVerify, "insecure", "skipverify", "skip_verify", "insecure-skip-verify":
return sslModeSkipVerify
case sslModeDisable, "disabled", "off", "false", "none":
return sslModeDisable
default:
return sslModePreferred
}
}
func normalizedSSLMode(config connection.ConnectionConfig) string {
if !config.UseSSL {
return sslModeDisable
}
return normalizeSSLModeValue(config.SSLMode)
}
func shouldTrySSLPreferredFallback(config connection.ConnectionConfig) bool {
return config.UseSSL && normalizeSSLModeValue(config.SSLMode) == sslModePreferred
}
func withSSLDisabled(config connection.ConnectionConfig) connection.ConnectionConfig {
next := config
next.UseSSL = false
next.SSLMode = sslModeDisable
return next
}
func resolveMySQLTLSMode(config connection.ConnectionConfig) string {
switch normalizedSSLMode(config) {
case sslModeDisable:
return "false"
case sslModeRequired:
return "true"
case sslModeSkipVerify:
return "skip-verify"
default:
return "preferred"
}
}
func resolvePostgresSSLMode(config connection.ConnectionConfig) string {
switch normalizedSSLMode(config) {
case sslModeDisable:
return "disable"
case sslModeRequired:
return "require"
case sslModeSkipVerify:
return "require"
default:
return "require"
}
}
func resolveSQLServerTLSSettings(config connection.ConnectionConfig) (encrypt string, trustServerCertificate string) {
switch normalizedSSLMode(config) {
case sslModeDisable:
return "disable", "true"
case sslModeRequired:
return "true", "false"
case sslModeSkipVerify:
return "true", "true"
default:
return "false", "true"
}
}
func resolveGenericTLSConfig(config connection.ConnectionConfig) *tls.Config {
switch normalizedSSLMode(config) {
case sslModeDisable:
return nil
case sslModeRequired:
return &tls.Config{MinVersion: tls.VersionTLS12}
case sslModeSkipVerify:
return &tls.Config{MinVersion: tls.VersionTLS12, InsecureSkipVerify: true}
default:
// Preferred: 先尝试 TLS,为提升兼容性默认跳过证书校验,失败时由调用方按需回退明文。
return &tls.Config{MinVersion: tls.VersionTLS12, InsecureSkipVerify: true}
}
}
func resolveMongoTLSSettings(config connection.ConnectionConfig) (enabled bool, insecure bool) {
switch normalizedSSLMode(config) {
case sslModeDisable:
return false, false
case sslModeRequired:
return true, false
case sslModeSkipVerify:
return true, true
default:
return true, true
}
}
func resolveTDengineNet(config connection.ConnectionConfig) string {
if normalizedSSLMode(config) == sslModeDisable {
return "ws"
}
return "wss"
}
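All of the per-driver TLS resolvers above funnel through `normalizeSSLModeValue`, which accepts several spellings per mode and falls back to `preferred` for anything unrecognized. A self-contained copy (string literals inlined in place of the `sslMode*` constants) for quick experimentation:

```go
package main

import (
	"fmt"
	"strings"
)

// Copied from ssl_mode.go above, with the mode constants inlined so the
// sketch compiles standalone.
func normalizeSSLModeValue(raw string) string {
	mode := strings.ToLower(strings.TrimSpace(raw))
	switch mode {
	case "", "preferred", "prefer":
		return "preferred"
	case "required", "require", "on", "true", "mandatory", "strict":
		return "required"
	case "skip-verify", "insecure", "skipverify", "skip_verify", "insecure-skip-verify":
		return "skip-verify"
	case "disable", "disabled", "off", "false", "none":
		return "disable"
	default:
		return "preferred"
	}
}

func main() {
	for _, raw := range []string{"", " Require ", "skipverify", "OFF", "garbage"} {
		fmt.Printf("%q -> %s\n", raw, normalizeSSLModeValue(raw))
	}
}
```

Defaulting unknown values to `preferred` rather than erroring is what makes the later "SSL 优先、失败回退明文" attempt lists safe for configs written by older app versions.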

View File

@@ -40,11 +40,12 @@ func (t *TDengineDB) getDSN(config connection.ConnectionConfig) string {
path = "/" + dbName
}
return fmt.Sprintf("%s:%s@ws(%s)%s", user, pass, net.JoinHostPort(config.Host, strconv.Itoa(config.Port)), path)
netType := resolveTDengineNet(config)
return fmt.Sprintf("%s:%s@%s(%s)%s", user, pass, netType, net.JoinHostPort(config.Host, strconv.Itoa(config.Port)), path)
}
func (t *TDengineDB) Connect(config connection.ConnectionConfig) error {
var dsn string
runConfig := config
if config.UseSSH {
logger.Infof("TDengine 使用 SSH 连接:地址=%s:%d 用户=%s", config.Host, config.Port, config.User)
@@ -68,23 +69,38 @@ func (t *TDengineDB) Connect(config connection.ConnectionConfig) error {
localConfig.Host = host
localConfig.Port = port
localConfig.UseSSH = false
dsn = t.getDSN(localConfig)
runConfig = localConfig
logger.Infof("TDengine 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
} else {
dsn = t.getDSN(config)
}
db, err := sql.Open("taosWS", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
t.conn = db
t.pingTimeout = getConnectTimeout(config)
if err := t.Ping(); err != nil {
return fmt.Errorf("连接建立后验证失败:%w", err)
var failures []string
for idx, attempt := range attempts {
dsn := t.getDSN(attempt)
db, err := sql.Open("taosWS", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接打开失败: %v", idx+1, err))
continue
}
t.conn = db
t.pingTimeout = getConnectTimeout(attempt)
if err := t.Ping(); err != nil {
_ = db.Close()
t.conn = nil
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
continue
}
if idx > 0 {
logger.Warnf("TDengine SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, "; "))
}
func (t *TDengineDB) Close() error {

View File

@@ -41,7 +41,7 @@ func (v *VastbaseDB) getDSN(config connection.ConnectionConfig) string {
}
u.User = url.UserPassword(config.User, config.Password)
q := url.Values{}
q.Set("sslmode", "disable")
q.Set("sslmode", resolvePostgresSSLMode(config))
q.Set("connect_timeout", strconv.Itoa(getConnectTimeoutSeconds(config)))
u.RawQuery = q.Encode()
@@ -49,7 +49,7 @@ func (v *VastbaseDB) getDSN(config connection.ConnectionConfig) string {
}
func (v *VastbaseDB) Connect(config connection.ConnectionConfig) error {
var dsn string
runConfig := config
if config.UseSSH {
logger.Infof("Vastbase 使用 SSH 连接:地址=%s:%d 用户=%s", config.Host, config.Port, config.User)
@@ -75,23 +75,37 @@ func (v *VastbaseDB) Connect(config connection.ConnectionConfig) error {
localConfig.Port = port
localConfig.UseSSH = false
dsn = v.getDSN(localConfig)
runConfig = localConfig
logger.Infof("Vastbase 通过本地端口转发连接:%s -> %s:%d", forwarder.LocalAddr, config.Host, config.Port)
} else {
dsn = v.getDSN(config)
}
db, err := sql.Open("postgres", dsn)
if err != nil {
return fmt.Errorf("打开数据库连接失败:%w", err)
attempts := []connection.ConnectionConfig{runConfig}
if shouldTrySSLPreferredFallback(runConfig) {
attempts = append(attempts, withSSLDisabled(runConfig))
}
v.conn = db
v.pingTimeout = getConnectTimeout(config)
if err := v.Ping(); err != nil {
return fmt.Errorf("连接建立后验证失败:%w", err)
var failures []string
for idx, attempt := range attempts {
dsn := v.getDSN(attempt)
db, err := sql.Open("postgres", dsn)
if err != nil {
failures = append(failures, fmt.Sprintf("第%d次连接打开失败: %v", idx+1, err))
continue
}
v.conn = db
v.pingTimeout = getConnectTimeout(attempt)
if err := v.Ping(); err != nil {
_ = db.Close()
v.conn = nil
failures = append(failures, fmt.Sprintf("第%d次连接验证失败: %v", idx+1, err))
continue
}
if idx > 0 {
logger.Warnf("Vastbase SSL 优先连接失败,已回退至明文连接")
}
return nil
}
return nil
return fmt.Errorf("连接建立后验证失败:%s", strings.Join(failures, "; "))
}
func (v *VastbaseDB) Close() error {

View File

@@ -12,7 +12,7 @@ type RedisValue struct {
// RedisDBInfo represents information about a Redis database
type RedisDBInfo struct {
Index int `json:"index"` // Database index (0-15)
Index int `json:"index"` // Database index (single: 0-15, cluster: logical 0-15)
Keys int64 `json:"keys"` // Number of keys in this database
}
@@ -26,7 +26,7 @@ type RedisKeyInfo struct {
// RedisScanResult represents the result of a SCAN operation
type RedisScanResult struct {
Keys []RedisKeyInfo `json:"keys"`
Cursor uint64 `json:"cursor"`
Cursor string `json:"cursor"`
}
// RedisClient defines the interface for Redis operations

View File

@@ -2,9 +2,12 @@ package redis
import (
"context"
"crypto/tls"
"fmt"
"net"
"strconv"
"strings"
"sync"
"time"
"GoNavi-Wails/internal/connection"
@@ -16,10 +19,14 @@ import (
// RedisClientImpl implements RedisClient using go-redis
type RedisClientImpl struct {
client *redis.Client
config connection.ConnectionConfig
currentDB int
forwarder *ssh.LocalForwarder
client redis.UniversalClient
singleClient *redis.Client
clusterClient *redis.ClusterClient
config connection.ConnectionConfig
currentDB int
isCluster bool
seedAddrs []string
forwarder *ssh.LocalForwarder
}
const (
@@ -28,6 +35,11 @@ const (
redisScanMinStepCount int64 = 200
redisScanMaxStepCount int64 = 2000
redisScanMaxRounds = 64
redisScanMaxDuration = 12 * time.Second
redisSearchMaxTargetCount int64 = 1000
redisSearchMaxStepCount int64 = 1000
redisSearchMaxRounds = 16
redisSearchMaxDuration = 3 * time.Second
)
// NewRedisClient creates a new Redis client instance
@@ -35,14 +47,206 @@ func NewRedisClient() RedisClient {
return &RedisClientImpl{}
}
func normalizeRedisTimeout(timeoutSeconds int) time.Duration {
if timeoutSeconds <= 0 {
return 30 * time.Second
}
return time.Duration(timeoutSeconds) * time.Second
}
func normalizeRedisSeedAddress(raw string, defaultPort int) (string, error) {
addr := strings.TrimSpace(raw)
if addr == "" {
return "", fmt.Errorf("Redis 节点地址不能为空")
}
if _, _, err := net.SplitHostPort(addr); err == nil {
return addr, nil
}
if !strings.Contains(addr, ":") {
return net.JoinHostPort(addr, strconv.Itoa(defaultPort)), nil
}
// 尝试兼容 host:port 但端口格式异常的场景。
host, port, ok := strings.Cut(addr, ":")
if !ok {
return "", fmt.Errorf("无效 Redis 节点地址: %s", addr)
}
host = strings.TrimSpace(host)
port = strings.TrimSpace(port)
if host == "" {
return "", fmt.Errorf("无效 Redis 节点地址: %s", addr)
}
if _, err := strconv.Atoi(port); err != nil {
return "", fmt.Errorf("无效 Redis 端口: %s", addr)
}
return net.JoinHostPort(host, port), nil
}
func buildRedisSeedAddrs(config connection.ConnectionConfig) ([]string, error) {
defaultPort := config.Port
if defaultPort <= 0 {
defaultPort = 6379
}
candidates := make([]string, 0, 1+len(config.Hosts))
if strings.TrimSpace(config.Host) != "" {
candidates = append(candidates, fmt.Sprintf("%s:%d", strings.TrimSpace(config.Host), defaultPort))
}
candidates = append(candidates, config.Hosts...)
seen := make(map[string]struct{}, len(candidates))
addrs := make([]string, 0, len(candidates))
for _, candidate := range candidates {
normalized, err := normalizeRedisSeedAddress(candidate, defaultPort)
if err != nil {
return nil, err
}
if _, exists := seen[normalized]; exists {
continue
}
seen[normalized] = struct{}{}
addrs = append(addrs, normalized)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Redis 连接地址不能为空")
}
return addrs, nil
}
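`buildRedisSeedAddrs` above normalizes each candidate before de-duplicating, and `normalizeRedisSeedAddress` accepts an address as-is only when `net.SplitHostPort` parses it. A reduced sketch of that core fallback (`ensureHostPort` is my simplified stand-in; it omits the malformed-port validation the real function performs):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// ensureHostPort appends the default port when the address carries none —
// the same bare-host fallback normalizeRedisSeedAddress applies.
func ensureHostPort(addr string, defaultPort int) string {
	if _, _, err := net.SplitHostPort(addr); err == nil {
		return addr // already a well-formed host:port
	}
	return net.JoinHostPort(addr, strconv.Itoa(defaultPort))
}

func main() {
	fmt.Println(ensureHostPort("10.0.0.1:6380", 6379)) // explicit port kept
	fmt.Println(ensureHostPort("10.0.0.1", 6379))      // default port appended
}
```

Note that `net.JoinHostPort` also brackets IPv6 literals correctly, which a naive `host + ":" + port` concatenation would not.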
func (r *RedisClientImpl) redisNamespacePrefixForDB(index int) string {
if !r.isCluster || index <= 0 {
return ""
}
// Redis Cluster 仅支持物理 db0,这里用固定前缀模拟逻辑库隔离。
return fmt.Sprintf("__gonavi_db_%d__:", index)
}
func (r *RedisClientImpl) redisNamespacePrefix() string {
return r.redisNamespacePrefixForDB(r.currentDB)
}
func (r *RedisClientImpl) toPhysicalKey(key string) string {
trimmed := strings.TrimSpace(key)
if trimmed == "" {
return ""
}
prefix := r.redisNamespacePrefix()
if prefix == "" || strings.HasPrefix(trimmed, prefix) {
return trimmed
}
return prefix + trimmed
}
func (r *RedisClientImpl) toPhysicalPattern(pattern string) string {
normalized := strings.TrimSpace(pattern)
if normalized == "" {
normalized = "*"
}
prefix := r.redisNamespacePrefix()
if prefix == "" {
return normalized
}
return prefix + normalized
}
func (r *RedisClientImpl) toPhysicalKeys(keys []string) []string {
if len(keys) == 0 {
return nil
}
result := make([]string, 0, len(keys))
for _, key := range keys {
physical := r.toPhysicalKey(key)
if physical == "" {
continue
}
result = append(result, physical)
}
return result
}
func (r *RedisClientImpl) toDisplayKey(key string) string {
prefix := r.redisNamespacePrefix()
if prefix == "" {
return key
}
return strings.TrimPrefix(key, prefix)
}
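The namespace helpers above reduce to a pair of pure functions; this sketch reproduces the prefix round-trip for a cluster client on logical db2 (the `__gonavi_db_%d__:` format is taken from the diff, the rest is a simplified stand-in):

```go
package main

import (
	"fmt"
	"strings"
)

// prefixFor mimics redisNamespacePrefixForDB: cluster mode only, db0 stays unprefixed.
func prefixFor(isCluster bool, index int) string {
	if !isCluster || index <= 0 {
		return ""
	}
	return fmt.Sprintf("__gonavi_db_%d__:", index)
}

// toPhysical mimics toPhysicalKey: add the prefix unless it is already present.
func toPhysical(key, prefix string) string {
	trimmed := strings.TrimSpace(key)
	if trimmed == "" {
		return ""
	}
	if prefix == "" || strings.HasPrefix(trimmed, prefix) {
		return trimmed
	}
	return prefix + trimmed
}

// toDisplay mimics toDisplayKey: strip the prefix for display.
func toDisplay(key, prefix string) string {
	if prefix == "" {
		return key
	}
	return strings.TrimPrefix(key, prefix)
}

func main() {
	p := prefixFor(true, 2)
	physical := toPhysical("user:1", p)
	fmt.Println(physical, toDisplay(physical, p))
	// → __gonavi_db_2__:user:1 user:1
}
```

Because `toPhysical` is idempotent (it skips keys that already carry the prefix), the mapping is safe to apply twice, which matters when display keys and physical keys flow through the same code paths.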
// Connect establishes a connection to Redis
func (r *RedisClientImpl) Connect(config connection.ConnectionConfig) error {
r.config = config
if r.config.RedisDB < 0 || r.config.RedisDB > 15 {
r.config.RedisDB = 0
}
r.currentDB = r.config.RedisDB
r.forwarder = nil
r.client = nil
r.singleClient = nil
r.clusterClient = nil
r.isCluster = false
seedAddrs, err := buildRedisSeedAddrs(config)
if err != nil {
return err
}
r.seedAddrs = append([]string(nil), seedAddrs...)
// Handle SSH tunnel if enabled
topology := strings.ToLower(strings.TrimSpace(config.Topology))
r.isCluster = topology == "cluster" || len(seedAddrs) > 1
if r.isCluster && config.UseSSH {
return fmt.Errorf("Redis 集群模式暂不支持 SSH 隧道,请关闭 SSH 后重试")
}
timeout := normalizeRedisTimeout(config.Timeout)
if r.isCluster {
attempts := []connection.ConnectionConfig{config}
if shouldTryRedisSSLPreferredFallback(config) {
attempts = append(attempts, withRedisSSLDisabled(config))
}
var failures []string
for idx, attempt := range attempts {
var tlsConfig *tls.Config
if cfg := resolveRedisTLSConfig(attempt); cfg != nil {
if host, _, err := net.SplitHostPort(seedAddrs[0]); err == nil && host != "" {
cfg.ServerName = host
}
tlsConfig = cfg
}
opts := &redis.ClusterOptions{
Addrs: seedAddrs,
Username: strings.TrimSpace(attempt.User),
Password: attempt.Password,
DialTimeout: timeout,
ReadTimeout: timeout,
WriteTimeout: timeout,
TLSConfig: tlsConfig,
}
clusterClient := redis.NewClusterClient(opts)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
pingErr := clusterClient.Ping(ctx).Err()
cancel()
if pingErr != nil {
clusterClient.Close()
failures = append(failures, fmt.Sprintf("第%d次连接失败: %v", idx+1, pingErr))
continue
}
r.client = clusterClient
r.clusterClient = clusterClient
r.config = attempt
if idx > 0 {
logger.Warnf("Redis 集群 SSL 优先连接失败,已回退至明文连接")
}
logger.Infof("Redis 集群连接成功: seeds=%s 逻辑库=db%d", strings.Join(seedAddrs, ","), r.currentDB)
return nil
}
return fmt.Errorf("Redis 集群连接失败: %s", strings.Join(failures, "; "))
}
addr := seedAddrs[0]
if config.UseSSH {
forwarder, err := ssh.GetOrCreateLocalForwarder(config.SSH, config.Host, config.Port)
if err != nil {
@@ -53,35 +257,53 @@ func (r *RedisClientImpl) Connect(config connection.ConnectionConfig) error {
logger.Infof("Redis 通过 SSH 隧道连接: %s -> %s:%d", addr, config.Host, config.Port)
}
attempts := []connection.ConnectionConfig{config}
if shouldTryRedisSSLPreferredFallback(config) {
attempts = append(attempts, withRedisSSLDisabled(config))
}
var failures []string
for idx, attempt := range attempts {
var tlsConfig *tls.Config
if cfg := resolveRedisTLSConfig(attempt); cfg != nil {
if host, _, err := net.SplitHostPort(addr); err == nil && host != "" {
cfg.ServerName = host
}
tlsConfig = cfg
}
opts := &redis.Options{
Addr: addr,
Username: strings.TrimSpace(attempt.User),
Password: attempt.Password,
DB: r.currentDB,
DialTimeout: timeout,
ReadTimeout: timeout,
WriteTimeout: timeout,
TLSConfig: tlsConfig,
}
singleClient := redis.NewClient(opts)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
pingErr := singleClient.Ping(ctx).Err()
cancel()
if pingErr != nil {
singleClient.Close()
failures = append(failures, fmt.Sprintf("第%d次连接失败: %v", idx+1, pingErr))
continue
}
r.client = singleClient
r.singleClient = singleClient
r.config = attempt
if idx > 0 {
logger.Warnf("Redis SSL 优先连接失败,已回退至明文连接")
}
logger.Infof("Redis 连接成功: %s DB=%d", addr, r.currentDB)
return nil
}
return fmt.Errorf("Redis 连接失败: %s", strings.Join(failures, "; "))
}
// Close closes the Redis connection
@@ -89,6 +311,11 @@ func (r *RedisClientImpl) Close() error {
if r.client != nil {
err := r.client.Close()
r.client = nil
r.singleClient = nil
r.clusterClient = nil
r.isCluster = false
r.seedAddrs = nil
r.forwarder = nil
return err
}
return nil
@@ -110,22 +337,43 @@ func (r *RedisClientImpl) ScanKeys(pattern string, cursor uint64, count int64) (
return nil, fmt.Errorf("Redis 客户端未连接")
}
if pattern == "" {
pattern = "*"
}
physicalPattern := r.toPhysicalPattern(pattern)
isSearchPattern := pattern != "*"
targetCount := normalizeRedisScanTargetCount(count)
scanStepCount := normalizeRedisScanStepCount(targetCount)
maxRounds := redisScanMaxRounds
maxDuration := redisScanMaxDuration
if isSearchPattern {
if targetCount > redisSearchMaxTargetCount {
targetCount = redisSearchMaxTargetCount
}
if scanStepCount > redisSearchMaxStepCount {
scanStepCount = redisSearchMaxStepCount
}
maxRounds = redisSearchMaxRounds
maxDuration = redisSearchMaxDuration
}
ctx, cancel := context.WithTimeout(context.Background(), maxDuration+5*time.Second)
defer cancel()
currentCursor := cursor
round := 0
scanStartedAt := time.Now()
keys := make([]string, 0, int(targetCount))
seen := make(map[string]struct{}, int(targetCount))
for len(keys) < int(targetCount) {
if time.Since(scanStartedAt) >= maxDuration {
break
}
batch, nextCursor, err := r.client.Scan(ctx, currentCursor, physicalPattern, scanStepCount).Result()
if err != nil {
return nil, err
}
@@ -143,14 +391,14 @@ func (r *RedisClientImpl) ScanKeys(pattern string, cursor uint64, count int64) (
currentCursor = nextCursor
round++
if currentCursor == 0 || round >= maxRounds {
break
}
}
return &RedisScanResult{
Keys: r.loadRedisKeyInfos(ctx, keys),
Cursor: strconv.FormatUint(currentCursor, 10),
}, nil
}
@@ -201,7 +449,7 @@ func (r *RedisClientImpl) loadRedisKeyInfos(ctx context.Context, keys []string)
ttlValue = -2
}
result = append(result, RedisKeyInfo{
Key: r.toDisplayKey(key),
Type: keyType,
TTL: toRedisTTLSeconds(ttlValue),
})
@@ -211,7 +459,7 @@ func (r *RedisClientImpl) loadRedisKeyInfos(ctx context.Context, keys []string)
for i, key := range keys {
result = append(result, RedisKeyInfo{
Key: r.toDisplayKey(key),
Type: typeResults[i].Val(),
TTL: toRedisTTLSeconds(ttlResults[i].Val()),
})
@@ -236,7 +484,7 @@ func (r *RedisClientImpl) GetKeyType(key string) (string, error) {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.Type(ctx, r.toPhysicalKey(key)).Result()
}
// GetTTL returns the TTL of a key in seconds
@@ -247,7 +495,7 @@ func (r *RedisClientImpl) GetTTL(key string) (int64, error) {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
ttl, err := r.client.TTL(ctx, r.toPhysicalKey(key)).Result()
if err != nil {
return 0, err
}
@@ -270,9 +518,9 @@ func (r *RedisClientImpl) SetTTL(key string, ttl int64) error {
if ttl < 0 {
// Remove expiry
return r.client.Persist(ctx, r.toPhysicalKey(key)).Err()
}
return r.client.Expire(ctx, r.toPhysicalKey(key), time.Duration(ttl)*time.Second).Err()
}
// DeleteKeys deletes one or more keys
@@ -282,7 +530,11 @@ func (r *RedisClientImpl) DeleteKeys(keys []string) (int64, error) {
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
physicalKeys := r.toPhysicalKeys(keys)
if len(physicalKeys) == 0 {
return 0, nil
}
return r.client.Del(ctx, physicalKeys...).Result()
}
// RenameKey renames a key
@@ -292,7 +544,7 @@ func (r *RedisClientImpl) RenameKey(oldKey, newKey string) error {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.Rename(ctx, r.toPhysicalKey(oldKey), r.toPhysicalKey(newKey)).Err()
}
// KeyExists checks if a key exists
@@ -302,7 +554,7 @@ func (r *RedisClientImpl) KeyExists(key string) (bool, error) {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
n, err := r.client.Exists(ctx, r.toPhysicalKey(key)).Result()
return n > 0, err
}
@@ -318,6 +570,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
}
ttl, _ := r.GetTTL(key)
physicalKey := r.toPhysicalKey(key)
result := &RedisValue{
Type: keyType,
@@ -329,7 +582,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
switch keyType {
case "string":
val, err := r.client.Get(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
@@ -337,7 +590,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
result.Length = int64(len(val))
case "hash":
val, err := r.client.HGetAll(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
@@ -345,7 +598,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
result.Length = int64(len(val))
case "list":
length, err := r.client.LLen(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
@@ -354,7 +607,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
if length < limit {
limit = length
}
val, err := r.client.LRange(ctx, physicalKey, 0, limit-1).Result()
if err != nil {
return nil, err
}
@@ -362,12 +615,12 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
result.Length = length
case "set":
length, err := r.client.SCard(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
// Get members using SMembers (limited by Redis server)
members, err := r.client.SMembers(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
@@ -375,7 +628,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
result.Length = length
case "zset":
length, err := r.client.ZCard(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
@@ -384,7 +637,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
if length < limit {
limit = length
}
val, err := r.client.ZRangeWithScores(ctx, physicalKey, 0, limit-1).Result()
if err != nil {
return nil, err
}
@@ -399,7 +652,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
result.Length = length
case "stream":
length, err := r.client.XLen(ctx, physicalKey).Result()
if err != nil {
return nil, err
}
@@ -412,7 +665,7 @@ func (r *RedisClientImpl) GetValue(key string) (*RedisValue, error) {
if length < limit {
limit = length
}
val, err := r.client.XRangeN(ctx, physicalKey, "-", "+", limit).Result()
if err != nil {
return nil, err
}
@@ -432,7 +685,7 @@ func (r *RedisClientImpl) GetString(key string) (string, error) {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.Get(ctx, r.toPhysicalKey(key)).Result()
}
// SetString sets a string value with optional TTL
@@ -447,7 +700,7 @@ func (r *RedisClientImpl) SetString(key, value string, ttl int64) error {
if ttl > 0 {
expiration = time.Duration(ttl) * time.Second
}
return r.client.Set(ctx, r.toPhysicalKey(key), value, expiration).Err()
}
// GetHash gets all fields of a hash
@@ -457,7 +710,7 @@ func (r *RedisClientImpl) GetHash(key string) (map[string]string, error) {
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
return r.client.HGetAll(ctx, r.toPhysicalKey(key)).Result()
}
// SetHashField sets a field in a hash
@@ -467,7 +720,7 @@ func (r *RedisClientImpl) SetHashField(key, field, value string) error {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.HSet(ctx, r.toPhysicalKey(key), field, value).Err()
}
// DeleteHashField deletes fields from a hash
@@ -477,7 +730,7 @@ func (r *RedisClientImpl) DeleteHashField(key string, fields ...string) error {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.HDel(ctx, r.toPhysicalKey(key), fields...).Err()
}
// GetList gets a range of elements from a list
@@ -487,7 +740,7 @@ func (r *RedisClientImpl) GetList(key string, start, stop int64) ([]string, erro
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
return r.client.LRange(ctx, r.toPhysicalKey(key), start, stop).Result()
}
// ListPush pushes values to the end of a list
@@ -501,7 +754,7 @@ func (r *RedisClientImpl) ListPush(key string, values ...string) error {
for i, v := range values {
args[i] = v
}
return r.client.RPush(ctx, r.toPhysicalKey(key), args...).Err()
}
// ListSet sets the value at an index in a list
@@ -511,7 +764,7 @@ func (r *RedisClientImpl) ListSet(key string, index int64, value string) error {
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.LSet(ctx, r.toPhysicalKey(key), index, value).Err()
}
// GetSet gets all members of a set
@@ -521,7 +774,7 @@ func (r *RedisClientImpl) GetSet(key string) ([]string, error) {
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
return r.client.SMembers(ctx, r.toPhysicalKey(key)).Result()
}
// SetAdd adds members to a set
@@ -535,7 +788,7 @@ func (r *RedisClientImpl) SetAdd(key string, members ...string) error {
for i, m := range members {
args[i] = m
}
return r.client.SAdd(ctx, r.toPhysicalKey(key), args...).Err()
}
// SetRemove removes members from a set
@@ -549,7 +802,7 @@ func (r *RedisClientImpl) SetRemove(key string, members ...string) error {
for i, m := range members {
args[i] = m
}
return r.client.SRem(ctx, r.toPhysicalKey(key), args...).Err()
}
// GetZSet gets members with scores from a sorted set
@@ -560,7 +813,7 @@ func (r *RedisClientImpl) GetZSet(key string, start, stop int64) ([]ZSetMember,
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
val, err := r.client.ZRangeWithScores(ctx, r.toPhysicalKey(key), start, stop).Result()
if err != nil {
return nil, err
}
@@ -590,7 +843,7 @@ func (r *RedisClientImpl) ZSetAdd(key string, members ...ZSetMember) error {
Member: m.Member,
}
}
return r.client.ZAdd(ctx, r.toPhysicalKey(key), zMembers...).Err()
}
// ZSetRemove removes members from a sorted set
@@ -604,7 +857,7 @@ func (r *RedisClientImpl) ZSetRemove(key string, members ...string) error {
for i, m := range members {
args[i] = m
}
return r.client.ZRem(ctx, r.toPhysicalKey(key), args...).Err()
}
// GetStream gets stream entries in a range
@@ -625,7 +878,7 @@ func (r *RedisClientImpl) GetStream(key, start, stop string, count int64) ([]Str
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
val, err := r.client.XRangeN(ctx, r.toPhysicalKey(key), start, stop, count).Result()
if err != nil {
return nil, err
}
@@ -653,7 +906,7 @@ func (r *RedisClientImpl) StreamAdd(key string, fields map[string]string, id str
defer cancel()
newID, err := r.client.XAdd(ctx, &redis.XAddArgs{
Stream: r.toPhysicalKey(key),
ID: id,
Values: values,
}).Result()
@@ -674,7 +927,7 @@ func (r *RedisClientImpl) StreamDelete(key string, ids ...string) (int64, error)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return r.client.XDel(ctx, r.toPhysicalKey(key), ids...).Result()
}
func toStreamEntries(messages []redis.XMessage) []StreamEntry {
@@ -692,6 +945,72 @@ func toStreamEntries(messages []redis.XMessage) []StreamEntry {
return entries
}
func parseRedisCommandGetKeysResult(result interface{}) []string {
items, ok := result.([]interface{})
if !ok || len(items) == 0 {
return nil
}
keys := make([]string, 0, len(items))
for _, item := range items {
switch v := item.(type) {
case string:
if v != "" {
keys = append(keys, v)
}
case []byte:
text := string(v)
if text != "" {
keys = append(keys, text)
}
}
}
return keys
}
func (r *RedisClientImpl) rewriteCommandArgsForNamespace(ctx context.Context, args []string) []string {
if !r.isCluster || r.currentDB <= 0 || len(args) == 0 {
return args
}
command := strings.ToUpper(strings.TrimSpace(args[0]))
if command == "COMMAND" || command == "SELECT" || command == "FLUSHDB" {
return args
}
probeArgs := make([]interface{}, 0, len(args)+2)
probeArgs = append(probeArgs, "COMMAND", "GETKEYS")
for _, arg := range args {
probeArgs = append(probeArgs, arg)
}
result, err := r.client.Do(ctx, probeArgs...).Result()
if err != nil {
return args
}
keyCandidates := parseRedisCommandGetKeysResult(result)
if len(keyCandidates) == 0 {
return args
}
rewritten := append([]string(nil), args...)
used := make([]bool, len(rewritten))
for _, key := range keyCandidates {
for i := 1; i < len(rewritten); i++ {
if used[i] {
continue
}
if rewritten[i] != key {
continue
}
rewritten[i] = r.toPhysicalKey(rewritten[i])
used[i] = true
break
}
}
return rewritten
}
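The rewrite loop above lets each key candidate returned by `COMMAND GETKEYS` claim at most one matching argument (never the command name at `args[0]`). Its matching behavior can be exercised offline; the live `COMMAND GETKEYS` probe is stubbed out here, so `keyCandidates` is supplied directly:

```go
package main

import "fmt"

// rewriteArgs mirrors the matching loop of rewriteCommandArgsForNamespace:
// each key candidate claims at most one matching argument, left to right,
// and args[0] (the command name) is never rewritten.
func rewriteArgs(args, keyCandidates []string, prefix string) []string {
	rewritten := append([]string(nil), args...)
	used := make([]bool, len(rewritten))
	for _, key := range keyCandidates {
		for i := 1; i < len(rewritten); i++ {
			if used[i] || rewritten[i] != key {
				continue
			}
			rewritten[i] = prefix + rewritten[i]
			used[i] = true
			break
		}
	}
	return rewritten
}

func main() {
	// For MSET a 1 b 2, GETKEYS reports ["a", "b"]; only those
	// positions are prefixed, the values "1" and "2" are untouched.
	fmt.Println(rewriteArgs(
		[]string{"MSET", "a", "1", "b", "2"},
		[]string{"a", "b"},
		"__gonavi_db_2__:"))
	// → [MSET __gonavi_db_2__:a 1 __gonavi_db_2__:b 2]
}
```

One known limitation of value-based matching (in the sketch and in the diff alike): if a value happens to equal a later key, the first textual match wins, so the wrong position could be rewritten for pathological inputs.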
// ExecuteCommand executes a raw Redis command
func (r *RedisClientImpl) ExecuteCommand(args []string) (interface{}, error) {
if r.client == nil {
@@ -704,6 +1023,33 @@ func (r *RedisClientImpl) ExecuteCommand(args []string) (interface{}, error) {
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if r.isCluster {
command := strings.ToUpper(strings.TrimSpace(args[0]))
switch command {
case "SELECT":
if len(args) < 2 {
return nil, fmt.Errorf("SELECT 命令缺少数据库索引")
}
index, err := strconv.Atoi(strings.TrimSpace(args[1]))
if err != nil {
return nil, fmt.Errorf("无效数据库索引: %s", args[1])
}
if index < 0 || index > 15 {
return nil, fmt.Errorf("数据库索引必须在 0-15 之间")
}
r.currentDB = index
r.config.RedisDB = index
return "OK", nil
case "FLUSHDB":
if err := r.FlushDB(); err != nil {
return nil, err
}
return "OK", nil
}
}
args = r.rewriteCommandArgsForNamespace(ctx, args)
// Convert to []interface{}
cmdArgs := make([]interface{}, len(args))
for i, arg := range args {
@@ -770,6 +1116,31 @@ func (r *RedisClientImpl) GetDatabases() ([]RedisDBInfo, error) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if r.isCluster && r.clusterClient != nil {
var totalKeys int64
var mu sync.Mutex
err := r.clusterClient.ForEachMaster(ctx, func(nodeCtx context.Context, node *redis.Client) error {
keys, err := node.DBSize(nodeCtx).Result()
if err != nil {
return err
}
mu.Lock()
totalKeys += keys
mu.Unlock()
return nil
})
if err != nil {
logger.Warnf("Redis 集群获取 key 数量失败,回退为 0: %v", err)
totalKeys = 0
}
result := make([]RedisDBInfo, 16)
for i := 0; i < 16; i++ {
result[i] = RedisDBInfo{Index: i, Keys: 0}
}
result[0].Keys = totalKeys
return result, nil
}
// Get keyspace info
info, err := r.client.Info(ctx, "keyspace").Result()
if err != nil {
@@ -820,34 +1191,47 @@ func (r *RedisClientImpl) SelectDB(index int) error {
if r.client == nil {
return fmt.Errorf("Redis 客户端未连接")
}
if r.isCluster {
if index < 0 || index > 15 {
return fmt.Errorf("数据库索引必须在 0-15 之间")
}
r.currentDB = index
r.config.RedisDB = index
return nil
}
if index < 0 || index > 15 {
return fmt.Errorf("数据库索引必须在 0-15 之间")
}
// Create new client with different DB
addr := ""
if len(r.seedAddrs) > 0 {
addr = r.seedAddrs[0]
}
if r.forwarder != nil {
addr = r.forwarder.LocalAddr
}
if addr == "" {
addr = fmt.Sprintf("%s:%d", r.config.Host, r.config.Port)
}
timeout := normalizeRedisTimeout(r.config.Timeout)
opts := &redis.Options{
Addr: addr,
Username: strings.TrimSpace(r.config.User),
Password: r.config.Password,
DB: index,
DialTimeout: timeout,
ReadTimeout: timeout,
WriteTimeout: timeout,
}
newClient := redis.NewClient(opts)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
if err := newClient.Ping(ctx).Err(); err != nil {
@@ -856,9 +1240,14 @@ func (r *RedisClientImpl) SelectDB(index int) error {
}
// Close old client and replace
if r.client != nil {
_ = r.client.Close()
}
r.client = newClient
r.singleClient = newClient
r.clusterClient = nil
r.currentDB = index
r.config.RedisDB = index
logger.Infof("Redis 切换到数据库: db%d", index)
return nil
@@ -874,6 +1263,63 @@ func (r *RedisClientImpl) FlushDB() error {
if r.client == nil {
return fmt.Errorf("Redis 客户端未连接")
}
if r.isCluster && r.clusterClient != nil {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()
namespacePrefix := r.redisNamespacePrefix()
var deletedTotal int64
var deletedMu sync.Mutex
err := r.clusterClient.ForEachMaster(ctx, func(nodeCtx context.Context, node *redis.Client) error {
var cursor uint64
for {
pattern := "*"
if namespacePrefix != "" {
pattern = namespacePrefix + "*"
}
keys, nextCursor, err := node.Scan(nodeCtx, cursor, pattern, 2000).Result()
if err != nil {
return err
}
if namespacePrefix == "" {
filtered := keys[:0]
for _, key := range keys {
// db0 compatibility: skip keys carrying a logical-DB prefix so db1~db15 are not wiped by mistake.
if strings.HasPrefix(key, "__gonavi_db_") {
continue
}
filtered = append(filtered, key)
}
keys = filtered
}
if len(keys) > 0 {
deleted, err := node.Del(nodeCtx, keys...).Result()
if err != nil {
return err
}
deletedMu.Lock()
deletedTotal += deleted
deletedMu.Unlock()
}
cursor = nextCursor
if cursor == 0 {
break
}
}
return nil
})
if err != nil {
return err
}
logger.Infof("Redis 集群逻辑库清空完成: db%d deleted=%d", r.currentDB, deletedTotal)
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
return r.client.FlushDB(ctx).Err()


@@ -0,0 +1,55 @@
package redis
import (
"crypto/tls"
"strings"
"GoNavi-Wails/internal/connection"
)
func normalizeRedisSSLMode(raw string) string {
mode := strings.ToLower(strings.TrimSpace(raw))
switch mode {
case "", "preferred", "prefer":
return "preferred"
case "required", "require", "on", "true", "mandatory", "strict":
return "required"
case "skip-verify", "insecure", "skipverify", "skip_verify", "insecure-skip-verify":
return "skip-verify"
case "disable", "disabled", "off", "false", "none":
return "disable"
default:
return "preferred"
}
}
func redisSSLMode(config connection.ConnectionConfig) string {
if !config.UseSSL {
return "disable"
}
return normalizeRedisSSLMode(config.SSLMode)
}
func shouldTryRedisSSLPreferredFallback(config connection.ConnectionConfig) bool {
return config.UseSSL && normalizeRedisSSLMode(config.SSLMode) == "preferred"
}
func withRedisSSLDisabled(config connection.ConnectionConfig) connection.ConnectionConfig {
next := config
next.UseSSL = false
next.SSLMode = "disable"
return next
}
func resolveRedisTLSConfig(config connection.ConnectionConfig) *tls.Config {
switch redisSSLMode(config) {
case "disable":
return nil
case "required":
return &tls.Config{MinVersion: tls.VersionTLS12}
case "skip-verify":
return &tls.Config{MinVersion: tls.VersionTLS12, InsecureSkipVerify: true}
default:
return &tls.Config{MinVersion: tls.VersionTLS12, InsecureSkipVerify: true}
}
}
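The alias table in `normalizeRedisSSLMode` collapses many user-facing spellings into four canonical modes, with unknown input falling back to `preferred`. A standalone copy of the function (taken verbatim from the new file above) makes the mapping easy to verify:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeSSLMode is a copy of normalizeRedisSSLMode from the new file above.
func normalizeSSLMode(raw string) string {
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "", "preferred", "prefer":
		return "preferred"
	case "required", "require", "on", "true", "mandatory", "strict":
		return "required"
	case "skip-verify", "insecure", "skipverify", "skip_verify", "insecure-skip-verify":
		return "skip-verify"
	case "disable", "disabled", "off", "false", "none":
		return "disable"
	default:
		// Unknown spellings fall back to preferred (try TLS, allow fallback).
		return "preferred"
	}
}

func main() {
	for _, raw := range []string{" Require ", "insecure", "OFF", "weird"} {
		fmt.Printf("%q -> %s\n", raw, normalizeSSLMode(raw))
	}
}
```

The `preferred` default is what makes `shouldTryRedisSSLPreferredFallback` fire for most configurations: only an explicit `required`, `skip-verify`, or `disable` opts out of the plaintext retry.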

BIN
optional-driver-agent Executable file

Binary file not shown.