Mirror of https://github.com/Syngnat/GoNavi.git, synced 2026-05-12 02:09:42 +08:00
792 lines
20 KiB
Go
package db

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"
	"strings"
	"time"

	"GoNavi-Wails/internal/connection"
	"GoNavi-Wails/internal/logger"
	"GoNavi-Wails/internal/ssh"
	"GoNavi-Wails/internal/utils"

	_ "github.com/go-sql-driver/mysql"
)

// MySQLDB wraps a database/sql connection to a MySQL-compatible server.
type MySQLDB struct {
	conn        *sql.DB
	pingTimeout time.Duration
}

const defaultMySQLPort = 3306

// parseHostPortWithDefault splits "host", "host:port" or "[ipv6]:port" into
// host and port, falling back to defaultPort when no valid port is present.
// The third return value reports whether the input was non-empty.
func parseHostPortWithDefault(raw string, defaultPort int) (string, int, bool) {
	text := strings.TrimSpace(raw)
	if text == "" {
		return "", 0, false
	}

	if strings.HasPrefix(text, "[") {
		end := strings.Index(text, "]")
		if end < 0 {
			return text, defaultPort, true
		}
		host := text[1:end]
		portText := strings.TrimSpace(text[end+1:])
		if strings.HasPrefix(portText, ":") {
			if p, err := strconv.Atoi(strings.TrimSpace(strings.TrimPrefix(portText, ":"))); err == nil && p > 0 {
				return host, p, true
			}
		}
		return host, defaultPort, true
	}

	lastColon := strings.LastIndex(text, ":")
	if lastColon > 0 && strings.Count(text, ":") == 1 {
		host := strings.TrimSpace(text[:lastColon])
		portText := strings.TrimSpace(text[lastColon+1:])
		if host != "" {
			if p, err := strconv.Atoi(portText); err == nil && p > 0 {
				return host, p, true
			}
			return host, defaultPort, true
		}
	}

	return text, defaultPort, true
}
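The parsing rules above can be exercised standalone. Below is a minimal sketch (the `splitWithDefault` helper is illustrative, not part of GoNavi) that reproduces the same bracketed-IPv6 and single-colon handling, including the fallback that leaves a bare IPv6 literal intact:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitWithDefault mirrors the fallback-tolerant parsing above:
// "[::1]:3307" -> ("::1", 3307), "db1" -> ("db1", 3306), "::1" -> ("::1", 3306).
func splitWithDefault(raw string, def int) (string, int) {
	text := strings.TrimSpace(raw)
	if strings.HasPrefix(text, "[") { // bracketed IPv6 literal
		if end := strings.Index(text, "]"); end > 0 {
			host := text[1:end]
			if rest := strings.TrimPrefix(strings.TrimSpace(text[end+1:]), ":"); rest != "" {
				if p, err := strconv.Atoi(rest); err == nil && p > 0 {
					return host, p
				}
			}
			return host, def
		}
	}
	// exactly one colon: treat it as a host:port separator
	if i := strings.LastIndex(text, ":"); i > 0 && strings.Count(text, ":") == 1 {
		if p, err := strconv.Atoi(text[i+1:]); err == nil && p > 0 {
			return text[:i], p
		}
	}
	return text, def
}

func main() {
	for _, in := range []string{"db1", "db2:3307", "[::1]:3307", "::1"} {
		h, p := splitWithDefault(in, 3306)
		fmt.Printf("%s -> %s %d\n", in, h, p)
	}
}
```

Note how an unbracketed IPv6 address contains multiple colons, so the single-colon rule never misinterprets part of it as a port.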

// normalizeMySQLAddress renders "host:port", substituting localhost and the
// default MySQL port when the inputs are empty or non-positive.
func normalizeMySQLAddress(host string, port int) string {
	h := strings.TrimSpace(host)
	if h == "" {
		h = "localhost"
	}
	p := port
	if p <= 0 {
		p = defaultMySQLPort
	}
	return fmt.Sprintf("%s:%d", h, p)
}

// applyMySQLURI overlays fields parsed from a mysql:// URI onto the config.
// Explicitly configured values always win; the URI only fills in blanks.
func applyMySQLURI(config connection.ConnectionConfig) connection.ConnectionConfig {
	uriText := strings.TrimSpace(config.URI)
	if uriText == "" {
		return config
	}
	lowerURI := strings.ToLower(uriText)
	if !strings.HasPrefix(lowerURI, "mysql://") {
		return config
	}

	parsed, err := url.Parse(uriText)
	if err != nil {
		return config
	}

	if parsed.User != nil {
		if config.User == "" {
			config.User = parsed.User.Username()
		}
		if pass, ok := parsed.User.Password(); ok && config.Password == "" {
			config.Password = pass
		}
	}

	if dbName := strings.TrimPrefix(parsed.Path, "/"); dbName != "" && config.Database == "" {
		config.Database = dbName
	}

	defaultPort := config.Port
	if defaultPort <= 0 {
		defaultPort = defaultMySQLPort
	}

	hostsFromURI := make([]string, 0, 4)
	hostText := strings.TrimSpace(parsed.Host)
	if hostText != "" {
		for _, entry := range strings.Split(hostText, ",") {
			host, port, ok := parseHostPortWithDefault(entry, defaultPort)
			if !ok {
				continue
			}
			hostsFromURI = append(hostsFromURI, normalizeMySQLAddress(host, port))
		}
	}

	if len(config.Hosts) == 0 && len(hostsFromURI) > 0 {
		config.Hosts = hostsFromURI
	}
	if strings.TrimSpace(config.Host) == "" && len(hostsFromURI) > 0 {
		host, port, ok := parseHostPortWithDefault(hostsFromURI[0], defaultPort)
		if ok {
			config.Host = host
			config.Port = port
		}
	}

	if config.Topology == "" {
		topology := strings.TrimSpace(parsed.Query().Get("topology"))
		if topology != "" {
			config.Topology = strings.ToLower(topology)
		}
	}

	return config
}
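The multi-host trick above relies on `net/url` leaving a comma-separated authority intact in `u.Host` (commas are legal sub-delims in a URI host), so the config layer can split it afterwards. A self-contained sketch of the same idea, with an illustrative `parseMultiHostURI` helper that is not part of GoNavi's API:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseMultiHostURI extracts user, database, topology and the host list from
// a mysql:// URI such as mysql://u:p@h1:3306,h2:3307/db?topology=replica.
func parseMultiHostURI(raw string) (user, db, topology string, hosts []string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return
	}
	if u.User != nil {
		user = u.User.Username()
	}
	db = strings.TrimPrefix(u.Path, "/")
	topology = strings.ToLower(u.Query().Get("topology"))
	// u.Host keeps "h1:3306,h2:3307" whole; split entries off ourselves.
	for _, h := range strings.Split(u.Host, ",") {
		if h = strings.TrimSpace(h); h != "" {
			hosts = append(hosts, h)
		}
	}
	return
}

func main() {
	user, db, topo, hosts, err := parseMultiHostURI("mysql://app:secret@db1:3306,db2:3307/orders?topology=replica")
	fmt.Println(user, db, topo, hosts, err)
}
```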

// collectMySQLAddresses returns the deduplicated, normalized list of candidate
// addresses: config.Hosts when present, otherwise the single Host/Port pair.
func collectMySQLAddresses(config connection.ConnectionConfig) []string {
	defaultPort := config.Port
	if defaultPort <= 0 {
		defaultPort = defaultMySQLPort
	}

	candidates := make([]string, 0, len(config.Hosts)+1)
	if len(config.Hosts) > 0 {
		candidates = append(candidates, config.Hosts...)
	} else {
		candidates = append(candidates, normalizeMySQLAddress(config.Host, defaultPort))
	}

	result := make([]string, 0, len(candidates))
	seen := make(map[string]struct{}, len(candidates))
	for _, entry := range candidates {
		host, port, ok := parseHostPortWithDefault(entry, defaultPort)
		if !ok {
			continue
		}
		normalized := normalizeMySQLAddress(host, port)
		if _, exists := seen[normalized]; exists {
			continue
		}
		seen[normalized] = struct{}{}
		result = append(result, normalized)
	}
	return result
}

// getDSN builds a go-sql-driver/mysql DSN. With SSH enabled, the registered
// SSH network name replaces "tcp" so the driver dials through the tunnel.
func (m *MySQLDB) getDSN(config connection.ConnectionConfig) string {
	database := config.Database
	protocol := "tcp"
	address := normalizeMySQLAddress(config.Host, config.Port)

	if config.UseSSH {
		netName, err := ssh.RegisterSSHNetwork(config.SSH)
		if err == nil {
			protocol = netName
			address = normalizeMySQLAddress(config.Host, config.Port)
		} else {
			logger.Warnf("注册 SSH 网络失败,将尝试直连:地址=%s:%d 用户=%s,原因:%v", config.Host, config.Port, config.User, err)
		}
	}

	timeout := getConnectTimeoutSeconds(config)
	tlsMode := resolveMySQLTLSMode(config)

	return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
		config.User, config.Password, protocol, address, database, timeout, url.QueryEscape(tlsMode))
}
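The DSN follows go-sql-driver/mysql's documented `user:password@protocol(address)/dbname?param=value` shape. A standalone sketch of the same construction (helper name is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN reproduces the DSN layout used above for go-sql-driver/mysql:
// user:password@protocol(address)/dbname?params.
func buildDSN(user, pass, proto, addr, db, tlsMode string, timeoutSec int) string {
	return fmt.Sprintf("%s:%s@%s(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local&timeout=%ds&tls=%s",
		user, pass, proto, addr, db, timeoutSec, url.QueryEscape(tlsMode))
}

func main() {
	// parseTime=True makes the driver return time.Time for DATE/DATETIME
	// columns; timeout is the dial timeout in seconds.
	fmt.Println(buildDSN("app", "secret", "tcp", "db1:3306", "orders", "false", 10))
}
```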

// resolveMySQLCredential picks primary credentials for the first address and
// replica credentials (when configured) for the rest.
func resolveMySQLCredential(config connection.ConnectionConfig, addressIndex int) (string, string) {
	primaryUser := strings.TrimSpace(config.User)
	primaryPassword := config.Password
	replicaUser := strings.TrimSpace(config.MySQLReplicaUser)
	replicaPassword := config.MySQLReplicaPassword

	if addressIndex > 0 && replicaUser != "" {
		return replicaUser, replicaPassword
	}

	if primaryUser == "" && replicaUser != "" {
		return replicaUser, replicaPassword
	}

	return config.User, primaryPassword
}

// Connect tries each candidate address in order and keeps the first one that
// opens and answers a ping; per-address failures are collected for the error.
func (m *MySQLDB) Connect(config connection.ConnectionConfig) error {
	runConfig := applyMySQLURI(config)
	addresses := collectMySQLAddresses(runConfig)
	if len(addresses) == 0 {
		return fmt.Errorf("未找到可用的 MySQL 地址")
	}

	var errorDetails []string
	for index, address := range addresses {
		candidateConfig := runConfig
		host, port, ok := parseHostPortWithDefault(address, defaultMySQLPort)
		if !ok {
			continue
		}
		candidateConfig.Host = host
		candidateConfig.Port = port
		candidateConfig.User, candidateConfig.Password = resolveMySQLCredential(runConfig, index)

		dsn := m.getDSN(candidateConfig)
		db, err := sql.Open("mysql", dsn)
		if err != nil {
			errorDetails = append(errorDetails, fmt.Sprintf("%s 打开失败: %v", address, err))
			continue
		}

		timeout := getConnectTimeout(candidateConfig)
		ctx, cancel := utils.ContextWithTimeout(timeout)
		pingErr := db.PingContext(ctx)
		cancel()
		if pingErr != nil {
			_ = db.Close()
			errorDetails = append(errorDetails, fmt.Sprintf("%s 验证失败: %v", address, pingErr))
			continue
		}

		m.conn = db
		m.pingTimeout = timeout
		return nil
	}

	if len(errorDetails) == 0 {
		return fmt.Errorf("未找到可用的 MySQL 地址")
	}
	return fmt.Errorf("连接建立后验证失败:%s", strings.Join(errorDetails, ";"))
}

func (m *MySQLDB) Close() error {
	if m.conn != nil {
		return m.conn.Close()
	}
	return nil
}

func (m *MySQLDB) Ping() error {
	if m.conn == nil {
		return fmt.Errorf("connection not open")
	}
	timeout := m.pingTimeout
	if timeout <= 0 {
		timeout = 5 * time.Second
	}
	ctx, cancel := utils.ContextWithTimeout(timeout)
	defer cancel()
	return m.conn.PingContext(ctx)
}

func (m *MySQLDB) QueryContext(ctx context.Context, query string) ([]map[string]interface{}, []string, error) {
	if m.conn == nil {
		return nil, nil, fmt.Errorf("connection not open")
	}

	rows, err := m.conn.QueryContext(ctx, query)
	if err != nil {
		return nil, nil, err
	}
	defer rows.Close()

	return scanRows(rows)
}

func (m *MySQLDB) Query(query string) ([]map[string]interface{}, []string, error) {
	if m.conn == nil {
		return nil, nil, fmt.Errorf("connection not open")
	}

	rows, err := m.conn.Query(query)
	if err != nil {
		return nil, nil, err
	}
	defer rows.Close()
	return scanRows(rows)
}

func (m *MySQLDB) ExecContext(ctx context.Context, query string) (int64, error) {
	if m.conn == nil {
		return 0, fmt.Errorf("connection not open")
	}
	res, err := m.conn.ExecContext(ctx, query)
	if err != nil {
		return 0, err
	}
	return res.RowsAffected()
}

func (m *MySQLDB) Exec(query string) (int64, error) {
	if m.conn == nil {
		return 0, fmt.Errorf("connection not open")
	}
	res, err := m.conn.Exec(query)
	if err != nil {
		return 0, err
	}
	return res.RowsAffected()
}

func (m *MySQLDB) GetDatabases() ([]string, error) {
	data, _, err := m.Query("SHOW DATABASES")
	if err != nil {
		return nil, err
	}
	var dbs []string
	for _, row := range data {
		if val, ok := row["Database"]; ok {
			dbs = append(dbs, fmt.Sprintf("%v", val))
		} else if val, ok := row["database"]; ok {
			dbs = append(dbs, fmt.Sprintf("%v", val))
		}
	}
	return dbs, nil
}

func (m *MySQLDB) GetTables(dbName string) ([]string, error) {
	query := "SHOW TABLES"
	if dbName != "" {
		query = fmt.Sprintf("SHOW TABLES FROM `%s`", dbName)
	}

	data, _, err := m.Query(query)
	if err != nil {
		return nil, err
	}

	var tables []string
	for _, row := range data {
		for _, v := range row {
			tables = append(tables, fmt.Sprintf("%v", v))
			break
		}
	}
	return tables, nil
}

func (m *MySQLDB) GetCreateStatement(dbName, tableName string) (string, error) {
	query := fmt.Sprintf("SHOW CREATE TABLE `%s`.`%s`", dbName, tableName)
	if dbName == "" {
		query = fmt.Sprintf("SHOW CREATE TABLE `%s`", tableName)
	}

	data, _, err := m.Query(query)
	if err != nil {
		return "", err
	}

	if len(data) > 0 {
		if val, ok := data[0]["Create Table"]; ok {
			return fmt.Sprintf("%v", val), nil
		}
	}
	return "", fmt.Errorf("create statement not found")
}

func (m *MySQLDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
	query := fmt.Sprintf("SHOW FULL COLUMNS FROM `%s`.`%s`", dbName, tableName)
	if dbName == "" {
		query = fmt.Sprintf("SHOW FULL COLUMNS FROM `%s`", tableName)
	}

	data, _, err := m.Query(query)
	if err != nil {
		return nil, err
	}

	var columns []connection.ColumnDefinition
	for _, row := range data {
		col := connection.ColumnDefinition{
			Name:     fmt.Sprintf("%v", row["Field"]),
			Type:     fmt.Sprintf("%v", row["Type"]),
			Nullable: fmt.Sprintf("%v", row["Null"]),
			Key:      fmt.Sprintf("%v", row["Key"]),
			Extra:    fmt.Sprintf("%v", row["Extra"]),
			Comment:  fmt.Sprintf("%v", row["Comment"]),
		}

		if row["Default"] != nil {
			d := fmt.Sprintf("%v", row["Default"])
			col.Default = &d
		}

		columns = append(columns, col)
	}
	return columns, nil
}

func (m *MySQLDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
	query := fmt.Sprintf("SHOW INDEX FROM `%s`.`%s`", dbName, tableName)
	if dbName == "" {
		query = fmt.Sprintf("SHOW INDEX FROM `%s`", tableName)
	}

	data, _, err := m.Query(query)
	if err != nil {
		return nil, err
	}

	var indexes []connection.IndexDefinition
	for _, row := range data {
		nonUnique := 0
		if val, ok := row["Non_unique"]; ok {
			if f, ok := val.(float64); ok {
				nonUnique = int(f)
			} else if i, ok := val.(int64); ok {
				nonUnique = int(i)
			}
		}

		seq := 0
		if val, ok := row["Seq_in_index"]; ok {
			if f, ok := val.(float64); ok {
				seq = int(f)
			} else if i, ok := val.(int64); ok {
				seq = int(i)
			}
		}

		subPart := 0
		if val, ok := row["Sub_part"]; ok && val != nil {
			if f, ok := val.(float64); ok {
				subPart = int(f)
			} else if i, ok := val.(int64); ok {
				subPart = int(i)
			}
		}

		idx := connection.IndexDefinition{
			Name:       fmt.Sprintf("%v", row["Key_name"]),
			ColumnName: fmt.Sprintf("%v", row["Column_name"]),
			NonUnique:  nonUnique,
			SeqInIndex: seq,
			IndexType:  fmt.Sprintf("%v", row["Index_type"]),
			SubPart:    subPart,
		}
		indexes = append(indexes, idx)
	}
	return indexes, nil
}

func (m *MySQLDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
	query := fmt.Sprintf(`SELECT CONSTRAINT_NAME, COLUMN_NAME, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
		FROM information_schema.KEY_COLUMN_USAGE
		WHERE TABLE_SCHEMA = '%s' AND TABLE_NAME = '%s' AND REFERENCED_TABLE_NAME IS NOT NULL`, dbName, tableName)

	data, _, err := m.Query(query)
	if err != nil {
		return nil, err
	}

	var fks []connection.ForeignKeyDefinition
	for _, row := range data {
		fk := connection.ForeignKeyDefinition{
			Name:           fmt.Sprintf("%v", row["CONSTRAINT_NAME"]),
			ColumnName:     fmt.Sprintf("%v", row["COLUMN_NAME"]),
			RefTableName:   fmt.Sprintf("%v", row["REFERENCED_TABLE_NAME"]),
			RefColumnName:  fmt.Sprintf("%v", row["REFERENCED_COLUMN_NAME"]),
			ConstraintName: fmt.Sprintf("%v", row["CONSTRAINT_NAME"]),
		}
		fks = append(fks, fk)
	}
	return fks, nil
}

func (m *MySQLDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
	query := fmt.Sprintf("SHOW TRIGGERS FROM `%s` WHERE `Table` = '%s'", dbName, tableName)
	data, _, err := m.Query(query)
	if err != nil {
		return nil, err
	}

	var triggers []connection.TriggerDefinition
	for _, row := range data {
		trig := connection.TriggerDefinition{
			Name:      fmt.Sprintf("%v", row["Trigger"]),
			Timing:    fmt.Sprintf("%v", row["Timing"]),
			Event:     fmt.Sprintf("%v", row["Event"]),
			Statement: fmt.Sprintf("%v", row["Statement"]),
		}
		triggers = append(triggers, trig)
	}
	return triggers, nil
}

// ApplyChanges applies a grid ChangeSet (deletes, updates, then inserts) in a
// single transaction; a statement that matches zero rows aborts the commit.
func (m *MySQLDB) ApplyChanges(tableName string, changes connection.ChangeSet) error {
	if m.conn == nil {
		return fmt.Errorf("connection not open")
	}

	columnTypeMap := m.loadColumnTypeMap(tableName)

	tx, err := m.conn.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// 1. Deletes
	for _, pk := range changes.Deletes {
		var wheres []string
		var args []interface{}
		for k, v := range pk {
			wheres = append(wheres, fmt.Sprintf("`%s` = ?", k))
			args = append(args, normalizeMySQLValueForWrite(k, v, columnTypeMap))
		}
		if len(wheres) == 0 {
			continue
		}
		query := fmt.Sprintf("DELETE FROM `%s` WHERE %s", tableName, strings.Join(wheres, " AND "))
		res, err := tx.Exec(query, args...)
		if err != nil {
			return fmt.Errorf("delete error: %v", err)
		}
		if affected, err := res.RowsAffected(); err == nil && affected == 0 {
			return fmt.Errorf("删除未生效:未匹配到任何行")
		}
	}

	// 2. Updates
	for _, update := range changes.Updates {
		var sets []string
		var args []interface{}

		for k, v := range update.Values {
			sets = append(sets, fmt.Sprintf("`%s` = ?", k))
			args = append(args, normalizeMySQLValueForWrite(k, v, columnTypeMap))
		}

		if len(sets) == 0 {
			continue
		}

		var wheres []string
		for k, v := range update.Keys {
			wheres = append(wheres, fmt.Sprintf("`%s` = ?", k))
			args = append(args, normalizeMySQLValueForWrite(k, v, columnTypeMap))
		}

		if len(wheres) == 0 {
			return fmt.Errorf("update requires keys")
		}

		query := fmt.Sprintf("UPDATE `%s` SET %s WHERE %s", tableName, strings.Join(sets, ", "), strings.Join(wheres, " AND "))
		res, err := tx.Exec(query, args...)
		if err != nil {
			return fmt.Errorf("update error: %v", err)
		}
		if affected, err := res.RowsAffected(); err == nil && affected == 0 {
			return fmt.Errorf("更新未生效:未匹配到任何行")
		}
	}

	// 3. Inserts
	for _, row := range changes.Inserts {
		var cols []string
		var placeholders []string
		var args []interface{}

		for k, v := range row {
			normalizedValue, omit := normalizeMySQLValueForInsert(k, v, columnTypeMap)
			if omit {
				continue
			}
			cols = append(cols, fmt.Sprintf("`%s`", k))
			placeholders = append(placeholders, "?")
			args = append(args, normalizedValue)
		}

		if len(cols) == 0 {
			query := fmt.Sprintf("INSERT INTO `%s` () VALUES ()", tableName)
			res, err := tx.Exec(query)
			if err != nil {
				return fmt.Errorf("insert error: %v", err)
			}
			if affected, err := res.RowsAffected(); err == nil && affected == 0 {
				return fmt.Errorf("插入未生效:未影响任何行")
			}
			continue
		}

		query := fmt.Sprintf("INSERT INTO `%s` (%s) VALUES (%s)", tableName, strings.Join(cols, ", "), strings.Join(placeholders, ", "))
		res, err := tx.Exec(query, args...)
		if err != nil {
			return fmt.Errorf("insert error: %v", err)
		}
		if affected, err := res.RowsAffected(); err == nil && affected == 0 {
			return fmt.Errorf("插入未生效:未影响任何行")
		}
	}

	return tx.Commit()
}
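Every statement above interpolates only backtick-quoted identifiers into the SQL text; values travel separately as `?` placeholder arguments. A standalone sketch of the UPDATE shape (the `buildUpdate` helper is illustrative; the real code iterates maps, so its column order is not deterministic):

```go
package main

import (
	"fmt"
	"strings"
)

// buildUpdate shows the statement shape generated for one grid update:
// backtick-quoted identifiers, "?" placeholders, values bound separately.
func buildUpdate(table string, sets, keys []string) string {
	quote := func(cols []string) []string {
		out := make([]string, len(cols))
		for i, c := range cols {
			out[i] = fmt.Sprintf("`%s` = ?", c)
		}
		return out
	}
	return fmt.Sprintf("UPDATE `%s` SET %s WHERE %s",
		table, strings.Join(quote(sets), ", "), strings.Join(quote(keys), " AND "))
}

func main() {
	fmt.Println(buildUpdate("users", []string{"name", "email"}, []string{"id"}))
}
```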

// normalizeMySQLComplexValue serializes maps and slices to JSON text so they
// can be bound as ordinary string parameters.
func normalizeMySQLComplexValue(value interface{}) interface{} {
	switch v := value.(type) {
	case map[string]interface{}, []interface{}:
		if data, err := json.Marshal(v); err == nil {
			return string(data)
		}
		return fmt.Sprintf("%v", value)
	default:
		return value
	}
}
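The idea can be checked in isolation; a minimal sketch with an illustrative `toJSONText` helper (scalars pass through untouched, containers become JSON strings):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toJSONText converts maps and slices to JSON text; other values pass through.
func toJSONText(v interface{}) interface{} {
	switch v.(type) {
	case map[string]interface{}, []interface{}:
		if data, err := json.Marshal(v); err == nil {
			return string(data)
		}
	}
	return v
}

func main() {
	fmt.Println(toJSONText(map[string]interface{}{"tags": []interface{}{"a", "b"}}))
	fmt.Println(toJSONText(42))
}
```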

// normalizeMySQLDateTimeValue rewrites RFC 3339 style strings (with a "T"
// separator and a "Z" or numeric offset) into MySQL's "YYYY-MM-DD hh:mm:ss"
// form; values that do not look like timestamps pass through unchanged.
func normalizeMySQLDateTimeValue(value interface{}) interface{} {
	text, ok := value.(string)
	if !ok {
		return value
	}
	raw := strings.TrimSpace(text)
	if raw == "" {
		return value
	}

	cleaned := strings.ReplaceAll(raw, "+ ", "+")
	cleaned = strings.ReplaceAll(cleaned, "- ", "-")

	if len(cleaned) >= 19 && cleaned[10] == 'T' {
		if strings.HasSuffix(cleaned, "Z") || hasTimezoneOffset(cleaned) {
			if t, err := time.Parse(time.RFC3339Nano, cleaned); err == nil {
				return formatMySQLDateTime(t)
			}
			if t, err := time.Parse(time.RFC3339, cleaned); err == nil {
				return formatMySQLDateTime(t)
			}
		}
		return strings.Replace(cleaned, "T", " ", 1)
	}

	if strings.Contains(cleaned, " ") && (strings.HasSuffix(cleaned, "Z") || hasTimezoneOffset(cleaned)) {
		candidate := strings.Replace(cleaned, " ", "T", 1)
		if t, err := time.Parse(time.RFC3339Nano, candidate); err == nil {
			return formatMySQLDateTime(t)
		}
		if t, err := time.Parse(time.RFC3339, candidate); err == nil {
			return formatMySQLDateTime(t)
		}
	}

	return value
}
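The core conversion is a stdlib round trip: parse with `time.RFC3339Nano`, reformat with Go's reference layout, and append microseconds only when the value carries sub-second precision. A self-contained sketch (helper name is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// toMySQLDateTime parses an RFC 3339 timestamp and renders it as a MySQL
// DATETIME literal, keeping up to microsecond precision when present.
func toMySQLDateTime(s string) (string, error) {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		return "", err
	}
	base := t.Format("2006-01-02 15:04:05")
	if ns := t.Nanosecond(); ns != 0 {
		return fmt.Sprintf("%s.%06d", base, ns/1000), nil
	}
	return base, nil
}

func main() {
	out, err := toMySQLDateTime("2024-05-01T12:30:45.123456Z")
	fmt.Println(out, err)
}
```

MySQL's DATETIME accepts at most six fractional digits, which is why nanoseconds are truncated to microseconds.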

// loadColumnTypeMap fetches column types for a table, keyed by lower-cased
// column name; failures are logged and yield an empty map so commits proceed.
func (m *MySQLDB) loadColumnTypeMap(tableName string) map[string]string {
	result := map[string]string{}
	table := strings.TrimSpace(tableName)
	if table == "" {
		return result
	}

	columns, err := m.GetColumns("", table)
	if err != nil {
		logger.Warnf("加载列元数据失败(不影响提交):表=%s err=%v", table, err)
		return result
	}

	for _, col := range columns {
		name := strings.ToLower(strings.TrimSpace(col.Name))
		if name == "" {
			continue
		}
		result[name] = strings.TrimSpace(col.Type)
	}
	return result
}

func normalizeMySQLValueForInsert(columnName string, value interface{}, columnTypeMap map[string]string) (interface{}, bool) {
	columnType := strings.ToLower(strings.TrimSpace(columnTypeMap[strings.ToLower(strings.TrimSpace(columnName))]))
	if !isMySQLTemporalColumnType(columnType) {
		return normalizeMySQLComplexValue(value), false
	}
	text, ok := value.(string)
	if ok && strings.TrimSpace(text) == "" {
		// Skip empty temporal fields on INSERT so the database default
		// (e.g. CURRENT_TIMESTAMP) applies instead.
		return nil, true
	}
	return normalizeMySQLDateTimeValue(value), false
}

func normalizeMySQLValueForWrite(columnName string, value interface{}, columnTypeMap map[string]string) interface{} {
	columnType := strings.ToLower(strings.TrimSpace(columnTypeMap[strings.ToLower(strings.TrimSpace(columnName))]))
	if !isMySQLTemporalColumnType(columnType) {
		return value
	}
	text, ok := value.(string)
	if ok && strings.TrimSpace(text) == "" {
		return nil
	}
	return normalizeMySQLDateTimeValue(value)
}

// isMySQLTemporalColumnType reports whether a column type is date/time-like:
// datetime or timestamp anywhere in the type string, or a bare date, time or
// year base type once any "(n)" precision suffix is stripped.
func isMySQLTemporalColumnType(columnType string) bool {
	raw := strings.ToLower(strings.TrimSpace(columnType))
	if raw == "" {
		return false
	}
	if strings.Contains(raw, "datetime") || strings.Contains(raw, "timestamp") {
		return true
	}
	base := raw
	if idx := strings.IndexAny(base, "( "); idx >= 0 {
		base = base[:idx]
	}
	return base == "date" || base == "time" || base == "year"
}
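A quick standalone check of the temporal-type classification (the `isTemporal` helper below is an illustrative copy of the rule, not GoNavi's function):

```go
package main

import (
	"fmt"
	"strings"
)

// isTemporal classifies MySQL column types: datetime/timestamp may appear
// anywhere in the string, while date/time/year must match the base type once
// any "(n)" precision suffix is stripped.
func isTemporal(columnType string) bool {
	raw := strings.ToLower(strings.TrimSpace(columnType))
	if raw == "" {
		return false
	}
	if strings.Contains(raw, "datetime") || strings.Contains(raw, "timestamp") {
		return true
	}
	base := raw
	if idx := strings.IndexAny(base, "( "); idx >= 0 {
		base = base[:idx]
	}
	return base == "date" || base == "time" || base == "year"
}

func main() {
	for _, t := range []string{"datetime(6)", "timestamp", "time(3)", "varchar(255)"} {
		fmt.Println(t, isTemporal(t))
	}
}
```

Requiring an exact base-type match for `time` prevents false positives on types like `mediumtext` that merely contain the substring.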

func hasTimezoneOffset(text string) bool {
	pos := strings.LastIndexAny(text, "+-")
	if pos < 0 || pos < 10 || pos+1 >= len(text) {
		return false
	}
	offset := text[pos+1:]
	if len(offset) == 5 && offset[2] == ':' {
		return isAllDigits(offset[:2]) && isAllDigits(offset[3:])
	}
	if len(offset) == 4 {
		return isAllDigits(offset)
	}
	return false
}

func isAllDigits(text string) bool {
	if text == "" {
		return false
	}
	for _, r := range text {
		if r < '0' || r > '9' {
			return false
		}
	}
	return true
}

// formatMySQLDateTime renders a time.Time as a MySQL DATETIME literal,
// appending microseconds only when the value has sub-second precision.
func formatMySQLDateTime(t time.Time) string {
	base := t.Format("2006-01-02 15:04:05")
	nanos := t.Nanosecond()
	if nanos == 0 {
		return base
	}
	micro := nanos / 1000
	return fmt.Sprintf("%s.%06d", base, micro)
}

func (m *MySQLDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
	if dbName == "" {
		return nil, fmt.Errorf("database name required for GetAllColumns")
	}
	query := fmt.Sprintf("SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = '%s'", dbName)

	data, _, err := m.Query(query)
	if err != nil {
		return nil, err
	}

	var cols []connection.ColumnDefinitionWithTable
	for _, row := range data {
		col := connection.ColumnDefinitionWithTable{
			TableName: fmt.Sprintf("%v", row["TABLE_NAME"]),
			Name:      fmt.Sprintf("%v", row["COLUMN_NAME"]),
			Type:      fmt.Sprintf("%v", row["COLUMN_TYPE"]),
		}
		cols = append(cols, col)
	}
	return cols, nil
}