mirror of
https://github.com/Syngnat/GoNavi.git
synced 2026-05-15 12:27:38 +08:00
* 🐛 fix(data-viewer): Fix ClickHouse tail-page pagination errors and improve DuckDB complex-type compatibility
  - DataViewer adds a reverse-pagination strategy for ClickHouse, fixing query failures on the last and second-to-last pages
  - When a DuckDB query fails, generate a safe SELECT from the column types and retry with complex types cast to VARCHAR
  - Backfill pagination state uniformly via currentPage, avoiding inconsistency between the page number and derived totals
  - Improve query-error logging and retry paths, reducing stalls and false alarms on large tables
* ✨ feat(frontend-driver): Driver manager gains quick search and a cleaner info display
  - Add a search box for quickly locating drivers by keywords such as DuckDB or ClickHouse
  - Show a "matched x / y" counter and a no-results hint
  - Improve header layout and visual alignment in transparent and dark themes
* 🔧 fix(connection-modal): Fix multi-datasource URI import parsing and correct Oracle service-name validation
  - Add single-host URI parsing mappings for postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
  - Extract parseSingleHostUri for reuse, unifying host/port/user/password/database backfill behavior
  - Oracle connections now require a service name, removing the implicit "fall back to the username when the service name is empty" logic
  - The connection modal adds an Oracle service-name input and URI examples
* 🐛 fix(query-export): Fix stuck query-result export and route exports by datasource capability
  - Query-result export gains a stable fallback that always closes the loading state on error, ending the endless spinner
  - DataGrid export branches on datasource capability, preferring backend ExportQuery and keeping result-set export as the fallback
  - QueryEditor passes the result-export SQL so the exported range matches the current result set
  - Backend adds key ExportData/ExportQuery logs for better export observability
* 🐛 fix(precision): Fix big-integer precision loss in the query path and pagination totals
  - Proxy response decoding switches to UseNumber, avoiding the default float64 precision loss
  - Normalize json.Number and out-of-range integers uniformly, converting values beyond the JS safe integer range to strings
  - Fix DataViewer total-count parsing so oversized values are no longer coerced to Number for pagination
  - refs #142
* 🐛 fix(driver-manager): Remove duplicate network warnings in the driver manager and strengthen proxy guidance
  - Add download-chain domain probing to distinguish "GitHub reachable but driver download chain unreachable"
  - When the network is unreachable, keep only the red strong warning and drop the duplicate secondary one
  - The strong warning adds an "open global proxy settings" entry, steering users to the GoNavi global proxy first
  - Unify icon sizes in the network-check and directory hints, fixing visual inconsistency while loading
  - refs #141
* ♻️ refactor(frontend-interaction): Unify tab drag-and-drop and dark-theme interactions
  - Refactor tab drag-sorting into a configurable drag engine
  - Standardize the boundary between drag and click events for consistent interaction
  - Unify the dark transparent styling strategy across components, reducing hard-coded colors
  - Improve the look of the Redis, table, and connection panels in transparent mode
  - refs #144
* ♻️ refactor(update-state): Rework the online-update state flow and unify per-version progress display
  - Rework the update-check and download state sync, reducing frontend/backend state divergence
  - Bind progress display strictly to latestVersion, preventing cross-version state mixing
  - Improve silent-check state backfill when the About panel opens
  - Unify close and hide-to-background behavior of the download dialog
  - Keep the existing install flow and add the ability to open the install directory
* 🎨 style(sidebar-log): Turn the SQL execution-log entry into a floating pill
  - Remove the full-width log-entry container at the bottom of the sidebar
  - Add a floating button with shadow, border, and transparent background, adapted to light and dark themes
  - Reserve bottom space in the tree area so the entry does not cover content
* ✨ feat(redis-cluster): Support logical multi-database isolation and db 0-15 switching in cluster mode
  - Frontend restores db0-db15 selection and display for Redis clusters
  - Backend adds a logical-database namespace prefix mapping for clusters, unifying key/pattern read-write isolation
  - Key-mapping rules cover scan, read, write, delete, rename, and the other core operations
  - The cluster command channel supports logical SELECT switching and logical FLUSHDB
  - refs #145
* ✨ feat(DataGrid): Virtual-scrolling performance for large tables plus UI-consistency fixes
  - Enable dynamic virtual scrolling (auto-enabled at >= 500 rows), fixing lag on tables with tens of thousands of rows
  - In virtual mode EditableCell renders as a div, and CSS selectors move from element-level to class-level to fit the virtual DOM
  - Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
  - Add mouse-wheel support to the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
  - Fix the low-contrast column-name tooltip in the light transparent theme
  - Add light-theme global scrollbar styles for transparent mode (App.css)
  - App.tsx theme-token and component-style tweaks
  - refs #147
* 🔧 chore(app): Clean up App.tsx type warnings and tighten the frontend shell
  - Remove unused code and redundant state
  - Replace deprecated APIs to silence IDE hints
  - Handle floating Promises explicitly to avoid warnings
  - Keep existing update-check and proxy-settings behavior unchanged
* 🔧 fix(ci): Fix the DuckDB driver build chain on Windows AMD64
  - Switch DuckDB toolchain preparation to prefer MSYS2
  - Add existence and version checks for gcc and g++
  - Fall back to installing MinGW via Chocolatey when MSYS2 fails
  - Keep Windows ARM64 skipping the DuckDB build, consistent with platform support
* 🔧 fix(ci): Fix the DuckDB driver build toolchain on Windows AMD64
  - Switch the DuckDB build chain from MINGW64 to MSYS2 UCRT64
  - Correct the gcc and g++ probe paths for Windows AMD64
  - Add a DuckDB compiler version-check step
* 📝 docs(contributing): Add contribution guides in English and Chinese and unify the README entry points
  - Add an English CONTRIBUTING.md as the official contribution document
  - Add a Chinese CONTRIBUTING.zh-CN.md
  - Point the contribution links in README and README.zh-CN at the matching language
* feat(connection,metadata,kingbase): Strengthen multi-datasource connectivity and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)
  * feat(http-tunnel): Support standalone HTTP tunnel connections across datasources refs #168
  * fix(kingbase-data-grid): Fix lag when opening Kingbase tables and reduce object-rendering overhead refs #178
  * fix(kingbase-transaction): Fix syntax errors from doubled quotes in Kingbase transaction commits refs #176
  * fix(driver-agent): Fix Kingbase driver-agent startup failure after upgrading older Win10 builds refs #177
  * chore(ci): Add a manually triggered macOS test-build workflow
  * chore(ci): Allow the test workflow to auto-trigger on the current branch
  * fix(query-editor): Fix the cursor randomly jumping to the end while editing SQL refs #185
  * feat(data-sync): Add a diff-SQL preview for easier review refs #174
  * fix(clickhouse-connect): Auto-detect and fall back between HTTP and Native protocols refs #181
  * fix(oracle-metadata): Fix schema-filtered loading of views and functions refs #155
  * fix(dameng-databases): Fix the incomplete database list when showing all databases refs #154
  * fix(connection,db-list): Handle empty list returns uniformly and fix Dameng connection-test errors refs #157
  Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
* ✨ feat(release-notes): Auto-generate release notes and distinguish config-file naming
* 🔁 chore(sync): Backport main to dev (#192); replays the #188 commit series above, plus Release/0.5.3 (#191)
  Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
  Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>
* 🐛 fix(branch-sync): Fix missed auto-merge when mergeable is computed asynchronously while backporting main to dev
  - Poll the mergeable status, avoiding an immediate UNKNOWN right after the sync PR is created
  - Output warnings (in Chinese) and an execution summary while the merge state is not yet stable
  - Keep clear handling paths for conflicting, still-computing, and auto-mergeable branches
* 🔁 chore(sync): Backport main to dev (#195); replays the #188 commit series and Release/0.5.3 (#191) above, plus:
  * chore(ci): Add a manually triggered all-platform test-package build workflow (#194)
  * fix(kingbase): Add primary-key detection and reduce wide-table lag refs #176 refs #178
  * fix(query-execution): Recognize read-query results that carry leading comments
  Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
  Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>
* ♻️ refactor(frontend-sync): Polish desktop interaction details and remove the main-to-dev backport automation
  - Improve the new-connection, theme-settings, sidebar tool-area, and SQL-log UI
  - Adjust pagination, filtering, transparent-mode, and dialog styles for a unified interaction hierarchy
  - Consolidate how appearance parameters take effect and adapt the remaining components
  - Delete the sync-main-to-dev workflow and document manual backporting for maintainers
* feat: Unify the width of filter-condition logic buttons (#201)
* 🐛 fix(oracle-query): Fix Oracle table-data pagination SQL compatibility refs #196 (#202)
* ✨ feat(datasource): Support DuckDB Parquet file mode and optimize the dialog-open path
  - Unify DuckDB file-database and Parquet file access
  - Add URI, file-picker, read-only mount, and connection-cache-key handling
  - Drop the synchronous driver query before datasource-card clicks, fixing slow opening
  - refs #166
* 🐛 fix(dameng): Fix the empty database list after a successful Dameng connection
  - Adjust the Dameng database-list strategy, preferring a fallback to the current schema and current user
  - Keep the visible-user and owner aggregation logic, supporting low-privilege accounts
  - Add a frontend empty-list hint and backend unit tests to ease troubleshooting
  - close #203
* ✨ feat(data-sync): Extend cross-database migration paths and improve the data-sync interaction
  - Unify same-database sync and cross-database migration entry points, with mode distinction and risk hints
  - Extend bidirectional ClickHouse and PG-like migration, and add PG-like, ClickHouse, and TDengine to MongoDB migration routes
  - Complete TDengine target-side table planning, regression tests, and requirement-tracking docs
  - refs #51
--------
Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: TSS <266256496+Zencok@users.noreply.github.com>
958 lines
36 KiB
Go
package sync

import (
    "GoNavi-Wails/internal/connection"
    "context"
    "reflect"
    "strings"
    "testing"
)

type fakeMigrationDB struct {
    columns   map[string][]connection.ColumnDefinition
    indexes   map[string][]connection.IndexDefinition
    queryData map[string][]map[string]interface{}
    queryCols map[string][]string
}

func (f *fakeMigrationDB) Connect(config connection.ConnectionConfig) error { return nil }
func (f *fakeMigrationDB) Close() error { return nil }
func (f *fakeMigrationDB) Ping() error { return nil }
func (f *fakeMigrationDB) Query(query string) ([]map[string]interface{}, []string, error) {
    if rows, ok := f.queryData[query]; ok {
        return rows, f.queryCols[query], nil
    }
    return nil, nil, nil
}
func (f *fakeMigrationDB) Exec(query string) (int64, error) { return 0, nil }
func (f *fakeMigrationDB) GetDatabases() ([]string, error) { return nil, nil }
func (f *fakeMigrationDB) GetTables(dbName string) ([]string, error) {
    return nil, nil
}
func (f *fakeMigrationDB) GetCreateStatement(dbName, tableName string) (string, error) {
    return "", nil
}
func (f *fakeMigrationDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
    key := dbName + "." + tableName
    if rows, ok := f.columns[key]; ok {
        return rows, nil
    }
    return []connection.ColumnDefinition{}, nil
}
func (f *fakeMigrationDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
    return nil, nil
}
func (f *fakeMigrationDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
    key := dbName + "." + tableName
    if rows, ok := f.indexes[key]; ok {
        return rows, nil
    }
    return nil, nil
}
func (f *fakeMigrationDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
    return nil, nil
}
func (f *fakeMigrationDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
    return nil, nil
}
func (f *fakeMigrationDB) QueryContext(ctx context.Context, query string) ([]map[string]interface{}, []string, error) {
    return f.Query(query)
}
func (f *fakeMigrationDB) ExecContext(ctx context.Context, query string) (int64, error) {
    return 0, nil
}

func TestBuildMySQLToKingbaseColumnDefinition_AutoIncrementAndBoolean(t *testing.T) {
    t.Parallel()

    def, warnings := buildMySQLToKingbaseColumnDefinition(connection.ColumnDefinition{
        Name: "id",
        Type: "int unsigned",
        Nullable: "NO",
        Extra: "auto_increment",
    })
    if !strings.Contains(def, "bigint") || !strings.Contains(def, "GENERATED BY DEFAULT AS IDENTITY") || !strings.Contains(def, "NOT NULL") {
        t.Fatalf("unexpected definition: %s", def)
    }
    if len(warnings) != 0 {
        t.Fatalf("unexpected warnings: %v", warnings)
    }

    def, warnings = buildMySQLToKingbaseColumnDefinition(connection.ColumnDefinition{
        Name: "enabled",
        Type: "tinyint(1)",
        Nullable: "YES",
        Default: stringPtr("1"),
    })
    if !strings.Contains(def, "boolean") || !strings.Contains(def, "DEFAULT TRUE") {
        t.Fatalf("unexpected boolean definition: %s", def)
    }
    if len(warnings) != 0 {
        t.Fatalf("unexpected warnings for boolean: %v", warnings)
    }
}

func TestBuildMySQLToKingbaseCreateTablePlan_GeneratesAndSkipsIndexes(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        indexes: map[string][]connection.IndexDefinition{
            "shop.orders": {
                {Name: "PRIMARY", ColumnName: "id", NonUnique: 0, SeqInIndex: 1, IndexType: "BTREE"},
                {Name: "idx_user_status", ColumnName: "user_id", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
                {Name: "idx_user_status", ColumnName: "status", NonUnique: 1, SeqInIndex: 2, IndexType: "BTREE"},
                {Name: "idx_name_prefix", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE", SubPart: 12},
                {Name: "idx_fulltext_note", ColumnName: "note", NonUnique: 1, SeqInIndex: 1, IndexType: "FULLTEXT"},
            },
        },
    }
    cols := []connection.ColumnDefinition{
        {Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
        {Name: "user_id", Type: "bigint", Nullable: "NO"},
        {Name: "status", Type: "varchar(32)", Nullable: "YES"},
        {Name: "name", Type: "varchar(128)", Nullable: "YES"},
        {Name: "note", Type: "text", Nullable: "YES"},
    }
    cfg := SyncConfig{CreateIndexes: true}
    createSQL, postSQL, warnings, unsupported, idxCreate, idxSkip, err := buildMySQLToKingbaseCreateTablePlan(cfg, "public.orders", cols, sourceDB, "shop", "orders")
    if err != nil {
        t.Fatalf("buildMySQLToKingbaseCreateTablePlan returned error: %v", err)
    }
    if !strings.Contains(createSQL, `CREATE TABLE "public"."orders"`) {
        t.Fatalf("unexpected create SQL: %s", createSQL)
    }
    if !strings.Contains(createSQL, `PRIMARY KEY ("id")`) {
        t.Fatalf("create SQL missing primary key: %s", createSQL)
    }
    if idxCreate != 1 || idxSkip != 2 {
        t.Fatalf("unexpected index summary: create=%d skip=%d", idxCreate, idxSkip)
    }
    if len(postSQL) != 1 || !strings.Contains(postSQL[0], `CREATE INDEX "idx_user_status"`) {
        t.Fatalf("unexpected post SQL: %v", postSQL)
    }
    if len(warnings) != 0 {
        t.Fatalf("unexpected warnings: %v", warnings)
    }
    wantUnsupported := []string{
        "索引 idx_name_prefix 使用前缀长度,当前暂不支持迁移",
        "索引 idx_fulltext_note 类型=FULLTEXT,当前暂不支持自动迁移",
    }
    if !reflect.DeepEqual(unsupported, wantUnsupported) {
        t.Fatalf("unexpected unsupported objects: got=%v want=%v", unsupported, wantUnsupported)
    }
}

func TestBuildSchemaMigrationPlan_AutoCreateWhenTargetMissing(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "shop.orders": {
                {Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
                {Name: "name", Type: "varchar(128)", Nullable: "YES"},
            },
        },
        indexes: map[string][]connection.IndexDefinition{},
    }
    targetDB := &fakeMigrationDB{columns: map[string][]connection.ColumnDefinition{}}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
        TargetConfig: connection.ConnectionConfig{Type: "kingbase", Database: "demo"},
        TargetTableStrategy: "smart",
        CreateIndexes: true,
    }
    plan, sourceCols, targetCols, err := buildSchemaMigrationPlan(cfg, "orders", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildSchemaMigrationPlan returned error: %v", err)
    }
    if len(sourceCols) != 2 || len(targetCols) != 0 {
        t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
    }
    if plan.TargetTableExists {
        t.Fatalf("expected target table missing")
    }
    if !plan.AutoCreate {
        t.Fatalf("expected auto create enabled")
    }
    if !strings.Contains(plan.PlannedAction, "自动建表") {
        t.Fatalf("unexpected planned action: %s", plan.PlannedAction)
    }
    if !strings.Contains(plan.CreateTableSQL, `CREATE TABLE "public"."orders"`) {
        t.Fatalf("unexpected create table SQL: %s", plan.CreateTableSQL)
    }
}

func stringPtr(v string) *string { return &v }

func TestBuildPGLikeToMySQLCreateTablePlan_GeneratesMySQLDDL(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        indexes: map[string][]connection.IndexDefinition{
            "public.users": {
                {Name: "users_email_key", ColumnName: "email", NonUnique: 0, SeqInIndex: 1, IndexType: "BTREE"},
                {Name: "idx_users_name", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
            },
        },
    }
    cols := []connection.ColumnDefinition{
        {Name: "id", Type: "integer", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
        {Name: "email", Type: "character varying(120)", Nullable: "NO"},
        {Name: "name", Type: "text", Nullable: "YES"},
        {Name: "profile", Type: "jsonb", Nullable: "YES"},
    }
    cfg := SyncConfig{CreateIndexes: true}
    createSQL, postSQL, warnings, unsupported, idxCreate, idxSkip, err := buildPGLikeToMySQLCreateTablePlan(cfg, "app.users", cols, sourceDB, "public", "users")
    if err != nil {
        t.Fatalf("buildPGLikeToMySQLCreateTablePlan returned error: %v", err)
    }
    if !strings.Contains(createSQL, "CREATE TABLE `app`.`users`") {
        t.Fatalf("unexpected create SQL: %s", createSQL)
    }
    if !strings.Contains(createSQL, "`id` int AUTO_INCREMENT NOT NULL") {
        t.Fatalf("unexpected id definition: %s", createSQL)
    }
    if !strings.Contains(createSQL, "`profile` json") {
        t.Fatalf("unexpected json definition: %s", createSQL)
    }
    if idxCreate != 2 || idxSkip != 0 {
        t.Fatalf("unexpected index summary: create=%d skip=%d", idxCreate, idxSkip)
    }
    if len(postSQL) != 2 {
        t.Fatalf("unexpected post sql length: %v", postSQL)
    }
    if len(warnings) != 0 {
        t.Fatalf("unexpected warnings: %v", warnings)
    }
    if len(unsupported) != 0 {
        t.Fatalf("unexpected unsupported: %v", unsupported)
    }
}

func TestBuildPGLikeToMySQLPlan_AutoCreateWhenTargetMissing(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "public.orders": {
                {Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
                {Name: "amount", Type: "numeric(10,2)", Nullable: "NO"},
            },
        },
        indexes: map[string][]connection.IndexDefinition{},
    }
    targetDB := &fakeMigrationDB{columns: map[string][]connection.ColumnDefinition{}}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "kingbase", Database: "public"},
        TargetConfig: connection.ConnectionConfig{Type: "mysql", Database: "app"},
        TargetTableStrategy: "smart",
        CreateIndexes: true,
    }
    plan, sourceCols, targetCols, err := buildPGLikeToMySQLPlan(cfg, "orders", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildPGLikeToMySQLPlan returned error: %v", err)
    }
    if len(sourceCols) != 2 || len(targetCols) != 0 {
        t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
    }
    if plan.TargetTableExists {
        t.Fatalf("expected target table missing")
    }
    if !plan.AutoCreate {
        t.Fatalf("expected auto create enabled")
    }
    if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `app`.`orders`") {
        t.Fatalf("unexpected create table SQL: %s", plan.CreateTableSQL)
    }
}

func TestBuildMySQLToPGLikeCreateTablePlan_GeneratesPostgresDDL(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        indexes: map[string][]connection.IndexDefinition{
            "shop.orders": {
                {Name: "idx_orders_user", ColumnName: "user_id", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
                {Name: "idx_orders_user", ColumnName: "status", NonUnique: 1, SeqInIndex: 2, IndexType: "BTREE"},
            },
        },
    }
    cols := []connection.ColumnDefinition{
        {Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
        {Name: "user_id", Type: "bigint", Nullable: "NO"},
        {Name: "status", Type: "varchar(32)", Nullable: "YES"},
        {Name: "payload", Type: "json", Nullable: "YES"},
    }
    cfg := SyncConfig{CreateIndexes: true}
    createSQL, postSQL, warnings, unsupported, idxCreate, idxSkip, err := buildMySQLToPGLikeCreateTablePlan("postgres", cfg, "public.orders", cols, sourceDB, "shop", "orders")
    if err != nil {
        t.Fatalf("buildMySQLToPGLikeCreateTablePlan returned error: %v", err)
    }
    if !strings.Contains(createSQL, `CREATE TABLE "public"."orders"`) {
        t.Fatalf("unexpected create SQL: %s", createSQL)
    }
    if !strings.Contains(createSQL, `GENERATED BY DEFAULT AS IDENTITY`) {
        t.Fatalf("missing identity mapping: %s", createSQL)
    }
    if !strings.Contains(createSQL, `jsonb`) {
        t.Fatalf("missing jsonb mapping: %s", createSQL)
    }
    if idxCreate != 1 || idxSkip != 0 {
        t.Fatalf("unexpected index summary: create=%d skip=%d", idxCreate, idxSkip)
    }
    if len(postSQL) != 1 || !strings.Contains(postSQL[0], `CREATE INDEX "idx_orders_user"`) {
        t.Fatalf("unexpected post SQL: %v", postSQL)
    }
    if len(warnings) != 0 || len(unsupported) != 0 {
        t.Fatalf("unexpected warnings/unsupported: warnings=%v unsupported=%v", warnings, unsupported)
    }
}

func TestBuildMySQLToClickHouseCreateTableSQL_GeneratesMergeTree(t *testing.T) {
    t.Parallel()

    cols := []connection.ColumnDefinition{
        {Name: "id", Type: "bigint unsigned", Nullable: "NO", Key: "PRI"},
        {Name: "name", Type: "varchar(128)", Nullable: "YES"},
        {Name: "payload", Type: "json", Nullable: "YES"},
    }
    createSQL, warnings, unsupported := buildMySQLToClickHouseCreateTableSQL("analytics.orders", cols)
    if !strings.Contains(createSQL, "ENGINE = MergeTree()") {
        t.Fatalf("unexpected create SQL: %s", createSQL)
    }
    if !strings.Contains(createSQL, "ORDER BY (`id`)") {
        t.Fatalf("unexpected order by: %s", createSQL)
    }
    if !strings.Contains(createSQL, "`payload` Nullable(String)") {
        t.Fatalf("unexpected json mapping: %s", createSQL)
    }
    if len(warnings) == 0 {
        t.Fatalf("expected warnings for clickhouse semantics")
    }
    if len(unsupported) != 0 {
        t.Fatalf("unexpected unsupported: %v", unsupported)
    }
}

func TestBuildClickHouseToMySQLCreateTableSQL_GeneratesMySQLDDL(t *testing.T) {
    t.Parallel()

    cols := []connection.ColumnDefinition{
        {Name: "id", Type: "UInt64", Nullable: "NO", Key: "PRI"},
        {Name: "event_time", Type: "DateTime", Nullable: "NO"},
        {Name: "payload", Type: "Map(String, String)", Nullable: "YES"},
    }
    createSQL, warnings := buildClickHouseToMySQLCreateTableSQL("app.metrics", cols)
    if !strings.Contains(createSQL, "CREATE TABLE `app`.`metrics`") {
        t.Fatalf("unexpected create SQL: %s", createSQL)
    }
    if !strings.Contains(createSQL, "`id` bigint unsigned NOT NULL") {
        t.Fatalf("unexpected uint64 mapping: %s", createSQL)
    }
    if !strings.Contains(createSQL, "`payload` json") {
        t.Fatalf("unexpected complex type mapping: %s", createSQL)
    }
    if len(warnings) == 0 {
        t.Fatalf("expected warning for limited clickhouse reverse semantics")
    }
}

func TestBuildMySQLToMongoPlan_AutoCreateCollection(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "shop.users": {
                {Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
                {Name: "name", Type: "varchar(64)", Nullable: "YES"},
            },
        },
        indexes: map[string][]connection.IndexDefinition{
            "shop.users": {
                {Name: "idx_users_name", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
            },
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
        TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
        TargetTableStrategy: "smart",
        CreateIndexes: true,
    }
    plan, sourceCols, targetCols, err := buildMySQLToMongoPlan(cfg, "users", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildMySQLToMongoPlan returned error: %v", err)
    }
    if len(sourceCols) != 2 || targetCols != nil {
        t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
    }
    if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
        t.Fatalf("expected auto create collection command: %+v", plan)
    }
    if !strings.Contains(plan.PreDataSQL[0], `"create":"users"`) {
        t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
    }
    if len(plan.PostDataSQL) != 1 || !strings.Contains(plan.PostDataSQL[0], `"createIndexes":"users"`) {
        t.Fatalf("unexpected index commands: %v", plan.PostDataSQL)
    }
}

func TestBuildPGLikeToMongoPlan_AutoCreateCollection(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "public.orders": {
                {Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
                {Name: "name", Type: "varchar(64)", Nullable: "YES"},
            },
        },
        indexes: map[string][]connection.IndexDefinition{
            "public.orders": {
                {Name: "idx_orders_name", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"},
            },
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "postgres", Database: "public"},
        TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
        TargetTableStrategy: "smart",
        CreateIndexes: true,
    }
    plan, sourceCols, targetCols, err := buildPGLikeToMongoPlan(cfg, "orders", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildPGLikeToMongoPlan returned error: %v", err)
    }
    if len(sourceCols) != 2 || targetCols != nil {
        t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
    }
    if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
        t.Fatalf("expected auto create collection command: %+v", plan)
    }
    if !strings.Contains(plan.PreDataSQL[0], `"create":"orders"`) {
        t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
    }
    if len(plan.PostDataSQL) != 1 || !strings.Contains(plan.PostDataSQL[0], `"createIndexes":"orders"`) {
        t.Fatalf("unexpected index commands: %v", plan.PostDataSQL)
    }
}

func TestBuildClickHouseToMongoPlan_AutoCreateCollection(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "analytics.metrics": {
                {Name: "id", Type: "UInt64", Nullable: "NO", Key: "PRI"},
                {Name: "host", Type: "String", Nullable: "YES"},
            },
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
        TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
        TargetTableStrategy: "smart",
    }
    plan, sourceCols, targetCols, err := buildClickHouseToMongoPlan(cfg, "metrics", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildClickHouseToMongoPlan returned error: %v", err)
    }
    if len(sourceCols) != 2 || targetCols != nil {
        t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
    }
    if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
        t.Fatalf("expected auto create collection command: %+v", plan)
    }
    if !strings.Contains(plan.PreDataSQL[0], `"create":"metrics"`) {
        t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
    }
}

func TestBuildTDengineToMongoPlan_AutoCreateCollection(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "src.cpu": {
                {Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
                {Name: "host", Type: "NCHAR(64)", Nullable: "YES"},
            },
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "src"},
        TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
        TargetTableStrategy: "smart",
    }
    plan, sourceCols, targetCols, err := buildTDengineToMongoPlan(cfg, "cpu", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildTDengineToMongoPlan returned error: %v", err)
    }
    if len(sourceCols) != 2 || targetCols != nil {
        t.Fatalf("unexpected source/target columns: %d / %v", len(sourceCols), targetCols)
    }
    if !plan.AutoCreate || len(plan.PreDataSQL) == 0 {
        t.Fatalf("expected auto create collection command: %+v", plan)
    }
    if !strings.Contains(plan.PreDataSQL[0], `"create":"cpu"`) {
        t.Fatalf("unexpected create collection command: %v", plan.PreDataSQL)
    }
}

func TestBuildMongoToMySQLPlan_InfersColumnsAndCreatesTable(t *testing.T) {
    t.Parallel()

    query := `{"find":"users","filter":{},"limit":200}`
    sourceDB := &fakeMigrationDB{
        queryData: map[string][]map[string]interface{}{
            query: {
                {"_id": "a1", "name": "alice", "age": int64(18), "profile": map[string]interface{}{"city": "shanghai"}},
                {"_id": "b2", "name": "bob", "profile": map[string]interface{}{"city": "beijing"}},
            },
        },
        queryCols: map[string][]string{query: {"_id", "name", "age", "profile"}},
        indexes: map[string][]connection.IndexDefinition{
            "crm.users": {{Name: "email_1", ColumnName: "name", NonUnique: 1, SeqInIndex: 1, IndexType: "BTREE"}},
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "mongodb", Database: "crm"},
        TargetConfig: connection.ConnectionConfig{Type: "mysql", Database: "app"},
        TargetTableStrategy: "smart",
        CreateIndexes: true,
    }
    plan, sourceCols, _, err := buildMongoToMySQLPlan(cfg, "users", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildMongoToMySQLPlan returned error: %v", err)
    }
    if len(sourceCols) == 0 {
        t.Fatalf("expected inferred source cols")
    }
    if !plan.AutoCreate || !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `app`.`users`") {
        t.Fatalf("unexpected create table sql: %s", plan.CreateTableSQL)
    }
    if !strings.Contains(plan.CreateTableSQL, "`_id` text NOT NULL") && !strings.Contains(plan.CreateTableSQL, "`_id` varchar") {
        t.Fatalf("missing inferred _id column: %s", plan.CreateTableSQL)
    }
    if !strings.Contains(plan.CreateTableSQL, "`profile` json") {
        t.Fatalf("expected nested field degrade to json: %s", plan.CreateTableSQL)
    }
    if len(plan.PostDataSQL) != 1 {
        t.Fatalf("expected one post index sql, got=%v", plan.PostDataSQL)
    }
}

func TestBuildTDengineToMySQLPlan_AutoCreateWhenTargetMissing(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "metrics.cpu": {
                {Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
                {Name: "host", Type: "NCHAR(64)", Nullable: "YES", Key: "TAG", Extra: "TAG"},
                {Name: "usage", Type: "DOUBLE", Nullable: "YES"},
            },
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "metrics"},
        TargetConfig: connection.ConnectionConfig{Type: "mysql", Database: "app"},
        TargetTableStrategy: "smart",
    }
    plan, sourceCols, targetCols, err := buildTDengineToMySQLPlan(cfg, "cpu", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildTDengineToMySQLPlan returned error: %v", err)
    }
    if len(sourceCols) != 3 || len(targetCols) != 0 {
        t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
    }
    if !plan.AutoCreate {
        t.Fatalf("expected auto create enabled")
    }
    if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `app`.`cpu`") {
        t.Fatalf("unexpected create table sql: %s", plan.CreateTableSQL)
    }
    if !strings.Contains(plan.CreateTableSQL, "`ts` datetime") {
        t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
    }
    if !strings.Contains(plan.CreateTableSQL, "`host` varchar(64)") {
        t.Fatalf("expected nchar mapping, got: %s", plan.CreateTableSQL)
    }
    if len(plan.Warnings) == 0 || !strings.Contains(strings.Join(plan.Warnings, " "), "TAG") {
        t.Fatalf("expected TAG warning, got: %v", plan.Warnings)
    }
}

func TestBuildTDengineToPGLikePlan_AutoCreateWhenTargetMissing(t *testing.T) {
    t.Parallel()

    sourceDB := &fakeMigrationDB{
        columns: map[string][]connection.ColumnDefinition{
            "metrics.cpu": {
                {Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
                {Name: "payload", Type: "JSON", Nullable: "YES"},
                {Name: "host", Type: "BINARY(32)", Nullable: "YES", Key: "TAG", Extra: "TAG"},
            },
        },
    }
    targetDB := &fakeMigrationDB{}
    cfg := SyncConfig{
        SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "metrics"},
        TargetConfig: connection.ConnectionConfig{Type: "kingbase", Database: "ignored"},
        TargetTableStrategy: "smart",
    }
    plan, sourceCols, targetCols, err := buildTDengineToPGLikePlan(cfg, "cpu", sourceDB, targetDB)
    if err != nil {
        t.Fatalf("buildTDengineToPGLikePlan returned error: %v", err)
    }
    if len(sourceCols) != 3 || len(targetCols) != 0 {
        t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
    }
    if !plan.AutoCreate {
        t.Fatalf("expected auto create enabled")
    }
    if !strings.Contains(plan.CreateTableSQL, `CREATE TABLE "public"."cpu"`) {
        t.Fatalf("unexpected create table sql: %s", plan.CreateTableSQL)
    }
    if !strings.Contains(plan.CreateTableSQL, `"ts" timestamp`) {
        t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
    }
    if !strings.Contains(plan.CreateTableSQL, `"payload" jsonb`) {
        t.Fatalf("expected json mapping, got: %s", plan.CreateTableSQL)
    }
if len(plan.Warnings) == 0 || !strings.Contains(strings.Join(plan.Warnings, " "), "TAG") {
|
||
t.Fatalf("expected TAG warning, got: %v", plan.Warnings)
|
||
}
|
||
}
|
||
|
||
func TestBuildSchemaMigrationPlan_TDengineTargetWarnsInsertOnlyBoundary(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"shop.metrics": {
|
||
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
|
||
{Name: "ts", Type: "datetime", Nullable: "NO"},
|
||
{Name: "value", Type: "double", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"taos.metrics": {
|
||
{Name: "id", Type: "bigint", Nullable: "NO"},
|
||
{Name: "ts", Type: "timestamp", Nullable: "NO"},
|
||
{Name: "value", Type: "double", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
|
||
Mode: "insert_update",
|
||
}
|
||
|
||
plan, _, _, err := buildSchemaMigrationPlan(cfg, "metrics", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildSchemaMigrationPlan returned error: %v", err)
|
||
}
|
||
warnings := strings.Join(plan.Warnings, " ")
|
||
if !strings.Contains(warnings, "仅支持 INSERT 写入") {
|
||
t.Fatalf("expected TDengine target warning, got: %v", plan.Warnings)
|
||
}
|
||
}
|
||
|
||
func TestBuildMySQLLikeToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"shop.metrics": {
|
||
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI", Extra: "auto_increment"},
|
||
{Name: "ts", Type: "datetime", Nullable: "NO"},
|
||
{Name: "payload", Type: "json", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, sourceCols, targetCols, err := buildMySQLLikeToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildMySQLLikeToTDenginePlan returned error: %v", err)
|
||
}
|
||
if len(sourceCols) != 3 || len(targetCols) != 0 {
|
||
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
|
||
}
|
||
if !plan.AutoCreate {
|
||
t.Fatalf("expected auto create enabled")
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `taos`.`metrics`") {
|
||
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`ts` TIMESTAMP") {
|
||
t.Fatalf("expected ts first column mapped to TIMESTAMP, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`payload` VARCHAR(") {
|
||
t.Fatalf("expected json degrade to VARCHAR, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(strings.Join(plan.Warnings, " "), "insert-only") && !strings.Contains(strings.Join(plan.Warnings, " "), "INSERT") {
|
||
t.Fatalf("expected tdengine target warning, got: %v", plan.Warnings)
|
||
}
|
||
}
|
||
|
||
func TestBuildPGLikeToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"public.metrics": {
|
||
{Name: "event_time", Type: "timestamp without time zone", Nullable: "NO"},
|
||
{Name: "name", Type: "character varying(64)", Nullable: "YES"},
|
||
{Name: "meta", Type: "jsonb", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "postgres", Database: "ignored"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, sourceCols, targetCols, err := buildPGLikeToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildPGLikeToTDenginePlan returned error: %v", err)
|
||
}
|
||
if len(sourceCols) != 3 || len(targetCols) != 0 {
|
||
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
|
||
}
|
||
if !plan.AutoCreate {
|
||
t.Fatalf("expected auto create enabled")
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `taos`.`metrics`") {
|
||
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`event_time` TIMESTAMP") {
|
||
t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`meta` VARCHAR(") {
|
||
t.Fatalf("expected jsonb degrade to VARCHAR, got: %s", plan.CreateTableSQL)
|
||
}
|
||
}
|
||
|
||
func TestBuildMySQLLikeToTDenginePlan_RejectsAutoCreateWithoutTimestampColumn(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"shop.metrics": {
|
||
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
|
||
{Name: "name", Type: "varchar(64)", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "mysql", Database: "shop"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, _, _, err := buildMySQLLikeToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildMySQLLikeToTDenginePlan returned error: %v", err)
|
||
}
|
||
if plan.AutoCreate {
|
||
t.Fatalf("expected auto create disabled when source has no timestamp column")
|
||
}
|
||
if !strings.Contains(plan.PlannedAction, "时间列") {
|
||
t.Fatalf("unexpected planned action: %s", plan.PlannedAction)
|
||
}
|
||
if !strings.Contains(strings.Join(plan.Warnings, " "), "时间列") {
|
||
t.Fatalf("expected missing timestamp warning, got: %v", plan.Warnings)
|
||
}
|
||
}
|
||
|
||
func TestBuildClickHouseToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"analytics.metrics": {
|
||
{Name: "event_time", Type: "DateTime64(3)", Nullable: "NO"},
|
||
{Name: "host", Type: "FixedString(64)", Nullable: "YES"},
|
||
{Name: "payload", Type: "Map(String,String)", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "taos"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, sourceCols, targetCols, err := buildClickHouseToTDenginePlan(cfg, "metrics", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildClickHouseToTDenginePlan returned error: %v", err)
|
||
}
|
||
if len(sourceCols) != 3 || len(targetCols) != 0 {
|
||
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
|
||
}
|
||
if !plan.AutoCreate {
|
||
t.Fatalf("expected auto create enabled")
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `taos`.`metrics`") {
|
||
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`event_time` TIMESTAMP") {
|
||
t.Fatalf("expected datetime64 mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`host` VARCHAR(64)") {
|
||
t.Fatalf("expected fixedstring mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`payload` VARCHAR(") {
|
||
t.Fatalf("expected complex type degrade to VARCHAR, got: %s", plan.CreateTableSQL)
|
||
}
|
||
}
|
||
|
||
func TestBuildClickHouseToPGLikePlan_AutoCreateWhenTargetMissing(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"analytics.metrics": {
|
||
{Name: "id", Type: "UInt64", Nullable: "NO", Key: "PRI"},
|
||
{Name: "event_time", Type: "DateTime64(3)", Nullable: "NO"},
|
||
{Name: "host", Type: "FixedString(64)", Nullable: "YES"},
|
||
{Name: "payload", Type: "Map(String,String)", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "postgres", Database: "public"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, sourceCols, targetCols, err := buildClickHouseToPGLikePlan(cfg, "metrics", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildClickHouseToPGLikePlan returned error: %v", err)
|
||
}
|
||
if len(sourceCols) != 4 || len(targetCols) != 0 {
|
||
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
|
||
}
|
||
if !plan.AutoCreate {
|
||
t.Fatalf("expected auto create enabled")
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, `CREATE TABLE "public"."metrics"`) {
|
||
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, `"id" numeric(20,0)`) {
|
||
t.Fatalf("expected uint64 safeguard mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, `"event_time" timestamp`) {
|
||
t.Fatalf("expected datetime64 mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, `"host" varchar(64)`) {
|
||
t.Fatalf("expected fixedstring mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, `"payload" jsonb`) {
|
||
t.Fatalf("expected complex type degrade to jsonb, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, `PRIMARY KEY ("id")`) {
|
||
t.Fatalf("expected primary key preservation, got: %s", plan.CreateTableSQL)
|
||
}
|
||
}
|
||
|
||
func TestBuildPGLikeToClickHousePlan_AutoCreateWhenTargetMissing(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"public.orders": {
|
||
{Name: "id", Type: "bigint", Nullable: "NO", Key: "PRI"},
|
||
{Name: "created_at", Type: "timestamp without time zone", Nullable: "NO"},
|
||
{Name: "profile", Type: "jsonb", Nullable: "YES"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "postgres", Database: "public"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "clickhouse", Database: "analytics"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, sourceCols, targetCols, err := buildPGLikeToClickHousePlan(cfg, "orders", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildPGLikeToClickHousePlan returned error: %v", err)
|
||
}
|
||
if len(sourceCols) != 3 || len(targetCols) != 0 {
|
||
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
|
||
}
|
||
if !plan.AutoCreate {
|
||
t.Fatalf("expected auto create enabled")
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `analytics`.`orders`") {
|
||
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`created_at` DateTime") {
|
||
t.Fatalf("expected timestamp mapping, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`profile` Nullable(String)") {
|
||
t.Fatalf("expected jsonb degrade to Nullable(String), got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "ORDER BY (`id`)") {
|
||
t.Fatalf("expected primary key order by, got: %s", plan.CreateTableSQL)
|
||
}
|
||
}
|
||
|
||
func TestBuildTDengineToTDenginePlan_AutoCreateWhenTargetMissing(t *testing.T) {
|
||
t.Parallel()
|
||
|
||
sourceDB := &fakeMigrationDB{
|
||
columns: map[string][]connection.ColumnDefinition{
|
||
"src.cpu": {
|
||
{Name: "ts", Type: "TIMESTAMP", Nullable: "NO"},
|
||
{Name: "host", Type: "NCHAR(64)", Nullable: "YES"},
|
||
{Name: "region", Type: "NCHAR(32)", Nullable: "YES", Key: "TAG"},
|
||
},
|
||
},
|
||
}
|
||
targetDB := &fakeMigrationDB{}
|
||
cfg := SyncConfig{
|
||
SourceConfig: connection.ConnectionConfig{Type: "tdengine", Database: "src"},
|
||
TargetConfig: connection.ConnectionConfig{Type: "tdengine", Database: "dst"},
|
||
TargetTableStrategy: "smart",
|
||
}
|
||
plan, sourceCols, targetCols, err := buildTDengineToTDenginePlan(cfg, "cpu", sourceDB, targetDB)
|
||
if err != nil {
|
||
t.Fatalf("buildTDengineToTDenginePlan returned error: %v", err)
|
||
}
|
||
if len(sourceCols) != 3 || len(targetCols) != 0 {
|
||
t.Fatalf("unexpected columns lengths: source=%d target=%d", len(sourceCols), len(targetCols))
|
||
}
|
||
if !plan.AutoCreate {
|
||
t.Fatalf("expected auto create enabled")
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "CREATE TABLE `dst`.`cpu`") {
|
||
t.Fatalf("unexpected create sql: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`ts` TIMESTAMP") {
|
||
t.Fatalf("expected timestamp preserved, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(plan.CreateTableSQL, "`region` NCHAR(32)") {
|
||
t.Fatalf("expected tag degrade to regular nchar column, got: %s", plan.CreateTableSQL)
|
||
}
|
||
if !strings.Contains(strings.Join(plan.Warnings, " "), "TAG") {
|
||
t.Fatalf("expected TAG degrade warning, got: %v", plan.Warnings)
|
||
}
|
||
}
|