mirror of
https://github.com/Syngnat/GoNavi.git
synced 2026-05-11 17:09:45 +08:00
491 lines
16 KiB
Go
// Tests for the Redis <-> MongoDB sync paths of the data-sync engine. The
// fakes below stand in for the real Redis client and database drivers so
// RunSync and Preview can be exercised entirely in memory.
package sync

import (
	"fmt"
	"sort"
	"strings"
	"testing"

	"GoNavi-Wails/internal/connection"
	"GoNavi-Wails/internal/db"
	redispkg "GoNavi-Wails/internal/redis"
)

// fakeRedisMigrationClient is an in-memory redisMigrationClient: it records
// the connection config and mirrors every write into the values map so tests
// can assert on the resulting key state.
type fakeRedisMigrationClient struct {
	values        map[string]*redispkg.RedisValue
	scannedKeys   []string
	connectConfig connection.ConnectionConfig
	closed        bool
}

func (f *fakeRedisMigrationClient) Connect(config connection.ConnectionConfig) error {
	f.connectConfig = config
	return nil
}

func (f *fakeRedisMigrationClient) Close() error {
	f.closed = true
	return nil
}

func (f *fakeRedisMigrationClient) ScanKeys(pattern string, cursor uint64, count int64) (*redispkg.RedisScanResult, error) {
	items := make([]redispkg.RedisKeyInfo, 0, len(f.scannedKeys))
	for _, key := range f.scannedKeys {
		items = append(items, redispkg.RedisKeyInfo{Key: key, Type: "string", TTL: -1})
	}
	return &redispkg.RedisScanResult{Keys: items, Cursor: "0"}, nil
}

func (f *fakeRedisMigrationClient) GetKeyType(key string) (string, error) {
	if value, ok := f.values[key]; ok && value != nil {
		return value.Type, nil
	}
	return "none", nil
}

func (f *fakeRedisMigrationClient) GetValue(key string) (*redispkg.RedisValue, error) {
	if value, ok := f.values[key]; ok {
		return value, nil
	}
	return nil, fmt.Errorf("key not found: %s", key)
}

func (f *fakeRedisMigrationClient) DeleteKeys(keys []string) (int64, error) {
	var deleted int64
	for _, key := range keys {
		if _, ok := f.values[key]; ok {
			delete(f.values, key)
			deleted++
		}
	}
	return deleted, nil
}

func (f *fakeRedisMigrationClient) SetTTL(key string, ttl int64) error {
	value, ok := f.values[key]
	if !ok {
		return nil
	}
	value.TTL = ttl
	return nil
}

func (f *fakeRedisMigrationClient) SetString(key, value string, ttl int64) error {
	if f.values == nil {
		f.values = map[string]*redispkg.RedisValue{}
	}
	f.values[key] = &redispkg.RedisValue{Type: "string", TTL: ttl, Value: value, Length: int64(len(value))}
	return nil
}

func (f *fakeRedisMigrationClient) SetHashField(key, field, value string) error {
	if f.values == nil {
		f.values = map[string]*redispkg.RedisValue{}
	}
	current, ok := f.values[key]
	if !ok || current == nil || current.Type != "hash" {
		current = &redispkg.RedisValue{Type: "hash", TTL: -1, Value: map[string]string{}}
		f.values[key] = current
	}
	hash, _ := current.Value.(map[string]string)
	if hash == nil {
		hash = map[string]string{}
	}
	hash[field] = value
	current.Value = hash
	current.Length = int64(len(hash))
	return nil
}

func (f *fakeRedisMigrationClient) ListPush(key string, values ...string) error {
	if f.values == nil {
		f.values = map[string]*redispkg.RedisValue{}
	}
	current, ok := f.values[key]
	if !ok || current == nil || current.Type != "list" {
		current = &redispkg.RedisValue{Type: "list", TTL: -1, Value: []string{}}
		f.values[key] = current
	}
	list, _ := current.Value.([]string)
	list = append(list, values...)
	current.Value = list
	current.Length = int64(len(list))
	return nil
}

// SetAdd keeps members unique and sorted so assertions stay deterministic.
func (f *fakeRedisMigrationClient) SetAdd(key string, members ...string) error {
	if f.values == nil {
		f.values = map[string]*redispkg.RedisValue{}
	}
	current, ok := f.values[key]
	if !ok || current == nil || current.Type != "set" {
		current = &redispkg.RedisValue{Type: "set", TTL: -1, Value: []string{}}
		f.values[key] = current
	}
	setValues, _ := current.Value.([]string)
	seen := make(map[string]struct{}, len(setValues)+len(members))
	for _, item := range setValues {
		seen[item] = struct{}{}
	}
	for _, item := range members {
		if _, ok := seen[item]; ok {
			continue
		}
		seen[item] = struct{}{}
		setValues = append(setValues, item)
	}
	sort.Strings(setValues)
	current.Value = setValues
	current.Length = int64(len(setValues))
	return nil
}

// ZSetAdd stores a copy of the members ordered by (score, member).
func (f *fakeRedisMigrationClient) ZSetAdd(key string, members ...redispkg.ZSetMember) error {
	if f.values == nil {
		f.values = map[string]*redispkg.RedisValue{}
	}
	copied := append([]redispkg.ZSetMember(nil), members...)
	sort.Slice(copied, func(i, j int) bool {
		if copied[i].Score == copied[j].Score {
			return copied[i].Member < copied[j].Member
		}
		return copied[i].Score < copied[j].Score
	})
	f.values[key] = &redispkg.RedisValue{Type: "zset", TTL: -1, Value: copied, Length: int64(len(copied))}
	return nil
}

// StreamAdd appends an entry, generating a sequential "<n>-0" ID when none
// is given.
func (f *fakeRedisMigrationClient) StreamAdd(key string, fields map[string]string, id string) (string, error) {
	if f.values == nil {
		f.values = map[string]*redispkg.RedisValue{}
	}
	current, ok := f.values[key]
	if !ok || current == nil || current.Type != "stream" {
		current = &redispkg.RedisValue{Type: "stream", TTL: -1, Value: []redispkg.StreamEntry{}}
		f.values[key] = current
	}
	entries, _ := current.Value.([]redispkg.StreamEntry)
	entryID := id
	if entryID == "" {
		entryID = fmt.Sprintf("%d-0", len(entries)+1)
	}
	entries = append(entries, redispkg.StreamEntry{ID: entryID, Fields: fields})
	current.Value = entries
	current.Length = int64(len(entries))
	return entryID, nil
}

// fakeRedisMongoTargetDB acts as the MongoDB target: it answers "find"
// queries for the configured collection and records the change set passed
// to ApplyChanges.
type fakeRedisMongoTargetDB struct {
	tables     []string
	queryTable string
	queryRows  []map[string]interface{}
	execs      []string
	applyTable string
	applySet   connection.ChangeSet
}

func (f *fakeRedisMongoTargetDB) Connect(config connection.ConnectionConfig) error { return nil }

func (f *fakeRedisMongoTargetDB) Close() error { return nil }

func (f *fakeRedisMongoTargetDB) Ping() error { return nil }

func (f *fakeRedisMongoTargetDB) Query(query string) ([]map[string]interface{}, []string, error) {
	queryTable := strings.TrimSpace(f.queryTable)
	if queryTable == "" {
		queryTable = "redis_db_0_keys"
	}
	if strings.Contains(query, fmt.Sprintf(`"find":"%s"`, queryTable)) {
		return f.queryRows, []string{"_id", "key", "value"}, nil
	}
	return nil, nil, nil
}

func (f *fakeRedisMongoTargetDB) Exec(query string) (int64, error) {
	f.execs = append(f.execs, query)
	return 1, nil
}

func (f *fakeRedisMongoTargetDB) GetDatabases() ([]string, error) { return []string{"app"}, nil }

func (f *fakeRedisMongoTargetDB) GetTables(dbName string) ([]string, error) {
	return f.tables, nil
}

func (f *fakeRedisMongoTargetDB) GetCreateStatement(dbName, tableName string) (string, error) {
	return "", nil
}

func (f *fakeRedisMongoTargetDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
	return nil, nil
}

func (f *fakeRedisMongoTargetDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
	return nil, nil
}

func (f *fakeRedisMongoTargetDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
	return nil, nil
}

func (f *fakeRedisMongoTargetDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
	return nil, nil
}

func (f *fakeRedisMongoTargetDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
	return nil, nil
}

func (f *fakeRedisMongoTargetDB) ApplyChanges(tableName string, changes connection.ChangeSet) error {
	f.applyTable = tableName
	f.applySet = changes
	return nil
}

// fakeMongoRedisSourceDB acts as the MongoDB source: it serves canned rows
// per collection and fails loudly on any query it does not expect.
type fakeMongoRedisSourceDB struct {
	tables        []string
	rowsByTable   map[string][]map[string]interface{}
	connectConfig connection.ConnectionConfig
}

func (f *fakeMongoRedisSourceDB) Connect(config connection.ConnectionConfig) error {
	f.connectConfig = config
	return nil
}

func (f *fakeMongoRedisSourceDB) Close() error { return nil }

func (f *fakeMongoRedisSourceDB) Ping() error { return nil }

func (f *fakeMongoRedisSourceDB) Query(query string) ([]map[string]interface{}, []string, error) {
	for tableName, rows := range f.rowsByTable {
		if strings.Contains(query, fmt.Sprintf(`"find":"%s"`, tableName)) {
			return rows, []string{"_id", "key", "type", "ttl", "value"}, nil
		}
	}
	return nil, nil, fmt.Errorf("unexpected query: %s", query)
}

func (f *fakeMongoRedisSourceDB) Exec(query string) (int64, error) { return 0, nil }

func (f *fakeMongoRedisSourceDB) GetDatabases() ([]string, error) { return []string{"app"}, nil }

func (f *fakeMongoRedisSourceDB) GetTables(dbName string) ([]string, error) {
	return f.tables, nil
}

func (f *fakeMongoRedisSourceDB) GetCreateStatement(dbName, tableName string) (string, error) {
	return "", nil
}

func (f *fakeMongoRedisSourceDB) GetColumns(dbName, tableName string) ([]connection.ColumnDefinition, error) {
	return nil, nil
}

func (f *fakeMongoRedisSourceDB) GetAllColumns(dbName string) ([]connection.ColumnDefinitionWithTable, error) {
	return nil, nil
}

func (f *fakeMongoRedisSourceDB) GetIndexes(dbName, tableName string) ([]connection.IndexDefinition, error) {
	return nil, nil
}

func (f *fakeMongoRedisSourceDB) GetForeignKeys(dbName, tableName string) ([]connection.ForeignKeyDefinition, error) {
	return nil, nil
}

func (f *fakeMongoRedisSourceDB) GetTriggers(dbName, tableName string) ([]connection.TriggerDefinition, error) {
	return nil, nil
}

// TestRunSync_RedisToMongoAppliesInsertAndUpdate verifies that syncing two
// Redis keys into MongoDB yields one insert (the new key) and one update
// (the key that already has a document).
func TestRunSync_RedisToMongoAppliesInsertAndUpdate(t *testing.T) {
	fakeRedis := &fakeRedisMigrationClient{
		values: map[string]*redispkg.RedisValue{
			"user:1": {Type: "hash", TTL: 120, Length: 2, Value: map[string]string{"name": "alice"}},
			"user:2": {Type: "string", TTL: -1, Length: 1, Value: "online"},
		},
	}
	fakeTarget := &fakeRedisMongoTargetDB{
		tables: []string{"redis_db_0_keys"},
		queryRows: []map[string]interface{}{
			{"_id": "db0:user:1", "redisDb": 0, "key": "user:1", "type": "hash", "ttl": 120, "length": int64(2), "value": map[string]interface{}{"name": "old"}},
		},
	}

	// Swap in the fakes for the package-level factories and restore them
	// after the test.
	oldNewRedisClient := newRedisSourceClient
	oldNewDatabase := newSyncDatabase
	defer func() {
		newRedisSourceClient = oldNewRedisClient
		newSyncDatabase = oldNewDatabase
	}()
	newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
	newSyncDatabase = func(dbType string) (db.Database, error) { return fakeTarget, nil }

	engine := NewSyncEngine(Reporter{})
	result := engine.RunSync(SyncConfig{
		SourceConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
		TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
		Tables:       []string{"user:1", "user:2"},
		Content:      "data",
		Mode:         "insert_update",
	})

	if !result.Success {
		t.Fatalf("expected success, got: %+v", result)
	}
	if fakeRedis.connectConfig.RedisDB != 0 {
		t.Fatalf("expected redis db 0, got %d", fakeRedis.connectConfig.RedisDB)
	}
	if fakeTarget.applyTable != "redis_db_0_keys" {
		t.Fatalf("unexpected apply table: %s", fakeTarget.applyTable)
	}
	if len(fakeTarget.applySet.Inserts) != 1 || len(fakeTarget.applySet.Updates) != 1 {
		t.Fatalf("unexpected change set: %+v", fakeTarget.applySet)
	}
}

// TestRunSync_RedisToMongoUsesConfiguredCollectionName verifies that an
// explicit MongoCollectionName overrides the default redis_db_N_keys name.
func TestRunSync_RedisToMongoUsesConfiguredCollectionName(t *testing.T) {
	fakeRedis := &fakeRedisMigrationClient{
		values: map[string]*redispkg.RedisValue{
			"user:1": {Type: "string", TTL: -1, Length: 1, Value: "online"},
		},
	}
	fakeTarget := &fakeRedisMongoTargetDB{
		tables:     []string{"custom_keyspace_docs"},
		queryTable: "custom_keyspace_docs",
	}

	oldNewRedisClient := newRedisSourceClient
	oldNewDatabase := newSyncDatabase
	defer func() {
		newRedisSourceClient = oldNewRedisClient
		newSyncDatabase = oldNewDatabase
	}()
	newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
	newSyncDatabase = func(dbType string) (db.Database, error) { return fakeTarget, nil }

	engine := NewSyncEngine(Reporter{})
	result := engine.RunSync(SyncConfig{
		SourceConfig:        connection.ConnectionConfig{Type: "redis", Database: "0"},
		TargetConfig:        connection.ConnectionConfig{Type: "mongodb", Database: "app"},
		Tables:              []string{"user:1"},
		Content:             "data",
		Mode:                "insert_update",
		MongoCollectionName: "custom_keyspace_docs",
	})

	if !result.Success {
		t.Fatalf("expected success, got: %+v", result)
	}
	if fakeTarget.applyTable != "custom_keyspace_docs" {
		t.Fatalf("unexpected apply table: %s", fakeTarget.applyTable)
	}
}

// TestPreview_RedisToMongoReturnsDocumentPreview verifies the preview of a
// Redis -> MongoDB sync: _id as primary key and "db0:<key>" document IDs.
func TestPreview_RedisToMongoReturnsDocumentPreview(t *testing.T) {
	fakeRedis := &fakeRedisMigrationClient{
		values: map[string]*redispkg.RedisValue{
			"session:1": {Type: "string", TTL: 60, Length: 1, Value: "token"},
		},
	}
	fakeTarget := &fakeRedisMongoTargetDB{}

	oldNewRedisClient := newRedisSourceClient
	oldNewDatabase := newSyncDatabase
	defer func() {
		newRedisSourceClient = oldNewRedisClient
		newSyncDatabase = oldNewDatabase
	}()
	newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
	newSyncDatabase = func(dbType string) (db.Database, error) { return fakeTarget, nil }

	engine := NewSyncEngine(Reporter{})
	preview, err := engine.Preview(SyncConfig{
		SourceConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
		TargetConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
		Tables:       []string{"session:1"},
		Content:      "data",
		Mode:         "insert_update",
	}, "session:1", 20)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if preview.PKColumn != "_id" {
		t.Fatalf("unexpected pk column: %s", preview.PKColumn)
	}
	if preview.TotalInserts != 1 || len(preview.Inserts) != 1 {
		t.Fatalf("unexpected preview: %+v", preview)
	}
	if preview.Inserts[0].PK != "db0:session:1" {
		t.Fatalf("unexpected preview pk: %+v", preview.Inserts[0])
	}
}

// TestRunSync_MongoToRedisAppliesStringAndHash verifies the reverse path:
// MongoDB documents are written back to Redis as a string (with TTL) and a
// hash, updating the pre-existing hash in place.
func TestRunSync_MongoToRedisAppliesStringAndHash(t *testing.T) {
	fakeSource := &fakeMongoRedisSourceDB{
		tables: []string{"redis_db_0_keys"},
		rowsByTable: map[string][]map[string]interface{}{
			"redis_db_0_keys": {
				{"_id": "db0:session:1", "key": "session:1", "type": "string", "ttl": int64(60), "value": "token"},
				{"_id": "db0:user:1", "key": "user:1", "type": "hash", "ttl": int64(120), "value": map[string]interface{}{"name": "alice", "role": "admin"}},
			},
		},
	}
	fakeRedis := &fakeRedisMigrationClient{
		values: map[string]*redispkg.RedisValue{
			"user:1": {Type: "hash", TTL: 120, Length: 1, Value: map[string]string{"name": "old"}},
		},
	}

	oldNewRedisClient := newRedisSourceClient
	oldNewDatabase := newSyncDatabase
	defer func() {
		newRedisSourceClient = oldNewRedisClient
		newSyncDatabase = oldNewDatabase
	}()
	newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
	newSyncDatabase = func(dbType string) (db.Database, error) { return fakeSource, nil }

	engine := NewSyncEngine(Reporter{})
	result := engine.RunSync(SyncConfig{
		SourceConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
		TargetConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
		Tables:       []string{"redis_db_0_keys"},
		Content:      "data",
		Mode:         "insert_update",
	})

	if !result.Success {
		t.Fatalf("expected success, got: %+v", result)
	}
	if fakeRedis.connectConfig.RedisDB != 0 {
		t.Fatalf("expected redis db 0, got %d", fakeRedis.connectConfig.RedisDB)
	}
	if got := fakeRedis.values["session:1"]; got == nil || got.Type != "string" || got.Value != "token" || got.TTL != 60 {
		t.Fatalf("unexpected string value: %+v", got)
	}
	gotHash, _ := fakeRedis.values["user:1"].Value.(map[string]string)
	if gotHash["name"] != "alice" || gotHash["role"] != "admin" {
		t.Fatalf("unexpected hash value: %+v", fakeRedis.values["user:1"])
	}
	if result.RowsInserted != 1 || result.RowsUpdated != 1 {
		t.Fatalf("unexpected sync result: %+v", result)
	}
}

// TestPreview_MongoToRedisReturnsCollectionPreview verifies the preview of a
// MongoDB -> Redis sync: "key" as primary key and plain Redis key names.
func TestPreview_MongoToRedisReturnsCollectionPreview(t *testing.T) {
	fakeSource := &fakeMongoRedisSourceDB{
		tables: []string{"redis_db_0_keys"},
		rowsByTable: map[string][]map[string]interface{}{
			"redis_db_0_keys": {
				{"_id": "db0:session:1", "key": "session:1", "type": "string", "ttl": int64(60), "value": "token"},
			},
		},
	}
	fakeRedis := &fakeRedisMigrationClient{values: map[string]*redispkg.RedisValue{}}

	oldNewRedisClient := newRedisSourceClient
	oldNewDatabase := newSyncDatabase
	defer func() {
		newRedisSourceClient = oldNewRedisClient
		newSyncDatabase = oldNewDatabase
	}()
	newRedisSourceClient = func() redisMigrationClient { return fakeRedis }
	newSyncDatabase = func(dbType string) (db.Database, error) { return fakeSource, nil }

	engine := NewSyncEngine(Reporter{})
	preview, err := engine.Preview(SyncConfig{
		SourceConfig: connection.ConnectionConfig{Type: "mongodb", Database: "app"},
		TargetConfig: connection.ConnectionConfig{Type: "redis", Database: "0"},
		Tables:       []string{"redis_db_0_keys"},
		Content:      "data",
		Mode:         "insert_update",
	}, "redis_db_0_keys", 20)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if preview.Table != "redis_db_0_keys" || preview.PKColumn != "key" {
		t.Fatalf("unexpected preview header: %+v", preview)
	}
	if preview.TotalInserts != 1 || len(preview.Inserts) != 1 {
		t.Fatalf("unexpected preview rows: %+v", preview)
	}
	if preview.Inserts[0].PK != "session:1" {
		t.Fatalf("unexpected preview pk: %+v", preview.Inserts[0])
	}
}