MyGoNavi/internal/sync/migration_redis.go
Syngnat 22bd1c4c28 Release/0.5.5 (#207)
* 🐛 fix(data-viewer): Fix ClickHouse tail-page pagination failures and improve DuckDB complex-type compatibility

- DataViewer adds a reverse-pagination strategy for ClickHouse, fixing failed queries on the last and second-to-last pages
- When a DuckDB query fails, generate a safe SELECT from the column types and retry with complex types cast to VARCHAR
- Pagination state is uniformly backfilled from currentPage, avoiding inconsistencies between the page number and derived totals
- Improve query-error logging and retry paths, reducing stalls and false alarms on large tables

* feat(frontend-driver): Driver manager supports quick search and improved info display

- Add a search box for quickly locating drivers by keywords such as DuckDB/ClickHouse
- Show a "matched x / y" counter and a no-results hint
- Refine the header layout for better visual alignment in transparent/dark themes

* 🔧 fix(connection-modal): Fix multi-datasource URI import parsing and correct Oracle service-name validation

- Add single-host URI parsing and field mapping, covering postgres/postgresql, sqlserver, redis, tdengine, dameng (dm), kingbase, highgo, vastbase, clickhouse, and oracle
- Extract parseSingleHostUri for reuse, unifying how host/port/user/password/database are backfilled
- Oracle connections now require a service name, removing the implicit "fall back to username when the service name is empty" logic
- Add an Oracle service-name input and URI examples to the connection modal
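The single-host parsing described above can be sketched with the standard library. This is a minimal illustration, not GoNavi's actual parseSingleHostUri: the `parsedConn` struct and function name here are hypothetical, standing in for the real connection-modal fields.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parsedConn mirrors the host/port/user/password/database fields the
// connection modal backfills; the names are illustrative only.
type parsedConn struct {
	Host, Port, User, Password, Database string
}

// parseSingleHostURI maps a scheme://user:pass@host:port/database URI onto
// the connection fields. url.Userinfo methods are nil-safe, so URIs without
// credentials parse cleanly too.
func parseSingleHostURI(raw string) (parsedConn, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return parsedConn{}, err
	}
	conn := parsedConn{
		Host:     u.Hostname(),
		Port:     u.Port(),
		User:     u.User.Username(),
		Database: strings.TrimPrefix(u.Path, "/"),
	}
	if pw, ok := u.User.Password(); ok {
		conn.Password = pw
	}
	return conn, nil
}

func main() {
	conn, _ := parseSingleHostURI("postgres://alice:secret@db.local:5432/sales")
	fmt.Printf("%+v\n", conn)
}
```

One shared parser like this is enough for every scheme listed above, since all of them use the same single-host URI shape; only scheme aliases (postgres/postgresql, dameng/dm) need normalizing before dispatch.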

* 🐛 fix(query-export): Fix query-result export hangs and gate export paths by datasource capability

- Add a reliable fallback to query-result export so the loading state is always dismissed on errors instead of spinning forever
- Route DataGrid exports by datasource capability, preferring the backend ExportQuery while keeping result-set export as a fallback
- QueryEditor passes the result-export SQL so the exported scope matches the current result set
- Add key ExportData/ExportQuery logs on the backend to improve observability of the export pipeline

* 🐛 fix(precision): Fix big-integer precision loss in the query pipeline and pagination totals

- Decode proxied response data with UseNumber, avoiding the default float64 decoding that swallows precision
- Uniformly normalize json.Number and out-of-range integers, converting values beyond the JS safe-integer range to strings
- Fix DataViewer total-count parsing so oversized values are no longer coerced to Number for pagination
- refs #142
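The UseNumber approach above can be sketched as follows. This is a simplified illustration of the technique, assuming a hypothetical `decodeRow` helper rather than the project's actual proxy decoder: decode with `json.Decoder.UseNumber`, then keep any integer beyond JavaScript's safe range (2^53 - 1) as a string.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// maxSafeInteger is JavaScript's Number.MAX_SAFE_INTEGER (2^53 - 1).
const maxSafeInteger = int64(1)<<53 - 1

// normalizeNumber keeps a json.Number as int64 when it fits the JS
// safe-integer range and falls back to its string form otherwise.
func normalizeNumber(n json.Number) interface{} {
	if i, err := n.Int64(); err == nil && i <= maxSafeInteger && i >= -maxSafeInteger {
		return i
	}
	return n.String()
}

// decodeRow is a hypothetical stand-in for the proxy response decoder.
func decodeRow(payload string) (map[string]interface{}, error) {
	dec := json.NewDecoder(strings.NewReader(payload))
	dec.UseNumber() // avoid the default float64 decoding that drops precision
	row := map[string]interface{}{}
	if err := dec.Decode(&row); err != nil {
		return nil, err
	}
	for k, v := range row {
		if n, ok := v.(json.Number); ok {
			row[k] = normalizeNumber(n)
		}
	}
	return row, nil
}

func main() {
	row, _ := decodeRow(`{"id": 9007199254740993, "count": 42}`)
	fmt.Println(row["id"], row["count"]) // the oversized id survives as a string
}
```

With plain `json.Unmarshal` into `interface{}`, 9007199254740993 would round to 9007199254740992; `UseNumber` defers the conversion so the caller can choose string representation for oversized values.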

* 🐛 fix(driver-manager): Fix duplicate network warnings in the driver manager and strengthen proxy guidance

- Add domain probing for the download pipeline to distinguish "GitHub reachable but driver download endpoint unreachable"
- Keep only the strong red alert when the network is unreachable, removing the duplicate secondary warning
- Add an "open global proxy settings" entry to the strong alert, steering users toward the GoNavi global proxy first
- Unify icon sizes for the network-check and directory hints, fixing visual inconsistency during loading
- refs #141

* ♻️ refactor(frontend-interaction): Unify tab drag-and-drop and dark-theme interaction implementations

- Rework tab drag-to-reorder into a configurable drag engine
- Clarify the boundaries between drag and click events for more consistent interaction
- Unify the dark/transparent styling strategy across components, reducing hard-coded color values
- Make the Redis/table/connection panels look consistent in transparent mode
- refs #144

* ♻️ refactor(update-state): Rework the online-update state flow and unify per-version progress display

- Rework the sync between update-check and download states, reducing frontend/backend divergence
- Bind progress display strictly to latestVersion, preventing state bleed across versions
- Improve silent-check state backfill when the About dialog is opened
- Unify download-dialog close and background-hide behavior
- Keep the existing install flow and add the ability to open the install directory

* 🎨 style(sidebar-log): Restyle the SQL execution log entry as a floating pill

- Remove the full-width log entry container at the bottom of the sidebar
- Add shadow/border/transparent background to the floating button, adapted to light and dark themes
- Reserve bottom space in the tree area so the entry does not cover content

* feat(redis-cluster): Support logical multi-database isolation in cluster mode with db 0-15 switching

- The frontend restores db0-db15 database selection and display for Redis cluster connections
- The backend adds namespace-prefix mapping for cluster logical databases, unifying key/pattern read-write isolation
- Cover the key-mapping rules for scan, read, write, delete, rename, and other core operations
- The cluster command channel supports logical SELECT database switching and logical FLUSHDB clearing
- refs #145
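Since Redis Cluster has no native SELECT, the namespace-prefix mapping above can be emulated by rewriting keys. The `{dbN}` hash-tag scheme below is a hypothetical sketch of the idea, not GoNavi's actual mapping; any prefix convention works as long as reads and writes agree on it.

```go
package main

import (
	"fmt"
	"strings"
)

// clusterKey rewrites a logical-database key into the physical cluster
// keyspace. A hash tag like {db3} is used so keys of one logical db share
// a slot; this is an illustrative convention, not the project's exact one.
func clusterKey(db int, key string) string {
	if db == 0 {
		return key // db0 maps to the raw keyspace for compatibility
	}
	return fmt.Sprintf("{db%d}:%s", db, key)
}

// logicalKey reverses the mapping when listing or scanning keys; the bool
// reports whether the stored key belongs to the requested logical db.
func logicalKey(db int, storedKey string) (string, bool) {
	if db == 0 {
		return storedKey, !strings.HasPrefix(storedKey, "{db")
	}
	prefix := fmt.Sprintf("{db%d}:", db)
	if strings.HasPrefix(storedKey, prefix) {
		return strings.TrimPrefix(storedKey, prefix), true
	}
	return "", false
}

func main() {
	stored := clusterKey(3, "user:1")
	name, ok := logicalKey(3, stored)
	fmt.Println(stored, name, ok)
}
```

A logical FLUSHDB then becomes "SCAN for the prefix pattern and DELETE the matches"; the trade-off of the hash tag is that all keys of one logical db land in a single slot.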

* feat(DataGrid): Virtual-scrolling performance for large tables and UI consistency fixes

- Enable dynamic virtual scrolling (switching automatically at ≥500 rows), fixing lag on tables with tens of thousands of rows
- In virtual mode, EditableCell renders as a div, and CSS selectors move from element-level to class-level to fit the virtual DOM
- Fix double horizontal scrollbars in virtual mode: style the built-in rc-virtual-list scrollbar as a pill and disable the custom external one
- Add mouse-wheel support for the rc-virtual-list horizontal scrollbar (MutationObserver + marginLeft driven)
- Fix insufficient contrast of column-name hover tooltips in light-theme transparent mode
- Add light-theme global scrollbar styles adapted for transparent mode (App.css)
- Tune App.tsx theme tokens and component styles
- refs #147

* 🔧 chore(app): Clean up App.tsx type warnings and tighten the frontend shell

- Remove unused code and redundant state
- Replace deprecated APIs to silence IDE hints
- Explicitly handle floating Promises to avoid warnings
- Keep the existing update-check and proxy-settings behavior unchanged

* 🔧 fix(ci): Fix the DuckDB driver build chain on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence checks and version verification for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep Windows ARM64 skipping the DuckDB build, consistent with platform support

* 🔧 fix(ci): Fix the DuckDB driver build chain on Windows AMD64

- Switch DuckDB toolchain preparation to prefer MSYS2
- Add existence checks and version verification for gcc and g++
- Fall back to installing MinGW via Chocolatey when MSYS2 fails
- Keep Windows ARM64 skipping the DuckDB build, consistent with platform support

* 🔧 fix(ci): Fix the DuckDB driver build toolchain on Windows AMD64

- Switch the DuckDB build chain from MINGW64 to MSYS2 UCRT64
- Correct the gcc and g++ detection paths on Windows AMD64
- Add a DuckDB compiler version verification step

* 📝 docs(contributing): Add Chinese and English contribution guides and unify the README entry points

- Add an English CONTRIBUTING.md as the official contribution document
- Add a Chinese CONTRIBUTING.zh-CN.md as the Chinese contribution guide
- Point the contribution entries in README and README.zh-CN to the matching language documents

* - feat(connection,metadata,kingbase): Strengthen multi-datasource connectivity and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188) (#190)

* feat(http-tunnel): Support standalone HTTP tunnel connections across multiple datasources

refs #168

* fix(kingbase-data-grid): Fix slow table opening on Kingbase and reduce object-rendering overhead

refs #178

* fix(kingbase-transaction): Fix syntax errors caused by doubled quotes in Kingbase transaction commits

refs #176

* fix(driver-agent): Fix Kingbase driver agent startup failures on older Win10 systems after upgrade

refs #177

* chore(ci): Add a manually triggered macOS test build workflow

* chore(ci): Allow the test workflow to auto-trigger on the current branch

* fix(query-editor): Fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): Add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): Auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): Fix schema-filtered loading of views and functions refs #155

* fix(dameng-databases): Fix incomplete database list when showing all databases refs #154

* fix(connection,db-list): Handle empty list returns uniformly and fix Dameng connection-test errors refs #157

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>

* feat(release-notes): Support auto-generated Release notes and distinguish config-file naming

* 🔁 chore(sync): Backfill main into dev (#192)

* - feat(connection,metadata,kingbase): Strengthen multi-datasource connectivity and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): Support standalone HTTP tunnel connections across multiple datasources

refs #168

* fix(kingbase-data-grid): Fix slow table opening on Kingbase and reduce object-rendering overhead

refs #178

* fix(kingbase-transaction): Fix syntax errors caused by doubled quotes in Kingbase transaction commits

refs #176

* fix(driver-agent): Fix Kingbase driver agent startup failures on older Win10 systems after upgrade

refs #177

* chore(ci): Add a manually triggered macOS test build workflow

* chore(ci): Allow the test workflow to auto-trigger on the current branch

* fix(query-editor): Fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): Add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): Auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): Fix schema-filtered loading of views and functions refs #155

* fix(dameng-databases): Fix incomplete database list when showing all databases refs #154

* fix(connection,db-list): Handle empty list returns uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* 🐛 fix(branch-sync): Fix auto-merge being skipped on main-to-dev backfills due to async mergeable computation

- Poll the mergeable state to avoid an immediate UNKNOWN right after the sync PR is created
- Emit a Chinese-language warning and an execution summary when the merge state has not yet settled
- Keep the handling paths for conflicted, pending-computation, and auto-merge branches clearly separated

* 🔁 chore(sync): Backfill main into dev (#195)

* - feat(connection,metadata,kingbase): Strengthen multi-datasource connectivity and fix Kingbase/Dameng/Oracle/ClickHouse compatibility issues (#188)

* feat(http-tunnel): Support standalone HTTP tunnel connections across multiple datasources

refs #168

* fix(kingbase-data-grid): Fix slow table opening on Kingbase and reduce object-rendering overhead

refs #178

* fix(kingbase-transaction): Fix syntax errors caused by doubled quotes in Kingbase transaction commits

refs #176

* fix(driver-agent): Fix Kingbase driver agent startup failures on older Win10 systems after upgrade

refs #177

* chore(ci): Add a manually triggered macOS test build workflow

* chore(ci): Allow the test workflow to auto-trigger on the current branch

* fix(query-editor): Fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): Add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): Auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): Fix schema-filtered loading of views and functions refs #155

* fix(dameng-databases): Fix incomplete database list when showing all databases refs #154

* fix(connection,db-list): Handle empty list returns uniformly and fix Dameng connection-test errors refs #157

* Release/0.5.3 (#191)

* - chore(ci): Add a manually triggered all-platform test package build workflow (#194)

* feat(http-tunnel): Support standalone HTTP tunnel connections across multiple datasources

refs #168

* fix(kingbase-data-grid): Fix slow table opening on Kingbase and reduce object-rendering overhead

refs #178

* fix(kingbase-transaction): Fix syntax errors caused by doubled quotes in Kingbase transaction commits

refs #176

* fix(driver-agent): Fix Kingbase driver agent startup failures on older Win10 systems after upgrade

refs #177

* chore(ci): Add a manually triggered macOS test build workflow

* chore(ci): Allow the test workflow to auto-trigger on the current branch

* fix(query-editor): Fix the cursor randomly jumping to the end while editing SQL refs #185

* feat(data-sync): Add diff SQL preview for easier review refs #174

* fix(clickhouse-connect): Auto-detect and fall back between HTTP/Native protocol connections refs #181

* fix(oracle-metadata): Fix schema-filtered loading of views and functions refs #155

* fix(dameng-databases): Fix incomplete database list when showing all databases refs #154

* fix(connection,db-list): Handle empty list returns uniformly and fix Dameng connection-test errors refs #157

* fix(kingbase): Add missing primary-key detection and reduce wide-table lag refs #176 refs #178

* fix(query-execution): Recognize read-query results preceded by leading comments

* chore(ci): Add a manually triggered all-platform test package build workflow

---------

Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: Syngnat <92659908+Syngnat@users.noreply.github.com>

* ♻️ refactor(frontend-sync): Polish desktop interaction details and remove the main-to-dev backfill automation

- Refine the UI of new connections, theme settings, the sidebar tool area, and the SQL log
- Adjust pagination, filtering, transparent mode, and dialog styles into a unified interaction hierarchy
- Consolidate how appearance parameters take effect and complete multi-component adaptation
- Delete the sync-main-to-dev workflow and update the maintainer manual-backfill notes accordingly

* feat: Unify the width of filter-condition logic buttons (#201)

* 🐛 fix(oracle-query): Fix Oracle table-data pagination SQL compatibility refs #196 (#202)

* feat(datasource): Support DuckDB Parquet file mode and streamline the dialog-open path

- Unify DuckDB file-database and Parquet file access
- Add URI, file-picker, read-only mount, and connection cache-key handling
- Drop the synchronous driver query before datasource card clicks, fixing open lag

* feat(datasource): Support DuckDB Parquet file mode and streamline the dialog-open path

- Unify DuckDB file-database and Parquet file access
- Add URI, file-picker, read-only mount, and connection cache-key handling
- Drop the synchronous driver query before datasource card clicks, fixing open lag
- refs #166

* 🐛 fix(dameng): Fix the empty database list after a successful Dameng connection

- Adjust the Dameng database-list strategy, preferring a fallback query of the current schema and current user
- Keep the visible-user and owner aggregation logic, remaining compatible with low-privilege accounts
- Add a frontend empty-list hint and backend unit tests to ease troubleshooting
- close #203

* feat(data-sync): Extend the cross-database migration pipeline and improve data-sync interactions

- Unify the entry points for same-database sync and cross-database migration, adding mode distinction and risk warnings
- Extend bidirectional ClickHouse and PG-like migration, and add PG-like, ClickHouse, and TDengine to MongoDB migration routes
- Complete TDengine target-side table planning, regression tests, and requirement-tracking docs
- refs #51
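Routing like the above is usually a lookup over supported source/target pairs. The registry below is a hypothetical sketch of that idea, assuming illustrative names (`routeKey`, `canMigrate`) and a boiled-down route set rather than GoNavi's real migration table.

```go
package main

import "fmt"

// routeKey identifies a migration direction by normalized datasource types.
type routeKey struct{ Source, Target string }

// migrationRoutes is an illustrative subset of the supported directions
// described in the release notes; the real registry lives elsewhere.
var migrationRoutes = map[routeKey]bool{
	{"clickhouse", "postgres"}: true,
	{"postgres", "clickhouse"}: true,
	{"postgres", "mongodb"}:    true,
	{"clickhouse", "mongodb"}:  true,
	{"tdengine", "mongodb"}:    true,
}

// canMigrate reports whether a source-to-target migration route exists;
// absent pairs simply return the map's zero value, false.
func canMigrate(source, target string) bool {
	return migrationRoutes[routeKey{source, target}]
}

func main() {
	fmt.Println(canMigrate("tdengine", "mongodb"), canMigrate("mongodb", "tdengine"))
}
```

Keeping the directions explicit makes asymmetry cheap to express: TDengine to MongoDB can be enabled without implying the reverse path is supported.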

---------

Co-authored-by: Syngnat <yangguofeng919@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 辣条 <69459608+tianqijiuyun-latiao@users.noreply.github.com>
Co-authored-by: TSS <266256496+Zencok@users.noreply.github.com>
2026-03-09 17:36:52 +08:00


package sync

import (
	"GoNavi-Wails/internal/connection"
	"GoNavi-Wails/internal/db"
	redispkg "GoNavi-Wails/internal/redis"
	"encoding/json"
	"fmt"
	"sort"
	"strconv"
	"strings"
)
type redisMigrationClient interface {
	Connect(config connection.ConnectionConfig) error
	Close() error
	ScanKeys(pattern string, cursor uint64, count int64) (*redispkg.RedisScanResult, error)
	GetKeyType(key string) (string, error)
	GetValue(key string) (*redispkg.RedisValue, error)
	DeleteKeys(keys []string) (int64, error)
	SetTTL(key string, ttl int64) error
	SetString(key, value string, ttl int64) error
	SetHashField(key, field, value string) error
	ListPush(key string, values ...string) error
	SetAdd(key string, members ...string) error
	ZSetAdd(key string, members ...redispkg.ZSetMember) error
	StreamAdd(key string, fields map[string]string, id string) (string, error)
}

var newSyncDatabase = db.NewDatabase
var newRedisSourceClient = func() redisMigrationClient { return redispkg.NewRedisClient() }

func isRedisToMongoKeyspacePair(config SyncConfig) bool {
	return resolveMigrationDBType(config.SourceConfig) == "redis" && resolveMigrationDBType(config.TargetConfig) == "mongodb"
}

func resolveRedisDBIndex(config connection.ConnectionConfig) int {
	if config.RedisDB >= 0 && config.RedisDB <= 15 {
		return config.RedisDB
	}
	if text := strings.TrimSpace(config.Database); text != "" {
		if idx, err := strconv.Atoi(text); err == nil && idx >= 0 && idx <= 15 {
			return idx
		}
	}
	return 0
}

func withResolvedRedisDB(config connection.ConnectionConfig) connection.ConnectionConfig {
	next := config
	next.Type = "redis"
	next.RedisDB = resolveRedisDBIndex(config)
	return next
}

func resolveMongoCollectionName(config SyncConfig) string {
	if name := strings.TrimSpace(config.MongoCollectionName); name != "" {
		return name
	}
	if resolveMigrationDBType(config.SourceConfig) == "redis" {
		return fmt.Sprintf("redis_db_%d_keys", resolveRedisDBIndex(config.SourceConfig))
	}
	return fmt.Sprintf("redis_db_%d_keys", resolveRedisDBIndex(config.TargetConfig))
}

func deriveRedisMongoCollectionName(config SyncConfig) string {
	return resolveMongoCollectionName(config)
}
func buildRedisToMongoPlan(config SyncConfig, keyName string, targetDB db.Database) (SchemaMigrationPlan, error) {
	collection := deriveRedisMongoCollectionName(config)
	plan := SchemaMigrationPlan{
		SourceSchema:       strconv.Itoa(resolveRedisDBIndex(config.SourceConfig)),
		SourceTable:        keyName,
		SourceQueryTable:   keyName,
		TargetSchema:       strings.TrimSpace(config.TargetConfig.Database),
		TargetTable:        collection,
		TargetQueryTable:   collection,
		PlannedAction:      "按 Redis Key 生成 MongoDB 文档导入",
		Warnings:           []string{"Redis -> MongoDB 按 keyspace 语义迁移,不执行表级 schema 校验", "Redis TTL/集合顺序等语义会按文档字段保留,不保证与原系统完全等价"},
		UnsupportedObjects: []string{"Redis Consumer Group / PubSub / Lua 脚本 / 事务状态当前不迁移"},
	}
	exists, err := inspectMongoCollection(targetDB, plan.TargetSchema, collection)
	if err != nil {
		return plan, fmt.Errorf("检查目标集合失败: %w", err)
	}
	plan.TargetTableExists = exists
	strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
	if exists {
		return dedupeSchemaMigrationPlan(plan), nil
	}
	if strategy == "existing_only" {
		plan.PlannedAction = "目标集合不存在,需先手工创建"
		plan.Warnings = append(plan.Warnings, "当前策略要求目标集合已存在,执行时不会自动建集合")
		return dedupeSchemaMigrationPlan(plan), nil
	}
	createCommand, err := buildMongoCreateCollectionCommand(collection)
	if err != nil {
		return plan, err
	}
	plan.AutoCreate = true
	plan.PlannedAction = "目标集合不存在,将自动创建集合后导入"
	plan.PreDataSQL = []string{createCommand}
	return dedupeSchemaMigrationPlan(plan), nil
}
func listRedisMigrationKeys(client redisMigrationClient, selected []string) ([]string, error) {
	if len(selected) > 0 {
		return dedupeStrings(selected), nil
	}
	cursor := uint64(0)
	keys := make([]string, 0, 64)
	seen := map[string]struct{}{}
	for {
		result, err := client.ScanKeys("*", cursor, 1000)
		if err != nil {
			return nil, err
		}
		if result != nil {
			for _, item := range result.Keys {
				key := strings.TrimSpace(item.Key)
				if key == "" {
					continue
				}
				if _, ok := seen[key]; ok {
					continue
				}
				seen[key] = struct{}{}
				keys = append(keys, key)
			}
			if strings.TrimSpace(result.Cursor) == "" || strings.TrimSpace(result.Cursor) == "0" {
				break
			}
			next, err := strconv.ParseUint(strings.TrimSpace(result.Cursor), 10, 64)
			if err != nil || next == cursor {
				break
			}
			cursor = next
			continue
		}
		break
	}
	sort.Strings(keys)
	return keys, nil
}
func buildRedisMongoDocument(dbIndex int, key string, value *redispkg.RedisValue) map[string]interface{} {
	doc := map[string]interface{}{
		"_id":     fmt.Sprintf("db%d:%s", dbIndex, key),
		"redisDb": dbIndex,
		"key":     key,
		"source":  "redis",
	}
	if value == nil {
		return doc
	}
	doc["type"] = value.Type
	doc["ttl"] = value.TTL
	doc["length"] = value.Length
	doc["value"] = normalizeRedisMongoValue(value.Value)
	return doc
}

func normalizeRedisMongoValue(value interface{}) interface{} {
	switch typed := value.(type) {
	case nil:
		return nil
	case []byte:
		return string(typed)
	case map[string]string:
		result := make(map[string]interface{}, len(typed))
		for k, v := range typed {
			result[k] = v
		}
		return result
	case []string:
		result := make([]interface{}, 0, len(typed))
		for _, item := range typed {
			result = append(result, item)
		}
		return result
	case []redispkg.ZSetMember:
		result := make([]map[string]interface{}, 0, len(typed))
		for _, item := range typed {
			result = append(result, map[string]interface{}{"member": item.Member, "score": item.Score})
		}
		return result
	case []redispkg.StreamEntry:
		result := make([]map[string]interface{}, 0, len(typed))
		for _, item := range typed {
			fields := make(map[string]interface{}, len(item.Fields))
			for k, v := range item.Fields {
				fields[k] = v
			}
			result = append(result, map[string]interface{}{"id": item.ID, "fields": fields})
		}
		return result
	case map[string]interface{}:
		result := make(map[string]interface{}, len(typed))
		for k, v := range typed {
			result[k] = normalizeRedisMongoValue(v)
		}
		return result
	case []interface{}:
		result := make([]interface{}, 0, len(typed))
		for _, item := range typed {
			result = append(result, normalizeRedisMongoValue(item))
		}
		return result
	default:
		return typed
	}
}
func buildRedisMongoExistingDocsQuery(collection string, ids []string) (string, error) {
	command := map[string]interface{}{
		"find": collection,
		"filter": map[string]interface{}{
			"_id": map[string]interface{}{"$in": ids},
		},
	}
	data, err := json.Marshal(command)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func loadExistingRedisMongoDocs(targetDB db.Database, collection string, ids []string) (map[string]map[string]interface{}, error) {
	result := make(map[string]map[string]interface{}, len(ids))
	if len(ids) == 0 {
		return result, nil
	}
	query, err := buildRedisMongoExistingDocsQuery(collection, ids)
	if err != nil {
		return nil, err
	}
	rows, _, err := targetDB.Query(query)
	if err != nil {
		return nil, err
	}
	for _, row := range rows {
		id := strings.TrimSpace(fmt.Sprintf("%v", row["_id"]))
		if id == "" || id == "<nil>" {
			continue
		}
		result[id] = row
	}
	return result, nil
}
func buildRedisMongoChanges(config SyncConfig, keys []string, client redisMigrationClient, targetDB db.Database, collection string) (connection.ChangeSet, []map[string]interface{}, error) {
	changeSet := connection.ChangeSet{Inserts: []map[string]interface{}{}, Updates: []connection.UpdateRow{}, Deletes: []map[string]interface{}{}}
	documents := make([]map[string]interface{}, 0, len(keys))
	dbIndex := resolveRedisDBIndex(config.SourceConfig)
	for _, key := range keys {
		value, err := client.GetValue(key)
		if err != nil {
			return changeSet, nil, fmt.Errorf("读取 Redis Key 失败: key=%s err=%w", key, err)
		}
		documents = append(documents, buildRedisMongoDocument(dbIndex, key, value))
	}
	ids := make([]string, 0, len(documents))
	for _, doc := range documents {
		ids = append(ids, fmt.Sprintf("%v", doc["_id"]))
	}
	existing, err := loadExistingRedisMongoDocs(targetDB, collection, ids)
	if err != nil {
		return changeSet, nil, err
	}
	mode := normalizeSyncMode(config.Mode)
	for _, doc := range documents {
		id := fmt.Sprintf("%v", doc["_id"])
		existingDoc, ok := existing[id]
		if !ok {
			changeSet.Inserts = append(changeSet.Inserts, doc)
			continue
		}
		if mode == "insert_only" {
			continue
		}
		values := cloneMapWithoutKeys(doc, "_id")
		if sameRedisMongoDocument(existingDoc, doc) {
			continue
		}
		changeSet.Updates = append(changeSet.Updates, connection.UpdateRow{Keys: map[string]interface{}{"_id": id}, Values: values})
	}
	return changeSet, documents, nil
}

func sameRedisMongoDocument(existing map[string]interface{}, desired map[string]interface{}) bool {
	for k, v := range desired {
		if k == "_id" {
			continue
		}
		if fmt.Sprintf("%v", normalizeRedisMongoValue(v)) != fmt.Sprintf("%v", normalizeRedisMongoValue(existing[k])) {
			return false
		}
	}
	return true
}

func cloneMapWithoutKeys(input map[string]interface{}, skipKeys ...string) map[string]interface{} {
	skip := make(map[string]struct{}, len(skipKeys))
	for _, key := range skipKeys {
		skip[key] = struct{}{}
	}
	result := make(map[string]interface{}, len(input))
	for k, v := range input {
		if _, ok := skip[k]; ok {
			continue
		}
		result[k] = v
	}
	return result
}
func (s *SyncEngine) runRedisToMongoSync(config SyncConfig, result SyncResult) SyncResult {
	tables := config.Tables
	strategy := normalizeTargetTableStrategy(config.TargetTableStrategy)
	mode := normalizeSyncMode(config.Mode)
	s.progress(config.JobID, 0, len(tables), "", "开始 Redis 键空间迁移")
	s.appendLog(config.JobID, &result, "info", fmt.Sprintf("Redis -> MongoDB 键空间迁移;模式:%s,目标策略:%s", mode, strategy))
	if mode == "full_overwrite" {
		s.appendLog(config.JobID, &result, "warn", "Redis -> MongoDB 第一版暂不执行集合级 full_overwrite 删除,已降级为 insert_update")
	}
	sourceClient := newRedisSourceClient()
	sourceConfig := withResolvedRedisDB(config.SourceConfig)
	if err := sourceClient.Connect(sourceConfig); err != nil {
		return s.fail(config.JobID, len(tables), result, "源 Redis 连接失败: "+err.Error())
	}
	defer sourceClient.Close()
	targetDB, err := newSyncDatabase(config.TargetConfig.Type)
	if err != nil {
		return s.fail(config.JobID, len(tables), result, "初始化目标数据库驱动失败: "+err.Error())
	}
	if err := targetDB.Connect(config.TargetConfig); err != nil {
		return s.fail(config.JobID, len(tables), result, "目标数据库连接失败: "+err.Error())
	}
	defer targetDB.Close()
	keys, err := listRedisMigrationKeys(sourceClient, config.Tables)
	if err != nil {
		return s.fail(config.JobID, len(tables), result, "扫描 Redis Key 失败: "+err.Error())
	}
	if len(keys) == 0 {
		result.Message = "未发现可迁移的 Redis Key"
		s.progress(config.JobID, 0, 0, "", "同步完成")
		return result
	}
	totalKeys := len(keys)
	collection := deriveRedisMongoCollectionName(config)
	plan, err := buildRedisToMongoPlan(config, firstNonEmpty(keys[0], collection), targetDB)
	if err != nil {
		return s.fail(config.JobID, totalKeys, result, err.Error())
	}
	for _, warning := range plan.Warnings {
		s.appendLog(config.JobID, &result, "warn", " -> "+warning)
	}
	for _, unsupported := range plan.UnsupportedObjects {
		s.appendLog(config.JobID, &result, "warn", " -> "+unsupported)
	}
	if strings.TrimSpace(plan.PlannedAction) != "" {
		s.appendLog(config.JobID, &result, "info", " -> "+plan.PlannedAction)
	}
	if !plan.TargetTableExists && !plan.AutoCreate {
		result.Message = firstNonEmpty(plan.PlannedAction, "目标集合不存在,当前策略不允许自动创建")
		return result
	}
	if !plan.TargetTableExists && len(plan.PreDataSQL) > 0 {
		s.progress(config.JobID, 0, totalKeys, collection, "创建目标集合")
		if err := executeSQLStatements(targetDB.Exec, plan.PreDataSQL); err != nil {
			return s.fail(config.JobID, totalKeys, result, "创建目标集合失败: "+err.Error())
		}
	}
	changeSet, documents, err := buildRedisMongoChanges(config, keys, sourceClient, targetDB, collection)
	if err != nil {
		return s.fail(config.JobID, totalKeys, result, "构建 Redis 迁移变更失败: "+err.Error())
	}
	for idx, key := range keys {
		s.appendLog(config.JobID, &result, "info", fmt.Sprintf("正在迁移 Key: %s", key))
		s.progress(config.JobID, idx, totalKeys, key, fmt.Sprintf("迁移 Key(%d/%d)", idx+1, totalKeys))
	}
	if len(changeSet.Inserts) == 0 && len(changeSet.Updates) == 0 && len(changeSet.Deletes) == 0 {
		s.appendLog(config.JobID, &result, "info", " -> 目标集合中对应文档已是最新状态")
		result.TablesSynced = totalKeys
		result.Message = fmt.Sprintf("Redis 键空间迁移完成,共处理 %d 个 Key", totalKeys)
		s.progress(config.JobID, totalKeys, totalKeys, collection, "同步完成")
		return result
	}
	applier, ok := targetDB.(db.BatchApplier)
	if !ok {
		return s.fail(config.JobID, totalKeys, result, "目标驱动不支持 MongoDB 文档写入")
	}
	_ = documents
	if err := applier.ApplyChanges(collection, changeSet); err != nil {
		return s.fail(config.JobID, totalKeys, result, "应用 Redis 迁移变更失败: "+err.Error())
	}
	result.RowsInserted += len(changeSet.Inserts)
	result.RowsUpdated += len(changeSet.Updates)
	result.RowsDeleted += len(changeSet.Deletes)
	result.TablesSynced = totalKeys
	result.Message = fmt.Sprintf("Redis 键空间迁移完成,共处理 %d 个 Key", totalKeys)
	s.progress(config.JobID, totalKeys, totalKeys, collection, "同步完成")
	return result
}
func (s *SyncEngine) analyzeRedisToMongo(config SyncConfig) SyncAnalyzeResult {
	result := SyncAnalyzeResult{Success: true, Tables: []TableDiffSummary{}}
	sourceClient := newRedisSourceClient()
	sourceConfig := withResolvedRedisDB(config.SourceConfig)
	if err := sourceClient.Connect(sourceConfig); err != nil {
		return SyncAnalyzeResult{Success: false, Message: "源 Redis 连接失败: " + err.Error()}
	}
	defer sourceClient.Close()
	targetDB, err := newSyncDatabase(config.TargetConfig.Type)
	if err != nil {
		return SyncAnalyzeResult{Success: false, Message: "初始化目标数据库驱动失败: " + err.Error()}
	}
	if err := targetDB.Connect(config.TargetConfig); err != nil {
		return SyncAnalyzeResult{Success: false, Message: "目标数据库连接失败: " + err.Error()}
	}
	defer targetDB.Close()
	keys, err := listRedisMigrationKeys(sourceClient, config.Tables)
	if err != nil {
		return SyncAnalyzeResult{Success: false, Message: "扫描 Redis Key 失败: " + err.Error()}
	}
	collection := deriveRedisMongoCollectionName(config)
	changeSet, documents, err := buildRedisMongoChanges(config, keys, sourceClient, targetDB, collection)
	if err != nil {
		return SyncAnalyzeResult{Success: false, Message: "分析 Redis 迁移变更失败: " + err.Error()}
	}
	insertSet := make(map[string]struct{}, len(changeSet.Inserts))
	updateSet := make(map[string]struct{}, len(changeSet.Updates))
	for _, row := range changeSet.Inserts {
		insertSet[fmt.Sprintf("%v", row["_id"])] = struct{}{}
	}
	for _, row := range changeSet.Updates {
		updateSet[fmt.Sprintf("%v", row.Keys["_id"])] = struct{}{}
	}
	for _, doc := range documents {
		key := fmt.Sprintf("%v", doc["key"])
		id := fmt.Sprintf("%v", doc["_id"])
		summary := TableDiffSummary{
			Table:             key,
			PKColumn:          "_id",
			CanSync:           true,
			TargetTableExists: true,
			PlannedAction:     fmt.Sprintf("迁移到集合 %s", collection),
			Warnings: []string{
				"Redis Key 将按文档写入 MongoDB 集合",
			},
		}
		if _, ok := insertSet[id]; ok {
			summary.Inserts = 1
			summary.Message = "执行时将写入新文档"
		} else if _, ok := updateSet[id]; ok {
			summary.Updates = 1
			summary.Message = "执行时将更新已有文档"
		} else {
			summary.Same = 1
			summary.Message = "目标集合中对应文档已是最新状态"
		}
		result.Tables = append(result.Tables, summary)
	}
	result.Message = fmt.Sprintf("已完成 %d 个 Redis Key 的迁移分析", len(result.Tables))
	return result
}
func (s *SyncEngine) previewRedisToMongo(config SyncConfig, keyName string, limit int) (TableDiffPreview, error) {
	_ = limit
	sourceClient := newRedisSourceClient()
	sourceConfig := withResolvedRedisDB(config.SourceConfig)
	if err := sourceClient.Connect(sourceConfig); err != nil {
		return TableDiffPreview{}, fmt.Errorf("源 Redis 连接失败: %w", err)
	}
	defer sourceClient.Close()
	targetDB, err := newSyncDatabase(config.TargetConfig.Type)
	if err != nil {
		return TableDiffPreview{}, fmt.Errorf("初始化目标数据库驱动失败: %w", err)
	}
	if err := targetDB.Connect(config.TargetConfig); err != nil {
		return TableDiffPreview{}, fmt.Errorf("目标数据库连接失败: %w", err)
	}
	defer targetDB.Close()
	collection := deriveRedisMongoCollectionName(config)
	changeSet, documents, err := buildRedisMongoChanges(config, []string{keyName}, sourceClient, targetDB, collection)
	if err != nil {
		return TableDiffPreview{}, err
	}
	preview := TableDiffPreview{Table: keyName, PKColumn: "_id", Inserts: []PreviewRow{}, Updates: []PreviewUpdateRow{}, Deletes: []PreviewRow{}}
	if len(documents) == 0 {
		return preview, nil
	}
	doc := documents[0]
	id := fmt.Sprintf("%v", doc["_id"])
	existingDocs, err := loadExistingRedisMongoDocs(targetDB, collection, []string{id})
	if err != nil {
		return TableDiffPreview{}, err
	}
	if len(changeSet.Inserts) > 0 {
		preview.TotalInserts = 1
		preview.Inserts = append(preview.Inserts, PreviewRow{PK: id, Row: doc})
		return preview, nil
	}
	if len(changeSet.Updates) > 0 {
		preview.TotalUpdates = 1
		preview.Updates = append(preview.Updates, PreviewUpdateRow{PK: id, ChangedColumns: sortedMapKeys(changeSet.Updates[0].Values), Source: doc, Target: existingDocs[id]})
		return preview, nil
	}
	return preview, nil
}

func sortedMapKeys(values map[string]interface{}) []string {
	keys := make([]string, 0, len(values))
	for key := range values {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	return keys
}
func isMongoToRedisKeyspacePair(config SyncConfig) bool {
	return resolveMigrationDBType(config.SourceConfig) == "mongodb" && resolveMigrationDBType(config.TargetConfig) == "redis"
}

type mongoRedisKeyDocument struct {
	Key       string
	Type      string
	TTL       int64
	Value     interface{}
	SourceRow map[string]interface{}
	Desired   *redispkg.RedisValue
}

type mongoRedisKeyDiff struct {
	Collection     string
	Document       mongoRedisKeyDocument
	Current        *redispkg.RedisValue
	Exists         bool
	Action         string
	ChangedColumns []string
}

func deriveRedisTargetLabel(config SyncConfig) string {
	return fmt.Sprintf("Redis DB %d", resolveRedisDBIndex(config.TargetConfig))
}

func deriveDefaultMongoRedisCollection(config SyncConfig) string {
	return resolveMongoCollectionName(config)
}

func listMongoRedisCollections(sourceDB db.Database, config SyncConfig) ([]string, error) {
	if len(config.Tables) > 0 {
		return dedupeStrings(config.Tables), nil
	}
	tables, err := sourceDB.GetTables(strings.TrimSpace(config.SourceConfig.Database))
	if err == nil && len(tables) > 0 {
		return dedupeStrings(tables), nil
	}
	return []string{deriveDefaultMongoRedisCollection(config)}, nil
}

func buildMongoRedisFindQuery(collection string, limit int) (string, error) {
	command := map[string]interface{}{
		"find":   strings.TrimSpace(collection),
		"filter": map[string]interface{}{},
	}
	if limit > 0 {
		command["limit"] = limit
	}
	data, err := json.Marshal(command)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func loadMongoRedisDocuments(sourceDB db.Database, collection string, limit int) ([]map[string]interface{}, error) {
	query, err := buildMongoRedisFindQuery(collection, limit)
	if err != nil {
		return nil, err
	}
	rows, _, err := sourceDB.Query(query)
	if err != nil {
		return nil, err
	}
	return rows, nil
}
func parseMongoRedisDocument(row map[string]interface{}) (mongoRedisKeyDocument, error) {
	key := strings.TrimSpace(asRedisMigrationString(row["key"]))
	if key == "" {
		if rawID := strings.TrimSpace(asRedisMigrationString(row["_id"])); rawID != "" {
			if _, tail, ok := strings.Cut(rawID, ":"); ok {
				key = strings.TrimSpace(tail)
			}
		}
	}
	if key == "" {
		return mongoRedisKeyDocument{}, fmt.Errorf("文档缺少 key 字段")
	}
	redisType := strings.ToLower(strings.TrimSpace(asRedisMigrationString(row["type"])))
	if redisType == "" {
		return mongoRedisKeyDocument{}, fmt.Errorf("文档缺少 type 字段: key=%s", key)
	}
	ttl := normalizeRedisMigrationTTL(asRedisMigrationInt64(row["ttl"], -1))
	desired := &redispkg.RedisValue{Type: redisType, TTL: ttl}
	sourceRow := cloneMapWithoutKeys(row)
	sourceRow["key"] = key
	sourceRow["type"] = redisType
	sourceRow["ttl"] = ttl
	switch redisType {
	case "string":
		value := asRedisMigrationString(row["value"])
		desired.Value = value
		desired.Length = int64(len(value))
		sourceRow["value"] = value
	case "hash":
		value, err := asRedisMigrationStringMap(row["value"])
		if err != nil {
			return mongoRedisKeyDocument{}, fmt.Errorf("key=%s hash 值无效: %w", key, err)
		}
		desired.Value = value
		desired.Length = int64(len(value))
		sourceRow["value"] = normalizeRedisMongoValue(value)
	case "list":
		value, err := asRedisMigrationStringSlice(row["value"])
		if err != nil {
			return mongoRedisKeyDocument{}, fmt.Errorf("key=%s list 值无效: %w", key, err)
		}
		desired.Value = value
		desired.Length = int64(len(value))
		sourceRow["value"] = normalizeRedisMongoValue(value)
	case "set":
		value, err := asRedisMigrationStringSlice(row["value"])
		if err != nil {
			return mongoRedisKeyDocument{}, fmt.Errorf("key=%s set 值无效: %w", key, err)
		}
		sort.Strings(value)
		desired.Value = value
		desired.Length = int64(len(value))
		sourceRow["value"] = normalizeRedisMongoValue(value)
	case "zset":
		value, err := asRedisMigrationZSetMembers(row["value"])
		if err != nil {
			return mongoRedisKeyDocument{}, fmt.Errorf("key=%s zset 值无效: %w", key, err)
		}
		sort.Slice(value, func(i, j int) bool {
			if value[i].Score == value[j].Score {
				return value[i].Member < value[j].Member
			}
			return value[i].Score < value[j].Score
		})
		desired.Value = value
		desired.Length = int64(len(value))
		sourceRow["value"] = normalizeRedisMongoValue(value)
	case "stream":
		value, err := asRedisMigrationStreamEntries(row["value"])
		if err != nil {
			return mongoRedisKeyDocument{}, fmt.Errorf("key=%s stream 值无效: %w", key, err)
		}
		sort.Slice(value, func(i, j int) bool { return value[i].ID < value[j].ID })
		desired.Value = value
		desired.Length = int64(len(value))
		sourceRow["value"] = normalizeRedisMongoValue(value)
	default:
		return mongoRedisKeyDocument{}, fmt.Errorf("key=%s 暂不支持 Redis 类型 %s", key, redisType)
	}
	return mongoRedisKeyDocument{Key: key, Type: redisType, TTL: ttl, Value: desired.Value, SourceRow: sourceRow, Desired: desired}, nil
}
func buildMongoToRedisDiffs(sourceDB db.Database, targetClient redisMigrationClient, collection string, mode string) ([]mongoRedisKeyDiff, error) {
	rows, err := loadMongoRedisDocuments(sourceDB, collection, 0)
	if err != nil {
		return nil, err
	}
	diffs := make([]mongoRedisKeyDiff, 0, len(rows))
	effectiveMode := normalizeSyncMode(mode)
	for _, row := range rows {
		doc, err := parseMongoRedisDocument(row)
		if err != nil {
			return nil, err
		}
		current, exists, err := loadExistingRedisMigrationValue(targetClient, doc.Key)
		if err != nil {
			return nil, fmt.Errorf("读取目标 Redis Key 失败: key=%s err=%w", doc.Key, err)
		}
		action := "insert"
		changedColumns := []string{"type", "ttl", "value"}
		if exists {
			if sameRedisMigrationValue(current, doc.Desired) {
				action = "same"
				changedColumns = nil
			} else if effectiveMode == "insert_only" {
				action = "same"
				changedColumns = nil
			} else {
				action = "update"
				changedColumns = diffRedisMigrationColumns(current, doc.Desired)
			}
		}
		diffs = append(diffs, mongoRedisKeyDiff{
			Collection:     collection,
			Document:       doc,
			Current:        current,
			Exists:         exists,
			Action:         action,
			ChangedColumns: changedColumns,
		})
	}
	sort.Slice(diffs, func(i, j int) bool { return diffs[i].Document.Key < diffs[j].Document.Key })
	return diffs, nil
}

func loadExistingRedisMigrationValue(client redisMigrationClient, key string) (*redispkg.RedisValue, bool, error) {
	keyType, err := client.GetKeyType(key)
	if err != nil {
		return nil, false, err
	}
	keyType = strings.ToLower(strings.TrimSpace(keyType))
	if keyType == "" || keyType == "none" {
		return nil, false, nil
	}
	value, err := client.GetValue(key)
	if err != nil {
		return nil, false, err
	}
	if value == nil {
		return nil, false, nil
	}
	value.Type = keyType
	value.TTL = normalizeRedisMigrationTTL(value.TTL)
	return value, true, nil
}
func normalizeRedisMigrationTTL(ttl int64) int64 {
if ttl > 0 {
return ttl
}
return -1
}
func sameRedisMigrationValue(current *redispkg.RedisValue, desired *redispkg.RedisValue) bool {
if current == nil || desired == nil {
return current == nil && desired == nil
}
if strings.ToLower(strings.TrimSpace(current.Type)) != strings.ToLower(strings.TrimSpace(desired.Type)) {
return false
}
if normalizeRedisMigrationTTL(current.TTL) != normalizeRedisMigrationTTL(desired.TTL) {
return false
}
return canonicalRedisMigrationValue(current) == canonicalRedisMigrationValue(desired)
}
// canonicalRedisMigrationValue 生成值的规范化 JSON 表示,用于与目标值做稳定比较。
func canonicalRedisMigrationValue(value *redispkg.RedisValue) string {
if value == nil {
return "null"
}
payload := map[string]interface{}{
"type": strings.ToLower(strings.TrimSpace(value.Type)),
"ttl": normalizeRedisMigrationTTL(value.TTL),
"value": normalizeRedisComparablePayload(strings.ToLower(strings.TrimSpace(value.Type)), value.Value),
}
data, err := json.Marshal(payload)
if err != nil {
return fmt.Sprintf("%v", payload)
}
return string(data)
}
// normalizeRedisComparablePayload 按 Redis 类型将值转换为可比较的规范形式:
// set 成员、zset 成员与 stream 条目先排序,消除元素顺序差异导致的误报。
func normalizeRedisComparablePayload(redisType string, value interface{}) interface{} {
switch redisType {
case "string":
return asRedisMigrationString(value)
case "hash":
mapped, err := asRedisMigrationStringMap(value)
if err != nil {
return fmt.Sprintf("%v", value)
}
return normalizeRedisMongoValue(mapped)
case "list":
items, err := asRedisMigrationStringSlice(value)
if err != nil {
return fmt.Sprintf("%v", value)
}
return normalizeRedisMongoValue(items)
case "set":
items, err := asRedisMigrationStringSlice(value)
if err != nil {
return fmt.Sprintf("%v", value)
}
sort.Strings(items)
return normalizeRedisMongoValue(items)
case "zset":
members, err := asRedisMigrationZSetMembers(value)
if err != nil {
return fmt.Sprintf("%v", value)
}
sort.Slice(members, func(i, j int) bool {
if members[i].Score == members[j].Score {
return members[i].Member < members[j].Member
}
return members[i].Score < members[j].Score
})
return normalizeRedisMongoValue(members)
case "stream":
entries, err := asRedisMigrationStreamEntries(value)
if err != nil {
return fmt.Sprintf("%v", value)
}
sort.Slice(entries, func(i, j int) bool { return entries[i].ID < entries[j].ID })
return normalizeRedisMongoValue(entries)
default:
return normalizeRedisMongoValue(value)
}
}
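// 规范化效果示例(仅作说明):set 值 ["b","a"] 与 ["a","b"] 排序后形态一致,
// 不会被误判为变更;zset 先按 score 再按 member 排序,stream 按条目 ID 排序。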
// diffRedisMigrationColumns 列出 current 与 desired 之间发生变化的列(type/ttl/value)。
func diffRedisMigrationColumns(current *redispkg.RedisValue, desired *redispkg.RedisValue) []string {
changed := make([]string, 0, 3)
if current == nil || desired == nil {
return []string{"type", "ttl", "value"}
}
if strings.ToLower(strings.TrimSpace(current.Type)) != strings.ToLower(strings.TrimSpace(desired.Type)) {
changed = append(changed, "type")
}
if normalizeRedisMigrationTTL(current.TTL) != normalizeRedisMigrationTTL(desired.TTL) {
changed = append(changed, "ttl")
}
// 统一按 desired.Type 对双方的值做规范化,确保在同一类型形态下比较
currentComparable := normalizeRedisComparablePayload(strings.ToLower(strings.TrimSpace(desired.Type)), current.Value)
desiredComparable := normalizeRedisComparablePayload(strings.ToLower(strings.TrimSpace(desired.Type)), desired.Value)
currentJSON, _ := json.Marshal(currentComparable)
desiredJSON, _ := json.Marshal(desiredComparable)
if string(currentJSON) != string(desiredJSON) {
changed = append(changed, "value")
}
return dedupeStrings(changed)
}
func buildRedisPreviewRow(key string, value *redispkg.RedisValue) map[string]interface{} {
if value == nil {
return map[string]interface{}{"key": key}
}
return map[string]interface{}{
"key": key,
"type": strings.ToLower(strings.TrimSpace(value.Type)),
"ttl": normalizeRedisMigrationTTL(value.TTL),
"value": normalizeRedisComparablePayload(strings.ToLower(strings.TrimSpace(value.Type)), value.Value),
}
}
// applyMongoRedisDiff 将单个 diff 写入目标 Redis:update 时先删除非 string 旧值,再按类型重建并设置 TTL。
func applyMongoRedisDiff(targetClient redisMigrationClient, diff mongoRedisKeyDiff) error {
desired := diff.Document.Desired
if desired == nil {
return fmt.Errorf("空的 Redis 目标值: key=%s", diff.Document.Key)
}
redisType := strings.ToLower(strings.TrimSpace(desired.Type))
ttl := normalizeRedisMigrationTTL(desired.TTL)
if diff.Exists && diff.Action == "update" && redisType != "string" {
if _, err := targetClient.DeleteKeys([]string{diff.Document.Key}); err != nil {
return err
}
}
switch redisType {
case "string":
return targetClient.SetString(diff.Document.Key, asRedisMigrationString(desired.Value), ttl)
case "hash":
mapped, err := asRedisMigrationStringMap(desired.Value)
if err != nil {
return err
}
fields := make([]string, 0, len(mapped))
for field := range mapped {
fields = append(fields, field)
}
sort.Strings(fields)
for _, field := range fields {
if err := targetClient.SetHashField(diff.Document.Key, field, mapped[field]); err != nil {
return err
}
}
return targetClient.SetTTL(diff.Document.Key, ttl)
case "list":
items, err := asRedisMigrationStringSlice(desired.Value)
if err != nil {
return err
}
if len(items) > 0 {
if err := targetClient.ListPush(diff.Document.Key, items...); err != nil {
return err
}
}
return targetClient.SetTTL(diff.Document.Key, ttl)
case "set":
items, err := asRedisMigrationStringSlice(desired.Value)
if err != nil {
return err
}
if len(items) > 0 {
if err := targetClient.SetAdd(diff.Document.Key, items...); err != nil {
return err
}
}
return targetClient.SetTTL(diff.Document.Key, ttl)
case "zset":
members, err := asRedisMigrationZSetMembers(desired.Value)
if err != nil {
return err
}
if len(members) > 0 {
if err := targetClient.ZSetAdd(diff.Document.Key, members...); err != nil {
return err
}
}
return targetClient.SetTTL(diff.Document.Key, ttl)
case "stream":
entries, err := asRedisMigrationStreamEntries(desired.Value)
if err != nil {
return err
}
for _, entry := range entries {
if _, err := targetClient.StreamAdd(diff.Document.Key, entry.Fields, entry.ID); err != nil {
return err
}
}
return targetClient.SetTTL(diff.Document.Key, ttl)
default:
return fmt.Errorf("暂不支持 Redis 类型 %s", redisType)
}
}
// asRedisMigrationString 将任意值转换为字符串表示;nil 转为空串。
func asRedisMigrationString(value interface{}) string {
switch typed := value.(type) {
case nil:
return ""
case string:
return typed
case []byte:
return string(typed)
default:
return fmt.Sprintf("%v", typed)
}
}
// asRedisMigrationInt64 尽力将数值、json.Number 或数字字符串转换为 int64,失败时返回 defaultValue。
func asRedisMigrationInt64(value interface{}, defaultValue int64) int64 {
switch typed := value.(type) {
case nil:
return defaultValue
case int:
return int64(typed)
case int8:
return int64(typed)
case int16:
return int64(typed)
case int32:
return int64(typed)
case int64:
return typed
case uint:
return int64(typed)
case uint8:
return int64(typed)
case uint16:
return int64(typed)
case uint32:
return int64(typed)
case uint64:
return int64(typed)
case float32:
return int64(typed)
case float64:
return int64(typed)
case json.Number:
if n, err := typed.Int64(); err == nil {
return n
}
case string:
if n, err := strconv.ParseInt(strings.TrimSpace(typed), 10, 64); err == nil {
return n
}
}
return defaultValue
}
// asRedisMigrationFloat64 将数值、json.Number 或数字字符串转换为 float64。
func asRedisMigrationFloat64(value interface{}) (float64, error) {
switch typed := value.(type) {
case float64:
return typed, nil
case float32:
return float64(typed), nil
case int:
return float64(typed), nil
case int8:
return float64(typed), nil
case int16:
return float64(typed), nil
case int32:
return float64(typed), nil
case int64:
return float64(typed), nil
case uint:
return float64(typed), nil
case uint8:
return float64(typed), nil
case uint16:
return float64(typed), nil
case uint32:
return float64(typed), nil
case uint64:
return float64(typed), nil
case json.Number:
return typed.Float64()
case string:
return strconv.ParseFloat(strings.TrimSpace(typed), 64)
default:
return 0, fmt.Errorf("无法转换为 float64: %T", value)
}
}
// asRedisMigrationStringMap 将对象类型的值转换为 map[string]string(用于 hash/stream fields)。
func asRedisMigrationStringMap(value interface{}) (map[string]string, error) {
switch typed := value.(type) {
case nil:
return map[string]string{}, nil
case map[string]string:
result := make(map[string]string, len(typed))
for k, v := range typed {
result[k] = v
}
return result, nil
case map[string]interface{}:
result := make(map[string]string, len(typed))
for k, v := range typed {
result[k] = asRedisMigrationString(v)
}
return result, nil
default:
return nil, fmt.Errorf("期望对象,实际=%T", value)
}
}
// asRedisMigrationStringSlice 将数组类型的值转换为 []string(用于 list/set)。
func asRedisMigrationStringSlice(value interface{}) ([]string, error) {
switch typed := value.(type) {
case nil:
return []string{}, nil
case []string:
result := append([]string(nil), typed...)
return result, nil
case []interface{}:
result := make([]string, 0, len(typed))
for _, item := range typed {
result = append(result, asRedisMigrationString(item))
}
return result, nil
default:
return nil, fmt.Errorf("期望数组,实际=%T", value)
}
}
// asRedisMigrationZSetMembers 解析 zset 成员数组,每个成员需包含 member 与 score 字段。
func asRedisMigrationZSetMembers(value interface{}) ([]redispkg.ZSetMember, error) {
switch typed := value.(type) {
case nil:
return []redispkg.ZSetMember{}, nil
case []redispkg.ZSetMember:
result := append([]redispkg.ZSetMember(nil), typed...)
return result, nil
case []interface{}:
result := make([]redispkg.ZSetMember, 0, len(typed))
for _, item := range typed {
mapped, ok := item.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("zset 成员格式无效: %T", item)
}
score, err := asRedisMigrationFloat64(mapped["score"])
if err != nil {
return nil, err
}
result = append(result, redispkg.ZSetMember{Member: asRedisMigrationString(mapped["member"]), Score: score})
}
return result, nil
default:
return nil, fmt.Errorf("期望 zset 数组,实际=%T", value)
}
}
// asRedisMigrationStreamEntries 解析 stream 条目数组,每个条目需包含 id 与 fields 字段。
func asRedisMigrationStreamEntries(value interface{}) ([]redispkg.StreamEntry, error) {
switch typed := value.(type) {
case nil:
return []redispkg.StreamEntry{}, nil
case []redispkg.StreamEntry:
result := append([]redispkg.StreamEntry(nil), typed...)
return result, nil
case []interface{}:
result := make([]redispkg.StreamEntry, 0, len(typed))
for _, item := range typed {
mapped, ok := item.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("stream 条目格式无效: %T", item)
}
fields, err := asRedisMigrationStringMap(mapped["fields"])
if err != nil {
return nil, err
}
result = append(result, redispkg.StreamEntry{ID: asRedisMigrationString(mapped["id"]), Fields: fields})
}
return result, nil
default:
return nil, fmt.Errorf("期望 stream 数组,实际=%T", value)
}
}
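// 示例(假设的文档内容,仅用于说明第一版固定格式):MongoDB 集合中的一条文档
//   {"key": "user:1", "type": "hash", "ttl": 3600, "value": {"name": "alice"}}
// 将由 parseMongoRedisDocument 解析;其中 value 经 asRedisMigrationStringMap
// 转为 map[string]string{"name": "alice"} 后,按 hash 类型写入目标 Redis 并设置 TTL。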
// runMongoToRedisSync 执行 MongoDB -> Redis 键空间迁移:逐集合构建 diff,并按动作写入目标 Redis。
func (s *SyncEngine) runMongoToRedisSync(config SyncConfig, result SyncResult) SyncResult {
collections := dedupeStrings(config.Tables)
sourceDB, err := newSyncDatabase(config.SourceConfig.Type)
if err != nil {
return s.fail(config.JobID, len(collections), result, "初始化源数据库驱动失败: "+err.Error())
}
if err := sourceDB.Connect(config.SourceConfig); err != nil {
return s.fail(config.JobID, len(collections), result, "源 MongoDB 连接失败: "+err.Error())
}
defer sourceDB.Close()
if len(collections) == 0 {
collections, err = listMongoRedisCollections(sourceDB, config)
if err != nil {
return s.fail(config.JobID, 0, result, "获取 MongoDB 集合列表失败: "+err.Error())
}
}
if len(collections) == 0 {
result.Message = "未发现可迁移的 MongoDB 集合"
s.progress(config.JobID, 0, 0, "", "同步完成")
return result
}
effectiveMode := normalizeSyncMode(config.Mode)
totalCollections := len(collections)
s.progress(config.JobID, 0, totalCollections, "", "开始 MongoDB 键空间迁移")
s.appendLog(config.JobID, &result, "info", fmt.Sprintf("MongoDB -> Redis 键空间迁移;模式:%s,目标:%s", effectiveMode, deriveRedisTargetLabel(config)))
s.appendLog(config.JobID, &result, "warn", "MongoDB -> Redis 第一版仅支持固定文档格式:key/type/ttl/value")
if effectiveMode == "full_overwrite" {
s.appendLog(config.JobID, &result, "warn", "MongoDB -> Redis 第一版暂不执行 Redis DB 级 full_overwrite 删除,已降级为 insert_update")
effectiveMode = "insert_update"
}
targetClient := newRedisSourceClient()
targetConfig := withResolvedRedisDB(config.TargetConfig)
if err := targetClient.Connect(targetConfig); err != nil {
return s.fail(config.JobID, totalCollections, result, "目标 Redis 连接失败: "+err.Error())
}
defer targetClient.Close()
processedKeys := 0
for idx, collection := range collections {
s.appendLog(config.JobID, &result, "info", fmt.Sprintf("正在同步集合: %s", collection))
s.progress(config.JobID, idx, totalCollections, collection, fmt.Sprintf("迁移集合(%d/%d)", idx+1, totalCollections))
diffs, err := buildMongoToRedisDiffs(sourceDB, targetClient, collection, effectiveMode)
if err != nil {
return s.fail(config.JobID, totalCollections, result, fmt.Sprintf("分析集合 %s 失败: %v", collection, err))
}
for _, diff := range diffs {
processedKeys++
if diff.Action == "same" {
// 已与目标一致(或 insert_only 模式下跳过的已存在 Key),无需写入
continue
}
s.appendLog(config.JobID, &result, "info", fmt.Sprintf("正在迁移 Key: %s", diff.Document.Key))
if err := applyMongoRedisDiff(targetClient, diff); err != nil {
return s.fail(config.JobID, totalCollections, result, fmt.Sprintf("写入 Redis Key %s 失败: %v", diff.Document.Key, err))
}
switch diff.Action {
case "insert":
result.RowsInserted++
case "update":
result.RowsUpdated++
}
}
result.TablesSynced++
s.progress(config.JobID, idx+1, totalCollections, collection, "集合处理完成")
}
if processedKeys == 0 {
result.Message = "MongoDB 集合中未发现可迁移的 Redis 键文档"
return result
}
result.Message = fmt.Sprintf("MongoDB 键空间迁移完成,共处理 %d 个集合、%d 个 Key", result.TablesSynced, processedKeys)
return result
}
// analyzeMongoToRedis 只读分析各集合的迁移差异,返回 insert/update/same 统计与告警。
func (s *SyncEngine) analyzeMongoToRedis(config SyncConfig) SyncAnalyzeResult {
result := SyncAnalyzeResult{Success: true, Tables: []TableDiffSummary{}}
sourceDB, err := newSyncDatabase(config.SourceConfig.Type)
if err != nil {
return SyncAnalyzeResult{Success: false, Message: "初始化源数据库驱动失败: " + err.Error()}
}
if err := sourceDB.Connect(config.SourceConfig); err != nil {
return SyncAnalyzeResult{Success: false, Message: "源 MongoDB 连接失败: " + err.Error()}
}
defer sourceDB.Close()
collections, err := listMongoRedisCollections(sourceDB, config)
if err != nil {
return SyncAnalyzeResult{Success: false, Message: "获取 MongoDB 集合列表失败: " + err.Error()}
}
effectiveMode := normalizeSyncMode(config.Mode)
modeWarning := ""
if effectiveMode == "full_overwrite" {
modeWarning = "MongoDB -> Redis 第一版会将 full_overwrite 降级为 insert_update,避免误删 DB 内其他 Key"
effectiveMode = "insert_update"
}
targetClient := newRedisSourceClient()
targetConfig := withResolvedRedisDB(config.TargetConfig)
if err := targetClient.Connect(targetConfig); err != nil {
return SyncAnalyzeResult{Success: false, Message: "目标 Redis 连接失败: " + err.Error()}
}
defer targetClient.Close()
for _, collection := range collections {
summary := TableDiffSummary{
Table: collection,
PKColumn: "key",
CanSync: true,
TargetTableExists: true,
PlannedAction: fmt.Sprintf("迁移到 %s", deriveRedisTargetLabel(config)),
Warnings: []string{
"MongoDB 集合中的文档会按 keyspace 语义写入 Redis",
"当前仅支持固定文档格式:key/type/ttl/value",
},
}
if modeWarning != "" {
summary.Warnings = append(summary.Warnings, modeWarning)
}
diffs, err := buildMongoToRedisDiffs(sourceDB, targetClient, collection, effectiveMode)
if err != nil {
summary.CanSync = false
summary.Message = err.Error()
result.Tables = append(result.Tables, summary)
continue
}
for _, diff := range diffs {
switch diff.Action {
case "insert":
summary.Inserts++
case "update":
summary.Updates++
default:
summary.Same++
}
}
if summary.Inserts == 0 && summary.Updates == 0 {
if summary.Same == 0 {
summary.Message = "集合中未发现可迁移文档"
} else {
summary.Message = "目标 Redis 中对应 Key 已是最新状态"
}
} else {
summary.Message = fmt.Sprintf("执行时将写入 %d 个新 Key、更新 %d 个已有 Key", summary.Inserts, summary.Updates)
}
result.Tables = append(result.Tables, summary)
}
result.Message = fmt.Sprintf("已完成 %d 个 MongoDB 集合的 Redis 迁移分析", len(result.Tables))
return result
}
// previewMongoToRedis 为单个集合生成差异预览,最多返回 limit 条 insert/update 明细。
func (s *SyncEngine) previewMongoToRedis(config SyncConfig, collection string, limit int) (TableDiffPreview, error) {
sourceDB, err := newSyncDatabase(config.SourceConfig.Type)
if err != nil {
return TableDiffPreview{}, fmt.Errorf("初始化源数据库驱动失败: %w", err)
}
if err := sourceDB.Connect(config.SourceConfig); err != nil {
return TableDiffPreview{}, fmt.Errorf("源 MongoDB 连接失败: %w", err)
}
defer sourceDB.Close()
targetClient := newRedisSourceClient()
targetConfig := withResolvedRedisDB(config.TargetConfig)
if err := targetClient.Connect(targetConfig); err != nil {
return TableDiffPreview{}, fmt.Errorf("目标 Redis 连接失败: %w", err)
}
defer targetClient.Close()
effectiveMode := normalizeSyncMode(config.Mode)
// 与执行/分析路径保持一致:第一版将 full_overwrite 降级为 insert_update
if effectiveMode == "full_overwrite" {
effectiveMode = "insert_update"
}
diffs, err := buildMongoToRedisDiffs(sourceDB, targetClient, collection, effectiveMode)
if err != nil {
return TableDiffPreview{}, err
}
preview := TableDiffPreview{Table: collection, PKColumn: "key", Inserts: []PreviewRow{}, Updates: []PreviewUpdateRow{}, Deletes: []PreviewRow{}}
for _, diff := range diffs {
switch diff.Action {
case "insert":
preview.TotalInserts++
if len(preview.Inserts) < limit {
preview.Inserts = append(preview.Inserts, PreviewRow{PK: diff.Document.Key, Row: diff.Document.SourceRow})
}
case "update":
preview.TotalUpdates++
if len(preview.Updates) < limit {
preview.Updates = append(preview.Updates, PreviewUpdateRow{PK: diff.Document.Key, ChangedColumns: diff.ChangedColumns, Source: diff.Document.SourceRow, Target: buildRedisPreviewRow(diff.Document.Key, diff.Current)})
}
}
}
return preview, nil
}