Compare commits

...

1744 Commits

Author SHA1 Message Date
jxxghp
f369967c91 Update version.py 2026-01-29 22:32:03 +08:00
jxxghp
cd982c5526 Merge pull request #5439 from DDSRem-Dev/dev 2026-01-29 22:30:28 +08:00
jxxghp
16e03c9d37 Merge pull request #5438 from cddjr/fix_scrape_follow_tmdb 2026-01-29 22:29:06 +08:00
DDSRem
d38b1f5364 feat: u115 support oauth 2026-01-29 22:14:10 +08:00
景大侠
f57ba4d05e Fix possible unintended following of TMDB changes during organizing 2026-01-29 15:04:42 +08:00
jxxghp
172eeaafcf Update version.py 2026-01-27 18:07:55 +08:00
jxxghp
3115ed28b2 fix: after deleting source files via history, no longer show them in the subscription's file list 2026-01-26 21:47:26 +08:00
jxxghp
d8dc53805c feat(transfer): add history record ID to organize events 2026-01-26 21:29:05 +08:00
jxxghp
7218d10e1b feat(transfer): split subtitle and audio organize events 2026-01-26 19:33:50 +08:00
jxxghp
89bf85f501 Merge pull request #5425 from xiaoQQya/develop 2026-01-26 18:41:42 +08:00
jxxghp
8334a468d0 feat(category): Add API endpoints for retrieving and saving category configuration 2026-01-26 12:53:26 +08:00
jxxghp
3da80ed077 Merge pull request #5423 from jxxghp/copilot/update-category-helper-integration 2026-01-26 12:35:05 +08:00
copilot-swe-agent[bot]
2883ccbe87 Move category methods to ChainBase and use consistent naming
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-26 04:32:11 +00:00
copilot-swe-agent[bot]
5d3443fee4 Use ruamel.yaml consistently in CategoryHelper
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-26 04:10:15 +00:00
copilot-swe-agent[bot]
27756a53db Implement proper architecture: module->chain->API with single CategoryHelper
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-26 04:07:56 +00:00
copilot-swe-agent[bot]
71cde6661d Improve comments for clarity
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-25 10:08:13 +00:00
copilot-swe-agent[bot]
a857337b31 Fix architecture - restore helper layer and use ModuleManager for reload trigger
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-25 10:06:01 +00:00
copilot-swe-agent[bot]
4ee21ffae4 Address code review feedback - use ruamel.yaml consistently and fix typo
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-25 09:58:28 +00:00
copilot-swe-agent[bot]
d8399f7e85 Consolidate CategoryHelper classes and add reload trigger
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-25 09:56:11 +00:00
copilot-swe-agent[bot]
574ac8d32f Initial plan 2026-01-25 09:52:31 +00:00
jxxghp
a2611bfa7d feat: Add search_imdbid to subscriptions and improve error message propagation and handling for existing subscriptions. 2026-01-25 14:57:46 +08:00
xiaoQQya
853badb76f fix: update the unread-message API for the Rousi Pro site 2026-01-25 14:36:22 +08:00
jxxghp
5d69e1d2a5 Merge pull request #5419 from wikrin/subscribe-source-query-enhancement 2026-01-25 14:04:42 +08:00
jxxghp
6494f28bdb Fix: Remove isolated ToolMessage instances after message trimming to prevent OpenAI errors. 2026-01-25 13:42:29 +08:00
Attente
f55916bda2 feat(transfer): support querying subscriptions by criteria to obtain custom recognition words for file transfer 2026-01-25 11:34:03 +08:00
jxxghp
04691ee197 Merge remote-tracking branch 'origin/v2' into v2 2026-01-25 09:39:59 +08:00
jxxghp
2ac0e564e1 feat(category): add secondary category maintenance API 2026-01-25 09:39:48 +08:00
jxxghp
6072a29a20 Merge pull request #5418 from wikrin/CNSUB-filter-rules-update 2026-01-25 08:17:20 +08:00
Attente
8658942385 feat(filter): add config watching and improve the Chinese-subtitle filter rules 2026-01-25 01:06:50 +08:00
jxxghp
cc4859950c Merge remote-tracking branch 'origin/v2' into v2 2026-01-24 19:24:22 +08:00
jxxghp
23b81ad6f1 feat(config): improve the default plugin repository 2026-01-24 19:24:15 +08:00
jxxghp
e3b9dca5c0 Merge pull request #5417 from cddjr/fix_u115_create_folder
fix(u115): directory creation falsely reported as failed
2026-01-24 19:14:40 +08:00
景大侠
a2359a1ad2 fix(u115): directory creation falsely reported as failed
- Ignore error code 20004 when parsing the response
- Creating a directory at the root raised ValueError
2026-01-24 17:48:53 +08:00
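The whitelist described above can be sketched as follows. This is an illustration only: the commit does not document what code 20004 denotes, nor the response field names, so `folder_created_ok`, `state`, and `code` are all assumptions.

```python
# Hypothetical response checker for the u115 create-folder fix: a response
# that reports failure with code 20004 is treated as success instead of
# being raised as an error. Field names ("state", "code") are assumptions.
IGNORED_CODES = {20004}

def folder_created_ok(resp: dict) -> bool:
    if resp.get("state"):
        return True
    return resp.get("code") in IGNORED_CODES
```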
jxxghp
cb875b1b34 Update version.py 2026-01-24 12:04:54 +08:00
jxxghp
b92a85b4bc Merge pull request #5415 from cddjr/fix_bluray_scrape 2026-01-24 11:43:44 +08:00
景大侠
8c7dd6bab2 Fix Blu-ray disc directories not being scraped 2026-01-24 11:42:00 +08:00
景大侠
aad7df64d7 Simplify the Blu-ray disc size calculation code 2026-01-24 11:29:30 +08:00
jxxghp
8474342007 feat(agent): auto-summarize when the context grows too long 2026-01-24 11:24:59 +08:00
jxxghp
61ccb4be65 feat(agent): add a command-line tool 2026-01-24 11:10:15 +08:00
jxxghp
1c6f69707c fix: print a traceback on module exceptions 2026-01-24 11:00:24 +08:00
jxxghp
e08e8c482a Merge pull request #5414 from jxxghp/copilot/fix-file-organization-error 2026-01-24 10:49:19 +08:00
copilot-swe-agent[bot]
548c1d2cab Add null check for schema access in IndexerModule
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 02:26:55 +00:00
copilot-swe-agent[bot]
5a071bf3d1 Add null check for schema.value access in FileManagerModule
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 02:25:55 +00:00
copilot-swe-agent[bot]
1bffcbd947 Initial plan 2026-01-24 02:22:25 +00:00
jxxghp
274a36a83a Update config.py 2026-01-24 10:04:37 +08:00
jxxghp
ec40f36114 fix(agent): fix agent tool calls; improve the media library query tool 2026-01-24 09:46:19 +08:00
jxxghp
af19f274a7 Merge pull request #5413 from jxxghp/copilot/fix-runnable-lambda-error 2026-01-24 08:38:24 +08:00
copilot-swe-agent[bot]
2316004194 Fix 'RunnableLambda' object is not callable error by wrapping validated_trimmer
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:35:59 +00:00
copilot-swe-agent[bot]
98762198ef Initial plan 2026-01-24 00:33:35 +00:00
jxxghp
1469de22a4 Merge pull request #5412 from jxxghp/copilot/translate-comments-to-chinese 2026-01-24 08:27:11 +08:00
copilot-swe-agent[bot]
1e687f960a Translate English comments to Chinese in agent/__init__.py
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:25:21 +00:00
copilot-swe-agent[bot]
7f01b835fd Initial plan 2026-01-24 00:22:19 +00:00
jxxghp
e46b6c5c01 Merge pull request #5411 from jxxghp/copilot/fix-tool-call-exception-handling 2026-01-24 08:20:51 +08:00
copilot-swe-agent[bot]
74226ad8df Improve error message to include exception type for better debugging
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:18:43 +00:00
copilot-swe-agent[bot]
f8ae7be539 Fix: Ensure tool exceptions are stored in memory to maintain message chain integrity
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:18:06 +00:00
copilot-swe-agent[bot]
37b16e380d Initial plan 2026-01-24 00:14:13 +00:00
jxxghp
9ea3e9f652 Merge pull request #5409 from jxxghp/copilot/fix-agent-execution-error 2026-01-24 08:12:39 +08:00
copilot-swe-agent[bot]
54422b5181 Final refinements: fix falsy value handling and add warning for extra ToolMessages
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:10:00 +00:00
copilot-swe-agent[bot]
712995dcf3 Address code review feedback: fix ToolCall handling and add orphaned message filtering
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:08:25 +00:00
jxxghp
c2767b0fd6 Merge pull request #5410 from jxxghp/copilot/fix-media-exists-error 2026-01-24 08:08:03 +08:00
copilot-swe-agent[bot]
179cc61f65 Fix tool call integrity validation to skip orphaned ToolMessages
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:05:21 +00:00
copilot-swe-agent[bot]
f3b910d55a Fix AttributeError when mediainfo.type is None
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:04:02 +00:00
copilot-swe-agent[bot]
f4157b52ea Fix agent tool_calls integrity validation
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-24 00:02:47 +00:00
copilot-swe-agent[bot]
79710310ce Initial plan 2026-01-24 00:00:31 +00:00
copilot-swe-agent[bot]
3412498438 Initial plan 2026-01-23 23:57:27 +00:00
jxxghp
b896b07a08 fix search_web tool 2026-01-24 07:39:07 +08:00
jxxghp
379bff0622 Merge pull request #5407 from cddjr/fix_db 2026-01-24 06:45:54 +08:00
jxxghp
474f47aa9f Merge pull request #5406 from cddjr/fix_transfer 2026-01-24 06:45:10 +08:00
jxxghp
f1e26a4133 Merge pull request #5405 from cddjr/fix_modify_time_comparison 2026-01-24 06:44:05 +08:00
jxxghp
e37f881207 Merge pull request #5404 from jxxghp/copilot/reimplement-network-search-tool 2026-01-24 06:39:56 +08:00
大虾
306c0b707b Update database/versions/41ef1dd7467c_2_2_2.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-24 02:53:14 +08:00
景大侠
08c448ee30 Fix possible startup hang after migrating to PostgreSQL 2026-01-24 02:49:54 +08:00
景大侠
1532014067 Fix duplicate organizing caused by multiple downloaders returning the same torrent 2026-01-24 01:41:48 +08:00
景大侠
fa9f604af9 Fix library-import notifications not showing episode numbers
Caused by the job being cleaned up too early
2026-01-24 01:17:23 +08:00
景大侠
3b3d0d6539 Fix the comparison logic for null timestamps in the file list API 2026-01-23 23:52:43 +08:00
copilot-swe-agent[bot]
9641d33040 Fix generator handling and update error message to reference requirements.in
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-23 15:23:52 +00:00
copilot-swe-agent[bot]
eca339d107 Address code review comments: improve code organization and use modern asyncio
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-23 15:22:45 +00:00
copilot-swe-agent[bot]
ca18705d88 Reimplemented SearchWebTool using duckduckgo-search library
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-23 15:20:06 +00:00
copilot-swe-agent[bot]
8f17b52466 Initial plan 2026-01-23 15:16:09 +00:00
jxxghp
8cf84e722b fix agent error message 2026-01-23 22:50:59 +08:00
jxxghp
7c4d736b54 feat: Agent context trimming 2026-01-23 22:47:18 +08:00
jxxghp
1b3ae6ab25 fix: downloader organize tag setting 2026-01-23 18:10:59 +08:00
jxxghp
a4ad08136e Update version.py 2026-01-23 14:33:41 +08:00
jxxghp
df5e7997c5 Merge pull request #5401 from jxxghp/copilot/check-jobview-logic 2026-01-23 07:21:46 +08:00
copilot-swe-agent[bot]
b2cb3768c1 Fix remove_job to use __get_id for consistent job removal
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-22 14:38:33 +00:00
copilot-swe-agent[bot]
fa169c5cd3 Initial plan 2026-01-22 14:34:18 +00:00
jxxghp
bbb3975b67 Update transfer.py 2026-01-22 22:31:52 +08:00
jxxghp
4502a9c4fa fix: improve the deletion logic in move mode 2026-01-22 22:15:40 +08:00
jxxghp
86905a2670 Merge pull request #5399 from cddjr/fix_downloader_monitor 2026-01-22 21:41:25 +08:00
景大侠
b1e60a4867 Fix downloader monitoring 2026-01-22 21:34:50 +08:00
jxxghp
1efe3324fb fix: improve the timing of setting torrent status tags 2026-01-22 08:24:23 +08:00
jxxghp
55c1e37d39 Update query_subscribes.py 2026-01-22 08:05:41 +08:00
jxxghp
7fa700317c Update update_subscribe.py 2026-01-22 08:03:48 +08:00
jxxghp
bbe831a57c Improve the task handling logic in transfer.py; strengthen error message feedback 2026-01-21 23:55:20 +08:00
jxxghp
90c86c056c fix all_tasks 2026-01-21 23:30:39 +08:00
jxxghp
36f22a28df fix: completion status calculation 2026-01-21 23:23:37 +08:00
jxxghp
ac03c51e2c Update transfer.py 2026-01-21 23:06:29 +08:00
jxxghp
bd9e92f705 Update transfer.py 2026-01-21 22:59:30 +08:00
jxxghp
281eff5eb2 Update version.py 2026-01-21 22:54:31 +08:00
jxxghp
abbd2253ad fix deadlock 2026-01-21 22:46:04 +08:00
jxxghp
46466624ae fix: improve the downloader organize control logic 2026-01-21 22:21:17 +08:00
jxxghp
0ba8d51b2a fix: improve downloader organizing 2026-01-21 21:31:55 +08:00
jxxghp
a1408ee18f feat: watch for TRANSFER_THREADS changes 2026-01-21 20:46:34 +08:00
jxxghp
58030bbcff fix #5392 2026-01-21 20:12:05 +08:00
jxxghp
e1b3e6ef01 fix: only fire the event once the media file finishes organizing, to stay consistent with history 2026-01-21 20:07:18 +08:00
jxxghp
298a6ba8ab Update update_subscribe.py 2026-01-21 19:36:12 +08:00
jxxghp
e5bf47629f Update config.py 2026-01-21 19:13:36 +08:00
jxxghp
ea29ee9f66 Merge pull request #5390 from xiaoQQya/develop 2026-01-21 18:39:06 +08:00
jxxghp
868c2254de v2.9.5 2026-01-21 17:59:52 +08:00
jxxghp
567522c87a fix: unify file type support 2026-01-21 17:59:18 +08:00
jxxghp
25fd47f57b Merge pull request #5389 from hyuan280/v2 2026-01-21 17:22:27 +08:00
hyuan280
f89d6342d1 fix: Cookie decoding of binary data caused UnicodeEncodeError when sending requests 2026-01-21 16:36:28 +08:00
jxxghp
b02affdea3 Merge pull request #5388 from cddjr/fix_tmdb_img_url 2026-01-21 13:24:39 +08:00
景大侠
6e5ade943b Fix subscriptions being unable to view the file list
Add a null check for the TMDB image path parameter
2026-01-21 12:47:39 +08:00
jxxghp
a6ed0c0d00 fix: improve transhandler thread safety 2026-01-21 08:42:57 +08:00
jxxghp
68402aadd7 fix: remove the global lock on file operations 2026-01-21 08:31:51 +08:00
jxxghp
85cacd447b feat: introduce multithreaded processing for the file organize service and improve progress management. 2026-01-21 08:16:02 +08:00
xiaoQQya
11262b321a fix(rousi pro): fix unread messages on the Rousi Pro site not being pushed as notifications 2026-01-20 22:12:31 +08:00
jxxghp
bf290f063d Merge pull request #5386 from PKC278/v2 2026-01-20 22:09:02 +08:00
PKC278
7ac0fbaf76 fix(otp): correct the OTP disable logic 2026-01-20 19:53:59 +08:00
PKC278
7489c76722 feat(passkey): allow registering passkeys when OTP is not enabled 2026-01-20 19:35:36 +08:00
jxxghp
bcdf1b6efe Update transhandler.py 2026-01-20 15:29:28 +08:00
jxxghp
8a9dbe212c Merge pull request #5385 from cddjr/feature_optimize_transfer 2026-01-20 15:25:38 +08:00
景大侠
16bd71a6cb Improve organize code efficiency; reduce extra recursion 2026-01-20 14:38:41 +08:00
jxxghp
71caad0655 feat: improve Blu-ray directory detection to reduce directory traversal 2026-01-20 13:38:52 +08:00
jxxghp
2c62ffe34a feat: improve how subtitle and audio files are organized 2026-01-20 13:24:35 +08:00
jxxghp
3450a89880 Merge pull request #5383 from jxxghp/copilot/merge-agent-and-execution-messages 2026-01-20 00:03:23 +08:00
copilot-swe-agent[bot]
a081a69bbe Simplify message merging logic using list join
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-19 16:00:36 +00:00
copilot-swe-agent[bot]
271d1d23d5 Merge agent and tool execution messages into a single message
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-19 15:59:21 +00:00
copilot-swe-agent[bot]
605aba1a3c Initial plan 2026-01-19 15:55:13 +00:00
jxxghp
be3c2b4c7c Merge pull request #5382 from jxxghp/copilot/fix-tool-call-id-error 2026-01-19 21:36:09 +08:00
copilot-swe-agent[bot]
08eb32d7bd Fix isinstance syntax error for int/float type checking
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-19 13:33:01 +00:00
copilot-swe-agent[bot]
2b9cda15e4 Fix tool_call_id error by adding metadata to tool_result and using it in ToolMessage
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-01-19 13:31:43 +00:00
copilot-swe-agent[bot]
f6055b290a Initial plan 2026-01-19 13:28:07 +00:00
jxxghp
ec665e05e4 Merge pull request #5379 from Pollo3470/v2 2026-01-19 21:17:53 +08:00
jxxghp
2b6d7205ec Merge pull request #5378 from cddjr/fix_tv_dir_scrape 2026-01-19 17:47:11 +08:00
Pollo
41381a920c fix: custom recognition words from subscriptions not taking effect during organizing
Problem: custom recognition words added to a subscription (especially episode offsets) take effect
normally at download time, but not when organizing after the download completes.

Root cause: the recognition words were not saved in the download history, and MetaInfoPath did not
receive a custom_words parameter during organizing.

Fix:
- Add a custom_words field to the DownloadHistory model
- Fetch the recognition words from meta.apply_words at download time and save them to the download history
- Add custom_words parameter support to the MetaInfoPath function
- Fetch custom_words from the download history during organizing and pass it to MetaInfoPath
- Add an Alembic migration script (2.2.3)
- Add related unit tests
2026-01-19 15:46:00 +08:00
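The round trip this commit describes — persist the applied words at download time, recover them at organize time — can be sketched as below. `DownloadHistory` here is an illustrative stand-in for the project's ORM model, and `save_download`/`words_for_organize` are hypothetical helper names; only the `custom_words` field and the `meta.apply_words` source come from the commit message.

```python
import json

class DownloadHistory:
    """Illustrative stand-in for the ORM model gaining a custom_words column."""
    def __init__(self, torrent_hash: str, custom_words: str = None):
        self.torrent_hash = torrent_hash
        self.custom_words = custom_words  # JSON-encoded list of applied words

def save_download(torrent_hash: str, meta_apply_words: list) -> DownloadHistory:
    # Download time: persist the words that were applied during recognition.
    return DownloadHistory(torrent_hash, json.dumps(meta_apply_words))

def words_for_organize(history: DownloadHistory) -> list:
    # Organize time: recover the saved words so they can be handed to
    # MetaInfoPath-style parsing via its new custom_words parameter.
    return json.loads(history.custom_words) if history.custom_words else []
```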
大虾
f1b3fc2254 Update comments 2026-01-19 10:11:54 +08:00
景大侠
a677ed307d Fix episode nfo files being scraped with the wrong tmdb id
The episode id should be used, not the series id
2026-01-18 16:23:05 +08:00
景大侠
0ab23ee972 Fix scraping a TV series directory misjudging the series root directory as a season directory
Caused by a season number specified via auxiliary recognition words
2026-01-18 15:17:22 +08:00
景大侠
43f56d39be Fix manual scraping of TV series directories possibly missing the Specials season 2026-01-18 01:51:35 +08:00
jxxghp
a39caee5f5 Merge pull request #5371 from cddjr/remove_unused_finished_files 2026-01-17 07:50:54 +08:00
景大侠
2edfdf47c8 Remove the unused file list from organize progress data 2026-01-17 00:20:09 +08:00
jxxghp
3819461db5 Update version.py 2026-01-16 19:27:57 +08:00
jxxghp
85654dd7dd Merge pull request #5367 from PKC278/v2 2026-01-15 22:58:10 +08:00
PKC278
619a70416b fix: smart recommendation did not check the smart assistant master switch 2026-01-15 22:57:49 +08:00
jxxghp
16d996fe70 Merge pull request #5366 from xiaoQQya/develop 2026-01-15 21:11:10 +08:00
xiaoQQya
1baeb6da19 feat(rousi pro): support parsing unread messages on the Rousi Pro site 2026-01-15 21:08:08 +08:00
jxxghp
1641d432dd feat: add parameter type normalization to the tool manager, and dynamically generate the format requirements in prompts based on channel capabilities 2026-01-15 20:55:35 +08:00
jxxghp
1bf9862e47 feat: update the agent prompts with detailed rules for communication, status updates, summaries, workflows, tool use, and media management. 2026-01-15 19:50:37 +08:00
jxxghp
602a394043 Merge pull request #5362 from cddjr/feat_extended_api_token_support 2026-01-15 13:33:56 +08:00
景大侠
22a2415ca5 Cache API authentication results 2026-01-15 12:38:47 +08:00
景大侠
feb034352d Allow existing JWT-token-authenticated endpoints to also accept API token authentication 2026-01-15 12:30:03 +08:00
jxxghp
a7c8942c78 Merge pull request #5358 from PKC278/v2 2026-01-15 07:04:21 +08:00
PKC278
95f2ac3811 feat(search): add AI recommendation and improve related logic 2026-01-15 02:49:29 +08:00
jxxghp
91354295f2 Merge pull request #5356 from cddjr/fix_manual_transfer 2026-01-14 22:48:24 +08:00
景大侠
c9c4ab5911 Fix manual re-organizing not updating the source file size
- Records migrated from V1 showed a file size of 0 after re-organizing
- Some source files whose size had changed still showed the old size after re-organizing
2026-01-14 22:42:36 +08:00
jxxghp
a26c5e40dd Merge pull request #5354 from cddjr/fix_media_root_path 2026-01-14 19:04:53 +08:00
景大侠
80f5c7bc44 Fix the multi-level title rename format not being applied correctly when organizing files or directories
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-14 19:02:17 +08:00
jxxghp
4833b39c52 Merge pull request #5352 from cddjr/fix_concurrency_systemconfig 2026-01-13 16:58:21 +08:00
景大侠
f478958943 Fix a potential race condition in SystemConfig
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-13 14:33:53 +08:00
jxxghp
0469ad46d6 Merge pull request #5351 from winter0245/v2 2026-01-13 11:48:28 +08:00
winter0245
5fe5deb9df Merge branch 'jxxghp:v2' into v2 2026-01-13 09:30:16 +08:00
xjy
ce83bc24bd fix: fix two critical issues in site Cookie handling
This commit fixes the two root causes of PT site search failures:

1. **Cookie URL-decoding issue**
   - Problem: Cookie values stored in the database contain URL encoding (e.g. %3D, %2B, %2F),
     but the cookie_parse() function did not decode them
   - Impact: any site using URL-encoded Cookies may fail to log in
   - Fix: add unquote() decoding to cookie_parse() in app/utils/http.py

2. **httpx Cookie jar override issue** (critical)
   - Problem: httpx.AsyncClient's cookie jar automatically stores the Set-Cookie values
     returned by the server and overrides the Cookies we pass in on subsequent requests
   - Symptom: the correct c_secure_uid/c_secure_pass are passed in, but PHPSESSID and other
     wrong Cookies are actually sent
   - Fix: pass the Cookies when creating the AsyncClient, not on each request()

Changed files:
- app/utils/http.py: add URL decoding to cookie_parse() + pass cookies into the AsyncClient
- app/modules/indexer/spider/__init__.py: clean up debug code

Verified:
- pterclub search works again
- 春天 site search works (confirms the fix is general)
2026-01-13 09:29:05 +08:00
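The first fix above can be sketched as a tiny parser. This `cookie_parse()` is a re-implementation for illustration, not the project's actual function in app/utils/http.py; the trailing comment mirrors the second fix's client-construction point.

```python
from urllib.parse import unquote

def cookie_parse(cookie_str: str) -> dict:
    # Illustrative re-implementation of the described fix: split the Cookie
    # header into key/value pairs and URL-decode each value
    # (e.g. %3D -> '=', %2B -> '+', %2F -> '/').
    cookies = {}
    for item in cookie_str.split(";"):
        key, sep, value = item.strip().partition("=")
        if sep:
            cookies[key] = unquote(value)
    return cookies

# Second fix: bind the cookies to the client at construction time so httpx's
# cookie jar cannot override them per request, e.g.
#   client = httpx.AsyncClient(cookies=cookie_parse(raw_cookie))
# rather than passing cookies=... to each request() call.
```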
jxxghp
dce729c8cb Merge pull request #5350 from cddjr/fix_tmdb_cache 2026-01-13 07:04:58 +08:00
jxxghp
a9d17cd96f Merge pull request #5349 from cddjr/fix_bluray 2026-01-13 07:03:16 +08:00
景大侠
294bb3d4a1 Fix directory monitoring failing to trigger Blu-ray disc organizing 2026-01-13 00:09:34 +08:00
景大侠
b31b9261f2 Update app/core/config.py
Accept the AI's suggestion
2026-01-12 23:31:44 +08:00
大虾
2211f8d9e4 Update tests/test_bluray.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-12 23:17:39 +08:00
景大侠
b9b7b00a7f Fix: Blu-ray disc paths constructed by downloader monitoring must end with / 2026-01-12 22:54:19 +08:00
景大侠
843faf6103 Fix organize records failing to show the Blu-ray disc size 2026-01-12 22:14:43 +08:00
景大侠
4af5dad9a8 Fix automatic Blu-ray disc scraping missing the nfo 2026-01-12 21:36:17 +08:00
景大侠
52437c9d18 Cache tmdb metadata per language 2026-01-12 09:41:34 +08:00
景大侠
c6cb4c8479 Unify construction of tmdb image URLs 2026-01-12 09:41:25 +08:00
jxxghp
c3714ec251 Merge pull request #5346 from HankunYu/v2 2026-01-12 09:04:35 +08:00
HankunYu
dbe2f94af1 Modify embed parsing to support emoji characters 2026-01-12 00:46:26 +00:00
jxxghp
07fd5f8a9e Merge pull request #5344 from PKC278/v2 2026-01-11 20:30:28 +08:00
PKC278
9e64b4cd7f refactor: improve login security and refactor the PassKey logic
- Unify login failure responses to prevent information leakage
- Extract common PassKeyHelper functions to simplify Base64 and credential handling
- Refactor the mfa.py endpoint code for readability and maintainability
- Remove redundant origin validation logic
2026-01-11 19:20:53 +08:00
jxxghp
f08a7b9eb3 Merge pull request #5343 from Lyzd1/v2 2026-01-10 19:10:04 +08:00
The falling leaves know
a6fa764e2a Change media_type to required field in QueryMediaDetailInput 2026-01-10 18:49:02 +08:00
jxxghp
01676668f1 Merge pull request #5342 from Lyzd1/v2 2026-01-10 17:58:13 +08:00
The falling leaves know
8e5e4f460d Enhance media detail query with media type handling 2026-01-10 17:44:11 +08:00
jxxghp
f907b8a84d v2.9.3
- Improve the passkey login experience
- Improve the agent
- Support the new-architecture Rousi Pro site
2026-01-10 10:32:56 +08:00
jxxghp
a3a4285f90 Merge pull request #5339 from PKC278/v2 2026-01-10 07:42:38 +08:00
PKC278
0979163b79 fix(rousi): correct the category parameter to a single value to meet API requirements 2026-01-10 02:12:33 +08:00
PKC278
248a25eaee fix(rousi): remove the singleton pattern 2026-01-10 01:39:40 +08:00
PKC278
f95b1fa68a fix(rousi): correct the category mapping 2026-01-10 01:31:12 +08:00
PKC278
d2b5d69051 feat(rousi): refactor response handling for readability and maintainability 2026-01-10 00:54:43 +08:00
PKC278
3ca419b735 fix(rousi): trim and correct the category mapping 2026-01-10 00:27:45 +08:00
PKC278
50e275a2f9 feat(config): raise the maximum number of search names to 3 to ensure en_title is included 2026-01-09 23:53:09 +08:00
PKC278
aeccf78957 feat(rousi): add category parameter support to improve search 2026-01-09 23:05:02 +08:00
PKC278
cb3cef70e5 feat: add RousiPro site support 2026-01-09 22:08:24 +08:00
jxxghp
b9bd303bf8 fix: improve Agent parameter validation to avoid aborting inference 2026-01-09 20:26:49 +08:00
jxxghp
57d4786a7f Merge pull request #5332 from PKC278/v2 2026-01-08 07:47:01 +08:00
PKC278
df031455b2 feat(agent): add a media detail query tool 2026-01-07 23:31:08 +08:00
jxxghp
30059eff4f Merge pull request #5319 from cddjr/fix_5314 2026-01-04 18:59:26 +08:00
景大侠
bc289b48c8 Fix: support downloading subtitles through a proxy 2026-01-04 16:07:45 +08:00
jxxghp
067d8b99b8 Update version.py 2026-01-04 13:19:05 +08:00
jxxghp
00a6a9c42d Merge pull request #5317 from cddjr/fix_MetaInfoPath 2026-01-03 20:52:48 +08:00
大虾
070425d446 Update app/core/metainfo.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-03 19:26:03 +08:00
大虾
7405883444 Update app/core/metainfo.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-03 19:25:54 +08:00
景大侠
66959937ed Fix movie files possibly being misidentified as TV series 2026-01-03 18:06:26 +08:00
jxxghp
e431efbcba Merge pull request #5315 from cddjr/fix_movie_scrape_image 2026-01-03 07:19:32 +08:00
景大侠
ba00baa5a0 Fix movie scraping falsely reporting the parent directory's poster as already existing 2026-01-02 23:37:36 +08:00
jxxghp
0fb5d4a164 Merge pull request #5312 from PKC278/v2 2026-01-02 18:23:37 +08:00
PKC278
1ac717b67f fix(message): fix cached data handling to avoid null-value errors 2026-01-02 17:12:33 +08:00
jxxghp
273cbd447e Merge pull request #5309 from Seed680/v2 2026-01-01 23:30:41 +08:00
noone
cee41567a2 feat(chain): add a current-time parameter to message rendering
- Add the current_time parameter to the MessageTemplateHelper.render call
2026-01-01 23:01:25 +08:00
jxxghp
1aae5eb1a6 Merge pull request #5307 from cddjr/fix_mteam_promotions 2026-01-01 13:54:58 +08:00
景大侠
28a4c81aff Recognize site-wide promotion rules on the M-Team site 2026-01-01 13:10:41 +08:00
jxxghp
5e077cd64d Update version.py 2025-12-31 07:49:12 +08:00
jxxghp
e3f957a59b Update __init__.py 2025-12-31 07:20:52 +08:00
jxxghp
55c62a3ab5 Merge pull request #5303 from HankunYu/v2 2025-12-31 07:00:04 +08:00
jxxghp
22e7eef1bd Merge pull request #5302 from cddjr/fix_tmdb_healthcheck 2025-12-31 06:59:03 +08:00
HankunYu
d6524907f3 Fix module reload creating new DC instances; add an embed-parsing whitelist so plugin and similar messages are left unparsed to avoid breaking their original format 2025-12-30 16:51:30 +00:00
景大侠
357db334cd Fix self-hosted TMDB servers failing the health check
Send a UA to avoid being filtered by anti-crawler scripts
2025-12-30 22:13:43 +08:00
jxxghp
f8bed3909b Merge pull request #5299 from cddjr/fix_5297 2025-12-30 15:52:29 +08:00
景大侠
182bbdde91 fix #5297 2025-12-30 15:21:27 +08:00
jxxghp
2c70f990c2 Merge pull request #5294 from cddjr/mteam_subtitle 2025-12-30 06:57:15 +08:00
景大侠
0b01a6aa91 Avoid picking up the subtitle uploader's detail link 2025-12-29 22:52:26 +08:00
景大侠
e557dffbc6 Support subtitle downloads from the 憨憨 site 2025-12-29 22:43:47 +08:00
景大侠
7f33b0b1b8 Support subtitle downloads from the M-Team site 2025-12-29 22:43:07 +08:00
景大侠
41ddf77a5b Add the M-Team subtitle API 2025-12-29 20:01:54 +08:00
jxxghp
8c657ce41d Update version.py 2025-12-28 17:58:39 +08:00
jxxghp
3ff3b9ed4a Merge pull request #5290 from PKC278/v2 2025-12-28 17:58:05 +08:00
PKC278
ef43419ecd Update app/api/endpoints/system.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-28 16:38:30 +08:00
PKC278
2ca375c214 feat(system): add frontend and backend version info 2025-12-28 16:08:14 +08:00
jxxghp
cbd45c1d0f Merge pull request #5289 from HankunYu/v2 2025-12-28 12:50:40 +08:00
HankunYu
2592ea3464 Clean up separators between prefix/suffix and field values; allow & in field names; when the colon falls inside 《》/【】, treat the whole line as the description to avoid mis-splitting on title brackets 2025-12-27 17:00:07 +00:00
HankunYu
73ac97cd96 Update the embed parsing logic; add proxy usage 2025-12-27 13:05:57 +00:00
jxxghp
e014663e97 Update version.py 2025-12-27 14:22:35 +08:00
jxxghp
58592e961f Merge pull request #5283 from PKC278/v2 2025-12-26 23:25:24 +08:00
PKC278
9a99b9ce82 fix(system): update the global response fields using a whitelist approach 2025-12-26 23:02:40 +08:00
jxxghp
8c6dca1751 Merge pull request #5277 from Seed680/v2 2025-12-25 19:26:26 +08:00
noone
cf488d5f5f fix(qbittorrent): fix torrent file reading and duplicate-check issues
- Rename the variable torrent to torrent_from_file to avoid confusion
- Fix the error-check logic when adding a torrent task fails
- Use getattr to safely read the torrent file's name and size attributes
- Fix attribute access when checking for an already-existing torrent task

fix(transmission): fix torrent adding and duplicate-check logic

- Rename the variable torrent to torrent_from_file to avoid confusion
- Fix the return-value variable name after adding a task
- Use getattr to safely read the torrent file's name and size attributes
- Fix attribute access when checking for an already-existing torrent task
- Correct the variable reference used to fetch the torrent hash
2025-12-25 19:09:45 +08:00
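The getattr idiom both lists mention can be sketched like this; `TorrentFromFile` and `describe()` are hypothetical names used only to illustrate the safe-access pattern, not the downloader modules' real types.

```python
class TorrentFromFile:
    # Hypothetical parsed-torrent object that may or may not expose
    # name/size, depending on how the .torrent file was read.
    def __init__(self, name=None):
        if name is not None:
            self.name = name

def describe(torrent_from_file) -> tuple:
    # getattr with a default avoids an AttributeError when the parsed
    # torrent lacks the attribute, which is the failure mode both
    # commit bodies describe fixing.
    name = getattr(torrent_from_file, "name", "unknown")
    size = getattr(torrent_from_file, "size", 0)
    return name, size
```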
jxxghp
515584d34c fix warnings 2025-12-24 22:04:04 +08:00
jxxghp
fb2becc7f2 v2.8.9
- Support the Discord notification channel
- Support passkey login
2025-12-24 19:41:58 +08:00
jxxghp
0f8ceb0fac fix warnings 2025-12-24 18:54:38 +08:00
jxxghp
a70bf18770 Merge pull request #5273 from PKC278/v2 2025-12-23 17:36:30 +08:00
PKC278
2de83c44ab refactor(mcp): streamline session management and update the API docs 2025-12-23 17:06:17 +08:00
PKC278
7b99f09810 fix(mfa): fix a two-factor authentication vulnerability 2025-12-23 14:58:00 +08:00
jxxghp
6b4ba8bfad Merge pull request #5272 from PKC278/v2 2025-12-23 14:39:03 +08:00
PKC278
0c6cfc5020 feat(passkey): add PassKey support and improve the two-factor login logic 2025-12-23 13:53:54 +08:00
jxxghp
abd9733e7f Merge pull request #5269 from HankunYu/v2 2025-12-23 12:51:25 +08:00
HankunYu
98c3ae5e76 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into v2 2025-12-22 21:00:47 +00:00
HankunYu
bb5a657469 Update the Discord module to support interactive messages 2025-12-22 19:59:22 +00:00
jxxghp
7797532350 Merge pull request #5271 from PKC278/v2 2025-12-22 21:32:53 +08:00
PKC278
c3a5106adc feat(manager): add automatic conversion of tool-call parameter formats 2025-12-22 21:04:13 +08:00
HankunYu
c5fd935dd0 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into v2 2025-12-22 12:19:21 +00:00
jxxghp
ec375a19ae Merge pull request #5267 from stkevintan/cookiecloud-post 2025-12-22 19:06:05 +08:00
jxxghp
51e940617c Merge pull request #5270 from PKC278/v2 2025-12-22 18:50:12 +08:00
PKC278
58ec8bd437 feat(mcp): implement standard MCP protocol support and session management 2025-12-22 18:49:00 +08:00
jxxghp
a096395086 Merge pull request #5250 from ixff/v2 2025-12-22 11:04:47 +08:00
HankunYu
4bd08bd915 Add Discord as a notification channel 2025-12-22 02:15:28 +00:00
stkevintan
2c849cfa7a fix code style 2025-12-22 08:33:23 +08:00
stkevintan
501d530d1d cookiecloud: support download encrypted data using post 2025-12-21 23:07:35 +08:00
jxxghp
91fc4327f4 Merge pull request #5261 from ixff/fix 2025-12-19 12:38:43 +08:00
ixff
8d56c67079 fix typos 2025-12-19 12:19:42 +08:00
jxxghp
e52d43458e Update version.py 2025-12-15 15:19:57 +08:00
ixff
9b125bf9b0 feat: support selecting the Playwright browser environment 2025-12-14 23:15:28 +08:00
jxxghp
0716c65269 Refactor: Simplify memory key generation and update retention settings 2025-12-13 15:40:20 +08:00
jxxghp
ba3ce4f1b5 Merge pull request #5245 from jxxghp/cursor/agent-download-progress-tool-8daa 2025-12-13 15:09:54 +08:00
Cursor Agent
07f72b0cdc Refactor: Improve query download tasks logic and add status filtering
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-12-13 07:01:24 +00:00
Cursor Agent
bda19df87f Fix: Ensure list_torrents and downloading return empty lists
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-12-13 06:53:06 +00:00
jxxghp
5d82fae2b0 fix agent memory 2025-12-13 14:40:47 +08:00
jxxghp
0813b87221 fix agent memory 2025-12-13 13:23:41 +08:00
jxxghp
961ecfc720 fix agent memory 2025-12-13 13:09:49 +08:00
jxxghp
81f30ef25a fix agent memory 2025-12-13 12:26:08 +08:00
jxxghp
140b0d3df2 Merge pull request #5234 from xgitc/patch-2 2025-12-10 16:20:36 +08:00
jxxghp
b3d69d7de4 Merge pull request #5233 from xgitc/patch-1 2025-12-10 16:20:07 +08:00
xgitc
8e65564fb8 Adapt to different versions of the gazelle software
Support sites that hide ".php" in their URLs; support sites whose next-page button title is "下一页" or "Next".
2025-12-10 16:12:42 +08:00
xgitc
06ce9bd4de Support more promotion types 2025-12-10 15:54:03 +08:00
jxxghp
274fc2d74f v2.8.8
- Downloaders support configurable path mapping
- Bug fixes and polish
2025-12-10 14:33:13 +08:00
jxxghp
2f1a448afe Merge pull request #5226 from stkevintan/path-mapping 2025-12-08 18:46:48 +08:00
Kevin Tan
99cab7c337 Update app/modules/__init__.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-08 17:21:33 +08:00
Kevin Tan
81f7548579 Update app/modules/__init__.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-08 17:20:45 +08:00
stkevintan
6ebd50bebc update naming 2025-12-08 16:30:40 +08:00
stkevintan
378ba51f4d support path_mapping for downloader 2025-12-08 16:25:46 +08:00
jxxghp
63a890e85d Update __init__.py 2025-12-06 20:03:34 +08:00
jxxghp
bf4f9921e2 Merge pull request #5224 from stkevintan/file_uri 2025-12-06 20:03:05 +08:00
stkevintan
167ae65695 fix: path empty 2025-12-06 19:58:23 +08:00
stkevintan
2affa7c9b8 Support remote file uri when adding downloads 2025-12-06 19:33:52 +08:00
jxxghp
785540e178 Update graphics.py 2025-12-06 14:47:23 +08:00
jxxghp
bcad4c0bc6 Merge pull request #5223 from wikrin/refactor/image-helper 2025-12-06 14:46:52 +08:00
Attente
5af217fbf5 refactor: extract the image-fetching logic into a standalone ImageHelper 2025-12-06 10:10:36 +08:00
jxxghp
128aa2ef23 Update requirements.in 2025-12-04 13:27:03 +08:00
jxxghp
fce1186dd1 Merge remote-tracking branch 'origin/v2' into v2 2025-12-04 12:30:05 +08:00
jxxghp
9a7b11f804 add google-generativeai 2025-12-04 12:29:56 +08:00
jxxghp
b068a06fa8 Merge pull request #5219 from 0xlane/v2 2025-12-03 13:49:18 +08:00
REinject
931a42e981 fix(tmdbapi): fix the name-matching logic when searching series by season 2025-12-03 12:26:05 +08:00
jxxghp
e0a20a6697 Merge pull request #5216 from wikrin/image_cache 2025-12-03 11:12:09 +08:00
Attente
1ef4374899 feat(telegram): add caching and safety checks for images; fall back to sending without the image on fetch failure
- Unify some type annotations
- Fix some text errors
2025-12-03 09:56:30 +08:00
jxxghp
3b7212740b fix 2025-12-01 15:22:06 +08:00
jxxghp
4b80b8dc1f Merge pull request #5206 from DDSRem-Dev/dev 2025-11-30 17:06:45 +08:00
DDSRem
b7f24827e6 fix(servarr): year type defined incorrectly
fix https://github.com/jxxghp/MoviePilot/issues/5158
2025-11-30 16:29:21 +08:00
jxxghp
1c08a22881 Merge pull request #5204 from yelantf/patch-2 2025-11-30 09:50:13 +08:00
夜阑听风
8bd848519d Convert user level to string if not None 2025-11-30 09:36:28 +08:00
jxxghp
e19f2aa76d Merge pull request #5202 from 0xlane/v2 2025-11-30 08:01:18 +08:00
REinject
4a99e2896f feat: add auxiliary recognition when adding download tasks 2025-11-29 22:12:25 +08:00
jxxghp
de3c83b0aa Merge pull request #5197 from stkevintan/default-samba 2025-11-28 19:42:43 +08:00
stkevintan
36bdb831be use download storage instead of library storage 2025-11-28 19:30:39 +08:00
Kevin Tan
1809690915 Update app/modules/subtitle/__init__.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 17:21:17 +08:00
stkevintan
e51b679380 fix: support non-local filesystem operations for default dir and subtitles 2025-11-28 14:55:01 +08:00
jxxghp
10c26de7cb Merge pull request #5193 from wikrin/config_reload_mixin 2025-11-28 07:17:12 +08:00
Attente
ca5ec8af0f feat(config): improve the config-change event handling mechanism 2025-11-27 23:17:34 +08:00
jxxghp
d1d7b8ce55 Update __init__.py 2025-11-27 22:03:20 +08:00
jxxghp
77f8983307 Merge pull request #5192 from stkevintan/smb-link 2025-11-27 20:15:46 +08:00
stkevintan
ba415acd37 add hard link support for smb 2025-11-27 18:21:54 +08:00
jxxghp
bcf13099ac Merge pull request #5188 from wikrin/dev 2025-11-26 22:27:51 +08:00
Attente
eb2b34d71c feat(themoviedb): add listener support for the ConfigChanged event
- Adjust the username field type to also accept integers
2025-11-26 20:58:58 +08:00
jxxghp
d0b665f773 Update version.py 2025-11-25 20:19:59 +08:00
jxxghp
a1674b1ae5 Merge pull request #5186 from Seed680/v2 2025-11-25 17:16:28 +08:00
noone
af83681f6a feat(telegram): add unit tests covering various message-sending scenarios 2025-11-25 17:02:31 +08:00
jxxghp
bebacf7b20 refactor: Update tool imports and descriptions in factory.py; remove obsolete query tools and enhance ListDirectoryTool description 2025-11-25 13:45:19 +08:00
jxxghp
6dc1fcbc3e Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into v2 2025-11-25 13:33:04 +08:00
jxxghp
b599ef4509 feat: Add QueryMediaLatestTool to MoviePilotToolFactory 2025-11-25 13:33:03 +08:00
jxxghp
526b6a1119 Merge remote-tracking branch 'origin/v2' into v2 2025-11-24 21:30:07 +08:00
jxxghp
88173db4ce fix #5172 2025-11-24 21:29:56 +08:00
jxxghp
e139b1ab22 Merge pull request #5183 from wikrin/telegramify
feat(telegram): use the `telegramify_markdown` library to standardize message content, strengthening handling of long messages and complex formats
2025-11-24 21:26:32 +08:00
Attente
6c1e0058c1 feat(telegram): use the telegramify_markdown library to standardize message content, strengthening handling of long messages and complex formats 2025-11-24 20:59:32 +08:00
jxxghp
c96633eb83 Merge pull request #5173 from wikrin/cached 2025-11-23 16:58:50 +08:00
Attente
91eb35a77b fix(cache): fix fresh being wrongly overwritten 2025-11-23 16:46:09 +08:00
jxxghp
d749d59cad Merge pull request #5171 from jxxghp/cursor/check-for-ai-prefix-before-processing-message-composer-1-29f2 2025-11-23 14:37:21 +08:00
Cursor Agent
80396b4d30 Fix: Make /ai command case-insensitive
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-23 06:36:06 +00:00
Cursor Agent
64b93a009c Refactor: Allow messages without /ai prefix
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-23 06:35:44 +00:00
Cursor Agent
2b32250504 feat: Require messages to start with /ai
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-23 06:34:31 +00:00
jxxghp
9b5f863832 v2.8.6
- Add a global AI assistant setting: when enabled, all messages are answered by the assistant without needing the `/ai` command
- Bug fixes and minor improvements
2025-11-23 13:55:16 +08:00
jxxghp
fd422d7446 Merge pull request #5170 from wikrin/fix 2025-11-23 13:33:49 +08:00
Attente
5162b2748e fix(media): fix a type error 2025-11-23 13:28:01 +08:00
jxxghp
56c684ec06 Merge pull request #5168 from jxxghp/cursor/add-actor-filmography-search-tool-composer-1-6aad 2025-11-22 08:12:17 +08:00
Cursor Agent
7e93b33407 feat: Add search_person_credits tool
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-22 00:03:57 +00:00
jxxghp
7662235802 Merge pull request #5165 from jxxghp/cursor/add-site-parameters-to-agent-subscription-tool-be8f 2025-11-21 20:46:19 +08:00
Cursor Agent
e41f9facc7 Add sites parameter to AddSubscribeTool
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-21 12:45:11 +00:00
jxxghp
785b8ede11 Merge pull request #5164 from jxxghp/cursor/add-person-search-tool-for-agent-745c 2025-11-21 19:33:00 +08:00
Cursor Agent
78b198ad70 feat: Add SearchPersonTool for agent
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-21 11:31:27 +00:00
jxxghp
c2c0515991 Merge pull request #5163 from Pollo3470/feat/ai-proxy 2025-11-21 12:23:39 +08:00
Pollo
b97fefdb8d fix(ai): fix Google proxy not taking effect
- when a configured proxy is detected, Google uses the Gemini OpenAI-compatible API
2025-11-21 11:10:41 +08:00
jxxghp
840da6dd85 Merge pull request #5157 from jxxghp/cursor/add-web-search-tool-with-context-trimming-70fe 2025-11-20 22:59:37 +08:00
Cursor Agent
972d916126 Refactor: Use DuckDuckGo API directly for web search
This change removes the HTML parsing logic and directly uses the DuckDuckGo API for web searches. It also adds proxy support for the HTTP requests.

Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-20 14:58:22 +00:00
Cursor Agent
e3ed065f5f Add SearchWebTool for web searching capabilities
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-20 14:52:45 +00:00
jxxghp
760ebe6113 v2.8.5
- Agent OpenAI and Google models use the proxy by default, with automatic fetching of the available model list
- Manually added downloads support specifying a media info ID
2025-11-20 19:40:43 +08:00
jxxghp
a329d3ad89 fix api 2025-11-20 19:38:19 +08:00
jxxghp
01f8561582 fix 2025-11-20 19:15:46 +08:00
jxxghp
883ea5c996 Merge pull request #5155 from madrays/v2 2025-11-20 19:10:58 +08:00
jxxghp
99cf13ed9b Merge pull request #5152 from Pollo3470/feat/ai-proxy 2025-11-20 19:09:54 +08:00
madrays
91c7ef6801 Add convenience feature to automatically fetch available AI models 2025-11-20 19:01:33 +08:00
Pollo
84ef5705e7 feat: thread-safe handling of temporary Google environment variables 2025-11-20 17:05:55 +08:00
Pollo
cf2a0cf8c2 feat: access Google and OpenAI through the proxy 2025-11-20 16:56:32 +08:00
jxxghp
48c25c40e4 fix wechat 2025-11-20 16:51:43 +08:00
jxxghp
996d8ab954 v2.8.4-1
- Fix workflow component loading issue
- Fix issues with individual agent tools
2025-11-20 13:10:38 +08:00
jxxghp
fac2546a92 Enhance media library query tool with detailed logging and improved error handling. Refactor to use MediaServerChain for media existence checks and item details retrieval. 2025-11-20 13:02:23 +08:00
jxxghp
728ea6172a fix exists_local 2025-11-20 12:43:19 +08:00
jxxghp
f59d225029 fix workflow actions 2025-11-20 12:34:44 +08:00
jxxghp
0b178a715f fix search_media 2025-11-20 12:00:00 +08:00
jxxghp
e06e5328c2 Merge pull request #5148 from jxxghp/cursor/handle-ai-agent-message-processing-error-af53 2025-11-20 09:37:48 +08:00
Cursor Agent
1c14cd0979 Refactor: Use asyncio.run_coroutine_threadsafe for async calls
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-20 01:37:17 +00:00
jxxghp
f9141f5ba2 Merge remote-tracking branch 'origin/v2' into v2 2025-11-20 08:19:52 +08:00
jxxghp
48da5c976c fix loop 2025-11-20 08:15:37 +08:00
jxxghp
fa38c81c08 Merge pull request #5146 from DDS-Derek/dev 2025-11-19 20:47:16 +08:00
DDSDerek
8d5fe5270f Update app/modules/filemanager/storages/u115.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-19 19:51:14 +08:00
DDSRem
0dc0d66549 fix: known issue 2025-11-19 19:46:46 +08:00
DDSRem
f589fcc2d0 feat(u115): improve stability of the u115 module
1. Improve the error-handling logic for failed API requests
2. Speed up hash computation
3. Per-endpoint QPS rate limiting
4. Replace requests with httpx
5. Improve path-joining robustness
6. Code formatting
2025-11-19 19:39:02 +08:00
jxxghp
edd44a0993 Merge pull request #5143 from jxxghp/cursor/update-mcp-api-documentation-and-readme-a12b 2025-11-19 16:05:23 +08:00
Cursor Agent
2aae496742 Refactor: Improve MCP API documentation for broader client support
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-19 08:03:53 +00:00
Cursor Agent
6f72046f86 Refactor: Update MCP API documentation and authentication
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-19 07:47:32 +00:00
jxxghp
d4a9b446a6 Update requirements.in 2025-11-19 14:35:41 +08:00
jxxghp
95f571e9b9 Update requirements.in 2025-11-19 14:34:27 +08:00
jxxghp
e8aeae5c07 Update version.py 2025-11-19 14:28:49 +08:00
jxxghp
ddf6dc0343 Merge pull request #5142 from jxxghp/cursor/update-agent-site-tool-documentation-with-priority-81fb 2025-11-19 14:17:31 +08:00
Cursor Agent
36d55a9db7 Refactor: Simplify tool descriptions
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-19 06:16:24 +00:00
Cursor Agent
7d41379ad5 Refactor: Clarify site priority in tool descriptions
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-19 06:15:18 +00:00
jxxghp
63e928da96 Update version.py 2025-11-19 14:10:11 +08:00
jxxghp
5c983b64bc fix SiteUserData 2025-11-19 13:47:02 +08:00
jxxghp
b2d36c0e68 Update API key documentation to clarify retrieval methods in security.py and mcp-api.md 2025-11-19 13:42:53 +08:00
jxxghp
6123a1620e add mcp 2025-11-19 13:19:17 +08:00
jxxghp
5ae7c10a00 Enhance MoviePilotAgent to handle empty agent messages gracefully by providing a default error response, ensuring better user experience. Refactor message processing to streamline event loop execution. 2025-11-19 12:51:08 +08:00
jxxghp
b5a6794381 Refactor event loop handling to use GlobalVar.CURRENT_EVENT_LOOP across multiple modules, improving consistency and maintainability. 2025-11-19 08:42:07 +08:00
jxxghp
6b575f836a Add filter_groups parameter to AddSubscribeTool and include SearchSubscribeTool and QueryRuleGroupsTool in MoviePilotToolFactory 2025-11-19 08:31:06 +08:00
jxxghp
c83589cac6 rollback telegram 2025-11-18 21:56:55 +08:00
jxxghp
d64492bda5 fix QueryEpisodeScheduleTool 2025-11-18 21:46:11 +08:00
jxxghp
33d6c75924 fix telegram 2025-11-18 21:43:55 +08:00
jxxghp
89f01bad42 fix telegram 2025-11-18 21:32:09 +08:00
jxxghp
767496f81b fix telegram 2025-11-18 21:31:44 +08:00
jxxghp
147a477365 fix site 2025-11-18 21:13:45 +08:00
jxxghp
13171f636f fix 2025-11-18 20:34:53 +08:00
jxxghp
fea3f0d3e0 fix telegram markdown 2025-11-18 19:53:27 +08:00
jxxghp
a3a254c2ea fix telegram markdown 2025-11-18 19:09:36 +08:00
jxxghp
bd9d5f7fc0 Merge pull request #5135 from jxxghp/cursor/handle-telegram-hyphen-escape-error-9f51 2025-11-18 17:49:15 +08:00
Cursor Agent
726738ee9e Refactor: Protect only markdown delimiters, not content
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-18 09:43:42 +00:00
jxxghp
725244bb2f fix escape_markdown 2025-11-18 17:30:00 +08:00
jxxghp
d2ac2b8990 feat: add QueryEpisodeScheduleTool to MoviePilotToolFactory
- Included QueryEpisodeScheduleTool in the tool definitions of MoviePilotToolFactory to enhance episode scheduling capabilities.
- Updated factory.py to reflect the addition of this new tool, improving overall functionality.
2025-11-18 17:20:23 +08:00
jxxghp
116569223c fix 2025-11-18 16:44:39 +08:00
jxxghp
05442a019f feat: add UpdateSubscribeTool and additional query tools to MoviePilotToolFactory
- Included UpdateSubscribeTool and QuerySiteUserdataTool in the tool definitions of MoviePilotToolFactory to enhance subscription management and user data querying capabilities.
- Updated the factory.py file to reflect the addition of these new tools, improving overall functionality.
2025-11-18 16:41:15 +08:00
jxxghp
db67080bf8 feat: add ScrapeMetadataTool to MoviePilotToolFactory
- Included ScrapeMetadataTool in the tool definitions of MoviePilotToolFactory to enhance metadata scraping capabilities.
- Updated the factory.py file to reflect the addition of the new tool.
2025-11-18 16:26:18 +08:00
jxxghp
21fabf7436 feat: enhance GetRecommendationsTool and update query tools for improved functionality
- Expanded the GetRecommendationsTool to support additional recommendation sources, including TMDB popular movies and TV shows, as well as various Douban categories.
- Updated the limit for results in QuerySubscribesTool, SearchMediaTool, and QueryTransferHistoryTool from 20 to 50 or 30, respectively, to provide more comprehensive results.
- Removed unnecessary description fields from media objects in QueryPopularSubscribesTool, QuerySubscribeHistoryTool, and QuerySubscribeSharesTool for cleaner output.
2025-11-18 16:21:13 +08:00
jxxghp
a8c6516b31 refactor: remove deprecated tools and add RecognizeMediaTool
- Removed the old tool definitions from __init__.py to streamline the module.
- Added RecognizeMediaTool to factory.py for enhanced media recognition capabilities.
- Updated tool exports to reflect the changes in available tools.
2025-11-18 16:07:11 +08:00
jxxghp
f5ca48a56e fix: update recommendation fetching logic in GetRecommendationsTool
- Adjusted the recommendation fetching methods to accept page and count parameters for better control over result limits.
- Implemented checks to ensure the format of recommendation results is valid, enhancing robustness.
- Added truncation for long overview descriptions to maintain consistency in output.
2025-11-18 13:08:09 +08:00
jxxghp
65ceff9824 v2.8.3
- The MoviePilot assistant gains several new agent tools, greatly expanding agent capabilities
- Agent conversations support contextual memory; sending the `/clear_session` command or 15 minutes of inactivity clears it
2025-11-18 12:52:58 +08:00
jxxghp
ed73cfdcc7 fix: update message formatting in MoviePilotTool for clarity
- Changed the tool message format to use "=>" instead of wrapping with separators, enhancing readability and user experience during tool execution.
2025-11-18 12:43:27 +08:00
jxxghp
9cb79a7827 feat: enhance MoviePilotTool with customizable tool messages
- Added `get_tool_message` method to `MoviePilotTool` and its subclasses for generating user-friendly execution messages based on parameters.
- Improved message formatting for various tools, including `AddDownloadTool`, `AddSubscribeTool`, `DeleteDownloadTool`, and others, to provide clearer feedback during operations.
- This enhancement allows for more personalized and informative messages, improving user experience during tool execution.
2025-11-18 12:42:24 +08:00
jxxghp
984f29005a Update message.py 2025-11-18 12:17:12 +08:00
jxxghp
805c3719af feat: add new query tools for enhanced subscription management
- Introduced QuerySubscribeSharesTool, QueryPopularSubscribesTool, and QuerySubscribeHistoryTool to improve subscription querying capabilities.
- Updated __all__ exports in init.py and factory.py to include the new tools.
- Enhanced QuerySubscribesTool to support media type filtering with localized descriptions.
2025-11-18 12:05:06 +08:00
jxxghp
ea646149c0 feat: add new tools for download management and enhance query capabilities
- Introduced DeleteDownloadTool, QueryDirectoriesTool, ListDirectoryTool, QueryTransferHistoryTool, and TransferFileTool to the toolset for improved download management.
- Updated __all__ exports in init.py and factory.py to include the new tools.
- Enhanced QueryDownloadsTool to support querying downloads by hash and title, providing more flexible search options and detailed results.
2025-11-18 11:57:01 +08:00
jxxghp
eae1f8ee4d feat: add UpdateSiteCookieTool and enhance site operations
- Introduced UpdateSiteCookieTool to the toolset for managing site cookies.
- Updated __all__ exports in init.py and factory.py to include UpdateSiteCookieTool.
- Added async_get method in SiteOper for asynchronous retrieval of individual site records, improving database interaction.
2025-11-18 11:34:37 +08:00
jxxghp
8d1de245a6 feat: add TestSiteTool and enhance AddSubscribeTool with advanced filtering options
- Introduced TestSiteTool to the toolset for site testing functionalities.
- Updated __all__ exports in init.py and factory.py to include TestSiteTool.
- Enhanced AddSubscribeTool to support additional parameters for episode management and media quality filtering, improving subscription customization.
2025-11-18 08:51:32 +08:00
jxxghp
b8ef5d1efc feat: add session management to MessageChain
- Implemented session ID creation and reuse based on user activity, with a timeout of 15 minutes.
- Added remote command to clear user sessions, enhancing user session management capabilities.
- Updated Command class to include a new command for clearing sessions.
2025-11-18 08:41:05 +08:00
jxxghp
e1098b34e8 feat: add new tools for subscription management and scheduling
- Introduced DeleteSubscribeTool for managing subscriptions.
- Added QuerySchedulersTool and RunSchedulerTool for enhanced scheduling capabilities.
- Updated __all__ exports in init.py and factory.py to include new tools.
2025-11-18 08:34:00 +08:00
jxxghp
8296f8d2da Enhance tool message formatting in MoviePilotTool: wrap execution messages with separators and icons for better distinction from regular agent messages. 2025-11-17 17:14:05 +08:00
jxxghp
867c83383d Merge pull request #5122 from Seed680/v2 2025-11-17 15:58:53 +08:00
noone
1354119d6d fix(telegram): improve error logging for message sending
- Refine error logs in the send_msg method to pinpoint where sending failed
- Add a title-escaping comment and adjust the log wording in send_medias_msg
- Add title-escaping logic and error log notes in send_torrents_msg
2025-11-17 15:22:25 +08:00
noone
53af7f81bb fix: escape titles sent to Telegram 2025-11-17 15:15:28 +08:00
jxxghp
48b1ac28de v2.8.2
- Add the `MoviePilot助手` agent, which completes tasks through natural-language conversation (beta: enable the switch in settings, configure the LLM parameters, and chat via `/ai`); agent capabilities can be extended through plugins
- Adapted to the new version of 飞牛影视
- Other bug fixes and minor improvements

Note: base components have been upgraded; some plugins may have compatibility issues and will need to be adapted.
2025-11-17 14:00:39 +08:00
jxxghp
6e329b17a9 Enhance Telegram message formatting: add detailed guidelines for MarkdownV2 usage, including support for strikethrough, headings, and lists. Implement smart escaping for Markdown to preserve formatting while avoiding API errors. 2025-11-17 13:49:56 +08:00
jxxghp
6a492198a8 fix post_message 2025-11-17 13:33:01 +08:00
jxxghp
8bf9b6e7cb feat: Agent plugin tool discovery 2025-11-17 13:00:23 +08:00
jxxghp
42e23ef564 Refactor agent workflows: streamline subscription and download processes, enhance query status workflow, and improve tool usage guidelines for better user interaction. 2025-11-17 12:49:03 +08:00
jxxghp
c6806ee648 fix agent tools 2025-11-17 12:34:20 +08:00
jxxghp
076fae696c fix 2025-11-17 11:57:46 +08:00
jxxghp
ed294d3ea4 Revert "fix schemas"
This reverts commit a5e7483870.
2025-11-17 11:48:18 +08:00
jxxghp
043be409d0 Enhance agent workflows and tools: unify subscription and download processes, add site querying functionality, and improve error handling in download operations. 2025-11-17 11:39:08 +08:00
jxxghp
a5e7483870 fix schemas 2025-11-17 10:58:24 +08:00
jxxghp
365335be46 rollback 2025-11-17 10:51:16 +08:00
jxxghp
62543dd171 fix: improve the Agent message sending format 2025-11-17 10:43:16 +08:00
jxxghp
e2eef8ff21 fix agent message title 2025-11-17 10:18:05 +08:00
jxxghp
3acf937d56 fix add_download tool 2025-11-17 10:16:54 +08:00
jxxghp
d572e523ba Optimize Agent context size 2025-11-17 09:57:12 +08:00
jxxghp
82113abe88 fix agent sendmsg 2025-11-17 09:42:27 +08:00
jxxghp
b7d121c58f fix agent tools 2025-11-17 09:28:18 +08:00
jxxghp
6d5a85b144 fix search tools 2025-11-17 09:14:36 +08:00
jxxghp
78121917c6 Merge pull request #5112 from wikrin/fix_tests 2025-11-12 20:39:38 +08:00
jxxghp
a0913f0e32 Merge pull request #5109 from jiongjiongJOJO/dev 2025-11-12 20:39:10 +08:00
jxxghp
e96e284715 Merge pull request #5107 from wikrin/fix 2025-11-12 20:38:40 +08:00
Attente
c572a1b607 fix(tests): correct restype; test cases no longer use custom recognition words 2025-11-12 14:13:05 +08:00
囧囧JOJO
1845311f98 fix: fix build errors caused by version incompatibility during the Docker build
See the third reply at:
https://stackoverflow.com/questions/76717537/valueerror-requirement-object-has-no-field-use-pep517-when-installing-pytho
2025-11-11 17:46:34 +08:00
Attente
4f806db8b7 fix: fix changing the default downloader not taking effect
- Migrate the config module to `SettingsConfigDict` to support Pydantic v2-style configuration
- Add a `release_dates` field to `MediaInfo` to store multi-region release date information
- Change token passing in the `MetaVideo` class to fix a serialization error when searching site resources
2025-11-11 10:44:45 +08:00
jxxghp
22858cc1e9 Merge pull request #5100 from Seed680/v2 2025-11-06 18:43:41 +08:00
noone
a0329a3eb0 feat(rss): support a custom User-Agent when fetching RSS; some sites currently fail to return RSS content when no UA is configured
- Add a ua parameter to the RSS method for specifying the User-Agent
- Update RequestUtils calls to pass the custom User-Agent
- Change RSS parsing in the torrents chain to honour the site's configured ua field
- Set a default timeout of 30 seconds for better stability
2025-11-06 16:32:01 +08:00
jxxghp
b3e92088ee Merge pull request #5097 from wumode/refector-check_method 2025-11-05 23:15:24 +08:00
jxxghp
46db1c20f1 Merge pull request #5096 from cddjr/fix_trimemedia_cookies 2025-11-05 23:14:18 +08:00
wumode
9d182e53b2 fix: type hints 2025-11-05 15:41:31 +08:00
景大侠
1205fc7fdb Avoid unnecessary image cookies lookups 2025-11-05 15:22:02 +08:00
wumode
ff2826a448 feat(utils): Refactor check_method to use ast
- Parse the function source with the AST, which is more robust than string-based approaches and correctly handles functions with multi-line def statements
- Add unit tests for check_method
2025-11-05 14:16:37 +08:00
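The AST-based approach this commit describes can be illustrated with a rough sketch. This is not the project's actual check_method (which operates on function objects via inspect); it is just one way such a check can be written so that a multi-line `def` signature is handled correctly, operating on source text for simplicity.

```python
import ast


def check_method_source(source: str) -> bool:
    """Return True if the function defined in `source` has a real body,
    i.e. anything beyond pass / a bare docstring / `...`.

    Parsing with `ast` avoids the pitfalls of string heuristics such as
    "inspect the line after the one containing `def`", which break when
    the signature spans several lines."""
    node = ast.parse(source).body[0]
    if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        return False
    for stmt in node.body:
        if isinstance(stmt, ast.Pass):
            continue
        if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Constant):
            continue  # bare docstring or `...` placeholder
        return True
    return False
```

The method name in the examples below is hypothetical.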
大虾
ee750115ec Update
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-04 13:29:45 +08:00
景大侠
0e13d22c97 fix: adapt to the new version of 飞牛影视 2025-11-04 13:25:18 +08:00
jxxghp
8e7d040ac4 Merge pull request #5091 from wikrin/cached 2025-11-03 09:51:37 +08:00
Attente
6755202958 feat(cache): unify cache control with fresh and async_fresh
- Fix caching causing the update prompt to persist after a plugin update
- Consistently use fresh/async_fresh to control cache behaviour
- Adjust the TMDb module's caching strategy and improve cache invalidation for async requests
- Remove redundant cache method wrappers to reduce call depth
- Simplify the cache method structure in PluginHelper and remove the force parameter
2025-11-03 07:41:42 +08:00
jxxghp
8b7374a687 Merge pull request #5090 from wikrin/fix 2025-11-02 07:35:00 +08:00
Attente
c17cca2365 fix(update_setting): fix settings failing to save
- adapt to Pydantic V2
2025-11-01 23:51:59 +08:00
jxxghp
8016a9539a fix agent 2025-11-01 19:08:05 +08:00
jxxghp
e885fb15a0 Merge pull request #5089 from wikrin/fix 2025-11-01 18:27:35 +08:00
Attente
c7f098771b feat: adapt to Pydantic V2 2025-11-01 17:56:37 +08:00
Attente
fcd0908032 fix(transfer): fix a specified part not taking effect 2025-11-01 17:56:23 +08:00
jxxghp
7ff1285084 fix agent tools 2025-11-01 12:07:17 +08:00
jxxghp
b45b603b97 fix agent tools 2025-11-01 12:01:48 +08:00
jxxghp
247208b8a9 fix agent 2025-11-01 11:41:22 +08:00
jxxghp
182c46037b fix agent 2025-11-01 10:40:45 +08:00
jxxghp
438d3210bc fix agent 2025-11-01 10:39:08 +08:00
jxxghp
d523c7c916 fix pydantic 2025-11-01 09:51:23 +08:00
jxxghp
09a19e94d5 fix config 2025-11-01 09:23:52 +08:00
jxxghp
3971c145df refactor: streamline data serialization in tool implementations
- Replaced model_dump and to_dict methods with direct calls to dict for improved consistency and performance in JSON serialization across multiple tools.
- Updated ConversationMemoryManager, GetRecommendationsTool, QueryDownloadsTool, and QueryMediaLibraryTool to enhance data handling.
2025-10-31 11:36:50 +08:00
jxxghp
055117d83d refactor: enhance tool message handling and improve error logging
- Updated _send_tool_message to accept a title parameter for better message context.
- Modified various tool implementations to utilize the new title parameter for clearer messaging.
- Improved error logging across multiple tools to include exception details for better debugging.
2025-10-31 09:16:53 +08:00
jxxghp
c6baf43986 Merge pull request #5085 from wumode/fix-event-handler-params 2025-10-29 07:50:47 +08:00
wumode
4ff16af3a7 fix: incorrect arguments passed to __handle_event_error in __invoke_plugin_method_async 2025-10-28 20:09:44 +08:00
jxxghp
17a1bd352b Merge pull request #5071 from wikrin/optimize-file-size 2025-10-23 22:54:43 +08:00
Attente
7421ca09cc fix(transfer): fix the completed-task total size not being counted correctly in some cases
- get_directory_size now traverses recursively with os.scandir for better performance
- When a task file item's storage type is local and its size is empty, fetch the directory size
via SystemUtils to ensure accurate accounting of completed tasks.

fix(cache): change the default for fresh and async_fresh to True

refactor(filemanager): remove post-transfer total size calculation

- Drop the redundant calculation of the organized directory's total size in TransHandler, improving performance and simplifying the flow.

perf(system): optimize file scanning with scandir

- Refactor the file scanning methods in SystemUtils (list_files, exists_file, list_sub_files),
- replacing the glob-based implementation with os.scandir and precompiling regular expressions to speed up directory traversal and file matching.
2025-10-23 19:21:24 +08:00
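A rough sketch of the os.scandir-based recursive directory size the commit above mentions. Illustrative only; the project's SystemUtils implementation may differ in details such as symlink and error handling.

```python
import os


def get_directory_size(path: str) -> int:
    """Recursively sum file sizes using os.scandir.

    scandir is faster than per-path os.stat / glob because the directory
    entries carry type and stat info fetched in bulk."""
    total = 0
    try:
        with os.scandir(path) as entries:
            for entry in entries:
                try:
                    if entry.is_file(follow_symlinks=False):
                        total += entry.stat(follow_symlinks=False).st_size
                    elif entry.is_dir(follow_symlinks=False):
                        total += get_directory_size(entry.path)
                except OSError:
                    continue  # skip entries that vanish or deny access
    except OSError:
        return 0
    return total
```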
jxxghp
9797e696e5 Merge pull request #5073 from WAY29/v2 2025-10-23 13:05:10 +08:00
jxxghp
c36d6d8b2d Merge pull request #5072 from wumode/fix_retry 2025-10-23 06:52:29 +08:00
wumode
3873786b99 fix: retry 2025-10-23 00:58:34 +08:00
WAY29
76fdba7f09 feat(endpoints): /download/add allow tmdbid/doubanid/bangumiid 2025-10-22 22:02:33 +08:00
jxxghp
72799e9638 Merge pull request #5068 from little6neko/v2 2025-10-22 06:18:28 +08:00
小六妞儿
2e77d03fe9 Add exception handling to directory monitoring to avoid unexpected program exit 2025-10-22 00:21:31 +08:00
jxxghp
0c58eae5e7 Merge pull request #5060 from wikrin/cached 2025-10-19 22:37:33 +08:00
Attente
b609567c38 feat(cache): introduce fresh and async_fresh to control cache behaviour
- Add `fresh` and `async_fresh` to temporarily disable caching inside synchronous and asynchronous functions.
- Implement a context-aware cache refresh mechanism via the `_fresh` contextvars variable
- Change the `cached` decorator logic to skip cache reads when `is_fresh()` is True.

- Fix path handling in the download module, using `Path.as_posix()` for cross-platform compatibility.
2025-10-19 22:31:50 +08:00
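A minimal, synchronous-only sketch of the contextvars-based mechanism the commit above describes. This is an illustration, not the project's real `cached` decorator (which also covers async functions and real cache backends); the toy in-memory store is an assumption.

```python
import contextvars
import functools
from contextlib import contextmanager

# context-local flag: when True, cached wrappers skip the cache read
_fresh = contextvars.ContextVar("_fresh", default=False)


def is_fresh() -> bool:
    return _fresh.get()


@contextmanager
def fresh():
    """Temporarily disable cache reads for calls made inside this block."""
    token = _fresh.set(True)
    try:
        yield
    finally:
        _fresh.reset(token)


def cached(func):
    """Toy cache decorator that honours the fresh() context."""
    store = {}

    @functools.wraps(func)
    def wrapper(*args):
        if not is_fresh() and args in store:
            return store[args]
        result = func(*args)
        store[args] = result  # a fresh call still repopulates the cache
        return result

    return wrapper
```

Because the flag lives in a ContextVar rather than a global, a `fresh()` block in one task or thread does not bypass the cache for concurrent callers.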
jxxghp
7ecfa44fa0 Merge pull request #5057 from xiaoQQya/v2 2025-10-19 06:49:23 +08:00
xiaoQQya
a685b1dc3b fix(douban): fix None being returned even when the imdbid successfully matched Douban info 2025-10-18 22:34:55 +08:00
jxxghp
63ce49a17c Merge pull request #5056 from xiaoQQya/v2 2025-10-18 22:16:03 +08:00
xiaoQQya
820fbe4076 fix(douban): fix the fallback to name-based matching failing when the imdbid did not match any Douban info 2025-10-18 22:07:54 +08:00
jxxghp
efa05b7775 Update media tool descriptions for clarity and detail in JSON configuration 2025-10-18 22:00:24 +08:00
jxxghp
003781e903 add MoviePilot AI agent implementation and workflow manager 2025-10-18 21:55:31 +08:00
jxxghp
ee71bafc96 fix 2025-10-18 21:32:46 +08:00
jxxghp
bdd5f1231e add ai agent 2025-10-18 21:26:51 +08:00
jxxghp
6fee532c96 add ai agent 2025-10-18 21:26:36 +08:00
jxxghp
78aaad7b59 Merge pull request #5028 from ThedoRap/v2 2025-10-07 23:00:21 +08:00
Reaper
b128b0ede2 Fix seeding info parsing for the 知行 / 极速之星 framework 2025-10-02 20:43:06 +08:00
Reaper
737d2f3bc6 Improve parsing for the 知行 / 极速之星 framework 2025-10-02 20:03:28 +08:00
jxxghp
179be53a65 Merge pull request #5025 from ThedoRap/v2 2025-10-02 06:50:37 +08:00
Reaper
1867f5e7c2 Add parsing for the 知行 / 极速之星 framework 2025-10-02 04:27:35 +08:00
jxxghp
6662d24565 Merge pull request #5019 from xiaoQQya/develop 2025-10-01 11:53:24 +08:00
jxxghp
5880566a99 Merge pull request #5018 from Aqr-K/fix-plugin 2025-10-01 11:52:43 +08:00
xiaoQQya
5d05b32711 fix: fix the No module named 'app' error when running locally per the README 2025-09-30 23:45:25 +08:00
Aqr-K
fa2b720e92 Refactor(plugins): Use pathlib.relative_to for robust plugin path resolution 2025-09-30 20:03:08 +08:00
jxxghp
d381238f83 Merge pull request #5017 from Aqr-K/fix-plugin 2025-09-30 19:55:46 +08:00
Aqr-K
751d627ead Merge branch 'fix-plugin' of https://github.com/aqr-k/MoviePilot into fix-plugin 2025-09-30 19:48:46 +08:00
Aqr-K
3e66a8de9b Rollback cache 2025-09-30 19:35:50 +08:00
Aqr-K
266052b12b Update app/core/plugin.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-09-30 19:26:33 +08:00
Aqr-K
803f4328f4 fix(plugins): Improve hot-reload robustness for multi-inheritance plugins 2025-09-30 18:34:26 +08:00
Aqr-K
8e95568e11 refactor(plugins): Improve hot-reloading with watchfiles 2025-09-30 18:01:02 +08:00
jxxghp
ab09ee4819 Merge pull request #4998 from Seed680/v2 2025-09-24 15:02:26 +08:00
noone
41f94a172f fix: escape titles sent to Telegram 2025-09-24 14:29:42 +08:00
noone
566e597994 fix: revert unnecessary escaping 2025-09-24 14:26:09 +08:00
noone
765fb9c05f fix: switch the Telegram parse mode to MarkdownV2; escape special characters in outgoing content per Telegram MarkdownV2 rules 2025-09-24 14:14:11 +08:00
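Escaping per the MarkdownV2 rules referenced above means backslash-escaping Telegram's reserved characters in plain-text segments (the Bot API rejects messages containing unescaped specials). A minimal sketch; the helper name is an assumption, and real code must avoid escaping inside intentional formatting entities.

```python
import re

# Characters Telegram's MarkdownV2 parse mode requires to be escaped in text
_MDV2_SPECIALS = r"_*[]()~`>#+-=|{}.!"


def escape_markdown_v2(text: str) -> str:
    """Backslash-escape every MarkdownV2 special character in plain text."""
    return re.sub(f"([{re.escape(_MDV2_SPECIALS)}])", r"\\\1", text)
```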
jxxghp
b6720a19f7 Update plugin.py 2025-09-22 17:56:12 +08:00
jxxghp
3b130651c4 Merge pull request #4987 from Aqr-K/refactor/plugin-monitor 2025-09-22 11:41:13 +08:00
jxxghp
3f6c35dabe Merge pull request #4986 from Aqr-K/fix-plugin-reload 2025-09-22 07:08:33 +08:00
Aqr-K
db2a952bca refactor(plugin): Enhance hot reload with debounce and subdirectory support 2025-09-22 02:48:49 +08:00
Aqr-K
0ea9770bc3 Create debounce.py 2025-09-22 02:38:15 +08:00
Aqr-K
0b20956c90 fix 2025-09-21 22:42:18 +08:00
jxxghp
9f73b47d54 Merge pull request #4977 from jxxghp/cursor/fix-moviepilot-issue-4975-ff74 2025-09-19 18:15:08 +08:00
Cursor Agent
ce9c99af71 Refactor: Use copy instead of move for file operations
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-09-19 09:54:44 +00:00
jxxghp
784024fb5d 更新 version.py 2025-09-19 08:50:33 +08:00
jxxghp
1145b32299 fix plugin install 2025-09-18 22:32:04 +08:00
jxxghp
ab71df0011 Merge pull request #4971 from cddjr/fix_glitch 2025-09-18 21:00:00 +08:00
jxxghp
fb137252a9 fix plugin id lower case 2025-09-18 18:00:15 +08:00
jxxghp
f57a680306 Plugin installation supports passing a repo_url parameter 2025-09-18 17:42:12 +08:00
景大侠
8bb3eaa320 fix: NoneType exception when fetching the last search results
glitchtip#14
2025-09-18 17:23:20 +08:00
景大侠
9489730a44 fix: NoneType exception when the u115 access_token refresh fails
glitchtip#49549
2025-09-18 17:23:20 +08:00
景大侠
d4795bb897 fix: unexpected keyword argument error when u115 retries a request
glitchtip#136696
2025-09-18 17:23:19 +08:00
景大侠
63775872c7 fix: NoneType error caused by TMDB connection failures
glitchtip#11
2025-09-18 17:05:09 +08:00
jxxghp
beff508a1f Merge pull request #4970 from cddjr/fix_trimemedia 2025-09-18 15:55:46 +08:00
景大侠
deaae8a2c6 fix 2025-09-18 15:39:10 +08:00
景大侠
46a27bd50c fix: 飞牛影视 2025-09-18 15:27:02 +08:00
jxxghp
24f2993433 Merge pull request #4958 from cddjr/fix_browse_mteam 2025-09-17 07:04:59 +08:00
景大侠
c80bfbfac5 fix: NoneType error when browsing 馒头 (M-Team) 2025-09-17 01:59:28 +08:00
jxxghp
06abfc45c7 Update version.py 2025-09-16 20:30:38 +08:00
jxxghp
440a773081 fix 2025-09-16 17:56:44 +08:00
jxxghp
0797bcb38b fix 2025-09-16 13:10:31 +08:00
jxxghp
d463b5bf0d Merge pull request #4955 from jxxghp/cursor/add-sort-type-to-subscription-queries-af67 2025-09-16 11:41:08 +08:00
Cursor Agent
0733c8edcc Add sort_type parameter to subscribe endpoints
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-09-16 03:29:28 +00:00
jxxghp
86c7c05cb1 feat: add optional parameters to the subscribe share data endpoint 2025-09-16 07:38:56 +08:00
jxxghp
18ff7ce753 feat: add optional parameters to subscription statistics 2025-09-16 07:37:14 +08:00
jxxghp
8f2ed1004d Merge pull request #4952 from cddjr/fix_file_perm 2025-09-16 07:00:45 +08:00
景大侠
14961323c3 fix umask 2025-09-15 22:01:00 +08:00
景大侠
f8c682b183 fix: fix scraped files being created with 0600 permissions 2025-09-15 21:49:37 +08:00
jxxghp
dd92708f60 Merge pull request #4947 from pluto0x0/fix/4941-mttorent-imdb-search 2025-09-15 14:23:17 +08:00
Zifan Ying
4d9eeccefa fix: mtorrent provides the full link when searching IMDb
fix: mtorrent IMDb searches must provide the full link (e.g. https://www.imdb.com/title/tt3058674)
Add the link prefix when the keyword is an IMDb entry
See https://wiki.m-team.cc/zh-tw/imdbtosearch
 
issue: https://github.com/jxxghp/MoviePilot/issues/4941
2025-09-15 00:31:45 -05:00
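As the commit note says, the fix amounts to prefixing a bare IMDb id with the full title URL before sending it to the site. A rough illustration; the helper name is hypothetical and the real spider applies this inside its search request building.

```python
def normalize_imdb_keyword(keyword: str) -> str:
    """M-Team's IMDb search expects the full title URL, not the bare tt id."""
    if keyword.startswith("tt") and keyword[2:].isdigit():
        return f"https://www.imdb.com/title/{keyword}"
    return keyword
```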
jxxghp
cd7b251031 Merge pull request #4946 from developer-wlj/wlj0914 2025-09-14 17:30:11 +08:00
developer-wlj
db614180b9 Revert "refactor: optimize temp file creation and upload logic"
This reverts commit 77c0f8f39e.
2025-09-14 17:14:52 +08:00
jxxghp
b6e527e5f4 Merge pull request #4945 from developer-wlj/wlj0914 2025-09-14 16:54:37 +08:00
developer-wlj
77c0f8f39e refactor: optimize temp file creation and upload logic
- Use a with statement to manage temp file creation and closing automatically, improving readability and safety
- Restructure the code to reduce nested try statements for clarity
2025-09-14 16:46:27 +08:00
jxxghp
58816d73c8 Merge pull request #4944 from developer-wlj/wlj0914 2025-09-14 16:42:37 +08:00
developer-wlj
3b194d282e fix: fix scraping failures on Windows caused by the temp file being held open
- Change the temp file creation and deletion logic in two functions
- Replace automatic deletion with manual deletion to ensure temp files are cleaned up properly
- Add exception handling to log failed temp file deletions
2025-09-14 16:28:24 +08:00
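The Windows fix described above boils down to `NamedTemporaryFile(delete=False)` plus a manual, logged cleanup: Windows refuses to reopen a file that another handle still holds open, so the file must be closed before the consumer reads it and deleted by hand afterwards. A minimal sketch under that assumption; the function name, `.jpg` suffix, and upload callback are illustrative, not the project's actual code.

```python
import os
import tempfile


def write_temp_then_upload(data: bytes, upload) -> None:
    """Write data to a temp file, hand the path to `upload`, then clean up.

    delete=False replaces automatic deletion so we can close the handle
    (required on Windows before another open) and remove the file manually."""
    tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
    try:
        tmp.write(data)
        tmp.close()  # release the handle so Windows allows re-opening
        upload(tmp.name)
    finally:
        try:
            os.unlink(tmp.name)
        except OSError as err:
            # log rather than crash when cleanup fails
            print(f"failed to remove temp file {tmp.name}: {err}")
```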
jxxghp
397f66433d v2.8.0 2025-09-13 15:58:00 +08:00
jxxghp
04a4ed1d0e fix delete_media_file 2025-09-13 14:10:15 +08:00
jxxghp
625850d4e7 fix 2025-09-13 13:35:51 +08:00
jxxghp
6c572baca5 rollback 2025-09-13 13:32:48 +08:00
jxxghp
ee0406a13f Handle smb protocol key error during disconnect (#4938)
* Refactor: Improve SMB connection handling and add signal handling

Co-authored-by: jxxghp <jxxghp@qq.com>

* Remove test_smb_fix.py

Co-authored-by: jxxghp <jxxghp@qq.com>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-09-13 11:25:29 +08:00
jxxghp
608a049ba3 fix smb delete 2025-09-13 11:05:21 +08:00
jxxghp
4d9b5198e2 Enhance delete support for SMB storage 2025-09-13 10:56:45 +08:00
jxxghp
24b6c970aa feat: Emby username 2025-09-13 10:34:41 +08:00
jxxghp
239c47f469 fix #4917 2025-09-13 10:13:33 +08:00
jxxghp
f0fc64c517 fix #4917 2025-09-13 10:12:40 +08:00
jxxghp
8481fd38ce fix #4933 2025-09-13 09:54:28 +08:00
jxxghp
5f425129d5 fix #4934 2025-09-13 09:46:04 +08:00
jxxghp
92955b1315 fix: run file organization in a forked process 2025-09-13 08:56:05 +08:00
jxxghp
a3872d5bb5 fix: run file organization in a forked process 2025-09-13 08:50:20 +08:00
jxxghp
a123ff2c04 feat: run file organization in a forked process 2025-09-13 08:32:31 +08:00
jxxghp
188de34306 mini chunk size 2025-09-12 21:45:26 +08:00
jxxghp
3d43750e9b fix async event 2025-09-10 17:33:12 +08:00
jxxghp
fea228c68d add SUPERUSER_PASSWORD 2025-09-10 15:42:17 +08:00
jxxghp
a71a28e563 Update config.py 2025-09-10 07:00:10 +08:00
jxxghp
3b5d4982b5 add wizard flag 2025-09-09 13:50:11 +08:00
jxxghp
b201e9ab8c Revert "feat: operate on files in a child process"
This reverts commit 4f304a70b7.
2025-09-08 17:23:25 +08:00
jxxghp
d30b9282fd fix alipan u115 error log 2025-09-08 17:13:01 +08:00
jxxghp
4f304a70b7 feat: operate on files in a child process 2025-09-08 16:59:29 +08:00
jxxghp
59a54d4f04 fix plugin cache 2025-09-08 13:27:32 +08:00
jxxghp
1e94d794ed fix log 2025-09-08 12:12:00 +08:00
jxxghp
5bd210406b Merge pull request #4918 from cddjr/fix_4853 2025-09-08 11:36:41 +08:00
景大侠
e00514d36d fix: convert RSS publish dates to the local timezone 2025-09-08 11:28:08 +08:00
jxxghp
f013bf1931 fix 2025-09-08 10:59:28 +08:00
jxxghp
107cbbad1d fix 2025-09-08 10:54:45 +08:00
jxxghp
481f1f9d30 add full gc scheduler 2025-09-08 10:49:09 +08:00
jxxghp
704364061c fix redis test 2025-09-08 09:59:11 +08:00
jxxghp
c1bd2d6cf1 fix: optimize downloads 2025-09-08 09:50:08 +08:00
jxxghp
a018e1228c Merge pull request #4904 from DDS-Derek/fix_gosu 2025-09-05 21:40:41 +08:00
DDSRem
d962d9c7f6 feat(docker): add START_NOGOSU mode
fix https://github.com/jxxghp/MoviePilot/issues/4889
2025-09-05 21:30:59 +08:00
jxxghp
4ea28cbca5 fix #4902 2025-09-05 21:09:05 +08:00
jxxghp
1b48b8b4cc Merge pull request #4902 from DDS-Derek/dev 2025-09-05 20:06:42 +08:00
jxxghp
73df197e33 Merge pull request #4903 from imtms/v2 2025-09-05 20:05:28 +08:00
TMs
bdc66e55ca fix(LocalStorage): 添加源文件与目标文件相同的检查,防止文件被删除。 2025-09-05 20:02:37 +08:00
DDSRem
926343ee86 fix(u115): code logic vulnerabilities 2025-09-05 19:37:41 +08:00
DDSRem
8e6021c5e7 fix(u115): code logic vulnerabilities 2025-09-05 19:23:23 +08:00
jxxghp
ac2b6c76ce Update version.py 2025-09-05 12:04:26 +08:00
jxxghp
9e966d0a7f Merge pull request #4898 from wumode/fix_alist 2025-09-04 21:16:58 +08:00
wumode
6c10defaa1 fix(Alist): add type hints 2025-09-04 21:08:25 +08:00
wumode
b6a76f6f7c fix(Alist): add __len__() 2025-09-04 20:47:13 +08:00
jxxghp
84e5b77a5c rollback orjson 2025-09-04 11:53:39 +08:00
jxxghp
89b0ea0bf1 remove monitoring 2025-09-04 11:23:22 +08:00
jxxghp
48aeb98bf1 add orjson 2025-09-04 08:52:36 +08:00
jxxghp
8a5d864812 Update config.py 2025-09-04 08:28:42 +08:00
jxxghp
ae79e645a6 Merge pull request #4893 from Aqr-K/feat-plugin-wheels 2025-09-03 14:30:01 +08:00
Aqr-K
0947deb372 fix plugin.py 2025-09-03 14:27:24 +08:00
jxxghp
69c92911a2 Update category.yaml 2025-09-03 14:26:40 +08:00
jxxghp
b16bb37b75 Merge pull request #4892 from Aqr-K/feat-plugin-wheels 2025-09-03 14:21:08 +08:00
Aqr-K
9c9ec8adf2 feat(plugin): Implement robust dependency installation with embedded wheels
- Support installing dependencies via wheels embedded in the plugin
2025-09-03 14:13:32 +08:00
jxxghp
eb0e67fc42 fix logging 2025-09-03 12:42:13 +08:00
jxxghp
9cc50bddab Merge pull request #4764 from 2Dou/v2 2025-09-03 12:01:37 +08:00
jxxghp
d3ba0fa487 Update category.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-09-03 11:58:07 +08:00
jxxghp
39f6505a80 fix: optimize parameters and use orjson 2025-09-03 09:51:24 +08:00
jxxghp
36a6802439 fix:#4876 2025-09-02 12:45:44 +08:00
jxxghp
d7e2633a92 fix: remove update blocking 2025-09-02 12:16:45 +08:00
jxxghp
88049e741e add SUBSCRIBE_SEARCH_INTERVAL 2025-09-02 11:41:52 +08:00
jxxghp
ff7fb14087 fix cache_clear 2025-09-02 08:35:48 +08:00
jxxghp
816c64bd48 Merge pull request #4883 from cikezhu/v2 2025-09-01 18:32:21 +08:00
cikezhu
d2756e6f2d schedule() # this returns a coroutine object, but it was never awaited 2025-09-01 17:39:46 +08:00
jxxghp
147e12acbb Merge pull request #4879 from sebastian0619/v2 2025-08-31 19:04:38 +08:00
jxxghp
4098018ee9 Update entrypoint.sh
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-31 19:04:24 +08:00
Sebastian
133e7578b9 Update NGINX SSL port configuration 2025-08-31 17:17:26 +08:00
jxxghp
74a2bdbf09 Merge pull request #4872 from Aqr-K/feat/v2.7.8/string/natural_sort 2025-08-30 09:45:23 +08:00
Aqr-K
f22bc68af4 Update string.py 2025-08-30 08:59:35 +08:00
Aqr-K
26cc6da650 fix(storage): Adjust to use natural_sort_key 2025-08-30 08:48:38 +08:00
Aqr-K
d21f1f1b87 feat(string): add natural_sort_key function 2025-08-30 08:44:41 +08:00
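The `natural_sort_key` commit above adds natural ordering so names like `E10` sort after `E2`. A minimal sketch of the usual digit-run approach (the actual function in `app/utils/string.py` may differ in detail):

```python
import re

def natural_sort_key(text: str):
    """Split a string into digit and non-digit runs so 'file10' sorts after 'file2'."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", text)]

names = ["S1E10", "S1E2", "S1E1"]
print(sorted(names, key=natural_sort_key))  # ['S1E1', 'S1E2', 'S1E10']
```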
jxxghp
7cdaafffe1 Merge pull request #4867 from aotuwuxi/hotfix/250829 2025-08-29 13:46:48 +08:00
jxxghp
0265dca197 Merge pull request #4866 from lostwindsenril/patch-1 2025-08-29 13:45:49 +08:00
wuxi
9d68366043 fix: workflows calling plugins could not access object attributes 2025-08-29 13:19:50 +08:00
lostwindsenril
c8c671d915 Update app/modules/indexer/spider/mtorrent.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-29 13:07:31 +08:00
lostwindsenril
142daa9d15 Support remaining-promotion-period detection for M-Team (馒头)
Add freedate to torrent if discountEndTime exists
2025-08-29 13:04:17 +08:00
jxxghp
2552219991 Update version.py 2025-08-28 11:11:32 +08:00
jxxghp
a038b698d7 fix haidan 2025-08-28 09:36:19 +08:00
jxxghp
a3b222574e add thetvdb cache 2025-08-28 08:05:10 +08:00
jxxghp
e0cd467293 rollback fix #4856 2025-08-28 07:51:05 +08:00
jxxghp
9c056030d2 fix: catch 115 & alipan request exceptions 2025-08-27 20:21:06 +08:00
jxxghp
19efa9d4cc fix #4795 2025-08-27 16:15:45 +08:00
jxxghp
90633a6495 fix #4851 2025-08-27 15:57:43 +08:00
jxxghp
edc432fbd8 fix #4846 2025-08-27 12:45:23 +08:00
jxxghp
1b7bdbf516 fix #4834 2025-08-27 08:28:16 +08:00
jxxghp
8c1be70c85 Update version.py 2025-08-26 12:20:16 +08:00
jxxghp
b8e0c0db9e feat: fine-grained event errors 2025-08-26 08:41:47 +08:00
jxxghp
7b7fb6cc82 Merge pull request #4836 from jxxghp/cursor/alter-siteuser-data-userid-to-character-type-9f4d 2025-08-25 22:05:19 +08:00
Cursor Agent
62512ba215 Remove SQLite-specific migration code for userid field
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 14:00:33 +00:00
Cursor Agent
e1beb64c01 Simplify userid conversion to integer in Synology Chat module
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:58:15 +00:00
Cursor Agent
c81f26ddad Remove downgrade methods for PostgreSQL and SQLite userid migration
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:56:21 +00:00
Cursor Agent
340114c2a1 Remove migration README after completing SiteUserData userid type migration
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:54:58 +00:00
Cursor Agent
cd7767b331 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:54:48 +00:00
Cursor Agent
25289dad8a Migrate SiteUserData userid field from Integer to String type
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:50:58 +00:00
jxxghp
47c6917129 remove _check_restart_policy 2025-08-25 21:30:53 +08:00
jxxghp
6379cda148 fix async scheduled service 2025-08-25 21:19:07 +08:00
jxxghp
91a124ab8f fix async scheduled service 2025-08-25 20:44:38 +08:00
jxxghp
2357a7135e fix run_async 2025-08-25 17:46:06 +08:00
jxxghp
da0b3b3de9 fix: calendar cache 2025-08-25 16:46:10 +08:00
jxxghp
6664fb1716 feat: add automatic caching for plugins and the calendar 2025-08-25 16:37:02 +08:00
jxxghp
1206f24fa9 Fix concurrency issue when iterating the cache 2025-08-25 13:11:44 +08:00
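The cache-iteration fix above addresses the classic hazard of mutating a mapping while iterating it from another thread. A minimal sketch of the standard remedy, snapshotting the keys under a lock before deleting (names here are illustrative, not MoviePilot's actual cache API):

```python
import threading

cache = {"a": 1, "b": 2, "c": 3}
lock = threading.Lock()

def expire_matching(predicate):
    # Iterating a dict while another thread mutates it raises RuntimeError,
    # so take a snapshot of the keys first and delete afterwards.
    with lock:
        doomed = [k for k in list(cache.keys()) if predicate(cache[k])]
        for k in doomed:
            cache.pop(k, None)
    return doomed

print(expire_matching(lambda v: v >= 2))  # ['b', 'c']
```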
jxxghp
ffb5823e84 fix #4829 optimize module import logic; add special handling for Async classes 2025-08-25 08:14:43 +08:00
jxxghp
d45a7fb262 Update version.py 2025-08-24 19:59:31 +08:00
jxxghp
918d192c0f OpenList: automatically retry fetching file items with a delay 2025-08-24 19:47:00 +08:00
jxxghp
f7cd6eac50 feat: manual abort for file organizing 2025-08-24 19:17:41 +08:00
jxxghp
88f4428ff0 fix bug 2025-08-24 17:07:45 +08:00
jxxghp
069ea22ba2 fix bug 2025-08-24 16:55:37 +08:00
jxxghp
8fac8c5307 fix progress step 2025-08-24 16:33:44 +08:00
jxxghp
2285befebb fix cache set 2025-08-24 16:10:48 +08:00
jxxghp
1cd0648e4e fix cache set 2025-08-24 15:36:56 +08:00
jxxghp
0b7ba285c6 fix: graceful-shutdown timeout handling 2025-08-24 13:07:52 +08:00
jxxghp
30446c4526 fix cache is_redis 2025-08-24 12:27:14 +08:00
jxxghp
9b843c9ed2 fix: organize-history registration 2025-08-24 12:19:12 +08:00
jxxghp
2ce1c3bef8 feat: organize-progress registration 2025-08-24 12:04:05 +08:00
jxxghp
e463094dc7 feat: organize progress 2025-08-24 09:21:55 +08:00
jxxghp
71a9fe10f4 refactor ProgressHelper 2025-08-24 09:02:55 +08:00
jxxghp
ba146e13ef fix: optimize cache module declarations 2025-08-24 08:36:37 +08:00
jxxghp
c060d7e3e0 Update postgresql-setup.md 2025-08-23 22:26:34 +08:00
jxxghp
ba96678822 v2.7.5 2025-08-23 20:46:36 +08:00
jxxghp
4f6354f383 Merge pull request #4820 from DDS-Derek/dev 2025-08-23 18:46:52 +08:00
DDSRem
2766e80346 fix(database): use logger as log output
Co-Authored-By: Aqr-K <95741669+Aqr-K@users.noreply.github.com>
2025-08-23 18:36:11 +08:00
jxxghp
7cc3777a60 fix async cache 2025-08-23 18:34:47 +08:00
DDSRem
cb1dd9f17d fix(database): upgrade error in pg database
Co-Authored-By: Aqr-K <95741669+Aqr-K@users.noreply.github.com>
2025-08-23 18:12:13 +08:00
jxxghp
31f342fe4f fix torrent 2025-08-23 18:10:33 +08:00
jxxghp
e90359eb08 fix douban 2025-08-23 15:56:30 +08:00
jxxghp
58b0768a30 fix redis key 2025-08-23 15:53:03 +08:00
jxxghp
3b04506893 fix redis key 2025-08-23 15:40:38 +08:00
jxxghp
354165aa0a fix cache 2025-08-23 14:21:50 +08:00
jxxghp
343109836f fix cache 2025-08-23 14:06:44 +08:00
jxxghp
fcadac2adb Merge pull request #4817 from jxxghp/cursor/add-dict-operations-to-cachebackend-3877 2025-08-23 12:42:04 +08:00
Cursor Agent
5e7dcdfe97 Modify cache region key generation to use consistent prefix format
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:13:25 +00:00
Cursor Agent
2ec9a57391 Remove implementation and migration documentation files
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:07:04 +00:00
Cursor Agent
973c545723 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:06:16 +00:00
Cursor Agent
fd62eecfef Simplify TTLCache, remove dict-like methods, enhance Cache interface
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:01:17 +00:00
Cursor Agent
b5ca7058c2 Add helper methods for cache backend in sync and async versions
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 03:58:04 +00:00
Cursor Agent
57a48f099f Add dict-like operations to CacheBackend with sync and async support
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 03:50:52 +00:00
jxxghp
4699f511bf Handle magnet links in torrent parsing and downloader modules (#4815)
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 10:51:32 +08:00
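The commit above teaches the torrent-parsing and downloader modules to handle magnet links. A minimal sketch of the usual detection-and-hash-extraction step, using only the standard library (the real modules surely do more):

```python
from urllib.parse import urlparse, parse_qs

def is_magnet(uri: str) -> bool:
    """True if the URI is a magnet link rather than a .torrent URL."""
    return uri.strip().lower().startswith("magnet:?")

def parse_btih(uri: str):
    """Extract the BitTorrent info-hash from a magnet URI's xt parameter."""
    if not is_magnet(uri):
        return None
    query = parse_qs(urlparse(uri).query)
    for xt in query.get("xt", []):
        if xt.startswith("urn:btih:"):
            return xt.split(":")[-1]
    return None

print(parse_btih("magnet:?xt=urn:btih:ABCDEF0123&dn=demo"))  # ABCDEF0123
```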
jxxghp
cd8f7e72e0 Sync error fixes 2025-08-22 17:33:24 +08:00
jxxghp
78803fa284 fix search_imdbid type 2025-08-22 16:37:30 +08:00
jxxghp
2e8d75df16 fix monitor cache 2025-08-22 15:30:49 +08:00
jxxghp
7e3bbfd960 Merge pull request #4807 from carolcoral/v2 2025-08-22 15:23:04 +08:00
jxxghp
1734d53b3c Replace file-based snapshot caching with FileCache implementation (#4809)
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-22 13:59:30 +08:00
jxxghp
f37540f4e5 fix get_rss timeout 2025-08-22 11:44:16 +08:00
jxxghp
addb9d836a remove cache singleton 2025-08-22 11:33:53 +08:00
Carol
4184d8c7ac Add notes on database-migration exceptions
add: notes on migrating from SQLite to PostgreSQL
2025-08-22 10:55:26 +08:00
jxxghp
724c15a68c add plugin memory statistics API 2025-08-22 09:46:11 +08:00
jxxghp
499bdf9b48 fix cache clear 2025-08-22 07:22:23 +08:00
jxxghp
41cd1ccda1 Merge pull request #4803 from Sowevo/v2
Support negative LIMIT values
2025-08-22 07:20:21 +08:00
jxxghp
b9521cb3a9 Fix typo: change "未就续" to "未就绪" in module status messages (#4804)
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-22 07:05:16 +08:00
jxxghp
1f40663b90 Merge pull request #4802 from Aqr-K/remove-docker 2025-08-22 06:45:45 +08:00
sowevo
5261ed7c4c Handle negative values compatibly across both database libraries 2025-08-22 03:32:26 +08:00
sowevo
aa8768b18a Handle negative values compatibly across both database libraries 2025-08-22 03:00:50 +08:00
Aqr-K
aad07433f4 fix(docker): Remove musl-dev and related code 2025-08-22 01:20:50 +08:00
jxxghp
4a7630079b Merge pull request #4800 from DDS-Derek/dev 2025-08-21 22:18:16 +08:00
DDSRem
44a6ee1994 fix(docker): wrong working directory 2025-08-21 22:17:18 +08:00
jxxghp
56bd6e69ed Merge pull request #4799 from DDS-Derek/dev 2025-08-21 22:11:58 +08:00
DDSRem
d1e04588d0 feat(docker): refactor docker build process 2025-08-21 22:09:49 +08:00
jxxghp
21cdaef6d5 Merge pull request #4798 from DDS-Derek/dev 2025-08-21 21:57:49 +08:00
DDSRem
a1723d18fb fix(docker): remove unnecessary permission settings 2025-08-21 21:54:33 +08:00
jxxghp
9e065138e9 fix cache default 2025-08-21 21:49:00 +08:00
jxxghp
1c73c92bfd fix cache Singleton 2025-08-21 21:45:34 +08:00
jxxghp
bcd560d74e Merge pull request #4797 from DDS-Derek/dev 2025-08-21 21:28:40 +08:00
DDSRem
02339562ed fix(docker): reduce the number of image layers 2025-08-21 21:28:18 +08:00
DDSRem
e5804378c2 fix(docker): fuck ai bugs 2025-08-21 21:24:09 +08:00
jxxghp
da1c8a162d fix cache maxsize 2025-08-21 20:10:27 +08:00
jxxghp
d457a23a1f fix build 2025-08-21 19:24:04 +08:00
jxxghp
b6154e58b8 rollback dockerfile 2025-08-21 18:44:47 +08:00
jxxghp
5f18776c61 Update douban_cache.py 2025-08-21 17:52:55 +08:00
jxxghp
68b0b9ec7a Update tmdb_cache.py 2025-08-21 17:52:19 +08:00
jxxghp
0f5036972e v2.7.4 2025-08-21 17:03:17 +08:00
jxxghp
0b199b8421 fix TTLCache 2025-08-21 16:54:49 +08:00
jxxghp
a59730f6eb Optimize default values in the cache module 2025-08-21 16:29:49 +08:00
jxxghp
c6c84fe65b rename 2025-08-21 16:02:50 +08:00
jxxghp
03c757bba6 fix TTLCache 2025-08-21 13:17:59 +08:00
jxxghp
bfeb8d238a fix build 2025-08-21 12:45:05 +08:00
jxxghp
daf0c08c4b remove duplicate aiofiles 2025-08-21 12:33:51 +08:00
jxxghp
d12c1b9ac4 remove musl-dev 2025-08-21 12:32:53 +08:00
jxxghp
bc242f4fd4 fix yield 2025-08-21 12:04:15 +08:00
jxxghp
a240c1bca9 Optimize Dockerfile 2025-08-21 09:47:23 +08:00
jxxghp
219aa6c574 Merge pull request #4790 from wikrin/delete_media_file 2025-08-21 09:35:07 +08:00
Attente
abca1b481a refactor(storage): optimize empty-directory deletion logic
- add protection for the resource and media-library directories
- recursively check upward and delete empty directories
2025-08-21 09:16:15 +08:00
jxxghp
db72fd2ef5 fix 2025-08-21 09:07:28 +08:00
jxxghp
31cca58943 fix cache 2025-08-21 08:26:32 +08:00
jxxghp
c06a4b759c fix redis 2025-08-21 08:14:21 +08:00
jxxghp
f05a23a490 Update redis.py 2025-08-21 07:59:34 +08:00
jxxghp
1e0f2ffde0 Update config.py 2025-08-21 07:48:16 +08:00
jxxghp
06df42ee3d Update Dockerfile 2025-08-21 07:21:58 +08:00
jxxghp
65ee1638f7 add VENV_PATH 2025-08-21 00:28:32 +08:00
jxxghp
87eefe7673 Merge pull request #4788 from jxxghp/cursor/install-playwright-dependencies-in-dockerfile-b7d6
Install playwright dependencies in dockerfile
2025-08-21 00:16:48 +08:00
Cursor Agent
5c124d3988 fix: use full path for playwright command in Dockerfile
- Fix 'playwright: not found' error during Docker build
- Use /bin/playwright instead of playwright to ensure
  the command is executed from the virtual environment
- This resolves the issue where playwright install-deps chromium
  was failing because playwright wasn't in the system PATH
2025-08-20 16:16:02 +00:00
jxxghp
8c69ce624f Merge pull request #4787 from jxxghp/cursor/optimize-docker-build-and-pip-environment-e8ad
Optimize docker build and pip environment
2025-08-21 00:08:50 +08:00
Cursor Agent
bb73acdde5 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 16:06:39 +00:00
Cursor Agent
993bc3775b Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 16:04:44 +00:00
jxxghp
3d2ff28bcd fix download 2025-08-20 23:38:51 +08:00
jxxghp
9b78deb802 fix torrent 2025-08-20 23:07:29 +08:00
jxxghp
dadc525d0b feat: use the cache for torrent downloads 2025-08-20 22:03:18 +08:00
DDSRem
22b2140c94 fix requirement 2025-08-20 21:18:33 +08:00
jxxghp
f07496a4a0 fix cache 2025-08-20 21:11:10 +08:00
jxxghp
1b2938cbc8 Merge pull request #4785 from jxxghp/cursor/fix-postgresql-textual-sql-expression-error-e023 2025-08-20 20:13:56 +08:00
Cursor Agent
d4d2f58830 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 12:10:52 +00:00
jxxghp
b3113e13ec refactor: add file-cache composition 2025-08-20 19:04:07 +08:00
jxxghp
055c8e26f0 refactor: rework the cache system 2025-08-20 17:35:32 +08:00
jxxghp
2a7a7239d7 Add global image-cache config and a temp-file cleanup-days setting 2025-08-20 13:52:38 +08:00
jxxghp
2fa40dac3f Optimize resource management for the monitoring and messaging services 2025-08-20 13:35:24 +08:00
jxxghp
6b4fbd7dc2 Add PostgreSQL and Redis database modules, including module initialization and connection testing 2025-08-20 13:35:12 +08:00
jxxghp
5b0bb19717 Use the TTLCache from app.core.cache everywhere 2025-08-20 12:43:30 +08:00
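The commit above consolidates on a single `TTLCache` in `app.core.cache`. A minimal sketch of what a time-to-live cache of that shape does (the project's real implementation differs; names and eviction policy here are illustrative):

```python
import time

class TTLCache:
    """Minimal time-to-live cache sketch."""
    def __init__(self, maxsize: int = 128, ttl: float = 60.0):
        self.maxsize, self.ttl = maxsize, ttl
        self._data = {}  # key -> (expires_at, value)

    def set(self, key, value):
        if key not in self._data and len(self._data) >= self.maxsize:
            # evict the entry closest to expiry to make room
            self._data.pop(min(self._data, key=lambda k: self._data[k][0]))
        self._data[key] = (time.monotonic() + self.ttl, value)

    def get(self, key, default=None):
        item = self._data.get(key)
        if item is None or item[0] < time.monotonic():
            self._data.pop(key, None)  # drop the expired entry lazily
            return default
        return item[1]

cache = TTLCache(maxsize=2, ttl=0.05)
cache.set("tmdb:550", {"title": "Fight Club"})
print(cache.get("tmdb:550"))
time.sleep(0.1)
print(cache.get("tmdb:550"))  # None after the TTL expires
```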
jxxghp
843dfc430a fix log 2025-08-20 09:36:46 +08:00
jxxghp
69cb07c527 Optimize the caching mechanism to support switching between Redis and local cache 2025-08-20 09:16:30 +08:00
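The commit above makes the cache backend switchable between Redis and local memory. A minimal sketch of the factory pattern with fallback, under the assumption (not taken from the source) that a missing or unreachable Redis should degrade to the in-memory backend; `RedisBackend` here is a placeholder, not MoviePilot's class:

```python
class LocalBackend:
    """In-memory fallback cache."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value

class RedisBackend:
    """Placeholder: a real implementation would wrap a redis.Redis client."""
    def __init__(self, url):
        raise NotImplementedError("requires a running Redis and the redis package")

def make_cache(redis_url=None):
    # Switch backends based on configuration, falling back to local memory
    # when Redis is not configured or cannot be reached.
    if redis_url:
        try:
            return RedisBackend(redis_url)
        except Exception:
            pass
    return LocalBackend()

cache = make_cache(None)
cache.set("k", "v")
print(cache.get("k"))  # v
```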
jxxghp
89e8a64734 Rework the Redis caching mechanism 2025-08-20 08:51:03 +08:00
jxxghp
5eb2dec32d Add RedisHelper class 2025-08-20 08:50:45 +08:00
jxxghp
db0ea7d6c4 Fix database sequence errors (#4777)
* Fix database upgrade script to handle existing identity columns

Co-authored-by: jxxghp <jxxghp@live.cn>

* Improve identity column conversion with error handling and cleanup

Co-authored-by: jxxghp <jxxghp@live.cn>

* Fix database upgrade script to handle existing identity columns

Co-authored-by: jxxghp <jxxghp@live.cn>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 00:29:35 +08:00
jxxghp
1eb85003de Update version.py 2025-08-19 17:58:27 +08:00
jxxghp
cca170f84a Update emby.py 2025-08-19 15:30:22 +08:00
jxxghp
c8c016caa8 Update __init__.py 2025-08-19 14:27:02 +08:00
jxxghp
45d5874026 Update __init__.py 2025-08-19 14:20:46 +08:00
jxxghp
69b1ce60ff fix db config 2025-08-19 14:15:33 +08:00
jxxghp
3ff3e4b106 fix db config 2025-08-19 14:05:24 +08:00
jxxghp
dc50a68b01 Fix database table-name references 2025-08-19 12:54:47 +08:00
jxxghp
968cfd8654 fix db 2025-08-19 12:41:07 +08:00
jxxghp
cf28d93be6 fix db 2025-08-19 12:35:52 +08:00
jxxghp
be08d6ebb5 fix db 2025-08-19 12:02:53 +08:00
jxxghp
4bc24f3b00 fix db 2025-08-19 11:53:59 +08:00
jxxghp
15833f94cf fix db 2025-08-19 11:40:34 +08:00
jxxghp
aeb297efcf Optimize the site-activation check and simplify the database query conditions 2025-08-19 11:23:09 +08:00
jxxghp
d48c6b98e8 rollback local postgresql 2025-08-19 08:30:07 +08:00
jxxghp
b79ccfafed Optimize how PostgreSQL commands are executed in entrypoint.sh 2025-08-19 07:15:02 +08:00
jxxghp
c87ba59552 Update entrypoint.sh 2025-08-18 22:42:55 +08:00
jxxghp
91fd71c858 fix entrypoint.sh 2025-08-18 22:26:01 +08:00
jxxghp
6f64e67538 fix dockerfile 2025-08-18 21:42:44 +08:00
jxxghp
bd7a0b072f fix entrypoint.sh 2025-08-18 21:22:29 +08:00
jxxghp
01ca001c97 fix entrypoint.sh 2025-08-18 21:10:24 +08:00
jxxghp
324ad2a87c Optimize PostgreSQL data-directory initialization and startup logic 2025-08-18 20:55:33 +08:00
jxxghp
d9ad2630f0 fix postgresql 2025-08-18 19:14:47 +08:00
jxxghp
83958a4a48 fix postgresql 2025-08-18 19:12:20 +08:00
jxxghp
f6a6efdc42 fix app.env 2025-08-18 15:17:26 +08:00
jxxghp
1bbe7657b9 fix dockerfile 2025-08-18 11:42:53 +08:00
jxxghp
38189753b5 Add a new Docker image configuration to the build workflow 2025-08-18 11:31:00 +08:00
jxxghp
5b0e658617 Reorder items in the configuration file 2025-08-18 11:29:04 +08:00
jxxghp
b6cf54d57f Add PostgreSQL support 2025-08-18 11:19:17 +08:00
jxxghp
e8058c8813 Add PostgreSQL database support 2025-08-18 11:19:06 +08:00
jxxghp
784868048d Update scheduler.py 2025-08-18 07:04:39 +08:00
jxxghp
2bf9779f2f v2.7.2 2025-08-17 11:44:59 +08:00
jxxghp
d98ceea381 fix #4768 2025-08-17 11:44:09 +08:00
jxxghp
1ab2da74b9 use apipathlib 2025-08-17 09:00:02 +08:00
jxxghp
086b1f1403 Update message.py 2025-08-16 17:27:45 +08:00
2Dou
3723cf8ac2 Add exclusion support to the secondary-category config 2025-08-15 09:54:56 +08:00
jxxghp
19608fa98e Merge pull request #4756 from Sowevo/v2 2025-08-13 17:40:31 +08:00
sowevo
b0d17deda1 Parse the numeric ID from TMDB relative links. 2025-08-13 17:11:56 +08:00
sowevo
4c979c458e Parse the numeric ID from TMDB relative links. 2025-08-13 16:54:06 +08:00
jxxghp
c5e93169ad Update subscribe_oper.py 2025-08-13 10:10:42 +08:00
jxxghp
1e2ca294de Merge pull request #4747 from Pollo3470/fix-flaresolverr-proxy 2025-08-12 16:59:31 +08:00
Pollo
7165c4a275 fix: use a session for flaresolverr when the proxy requires authentication 2025-08-12 16:33:51 +08:00
Pollo
cbe81ba33c fix: proxy auth info was not passed when calling flaresolverr 2025-08-12 16:12:22 +08:00
jxxghp
fdbfae953d fix #4741 FlareSolverr uses the site's configured timeout, defaulting to 60 seconds when unset
close #4742
close https://github.com/jxxghp/MoviePilot-Frontend/pull/378
2025-08-12 08:04:29 +08:00
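The fix above falls back to a 60-second default when a site has no timeout configured. A minimal sketch of that fallback, feeding FlareSolverr's `request.get` command (which takes `maxTimeout` in milliseconds); the helper names are illustrative, not MoviePilot's:

```python
DEFAULT_TIMEOUT = 60  # seconds, used when the site sets none

def resolve_timeout(site_timeout):
    """Use the site's configured timeout; fall back to 60 seconds when unset."""
    return site_timeout if site_timeout and site_timeout > 0 else DEFAULT_TIMEOUT

def flaresolverr_payload(url: str, site_timeout=None) -> dict:
    # FlareSolverr's request.get command expects maxTimeout in milliseconds.
    return {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": resolve_timeout(site_timeout) * 1000,
    }

print(flaresolverr_payload("https://example.org", site_timeout=30))
```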
jxxghp
c7ba274877 Update browser.py 2025-08-11 23:35:05 +08:00
jxxghp
8b15a16ca1 Update browser.py 2025-08-11 22:20:22 +08:00
jxxghp
9f2c8d3811 v2.7.1 2025-08-11 21:51:34 +08:00
jxxghp
7343dfbed8 fix hddolby 2025-08-11 21:41:56 +08:00
jxxghp
90f74d8d2b feat: support FlareSolverr 2025-08-11 21:14:46 +08:00
jxxghp
7e3e0e1178 fix #4725 2025-08-11 18:29:29 +08:00
jxxghp
d890e38a10 fix #4724 2025-08-11 17:46:46 +08:00
jxxghp
e505b5c85f fix #4733 2025-08-11 16:41:29 +08:00
jxxghp
6230f55116 fix #4734 2025-08-11 16:34:36 +08:00
jxxghp
c8d0c14ebc Update plex.py 2025-08-11 13:57:03 +08:00
jxxghp
6ac8455c74 fix 2025-08-11 13:30:15 +08:00
jxxghp
143b21631f Merge pull request #4737 from baozaodetudou/nginx 2025-08-11 13:27:23 +08:00
doumao
d760facad8 nginx cache js bug 2025-08-11 13:13:29 +08:00
jxxghp
3a1a4c5cfe Update download.py 2025-08-10 22:15:30 +08:00
jxxghp
c3045e2cd4 Update mtorrent.py 2025-08-10 22:10:11 +08:00
jxxghp
1efb9af7ab Update nginx.common.conf 2025-08-10 21:32:53 +08:00
jxxghp
e03471159a Update version.py 2025-08-10 18:45:40 +08:00
jxxghp
a92e493742 fix README 2025-08-10 14:01:26 +08:00
jxxghp
225d413ed1 fix README 2025-08-10 13:52:35 +08:00
jxxghp
184e4ba7d5 fix plugin Release installation logic 2025-08-10 13:26:22 +08:00
jxxghp
917cae27b1 Update plugin release installation logic 2025-08-10 13:06:03 +08:00
jxxghp
60e0463051 fix 2025-08-10 12:53:42 +08:00
jxxghp
c15022c7d5 fix: install plugins via release 2025-08-10 12:45:38 +08:00
jxxghp
2a84e3a606 feat: async plugin installation 2025-08-10 10:10:30 +08:00
jxxghp
fddbbd5714 feat: install plugins via release 2025-08-10 10:00:13 +08:00
jxxghp
51b8f7c713 fix #4721 2025-08-10 09:11:44 +08:00
jxxghp
e97c246741 try fix #4716 2025-08-10 09:04:20 +08:00
jxxghp
9a81f55ac0 fix #4510 2025-08-10 08:51:52 +08:00
jxxghp
a38b702acc fix alist 2025-08-10 08:46:29 +08:00
jxxghp
e4e0605e92 Update metavideo.py 2025-08-08 10:19:21 +08:00
jxxghp
8875a8f12c Update nginx.common.conf 2025-08-07 11:42:52 +08:00
jxxghp
4dd1deefa5 Merge pull request #4709 from wikrin/v2 2025-08-07 06:54:24 +08:00
Attente
1f6dc93ea3 fix(transfer): directory monitoring could accidentally delete unfinished torrents
- return False immediately if the torrent has not finished downloading
2025-08-06 23:13:01 +08:00
jxxghp
426e920fff fix log 2025-08-06 16:54:24 +08:00
jxxghp
1f6bbce326 fix: optimize the retry-recognition count limit 2025-08-06 16:48:37 +08:00
jxxghp
41f89a35fa Switch the v2 release to latest 2025-08-06 16:36:29 +08:00
jxxghp
099d7874d7 - Fix log-rotation issue 2025-08-06 16:32:54 +08:00
jxxghp
e2367103a1 - Fix log-rotation issue 2025-08-06 16:29:51 +08:00
jxxghp
37f8ba7d72 fix #4705 2025-08-06 16:24:47 +08:00
jxxghp
c20bd84edd fix plex error 2025-08-06 12:21:02 +08:00
jxxghp
b4ee0d2487 Merge remote-tracking branch 'origin/v2' into v2 2025-08-05 20:14:06 +08:00
jxxghp
420fa7645f mask key 2025-08-05 20:14:00 +08:00
jxxghp
5bb1e72760 Update README.md 2025-08-05 19:37:51 +08:00
jxxghp
e2a007b62a Update README.md 2025-08-05 19:37:33 +08:00
jxxghp
210813367f fix #4694 2025-08-04 20:50:06 +08:00
jxxghp
770a50764e Update transferhistory.py 2025-08-04 19:39:51 +08:00
jxxghp
e339a22aa4 Update version.py 2025-08-04 19:04:32 +08:00
jxxghp
913afed378 fix #4700 2025-08-04 12:19:24 +08:00
jxxghp
db3efb4452 fix SiteStatistic 2025-08-04 08:34:31 +08:00
jxxghp
840351acb7 fix Subscribe api 2025-08-04 07:05:23 +08:00
jxxghp
da76a7f299 Merge pull request #4693 from wumode/fix_4691 2025-08-03 15:40:19 +08:00
wumode
cbd999f88d Update app/modules/qbittorrent/qbittorrent.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-03 14:12:57 +08:00
wumode
2fa8a266c5 fix:#4691 2025-08-03 13:56:58 +08:00
jxxghp
08aa749a53 Update subscribe.py 2025-08-02 20:06:26 +08:00
jxxghp
2379f04d2a Merge pull request #4689 from wikrin/v2 2025-08-02 19:48:59 +08:00
Attente
0e73598d1c refactor(transfer): optimize torrent-file deletion in move mode
- refactored and simplified the torrent-deletion code
- added _is_blocked_by_exclude_words to check whether a file is blocked
- added _can_delete_torrent to decide whether a torrent file can be deleted
2025-08-02 19:42:34 +08:00
jxxghp
964e6eb0e8 Merge pull request #4688 from Pollo3470/v2 2025-08-02 16:34:27 +08:00
Pollo
0430e6c6d4 fix: requests failing when using a SOCKS proxy 2025-08-02 16:20:52 +08:00
jxxghp
db88358eca Update webhook.py 2025-08-02 15:57:08 +08:00
jxxghp
723e9b0018 Update version.py 2025-08-02 15:05:39 +08:00
jxxghp
f3db27a8da fix SiteStatistic note 2025-08-02 14:23:16 +08:00
jxxghp
0fb7a73fc9 fix RetryException 2025-08-02 11:32:42 +08:00
jxxghp
418e6bd085 fix cache_clear 2025-08-02 10:29:11 +08:00
jxxghp
5a5c4ace6b fix real-time log performance 2025-08-02 10:24:46 +08:00
jxxghp
c2c8214075 refactor: add coroutine handling for subscriptions 2025-08-02 09:14:38 +08:00
jxxghp
e5d2ade6e6 fix handling of plugin sync-function calls from a coroutine context 2025-08-02 08:41:44 +08:00
jxxghp
e32b6e07b4 fix async apis 2025-08-01 20:27:22 +08:00
jxxghp
cc69d3b8d1 Update __init__.py 2025-08-01 18:05:06 +08:00
jxxghp
1dd3af44b5 add FastAPI real-time performance monitoring 2025-08-01 17:47:55 +08:00
jxxghp
8ab233baef fix bug 2025-08-01 16:39:40 +08:00
jxxghp
104138b9a7 fix: reduce useless searches 2025-08-01 15:18:05 +08:00
jxxghp
0c8fd5121a fix async apis 2025-08-01 14:19:34 +08:00
jxxghp
61f26d331b add MAX_SEARCH_NAME_LIMIT default 2 2025-08-01 12:33:54 +08:00
jxxghp
97817cd808 fix tmdb async 2025-08-01 12:05:08 +08:00
jxxghp
45bcc63c06 fix rate_limit async 2025-08-01 11:48:37 +08:00
jxxghp
00779d0f10 fix search async 2025-08-01 11:38:23 +08:00
jxxghp
d657bf8ed8 feat: coroutine search, part 3 2025-08-01 08:40:25 +08:00
jxxghp
4fcdd05e6a fix indexer async 2025-08-01 08:28:19 +08:00
jxxghp
e6916946a9 fix log && run_in_threadpool 2025-08-01 07:10:02 +08:00
jxxghp
acd7013dc6 fix site 2025-07-31 21:43:55 +08:00
jxxghp
039d876e3f feat: coroutine search, part 2 2025-07-31 21:39:36 +08:00
jxxghp
3fc2c7d6cc feat: coroutine search, part 2 2025-07-31 21:26:55 +08:00
jxxghp
109164b673 feat: coroutine search, part 1 2025-07-31 20:51:39 +08:00
jxxghp
673a03e656 feat: use coroutines for local-existence checks 2025-07-31 20:19:28 +08:00
jxxghp
1e976e6d96 fix db 2025-07-31 19:52:07 +08:00
jxxghp
8efba30adb fix db 2025-07-31 19:51:48 +08:00
jxxghp
713d44eac3 feat: non-blocking file log handling 2025-07-31 19:34:50 +08:00
jxxghp
aea44c1d97 feat: coroutine handling for keyed events 2025-07-31 17:27:15 +08:00
jxxghp
1e61e60d73 feat: coroutine handling for plugin queries 2025-07-31 16:58:54 +08:00
jxxghp
a0e4b4a56e feat: coroutine handling for media queries 2025-07-31 15:24:50 +08:00
jxxghp
983f8fcb03 fix httpx 2025-07-31 13:51:43 +08:00
jxxghp
6afdde7dc1 Update discover to an async implementation 2025-07-31 13:36:43 +08:00
jxxghp
6873de7243 fix async 2025-07-31 13:32:47 +08:00
jxxghp
ee4d6d0db3 fix cache 2025-07-31 09:55:47 +08:00
jxxghp
dee1212a76 feat: use the async API for recommendations 2025-07-31 09:50:49 +08:00
jxxghp
ceda69aedd add async apis 2025-07-31 09:15:38 +08:00
jxxghp
75ea7d7601 add async api 2025-07-31 09:10:45 +08:00
jxxghp
8b75d2312c add async run_module 2025-07-31 08:56:32 +08:00
jxxghp
ca51880798 fix themoviedb api 2025-07-31 08:40:24 +08:00
jxxghp
8b708e8939 fix themoviedb api 2025-07-31 08:34:47 +08:00
jxxghp
b6ff9f7196 fix douban api 2025-07-31 08:18:00 +08:00
jxxghp
67229fd032 fix 2025-07-31 08:11:27 +08:00
jxxghp
d382eab355 fix subscribe helper 2025-07-31 07:26:58 +08:00
jxxghp
d8f10e9ac4 fix workflow helper 2025-07-31 07:17:05 +08:00
jxxghp
749aaeb003 fix async 2025-07-31 07:07:14 +08:00
jxxghp
c5a3bbcecf Update subscribe.py 2025-07-31 00:11:40 +08:00
jxxghp
27ac41531b Update subscribe.py 2025-07-30 23:46:21 +08:00
jxxghp
423c9af786 Add async support to the TheMovieDb module (part 1) 2025-07-30 22:28:12 +08:00
jxxghp
232759829e Add async API support to the Bangumi and Douban modules 2025-07-30 22:18:11 +08:00
jxxghp
71f7bc7b1b fix 2025-07-30 21:06:55 +08:00
jxxghp
ae4f03e272 fix logging api 2025-07-30 21:01:28 +08:00
jxxghp
acb5a7e50b fix 2025-07-30 19:59:25 +08:00
jxxghp
c8749b3c9c add aiopath 2025-07-30 19:49:59 +08:00
jxxghp
49647e3bb5 fix asyncio sleep 2025-07-30 18:53:23 +08:00
jxxghp
48d353aa90 fix async oper 2025-07-30 18:48:50 +08:00
jxxghp
edec18cacb fix 2025-07-30 18:37:16 +08:00
jxxghp
cd8661abc1 Rework workflow APIs to support async operations; introduce async database management 2025-07-30 18:21:13 +08:00
jxxghp
5f6310f5d6 fix httpx proxy 2025-07-30 17:34:09 +08:00
jxxghp
42d955b175 Rework subscription and user APIs to support async operations 2025-07-30 15:23:25 +08:00
jxxghp
21541bc468 Update history APIs to support async operations 2025-07-30 14:27:38 +08:00
jxxghp
f14f4e1e9b Add async database support; update the related models and session management 2025-07-30 13:18:45 +08:00
jxxghp
6d1de8a2e4 add async DB converter 2025-07-30 08:59:11 +08:00
jxxghp
0053d31f84 add async DB converter 2025-07-30 08:54:04 +08:00
jxxghp
f077a9684b Add an async request utility class; make fetch_image and proxy_img async to improve performance 2025-07-30 08:30:24 +08:00
jxxghp
2428d58e93 Use aiofiles for async file operations to improve performance; adjust the number of uvicorn workers. 2025-07-30 07:56:56 +08:00
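The commit above adopts the aiofiles package so file I/O no longer blocks the event loop. A minimal sketch of the same idea using only the standard library, pushing blocking reads to a worker thread with `asyncio.to_thread` instead of aiofiles (a deliberate substitution to keep the example self-contained):

```python
import asyncio
import os
import tempfile

def _read(path: str) -> str:
    # plain blocking read, kept out of the event loop
    with open(path, encoding="utf-8") as fh:
        return fh.read()

async def read_file(path: str) -> str:
    # run the blocking read in a worker thread so the loop stays responsive
    return await asyncio.to_thread(_read, path)

async def main() -> str:
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as fh:
            fh.write("hello")
        return await read_file(path)
    finally:
        os.remove(path)

print(asyncio.run(main()))  # hello
```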
jxxghp
5340e3a0a7 fix 2025-07-28 16:55:22 +08:00
jxxghp
70dd8f0f1d Update version.py 2025-07-28 15:15:56 +08:00
jxxghp
8fa76504c3 fix 2025-07-28 08:13:39 +08:00
jxxghp
0899cb4e1d fix 2025-07-28 08:11:39 +08:00
jxxghp
ee7a2a70a6 Merge pull request #4666 from wumode/refactor_polling_observer 2025-07-27 16:15:33 +08:00
wumode
d57d1ac15e fix: bug 2025-07-27 14:58:11 +08:00
wumode
68c29d89c9 refactor: polling_observer 2025-07-27 12:45:57 +08:00
jxxghp
721648ffdf fix #4653 2025-07-26 23:04:40 +08:00
jxxghp
8437f39bf6 fix #4655 2025-07-26 22:59:37 +08:00
jxxghp
48b15c60e7 Merge pull request #4658 from jnwan/v2 2025-07-25 14:06:22 +08:00
jnwan
e350122125 Add flag to ignore check folder modtime for rclone snapshot 2025-07-24 21:34:17 -07:00
jxxghp
0cce97f373 remove gc 2025-07-25 11:47:41 +08:00
jxxghp
d8cacc0811 fix: skip the subscription-refresh task when there are no subscriptions 2025-07-24 11:08:47 +08:00
jxxghp
7abaf70bb8 fix workflow 2025-07-24 09:54:46 +08:00
jxxghp
232fe4d15e fix deadlock 2025-07-23 17:03:50 +08:00
jxxghp
d6d12c0335 feat: add a dictionary of Chinese display names for event types 2025-07-23 15:35:04 +08:00
jxxghp
8e4f12804b Merge pull request #4648 from hyuan280/v2 2025-07-23 15:09:05 +08:00
jxxghp
c21ba5c521 Merge pull request #4649 from roukaixin/v2 2025-07-23 15:07:44 +08:00
jxxghp
dfa3d47261 Update plugin.py 2025-07-23 06:50:01 +08:00
jxxghp
924f59afff fix bug 2025-07-22 21:02:02 +08:00
roukaixin
673b282d6c Merge branch 'jxxghp:v2' into v2 2025-07-22 20:48:29 +08:00
roukaixin
1c761f89e5 fix: TZ environment variable not taking effect 2025-07-22 20:46:57 +08:00
jxxghp
f61cd969b9 fix 2025-07-22 20:46:42 +08:00
jxxghp
e39a130306 feat: workflows support event triggers 2025-07-22 20:23:53 +08:00
黄渊
13b6ea985e fix: categories may not apply when browsing resources; split before comparing category IDs 2025-07-22 19:02:25 +08:00
jxxghp
2f1e55fa1e Add search-count statistics and a forced-sleep mechanism to optimize search performance 2025-07-21 12:25:52 +08:00
jxxghp
776f629771 fix User-Agent 2025-07-20 15:50:45 +08:00
jxxghp
d9e9edb2c4 Update version.py 2025-07-20 13:32:54 +08:00
jxxghp
753c074e59 fix #4625 2025-07-20 12:45:53 +08:00
jxxghp
d92c82775a fix #4637 2025-07-20 12:28:12 +08:00
jxxghp
215cc09c1f fix 2025-07-20 11:50:44 +08:00
jxxghp
7f302c13c7 fix #4632 2025-07-20 09:14:47 +08:00
jxxghp
de6a094d10 fix display 2025-07-20 08:49:21 +08:00
jxxghp
a94e1a8314 Merge pull request #4631 from ChanningHe/fix-telegram-msg 2025-07-18 21:22:17 +08:00
ChanningHe
f5efdd665b fix: strip the @bot part from Telegram messages for consistent handling 2025-07-18 21:59:04 +09:00
jxxghp
43e25e8717 fix share cache 2025-07-18 17:36:28 +08:00
ChanningHe
a8026fefc1 fix: in Telegram chats, only react when the bot is @-mentioned 2025-07-18 17:55:43 +09:00
ChanningHe
fdb36957c9 fix: Telegram bot messages could not be pushed to groups, only to a userid 2025-07-18 17:40:06 +09:00
jxxghp
ea433ff807 add site api 2025-07-18 08:04:05 +08:00
jxxghp
8902fb50d6 Update context.py 2025-07-16 22:22:45 +08:00
jxxghp
b6aa013eb3 v2.6.6 2025-07-16 20:25:43 +08:00
jxxghp
034b43bf70 fix context 2025-07-16 19:59:06 +08:00
jxxghp
59e9032286 add subscribe share statistic api 2025-07-16 08:47:54 +08:00
jxxghp
52a98efd0a add subscribe share statistic api 2025-07-16 08:31:28 +08:00
jxxghp
90cc91aa7f Merge pull request #4614 from Aqr-K/feature-ua 2025-07-15 06:47:34 +08:00
Aqr-K
1973a26e83 fix: remove redundant code and simplify 2025-07-14 22:19:48 +08:00
Aqr-K
6519ad25ca fix is_aarch 2025-07-14 22:17:04 +08:00
Aqr-K
cacfde8166 fix 2025-07-14 22:14:52 +08:00
Aqr-K
df85873726 feat(ua): add cup_arch; append cup_arch to the USER_AGENT value 2025-07-14 22:04:09 +08:00
jxxghp
dfea294cc9 fix ua 2025-07-14 13:42:49 +08:00
jxxghp
d35b855404 fix ua 2025-07-14 13:30:18 +08:00
jxxghp
7a1cbf70e3 feat: specific default UA 2025-07-14 12:35:08 +08:00
jxxghp
f260990b86 Update version.py 2025-07-13 15:14:10 +08:00
jxxghp
6affbe9b55 fix #4558 2025-07-13 15:04:41 +08:00
jxxghp
dbe3a10697 fix 2025-07-13 14:53:39 +08:00
jxxghp
3c25306a5d fix #4590 2025-07-13 14:43:48 +08:00
jxxghp
17f4d49731 fix #4594 2025-07-13 14:24:41 +08:00
jxxghp
e213b5cc64 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into v2 2025-07-13 14:14:26 +08:00
jxxghp
65e5dad44b Optimize torrent and leftover-directory deletion in move mode 2025-07-13 14:14:24 +08:00
jxxghp
62ad38ea5d Merge pull request #4605 from wikrin/torrent_optimize 2025-07-13 13:25:35 +08:00
Attente
f98f4c1f77 refactor(helper): improve the TorrentHelper class
- check whether a torrent file already exists in the temp directory
- change the parameter types of the match_torrent method
- optimize torrent-file download and processing logic
2025-07-13 13:16:36 +08:00
jxxghp
e9f02b58b7 Merge pull request #4604 from cddjr/fix_4602 2025-07-13 06:51:36 +08:00
景大侠
05495e481d fix #4602 2025-07-13 01:10:07 +08:00
jxxghp
5bb2167b78 Merge pull request #4603 from cddjr/fix_nettest 2025-07-12 18:34:54 +08:00
景大侠
b4e0ed66cf Improve error descriptions for the network-connectivity test 2025-07-12 18:15:19 +08:00
jxxghp
70a0563435 add server_type return 2025-07-12 14:52:18 +08:00
jxxghp
955912b832 fix plex 2025-07-12 14:44:45 +08:00
jxxghp
b65ee75b3d Merge pull request #4601 from cddjr/minimal_deps 2025-07-11 21:46:13 +08:00
景大侠
f642493a38 fix 2025-07-11 21:25:10 +08:00
jxxghp
7f1bfb1e07 Merge pull request #4599 from jtcymc/v2 2025-07-11 21:12:16 +08:00
景大侠
8931e2e016 fix: only install the plugin dependencies the user actually needs 2025-07-11 21:04:33 +08:00
shaw
0465fa77c2 fix(filemanager): check whether the target media-library directory is set
- during file organizing, check whether the target media-library directory is configured; if not, return an error and abort
- improved error handling for better stability and reliability
2025-07-11 20:02:12 +08:00
jxxghp
575d503cb9 Merge pull request #4598 from cddjr/fix_4586 2025-07-11 18:12:57 +08:00
景大侠
a4fdbdb9ad fix: ZSpace (极空间) and Unraid falsely reported as network filesystems 2025-07-11 18:03:19 +08:00
jxxghp
b9cb781a4e rollback size 2025-07-11 08:34:02 +08:00
jxxghp
a3adf867b7 fix 2025-07-10 22:48:08 +08:00
jxxghp
d52cbd2f74 feat: include the save path in resource-download events 2025-07-10 22:16:19 +08:00
jxxghp
8d0003db94 Update version.py 2025-07-10 11:57:54 +08:00
jxxghp
b775e89e77 fix #4581 2025-07-10 10:44:04 +08:00
jxxghp
0e14b097ba fix #4581 2025-07-10 10:39:22 +08:00
jxxghp
51848b8d8d fix #4581 2025-07-10 10:20:00 +08:00
jxxghp
72658c3e60 Merge pull request #4582 from cddjr/fix_rename_related 2025-07-09 20:42:54 +08:00
jxxghp
036cb6f3b0 remove memory helper 2025-07-09 19:11:37 +08:00
jxxghp
1a86d96bfa Merge pull request #4579 from jxxghp/cursor/bc-f8a13fbf-5ca0-4b0b-ae8d-59c208732d44-b74e 2025-07-09 17:43:46 +08:00
Cursor Agent
f67db38a25 Fix memory analysis performance and timeout issues across platforms
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 09:43:34 +00:00
Cursor Agent
028d18826a Refactor memory analysis with ThreadPoolExecutor for cross-platform timeout
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 09:38:06 +00:00
Cursor Agent
29a605f265 Optimize memory analysis with timeout, sampling, and performance improvements
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:57:22 +00:00
jxxghp
4b6959470d Merge pull request #4577 from jxxghp/cursor/analyze-memory-usage-discrepancies-6709 2025-07-09 16:08:00 +08:00
Cursor Agent
600767d2bf Remove memory analysis guide and test script
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:07:30 +00:00
Cursor Agent
3efbd47ffd Add comprehensive memory analysis tool with guide and test script
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:04:10 +00:00
Cursor Agent
d17e85217b Enhance memory analysis with detailed tracking, leak detection, and system insights
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 07:47:23 +00:00
jxxghp
e608089805 add Note Action 2025-07-09 12:22:22 +08:00
jxxghp
b852acec28 fix workflow 2025-07-09 09:34:53 +08:00
jxxghp
2a3ea8315d fix workflow 2025-07-09 00:19:47 +08:00
jxxghp
9271ee833c Merge pull request #4566 from jxxghp/cursor/helper-91dc
Add workflow-sharing APIs and helper
2025-07-09 00:12:56 +08:00
Cursor Agent
570d4ad1a3 Fix workflow API by passing database session to WorkflowOper methods
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:44:55 +00:00
Cursor Agent
dccdf3231a Checkpoint before follow-up message 2025-07-08 15:42:31 +00:00
Cursor Agent
b8ee777fd2 Refactor workflow sharing with independent config and improved data access
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:33:43 +00:00
Cursor Agent
a2fd3a8d90 Implement workflow sharing feature with new API endpoints and helper
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:26:16 +00:00
Cursor Agent
bbffb1420b Add workflow sharing, forking, and related API endpoints
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:18:01 +00:00
景大侠
8ea0a32879 fix: improve root-path resolution for renamed media files 2025-07-08 22:37:32 +08:00
景大侠
8c27b8c33e fix: auto-rename in file management was missing episode info 2025-07-08 22:37:09 +08:00
景大侠
5c61b22c2f fix: incorrect transfer path when renaming is disabled 2025-07-08 21:49:31 +08:00
jxxghp
9da9d765a0 fix: static class references 2025-07-08 21:40:04 +08:00
jxxghp
f64363728e fix: static class references 2025-07-08 21:38:34 +08:00
jxxghp
378777dc7c feat: weak-reference singleton 2025-07-08 21:29:01 +08:00
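The weak-reference singleton commit above names a pattern: cache one instance per class, but let it be garbage-collected once no strong references remain. A minimal sketch with `weakref.WeakValueDictionary` (class and structure here are illustrative, not MoviePilot's implementation):

```python
import weakref

class WeakSingleton:
    """At most one live instance per class; the cached instance can be
    garbage-collected once the last strong reference goes away."""
    _instances = weakref.WeakValueDictionary()  # class -> instance

    def __new__(cls, *args, **kwargs):
        inst = cls._instances.get(cls)
        if inst is None:
            inst = super().__new__(cls)
            cls._instances[cls] = inst  # held weakly, not pinned forever
        return inst

class Helper(WeakSingleton):
    pass

a = Helper()
b = Helper()
print(a is b)  # True: same live instance
```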
jxxghp
6156b9a481 Merge pull request #4561 from jxxghp/cursor/move-media-files-to-season-directory-6ee0 2025-07-08 18:00:50 +08:00
Cursor Agent
8c516c5691 Fix: Ensure parent item exists before saving NFO file
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 09:51:43 +00:00
Cursor Agent
bf9a149898 Fix TV show metadata scraping to use correct parent directory
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 09:31:35 +00:00
jxxghp
277cde8db2 Update version.py 2025-07-08 12:17:57 +08:00
jxxghp
e06bdaf53e fix: endless restarts when a resource package upgrade fails 2025-07-08 12:06:30 +08:00
jxxghp
da367bd138 fix spider 2025-07-08 11:25:36 +08:00
jxxghp
d336bcbf1f fix etree 2025-07-08 11:00:38 +08:00
jxxghp
a8aedba6ff fix https://github.com/jxxghp/MoviePilot/issues/4552 2025-07-08 09:34:24 +08:00
jxxghp
9ede86c6a3 Merge pull request #4555 from cddjr/fix_local_exists 2025-07-07 23:30:51 +08:00
景大侠
1468f2b082 fix: prefer directories containing the media title when checking local media files
Avoids misjudging when the first-level rename directory is a year, resolution, etc.
2025-07-07 23:24:04 +08:00
jxxghp
e04ae70f89 Merge pull request #4553 from cddjr/fix_trim_task 2025-07-07 22:15:12 +08:00
景大侠
7f7d2c9ba8 fix: FeiNiu (fnOS) media library refresh error "Task duplicate" 2025-07-07 21:46:17 +08:00
jxxghp
d73deef8dc Merge pull request #4549 from cddjr/fix_tr 2025-07-07 17:28:28 +08:00
景大侠
f93a1540af fix: TR module error "_protocol attribute not found"
Bug introduced in v2.5.9
2025-07-07 17:05:28 +08:00
jxxghp
c8bd9cb716 Merge pull request #4548 from cddjr/set_lock_timeout 2025-07-07 12:04:46 +08:00
景大侠
2ed13c7e5b fix: add a timeout to the subscription-match lock to avoid rare long task hangs 2025-07-07 11:51:58 +08:00
jxxghp
647c0929c5 v2.6.2 2025-07-06 08:28:33 +08:00
jxxghp
a61533a131 Merge pull request #4536 from cddjr/fix_local_exists 2025-07-05 22:02:16 +08:00
景大侠
bc5e682308 fix: potential extra disk scans during local media checks 2025-07-05 21:46:21 +08:00
jxxghp
25a481df12 Merge pull request #4534 from jxxghp/cursor/bc-55af1137-dea1-4191-9033-64ea5fcaa43a-d338
Fix file-organization snapshot handling
2025-07-05 15:44:51 +08:00
Cursor Agent
764c10fae4 Fix snapshot handling logic to correctly process files during monitoring
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-05 07:22:44 +00:00
Cursor Agent
d8249d4e38 Fix snapshot handling logic to correctly process files during monitoring
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-05 07:19:53 +00:00
jxxghp
0e3e42b398 Merge pull request #4531 from Aqr-K/feat-process 2025-07-05 06:33:57 +08:00
Aqr-K
7d3b64dcf9 Update requirements.in 2025-07-05 03:16:49 +08:00
Aqr-K
2c8d525796 feat: add a process name setting 2025-07-05 03:14:54 +08:00
jxxghp
4869f071ab fix error message 2025-07-04 21:34:31 +08:00
jxxghp
3029eeaf6f fix error message 2025-07-04 21:33:32 +08:00
jxxghp
33fb692aee Update plugin.py 2025-07-03 22:20:04 +08:00
jxxghp
6a075d144f Update version.py 2025-07-03 20:19:36 +08:00
jxxghp
aa23315599 rollback transmission-rpc 2025-07-03 19:16:36 +08:00
jxxghp
8d0bb35505 add network traffic API 2025-07-03 19:05:43 +08:00
jxxghp
32e76bc6ce Merge pull request #4529 from cddjr/add_ctx_mgr_proto 2025-07-03 18:47:08 +08:00
景大侠
6c02766000 AutoCloseResponse now supports the context-manager protocol, avoiding errors in some plugins 2025-07-03 18:38:48 +08:00
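Supporting the context-manager protocol, as this commit describes for AutoCloseResponse, means adding `__enter__`/`__exit__` so callers can use `with`. A minimal sketch (simplified, hypothetical fields; not the project's real class):

```python
class AutoCloseResponse:
    """Response wrapper that can be used in a `with` statement."""

    def __init__(self, raw):
        self._raw = raw      # underlying response object
        self.closed = False

    def close(self):
        if not self.closed:
            self.closed = True  # a real wrapper would call self._raw.close()

    # context-manager protocol: lets plugins write `with resp: ...`
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # do not swallow exceptions


with AutoCloseResponse(raw=object()) as resp:
    pass
print(resp.closed)  # closed automatically on exiting the block
```

Returning `False` from `__exit__` is what keeps exceptions raised inside the `with` block propagating to the caller.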
jxxghp
52ef390464 Add a cache parameter to the image proxy API 2025-07-03 17:07:54 +08:00
jxxghp
43a557601e fix local usage 2025-07-03 16:48:35 +08:00
jxxghp
82ff7fc090 fix SMB Usage 2025-07-03 15:21:41 +08:00
jxxghp
db40b5105b Fix directory-monitoring pattern matching 2025-07-03 13:55:54 +08:00
jxxghp
b2a379b84b fix SMB Storage 2025-07-03 12:41:44 +08:00
jxxghp
97cbd816fe add SMB Storage 2025-07-03 12:31:59 +08:00
jxxghp
7de3bb2a91 v2.6.0 2025-07-02 21:36:02 +08:00
jxxghp
3a8a2bcab4 Merge pull request #4519 from Aqr-K/patch-2 2025-07-01 19:46:12 +08:00
Aqr-K
eb1adbe992 fix: correct error messages and unify wording format 2025-07-01 19:26:11 +08:00
jxxghp
b55966d42b Merge pull request #4516 from Aqr-K/feat-command
feat(command): add `show` to control whether a command is registered and shown in the menu
2025-07-01 17:20:59 +08:00
Aqr-K
451ca9cb5a feat(command): add show to control whether a command is registered and shown in the menu 2025-07-01 17:19:01 +08:00
jxxghp
1e2c607ced fix #4515: streaming platforms are no longer merged into existing tags; configure via the naming module if needed 2025-07-01 17:02:29 +08:00
jxxghp
5ff7da0d19 fix #4515: streaming platforms are no longer merged into existing tags; configure via the naming module if needed 2025-07-01 16:57:45 +08:00
jxxghp
8e06c6f8e6 remove openai 2025-07-01 14:48:16 +08:00
jxxghp
4497cd3904 add site stat api 2025-07-01 11:23:20 +08:00
jxxghp
2945679a94 - Fix Redis cache issue and site message reading issue 2025-07-01 09:20:08 +08:00
jxxghp
1eaf7e3c85 Merge pull request #4513 from cddjr/fix_4511 2025-07-01 06:56:11 +08:00
景大侠
8146b680c6 fix: infinite recursion when deserializing the AutoCloseResponse class 2025-07-01 01:29:01 +08:00
jxxghp
99e667382f fix #4509 2025-06-30 19:17:36 +08:00
jxxghp
4c03759d3f refactor: optimize directory monitoring 2025-06-30 13:16:05 +08:00
jxxghp
8593a6cdd0 refactor: optimize directory-monitoring snapshots 2025-06-30 12:40:37 +08:00
jxxghp
cd18c31618 fix subscription matching 2025-06-30 10:55:10 +08:00
jxxghp
f29c918700 Merge pull request #4505 from wikrin/v2 2025-06-29 23:12:08 +08:00
Attente
0f0c3e660b style: clean up whitespace
Removes trailing whitespace and indentation on blank lines to improve code tidiness
2025-06-29 22:49:58 +08:00
Attente
1cf4639db3 fix(download): downloader selection during manual downloads
- Always use the user-selected downloader in manual download mode
2025-06-29 22:24:53 +08:00
jxxghp
f5da9b5780 fix log 2025-06-29 22:10:47 +08:00
jxxghp
e4c87c8a96 Update version.py 2025-06-29 21:56:37 +08:00
jxxghp
4b4bf153f0 fix plugin reload 2025-06-29 21:26:06 +08:00
jxxghp
ec227d0d56 Merge pull request #4500 from Miralia/v2
refactor(meta): unify web_source handling in MetaBase and add it to message templates
2025-06-29 11:11:35 +08:00
Miralia
53c8c50779 refactor(meta): unify web_source handling in MetaBase and add it to message templates 2025-06-29 11:08:34 +08:00
jxxghp
07b4c8b462 fix #4489 2025-06-29 11:06:36 +08:00
jxxghp
f3cfc5b9f0 fix plex 2025-06-29 08:27:48 +08:00
jxxghp
634e5a4c55 Merge pull request #4496 from wikrin/v2 2025-06-29 07:51:24 +08:00
Attente
332b154f15 fix(api): adapt to a FastAPI request-parameter compatibility issue
Fixes the system-config and user-config endpoints not working.
2025-06-29 05:31:25 +08:00
jxxghp
b446d4db28 Update the GitHub workflow config to exclude issues labeled RFC 2025-06-28 22:24:51 +08:00
jxxghp
ce0397a140 fix update.sh 2025-06-28 22:03:18 +08:00
jxxghp
f278cccef3 for test 2025-06-28 21:42:28 +08:00
jxxghp
cbf1dbcd2e fix: install dependencies after restoring a plugin 2025-06-28 21:42:03 +08:00
jxxghp
037c6b02fa Merge pull request #4493 from Miralia/v2 2025-06-28 20:07:12 +08:00
Miralia
5f44e4322d Fix and add more 2025-06-28 19:47:33 +08:00
Miralia
6cebe97d6d add FPT Play 2025-06-28 19:12:00 +08:00
jxxghp
82ec146446 Update plugin.py 2025-06-28 16:49:09 +08:00
jxxghp
3928c352c6 fix update 2025-06-28 15:01:25 +08:00
jxxghp
0ba36d21a9 Revert "fix security"
This reverts commit c7800df801.
2025-06-28 14:37:22 +08:00
jxxghp
6152727e9b fix Dockerfile 2025-06-28 14:33:33 +08:00
jxxghp
53c02fa706 resource v2 2025-06-28 14:26:14 +08:00
jxxghp
c7800df801 fix security 2025-06-28 14:12:24 +08:00
jxxghp
562c1de0c9 aList => OpenList 2025-06-28 08:43:09 +08:00
jxxghp
e2c90639f3 Update message.py 2025-06-27 19:54:13 +08:00
jxxghp
92e175a8d1 Merge pull request #4488 from Miralia/v2 2025-06-27 17:29:10 +08:00
jxxghp
cf7bca75f6 fix res.text 2025-06-27 17:23:32 +08:00
Miralia
24a173f075 Update streamingplatform.py 2025-06-27 17:21:27 +08:00
jxxghp
8d695dda55 fix log 2025-06-27 17:16:08 +08:00
jxxghp
93eec6c4b8 fix cache 2025-06-27 15:24:57 +08:00
jxxghp
a2cc1a2926 upgrade packages 2025-06-27 14:34:35 +08:00
jxxghp
11729d0eca fix 2025-06-27 13:34:27 +08:00
jxxghp
978819be38 fix db pool size 2025-06-27 12:41:03 +08:00
jxxghp
23c9862eb3 fix site parser 2025-06-27 12:26:17 +08:00
jxxghp
a9f18ea3ef fix #4475 2025-06-27 10:05:19 +08:00
jxxghp
574257edf8 add SystemConfModel 2025-06-27 09:54:15 +08:00
jxxghp
bb4438ac42 feat: proactive GC when not in large-memory mode 2025-06-27 09:44:47 +08:00
jxxghp
0baf6e5fe7 fix SiteParser close session 2025-06-27 08:38:02 +08:00
jxxghp
d8a53da8ee auto close RequestUtils 2025-06-27 08:30:57 +08:00
jxxghp
9555ac6305 fix RequestUtils 2025-06-27 08:09:38 +08:00
jxxghp
4dd5ea8e2f add del 2025-06-27 07:53:10 +08:00
jxxghp
8068523d88 fix downloader 2025-06-26 20:52:17 +08:00
jxxghp
27dd681d9f fix RequestUtils 2025-06-26 17:36:22 +08:00
jxxghp
152f814fb6 fix base chain 2025-06-26 13:28:11 +08:00
jxxghp
2700e639f1 fix chain 2025-06-26 13:16:10 +08:00
jxxghp
c440ce3045 fix oper 2025-06-26 08:33:43 +08:00
jxxghp
2829a3cb4e fix 2025-06-26 08:18:37 +08:00
jxxghp
a487091be8 Revert "fix resource helper"
This reverts commit e7524774da.
2025-06-25 13:32:28 +08:00
jxxghp
e7524774da fix resource helper 2025-06-25 12:50:00 +08:00
jxxghp
3918c876c5 Merge pull request #4478 from Miralia/v2 2025-06-24 21:07:55 +08:00
Miralia
f07f87735c fix 2025-06-24 19:52:14 +08:00
Miralia
b7566e8fe8 feat(meta): expand the streaming-platform list with support for more platforms. 2025-06-24 19:46:01 +08:00
jxxghp
73eba90f2f Update version.py 2025-06-24 10:34:42 +08:00
jxxghp
62e74f6fd1 fix 2025-06-24 08:19:10 +08:00
jxxghp
4375e48840 Merge pull request #4476 from Miralia/v2 2025-06-23 20:52:15 +08:00
Miralia
a1d6e94e90 feat(meta): add WEB platform source recognition and support more audio/video formats. 2025-06-23 20:36:58 +08:00
jxxghp
1f44e13ff0 add reload logging 2025-06-23 10:14:22 +08:00
jxxghp
d2992f9ced fix plugin load 2025-06-23 09:31:56 +08:00
jxxghp
950337bccc fix plugin load 2025-06-23 08:19:22 +08:00
jxxghp
757c3be359 Update version.py 2025-06-22 10:08:17 +08:00
jxxghp
269ab9adfc fix: message deletion capability 2025-06-22 10:04:21 +08:00
jxxghp
bd241a5164 feat: message deletion capability 2025-06-22 09:37:01 +08:00
jxxghp
3d92b57f24 fix 2025-06-22 09:04:03 +08:00
jxxghp
70d8cb3697 fix #4461 2025-06-22 08:51:29 +08:00
jxxghp
9e4ec5841c fix #4470 2025-06-22 08:47:43 +08:00
jxxghp
682f4fe608 fix message cache 2025-06-20 17:33:08 +08:00
jxxghp
ce8a077e07 Optimize button callback data, simplified to use only index values 2025-06-19 15:54:07 +08:00
jxxghp
d5f63bcdb3 remove Commands DEV flag 2025-06-18 13:33:37 +08:00
jxxghp
5c3756fd1b v2.5.7-1 2025-06-17 20:02:45 +08:00
jxxghp
99939e1a3d fix 2025-06-17 19:42:16 +08:00
jxxghp
56742ace11 fix: download images with a UA header 2025-06-17 19:27:53 +08:00
jxxghp
742cb7a8da Update version.py 2025-06-17 18:56:47 +08:00
jxxghp
98327d1750 fix download message 2025-06-17 15:35:38 +08:00
jxxghp
b944306302 v2.5.7 2025-06-16 22:15:54 +08:00
jxxghp
02ab1d4111 fix settings 2025-06-16 21:29:57 +08:00
jxxghp
28552fb0ce Update transmission.py 2025-06-16 19:38:19 +08:00
jxxghp
bf52fcb2ec fix message 2025-06-16 11:45:26 +08:00
jxxghp
bab1f73480 fix: Slack message interaction 2025-06-16 09:49:01 +08:00
jxxghp
c06001d921 feat: proactively back up plugins before a built-in restart 2025-06-16 08:57:21 +08:00
jxxghp
0fa49bb9c6 fix: skip the message-type match check when sending targeted messages 2025-06-16 08:06:47 +08:00
jxxghp
bf23fe6ce2 Update subscribe.py 2025-06-15 23:31:13 +08:00
jxxghp
7c6137b742 Update download.py 2025-06-15 23:30:01 +08:00
jxxghp
3823a7c9b6 fix: message delivery scope 2025-06-15 23:18:07 +08:00
jxxghp
a944975be2 fix: send interactive messages immediately 2025-06-15 23:06:25 +08:00
jxxghp
6da65d3b03 add MessageAction 2025-06-15 21:25:14 +08:00
jxxghp
0d938f2dca refactor: reduce Alipan and 115 API calls 2025-06-15 20:41:32 +08:00
jxxghp
4fa9bb3c1f feat: event callback for plugin messages, [PLUGIN]pluginID|content 2025-06-15 19:47:04 +08:00
jxxghp
2f5b22a81f fix 2025-06-15 19:41:24 +08:00
jxxghp
fcd5ca3fda feat: Slack supports editing messages 2025-06-15 19:28:05 +08:00
jxxghp
c18247f3b1 Enhance message handling with support for editing messages 2025-06-15 19:18:18 +08:00
jxxghp
f8fbfdbba7 Optimize message-handling logic 2025-06-15 18:40:36 +08:00
jxxghp
21addfb947 Update message.py 2025-06-15 16:56:48 +08:00
jxxghp
8672bd12c4 fix bug 2025-06-15 16:31:09 +08:00
jxxghp
be8054e81e fix bug 2025-06-15 15:57:58 +08:00
jxxghp
82f46c6010 feat: route callback messages to plugins 2025-06-15 15:56:38 +08:00
jxxghp
95a827e8a2 feat: Telegram and Slack support buttons 2025-06-15 15:34:06 +08:00
jxxghp
c534e3dcb8 feat: do not load modules for uninstalled plugins 2025-06-15 09:55:20 +08:00
jxxghp
9f5e1b8dd7 Update version.py 2025-06-14 14:45:58 +08:00
jxxghp
c86ed20c34 fix 2025-06-14 08:23:48 +08:00
jxxghp
c32c37e66a Merge pull request #4444 from cddjr/fix_doh_reload 2025-06-14 08:22:13 +08:00
jxxghp
7b100d3cdb Merge pull request #4446 from wikrin/v2 2025-06-14 07:05:20 +08:00
Attente
95a2362885 fix(db): memory-sharing issue when updating the system config
- Use deepcopy to copy new values when updating the system config, avoiding shared memory
2025-06-13 23:03:13 +08:00
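The deepcopy fix above addresses a classic aliasing bug: caching the caller's mutable object directly lets later in-place edits leak into the stored config. A minimal sketch of the idea (class and method names here are illustrative, not MoviePilot's API):

```python
from copy import deepcopy


class SystemConfig:
    """Hypothetical config store demonstrating the deepcopy fix."""

    def __init__(self):
        self._cache = {}

    def set(self, key, value):
        # deepcopy decouples the cached value from the caller's object;
        # without it, mutating `value` later would also mutate the cache
        self._cache[key] = deepcopy(value)

    def get(self, key):
        return self._cache.get(key)


conf = SystemConfig()
value = {"dirs": ["/media"]}
conf.set("paths", value)
value["dirs"].append("/tmp")       # caller mutates its own object
print(conf.get("paths")["dirs"])   # cached value is unaffected: ['/media']
```

A shallow `dict(value)` copy would not be enough here, since the nested list would still be shared; `deepcopy` severs all levels.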
jxxghp
d8b14b9a9f Merge pull request #4445 from cddjr/feat_nettest 2025-06-13 19:06:02 +08:00
景大侠
c45953f63a feat: network test supports acceleration proxies and a GitHub token
fix: incorrect time-difference calculation when a test takes more than one second
2025-06-13 18:35:49 +08:00
景大侠
e3d3087a5d fix: add a UA to GitHub request headers 2025-06-13 18:06:17 +08:00
景大侠
e162bd1168 fix: DoH hot reload 2025-06-13 17:43:45 +08:00
jxxghp
db5d81d7f0 Merge pull request #4442 from wumode/fix_download_api 2025-06-13 14:24:22 +08:00
wumode
f737f1287b fix(api): unable to set the status of non-default downloaders 2025-06-13 08:43:34 +08:00
jxxghp
1ffa5178db Merge pull request #4440 from wikrin/v2 2025-06-13 06:35:24 +08:00
Attente
49cb43488c feat(plugin): optimize plugin sync and installation logic
- Optimize the sync function to take plugin versions into account
- Update the is_plugin_exists function to add version comparison
2025-06-13 00:19:08 +08:00
jxxghp
fd7a6f8ddd Merge pull request #4438 from H1dery/v2 2025-06-12 20:05:00 +08:00
Cais1
7979ce0f0a File reading fixes
2025-06-12 19:58:47 +08:00
Cais1
2ba5d9484d Update plugin.py
File reading fixes
2025-06-12 19:57:26 +08:00
jxxghp
23b981c5ac fix #4434 2025-06-12 18:41:46 +08:00
jxxghp
86ab2c8c05 Merge pull request #4434 from alfchao/v2 2025-06-12 16:14:59 +08:00
xuchao3
9ea0bc609a feat: add a Telegram API proxy address
#4266
2025-06-12 13:56:36 +08:00
jxxghp
5366c2844a Merge pull request #4433 from wikrin/v2 2025-06-12 08:47:56 +08:00
Attente
eac4d703c7 fix(plugins_initializer): improve fault tolerance during plugin restore
- Add exception handling for a single plugin failing to restore, using continue to skip it
- Ensure one plugin's restore failure does not stop the remaining plugins from restoring
2025-06-12 07:56:44 +08:00
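The per-plugin fault tolerance described above is the try/except-continue pattern: isolate each item's failure so the loop keeps going. A sketch under stated assumptions (`restore_one` is a hypothetical callable, not the project's function):

```python
def restore_plugins(plugin_ids, restore_one):
    """Restore each plugin; a failure in one is recorded and skipped."""
    restored, failed = [], []
    for pid in plugin_ids:
        try:
            restore_one(pid)
        except Exception as err:  # isolate per-plugin failures
            failed.append((pid, str(err)))
            continue  # keep restoring the remaining plugins
        restored.append(pid)
    return restored, failed


def fake_restore(pid):
    # stand-in restore step that fails for one plugin
    if pid == "bad":
        raise RuntimeError("corrupt backup")


restored, failed = restore_plugins(["a", "bad", "b"], fake_restore)
print(restored, failed)  # ['a', 'b'] [('bad', 'corrupt backup')]
```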
jxxghp
8ed87294e2 v2.5.5-1
- Fixed a downloader monitoring issue
2025-06-12 07:08:19 +08:00
jxxghp
b343c601be v2.5.5
- Support finer-grained user permission control
- Added scraping-content settings to the advanced settings
2025-06-11 20:27:49 +08:00
jxxghp
e56d7006b4 init users 2025-06-11 20:24:59 +08:00
jxxghp
1b7bcd7784 init users 2025-06-11 19:57:21 +08:00
jxxghp
4cb9025b6c fix season_nfo 2025-06-11 19:48:02 +08:00
jxxghp
f8864ab053 fix reload 2025-06-11 07:11:50 +08:00
jxxghp
64eba46a67 fix 2025-06-11 07:07:55 +08:00
jxxghp
35d9cc1d40 remove jieba 2025-06-11 00:00:08 +08:00
jxxghp
3036107dac fix user api 2025-06-10 23:42:57 +08:00
jxxghp
214089b4ea Merge pull request #4423 from lonelyman0108/v2 2025-06-10 18:04:13 +08:00
LM
95b7ba28e4 update: add fanart environment variables 2025-06-10 17:59:25 +08:00
LM
880272f96e update: optimize fanart fetch logic, with a configurable language 2025-06-10 17:59:03 +08:00
LM
7ed26fadb6 update: update fanart scraping logic to prefer Chinese and English content 2025-06-10 17:25:58 +08:00
jxxghp
f0d25a02a6 feat: support detailed scraping settings 2025-06-10 16:37:15 +08:00
jxxghp
162ba9307d fix restart 2025-06-10 07:09:59 +08:00
jxxghp
49dae92b8e fix flag path 2025-06-09 21:58:02 +08:00
jxxghp
b484a52b6d v2.5.4
- The plugin market supports manual refresh
- Improved the restore strategy for installed plugins when the container is reset
2025-06-09 20:57:44 +08:00
jxxghp
d754091a7c fix log 2025-06-09 20:44:48 +08:00
jxxghp
e2febc24ae feat: the plugin market supports forced refresh 2025-06-09 20:33:06 +08:00
jxxghp
d0677edaaa fix: graceful shutdown 2025-06-09 15:39:11 +08:00
jxxghp
f0aaecd0c7 fix #4413 2025-06-09 14:45:26 +08:00
jxxghp
3518940fec Merge pull request #4413 from cddjr/fix_plugin
Fix several bugs in plugin clones
2025-06-09 14:42:54 +08:00
jxxghp
2e5c92ae0c fix: graceful shutdown 2025-06-09 13:09:16 +08:00
jxxghp
4ad699dbe6 fix: graceful shutdown 2025-06-09 13:06:27 +08:00
景大侠
931be9e6aa fix: clones reuse the original plugin's configuration 2025-06-09 09:54:55 +08:00
景大侠
9656d6fbd0 fix: clone class names use a lowercase suffix
Avoids a mismatch with the clone ID that wrongly marked the plugin as not installed
2025-06-09 09:51:06 +08:00
景大侠
c7cbb13044 fix: remove a plugin from the system modules after uninstalling
Avoids a false "plugin already exists" report when creating a clone
2025-06-09 09:50:55 +08:00
jxxghp
327d30dcc2 feat: detect whether the container has been reset 2025-06-09 09:15:58 +08:00
jxxghp
e4e2079917 fix: plugin-restore safety 2025-06-09 08:30:24 +08:00
jxxghp
0427506572 fix: remove static attributes from the Action classes 2025-06-09 08:18:43 +08:00
jxxghp
ea168edb43 fix: remove static attributes from the Oper classes 2025-06-09 08:08:55 +08:00
jxxghp
aa039c6c05 feat: automatic plugin backup and restore on enable/disable 2025-06-09 08:04:44 +08:00
jxxghp
3de998051a fix memory snapshot 2025-06-08 21:57:49 +08:00
jxxghp
69ade1ae37 Update the memory snapshot interval to 30 minutes and reduce retained snapshot files to 20 2025-06-08 21:48:37 +08:00
jxxghp
1d6133e3b1 fix plugins iteration 2025-06-08 21:39:37 +08:00
jxxghp
203a111d1a remove gc 2025-06-08 21:24:26 +08:00
jxxghp
0a20234268 remove gc 2025-06-08 21:19:15 +08:00
jxxghp
7f8e50f83d fix memory helper 2025-06-08 21:13:37 +08:00
jxxghp
443ef7d41b fix 2025-06-08 21:06:27 +08:00
jxxghp
059ae6595d fix 2025-06-08 20:37:42 +08:00
jxxghp
19c3dad338 fix 2025-06-08 19:41:46 +08:00
jxxghp
81bc51c972 fix pympler 2025-06-08 19:02:25 +08:00
jxxghp
6c17868744 add pympler 2025-06-08 18:55:02 +08:00
jxxghp
a18040ccfa add pympler 2025-06-08 18:54:35 +08:00
jxxghp
0835a75503 Update thread.py 2025-06-08 14:43:13 +08:00
jxxghp
3ee32757e5 rollback 2025-06-08 14:35:59 +08:00
jxxghp
344abfa8d8 fix memory helper 2025-06-08 14:03:01 +08:00
jxxghp
906b2a3485 fix memory statistics 2025-06-08 11:36:15 +08:00
jxxghp
e0d2b87ed3 wallpaper cache skip empty 2025-06-08 11:30:57 +08:00
jxxghp
83a8c8b42b fix memory threshold 2025-06-08 11:14:16 +08:00
jxxghp
d840ed6c5a fix memory log 2025-06-08 11:08:01 +08:00
jxxghp
0112087be4 refactor #4407 2025-06-08 10:51:59 +08:00
jxxghp
7320084e11 rollback #4379 2025-06-07 22:26:51 +08:00
jxxghp
23929f5eaa fix pool size 2025-06-07 22:00:09 +08:00
jxxghp
c002d4619a Update scheduler.py 2025-06-07 20:11:31 +08:00
jxxghp
f60a909bba Update version.py 2025-06-07 11:43:04 +08:00
jxxghp
c2c22e3968 Merge pull request #4399 from cddjr/fix_subscribe 2025-06-07 11:42:25 +08:00
jxxghp
f10299b2de Merge pull request #4403 from cddjr/fix_systemconfig 2025-06-07 11:41:36 +08:00
景大侠
1d3563ed97 fix(config): newly installed plugins disappearing 2025-06-07 11:33:28 +08:00
景大侠
f3eb2caa4e fix(subscribe): avoid re-downloading episodes already in the library 2025-06-07 02:48:22 +08:00
jxxghp
2364dacd52 Add GitHub Container Registry support 2025-06-06 22:02:04 +08:00
jxxghp
883f7451c3 fix event log 2025-06-06 21:45:14 +08:00
jxxghp
a534c9bca1 fix: notice when saving settings fails 2025-06-06 21:30:11 +08:00
jxxghp
b14202a324 fix logger 2025-06-06 21:18:31 +08:00
jxxghp
a6fae48f07 Update system.py 2025-06-06 17:15:25 +08:00
jxxghp
963caf2afe fix logger reload 2025-06-06 16:31:00 +08:00
jxxghp
50b0268531 v2.5.3-1 2025-06-06 15:37:44 +08:00
jxxghp
f484b64be3 fix 2025-06-06 15:37:02 +08:00
jxxghp
349535557f Update subscribe.py 2025-06-06 14:04:12 +08:00
jxxghp
de4973a270 feat: memory-monitoring toggle 2025-06-06 13:49:52 +08:00
jxxghp
e42d2baf8a fix lint 2025-06-05 22:14:14 +08:00
jxxghp
eac435b233 fix lint 2025-06-05 22:13:33 +08:00
jxxghp
447b8564e9 Update the GitHub Actions workflows 2025-06-05 22:02:52 +08:00
jxxghp
97cee657bd Update .gitignore to include Pylint-related files, and change the success-return logic in system.py 2025-06-05 21:58:50 +08:00
jxxghp
fe894754cf Update system.py 2025-06-05 21:39:13 +08:00
jxxghp
9ffb1d1931 Update wallpaper.py 2025-06-05 21:03:21 +08:00
jxxghp
a16bd30903 Update wallpaper.py 2025-06-05 21:00:18 +08:00
jxxghp
13f9ea8be4 v2.5.3 2025-06-05 20:28:43 +08:00
jxxghp
304af5e980 fix: the dashboard memory gauge shows only the current process's usage 2025-06-05 17:09:11 +08:00
jxxghp
dc180c09e9 fix wallpaper 2025-06-05 17:03:29 +08:00
jxxghp
8e20e26565 fix: catch exceptions when stopping plugins 2025-06-05 14:07:31 +08:00
jxxghp
11075a4012 fix: add more memory controls 2025-06-05 13:33:39 +08:00
jxxghp
a9300faaf8 fix: optimize the singleton pattern and class references 2025-06-05 13:22:16 +08:00
jxxghp
504827b7e5 fix:memory use 2025-06-05 09:57:41 +08:00
jxxghp
e180130b38 fix:memory use 2025-06-05 08:32:24 +08:00
jxxghp
faaee09827 fix:memory use 2025-06-05 08:18:26 +08:00
jxxghp
99334795b6 fix rsshelper 2025-06-04 22:00:46 +08:00
jxxghp
8c9c59ef64 fix rsshelper 2025-06-04 21:42:03 +08:00
jxxghp
7a112000c9 Update memory.py 2025-06-04 18:46:55 +08:00
jxxghp
1424087d5a fix:memory use 2025-06-04 18:34:49 +08:00
jxxghp
984f4731cd Update log.py 2025-06-04 15:33:58 +08:00
jxxghp
3a3de64b0f fix: refactor config hot reload 2025-06-04 08:21:14 +08:00
jxxghp
0911854e9d fix Config reload 2025-06-04 07:17:47 +08:00
jxxghp
2af8b6f445 fix Config reload 2025-06-03 23:10:48 +08:00
jxxghp
bbfd8ca3f5 fix Config reload 2025-06-03 23:08:58 +08:00
jxxghp
b4ed2880f7 refactor: config hot reload 2025-06-03 20:56:21 +08:00
jxxghp
5f18a21e86 fix: apply the "organized" tag even when organizing fails 2025-06-03 17:48:30 +08:00
jxxghp
5d188e3877 fix module close 2025-06-03 17:11:44 +08:00
jxxghp
90f113a292 remove ttl cache 2025-06-03 16:31:16 +08:00
jxxghp
eecfe58297 fix memory manager startup 2025-06-03 16:27:51 +08:00
jxxghp
079a747210 fix memory manager startup 2025-06-03 16:19:38 +08:00
jxxghp
4be8c70f23 fix memory log 2025-06-03 16:05:49 +08:00
jxxghp
d9aee4df77 fix memory log 2025-06-03 16:03:05 +08:00
jxxghp
225de87d4d fix torrents chain 2025-06-03 15:48:43 +08:00
jxxghp
2ce7cedfbd fix 2025-06-03 12:30:26 +08:00
jxxghp
cfb163d904 fix 2025-06-03 12:27:50 +08:00
jxxghp
de7c9be11b Optimize memory management: add a max-memory config option and improve the memory-usage checks. 2025-06-03 12:25:13 +08:00
jxxghp
841209adc9 fix 2025-06-03 11:49:16 +08:00
jxxghp
e48d51fe6e Optimize memory management and garbage collection 2025-06-03 11:45:17 +08:00
jxxghp
9d436ec7ed fix #4382 2025-06-03 08:19:15 +08:00
jxxghp
fb2b29d088 fix #4382 2025-06-03 07:07:40 +08:00
jxxghp
1c46b0bc20 Update subscribe.py 2025-06-02 16:23:09 +08:00
jxxghp
81d0e4696a Merge pull request #4379 from jtcymc/v2 2025-06-02 10:48:36 +08:00
shaw
f9a287b52b feat(core): add an episode-intersection minimum-confidence setting
Adds a configuration option for the minimum episode-intersection confidence, used to filter out torrents that contain too many unneeded episodes. It implements the following:

- Added the EPISODE_INTERSECTION_MIN_CONFIDENCE option to config.py, defaulting to 0.0
- Changed the download logic in download.py, adding a function that computes the intersection ratio between a torrent and the target missing episodes
- Uses the intersection ratio to filter and sort torrents, preferring those with a larger overlap with the missing episodes
- The intersection-ratio threshold can be set via the config option; torrents below it are skipped

This change improves download efficiency by avoiding downloads of too many unnecessary episodes.
2025-06-02 00:38:10 +08:00
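The intersection-ratio logic the commit above describes can be sketched like this. The function and variable names here are illustrative, not the project's actual API; the constant stands in for `EPISODE_INTERSECTION_MIN_CONFIDENCE`:

```python
def intersection_confidence(torrent_episodes, missing_episodes):
    """Fraction of the torrent's episodes that are actually needed.

    A torrent containing many episodes we don't need scores low and is
    skipped when below the configured threshold.
    """
    torrent = set(torrent_episodes)
    if not torrent:
        return 0.0
    return len(torrent & set(missing_episodes)) / len(torrent)


MIN_CONFIDENCE = 0.5  # stands in for EPISODE_INTERSECTION_MIN_CONFIDENCE

candidates = {
    "pack-1-12": range(1, 13),  # full-season pack, mostly unneeded
    "ep-9-10": [9, 10],         # exactly episodes we're missing
}
missing = [9, 10, 11, 12]
kept = {name: intersection_confidence(eps, missing)
        for name, eps in candidates.items()
        if intersection_confidence(eps, missing) >= MIN_CONFIDENCE}
print(kept)  # the small torrent passes; the 12-episode pack scores only 4/12
```

Sorting the surviving candidates by this score descending then gives the "prefer larger overlap" behavior the commit mentions.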
jxxghp
0f0072abea Merge pull request #4375 from awsl1110/v2 2025-05-31 20:08:10 +08:00
awsl1110
312933a259 fix(indexer): correct the DiscuzX site name
- Rename the Discuz! site name to DiscuzX
2025-05-31 19:18:25 +08:00
jxxghp
288854b8f1 Merge pull request #4374 from awsl1110/v2 2025-05-31 19:04:51 +08:00
awsl1110
7f5991aa34 refactor(core): improve config options and model definitions
- Add type annotations to config options for readability and safety
- Add default values to model fields to improve data handling
- Update validators to the new syntax to track Pydantic changes
2025-05-31 16:38:06 +08:00
jxxghp
361df95d50 Merge pull request #4372 from cddjr/fix_4371 2025-05-31 13:34:48 +08:00
景大侠
fc1ade32d7 Update the Blu-ray test cases 2025-05-31 11:05:02 +08:00
景大侠
b74c7531d9 fix #4371: recursively detect Blu-ray directories 2025-05-31 02:37:14 +08:00
景大侠
7e3be3325a fix #4294: update test cases 2025-05-31 01:52:31 +08:00
jxxghp
7dab7fbe66 Update transhandler.py 2025-05-30 21:42:50 +08:00
jxxghp
62c06b6593 fix #4216 2025-05-30 17:32:37 +08:00
jxxghp
000b62969f v2.5.2 2025-05-30 17:06:21 +08:00
jxxghp
b4473bb4a7 fix: plugin-clone service registration 2025-05-30 16:59:54 +08:00
jxxghp
2c0e06d599 fix: plugin-clone service registration 2025-05-30 13:37:40 +08:00
jxxghp
d2c55e8ed3 Merge remote-tracking branch 'origin/v2' into v2 2025-05-30 08:07:57 +08:00
jxxghp
714abaa25a fix rename 2025-05-30 08:07:53 +08:00
jxxghp
0017eb987b Merge pull request #4365 from Aqr-K/fix-modules/thetvdb 2025-05-29 21:17:38 +08:00
Aqr-K
e5a0894692 fix(tvdb): avoid very long waits during tvdb module initialization in offline environments
- Switch to lazy initialization: `auth` no longer runs at startup but on first method call (the auth_token expiry check and re-`auth` are kept);
- Use double-checked locking to guarantee thread safety;
- Configure everything through a single `timeout` value, with the default lowered from 30s to 15s to match tmdb.
2025-05-29 20:04:18 +08:00
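Lazy initialization with double-checked locking, as this commit describes, can be sketched as follows (a minimal illustration with hypothetical names, not the actual tvdb module): the token check happens twice, once without the lock on the hot path and once again under the lock, so `auth` runs at most once even with concurrent callers.

```python
import threading


class LazyClient:
    """Defers auth until first use; double-checked locking keeps it thread-safe."""

    def __init__(self, timeout: float = 15.0):
        self.timeout = timeout   # single timeout value, as in the commit
        self._token = None
        self._lock = threading.Lock()
        self.auth_calls = 0

    def _auth(self):
        # stand-in for the real (network) authentication call
        self.auth_calls += 1
        return "token-123"

    def token(self):
        if self._token is None:          # first check: lock-free fast path
            with self._lock:
                if self._token is None:  # second check: under the lock
                    self._token = self._auth()
        return self._token


client = LazyClient()
print(client.token(), client.auth_calls)  # auth happens once, on first use
```

Constructing the client is now instant even with no network; the long wait can only occur on the first call that actually needs a token.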
jxxghp
a8e00e9f0f fix apis 2025-05-29 13:35:01 +08:00
jxxghp
77a4c271ae Merge pull request #4361 from madrays/v2
Add a cache-management page
2025-05-29 09:21:45 +08:00
jxxghp
014b77c3c7 v2.5.1-1 2025-05-29 08:30:31 +08:00
jxxghp
076e241056 fix tvdb 2025-05-29 08:30:14 +08:00
jxxghp
7ce57cc67a fix 2025-05-29 08:22:45 +08:00
jxxghp
da0343283a Support adding and removing clone plugins from within the plugin folder 2025-05-29 08:16:54 +08:00
jxxghp
d5f7f1ba91 fix tvdb api 2025-05-29 08:03:12 +08:00
jxxghp
8761c82afe fix: TVDB proxy and SSL verification #4356 2025-05-29 07:14:42 +08:00
madrays
13023141bc Add a cache-management page 2025-05-29 00:46:11 +08:00
jxxghp
4dd2038625 Merge pull request #4360 from cddjr/fix_TransHandler 2025-05-29 00:06:32 +08:00
景大侠
06a32b0e9d fix: TransHandler falsely reporting success 2025-05-28 23:52:23 +08:00
jxxghp
c91ab7a76b Add new settings 2025-05-28 21:05:29 +08:00
jxxghp
0344aa6a49 Update version.py 2025-05-28 20:34:59 +08:00
jxxghp
a748c9d750 fix: update the wallpaper helper to support more image formats 2025-05-28 08:26:44 +08:00
jxxghp
038dc372b7 Update config.py 2025-05-28 07:03:22 +08:00
jxxghp
bc8198fb8a Merge pull request #4356 from TimoYoung/v2 2025-05-27 21:06:54 +08:00
TimoYoung
f42275bd83 Merge remote-tracking branch 'origin/v2' into v2 2025-05-27 18:02:21 +08:00
TimoYoung
6bd86a724e fix: distinguish series and movie IDs 2025-05-27 17:58:37 +08:00
TimoYoung
fc96cfe8a0 feat: rewrite the tvdb module, switch to the tvdb v4 API, and add search capability
Rewrites the sonarr /series/lookup endpoint to query series on tvdb directly by title
2025-05-27 17:32:25 +08:00
jxxghp
a9f25fe7d6 fix bug 2025-05-27 12:31:43 +08:00
jxxghp
f740fed5f2 fix bug 2025-05-26 13:30:30 +08:00
jxxghp
a6d1bd12a2 fix: improve plugin-clone performance
feat: clean up files when a clone plugin is deleted
2025-05-26 13:21:47 +08:00
jxxghp
e8ab20acf2 Merge pull request #4351 from madrays/v2 2025-05-26 11:08:30 +08:00
madrays
ccfe193800 Add the plugin-clone feature 2025-05-26 10:55:40 +08:00
jxxghp
bdccedca59 Update system.py 2025-05-26 07:45:21 +08:00
DDSRem
9abb1488df Merge pull request #4348 from Aqr-K/fix-sh
fix(sh): quoting format issue
2025-05-25 23:48:45 +08:00
Aqr-K
195fc1bdc3 fix(sh): quoting format issue 2025-05-25 23:47:23 +08:00
jxxghp
2a9129f470 Update version.py 2025-05-25 20:15:44 +08:00
jxxghp
acbfc0cc6e Merge pull request #4343 from Aqr-K/fix-sh 2025-05-25 19:53:58 +08:00
Aqr-K
bfb0c75e95 fix(sh): complete the missing call 2025-05-25 18:50:41 +08:00
jxxghp
161a2ddae8 Merge pull request #4344 from DDS-Derek/dev 2025-05-25 18:32:29 +08:00
Aqr-K
99621cfd66 fix(config): force quote_mode explicitly, so a future dependency upgrade whose default is no longer always won't break things 2025-05-25 18:30:00 +08:00
DDSRem
e6e7234215 fix(u115): get information directly through id 2025-05-25 18:27:06 +08:00
DDSRem
5b7b329279 fix(docker): repair restart judgment
When DOCKER_CLIENT_API is not the default value, the restart is being triggered externally, so mapping `/var/run/docker.sock` is no longer needed
2025-05-25 18:20:04 +08:00
Aqr-K
3abb2c8674 fix(sh): on restart, variables could not be read from system environment variables and the env file together. 2025-05-25 18:15:35 +08:00
jxxghp
39de89254f add Docker Client API address 2025-05-25 14:55:51 +08:00
jxxghp
ac941968cb Update plugin.py 2025-05-25 11:22:08 +08:00
jxxghp
96f603bfd1 Merge pull request #4339 from jtcymc/v2 2025-05-25 08:01:00 +08:00
shaw
677e38c62d fix(SearchChain): close the thread pool with a with statement
- Manage the ThreadPoolExecutor with a with statement to ensure the pool is shut down correctly
2025-05-25 00:44:19 +08:00
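The `with`-statement fix above relies on `ThreadPoolExecutor` being a context manager: exiting the block calls `shutdown(wait=True)` even if an exception is raised, so worker threads are not leaked between searches. A minimal sketch (the `search_site` task is a hypothetical stand-in, not SearchChain's real code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def search_site(site: str) -> str:
    # stand-in for a per-site search task
    return f"results from {site}"


sites = ["siteA", "siteB", "siteC"]
results = []
# exiting the with block calls executor.shutdown(), guaranteeing the
# worker threads are cleaned up even if a task raises
with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(search_site, s) for s in sites]
    for future in as_completed(futures):
        results.append(future.result())

print(sorted(results))
```

Without the `with` (or an explicit `shutdown()` in a `finally`), each search could leave its pool's threads alive, accumulating over time.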
jxxghp
72fce20905 feat: record subtitle and audio files after organizing 2025-05-24 20:58:46 +08:00
jxxghp
1eb41c20d5 fix TransferInfo 2025-05-24 15:40:03 +08:00
DDSRem
dd0c1d331f Merge pull request #4334 from DDS-Derek/dev
fix(plugin): dependency dynamic refresh
2025-05-24 09:24:41 +08:00
DDSRem
12760a70a1 fix(plugin): dependency dynamic refresh 2025-05-24 09:23:47 +08:00
jxxghp
525d17270f fix #4332 2025-05-24 06:37:59 +08:00
jxxghp
bc9959f5ab Merge pull request #4333 from Aqr-K/fix-log 2025-05-24 06:31:41 +08:00
jxxghp
94a8cd5128 Merge pull request #4331 from madrays/v2 2025-05-24 06:30:59 +08:00
Aqr-K
5a1b2c4938 fix(log): separate main-program logs from plugin logs 2025-05-24 06:20:41 +08:00
madrays
851a2ac03a Delete requirements.in 2025-05-24 04:12:53 +08:00
madrays
34d7707f53 Delete config/plugins/twofahelper/twofahelper_sites.json 2025-05-24 04:12:13 +08:00
madrays
0aac7f62a3 Delete config/app.env 2025-05-24 04:11:54 +08:00
madrays
34379b92d0 Refactor the plugin page and add folder functionality 2025-05-24 03:57:04 +08:00
DDSRem
250999f9f5 Merge pull request #4330 from Aqr-K/patch-1
fix(log): duplicate log output in Docker environments
2025-05-24 01:18:59 +08:00
Aqr-K
2b3832222b fix(log): duplicate log output in Docker environments 2025-05-24 01:16:14 +08:00
jxxghp
c5f6d0e721 Update config.py 2025-05-23 21:05:50 +08:00
jxxghp
dbb0cf15b8 fix: latest library entries 2025-05-23 07:12:47 +08:00
jxxghp
ab202ba951 Merge pull request #4324 from wumode/fix_typo 2025-05-23 06:45:55 +08:00
wumode
e2c13aa7ed fix: ensure name recognition falls back correctly 2025-05-23 00:23:45 +08:00
jxxghp
c1ab19f3cf Update version.py 2025-05-21 21:42:42 +08:00
jxxghp
beebfb2e19 fix 2025-05-21 08:39:04 +08:00
jxxghp
cfca90aa7d fix delay get_item 2025-05-19 20:06:46 +08:00
jxxghp
19fe0a32c8 fix #4308 2025-05-19 12:53:55 +08:00
jxxghp
76659f8837 fix #4308 2025-05-19 12:51:34 +08:00
jxxghp
2254715190 Merge pull request #4308 from k1z/v2
Fix repeated recognition of cached torrents
2025-05-19 12:29:13 +08:00
jxxghp
ae1a5460d4 fix FetchMedias Action 2025-05-19 12:26:27 +08:00
k1z
27d9f910ff Fix repeated recognition of cached torrents 2025-05-19 10:35:09 +08:00
k1z
28db4881d7 Fix repeated recognition of cached torrents 2025-05-19 10:05:39 +08:00
jxxghp
7c76c3ccd6 rollback #4296 2025-05-18 21:40:06 +08:00
jxxghp
007bd24374 fix message link check 2025-05-18 15:25:45 +08:00
jxxghp
c8dc30287c fix #4294: change x26[45] to lowercase x 2025-05-18 15:15:01 +08:00
jxxghp
360184bbd1 fix 2025-05-18 13:50:43 +08:00
jxxghp
e8ed2454a1 feat: when a message is a link, hand it off to a third party 2025-05-18 13:22:42 +08:00
jxxghp
923ecf29b8 fix #4294 2025-05-18 13:16:06 +08:00
jxxghp
a8f8bf5872 Enhance the MetaBase class to support assigning tmdbid and doubanid, and add test cases for Emby-format ID recognition. 2025-05-18 13:03:35 +08:00
jxxghp
bedcd94020 Improve the find_metainfo function with support for Emby-format ID tags, and add test cases verifying recognition of the different ID formats. 2025-05-18 12:55:25 +08:00
jxxghp
959d4da1f8 Merge pull request #4300 from DDS-Derek/dev 2025-05-18 10:05:14 +08:00
DDSRem
861453c1a8 fix(u115): refresh delay 2025-05-18 10:03:36 +08:00
jxxghp
2f4072da0d Merge pull request #4297 from wikrin/v2 2025-05-17 20:20:30 +08:00
Attente
411b5e0ca6 fix(database): change the title variable in download templates to torrent_title 2025-05-17 19:45:49 +08:00
Attente
3f03963811 fix(themoviedb): handle episode-group episode numbers directly at the API level
- Remove the redundant episode-number handling in season_group_details
2025-05-17 19:45:49 +08:00
jxxghp
d43f81e118 Merge pull request #4296 from Pollo3470/fix-bluray-match 2025-05-17 18:11:27 +08:00
Pollo
b97dbd2515 fix: improve the Blu-ray matching rules 2025-05-17 17:56:05 +08:00
jxxghp
c6a20a9ed3 Merge pull request #4294 from Miralia/v2 2025-05-16 21:57:19 +08:00
Miralia
27f0f29eef fix(meta): fix some format-recognition issues 2025-05-16 20:49:23 +08:00
jxxghp
223508ae72 Merge pull request #4292 from Seed680/v2 2025-05-16 15:55:31 +08:00
qiaoyun680
bce0a4b8cd bugfix: if the custom wallpaper API is itself an image URL, return the request URL 2025-05-16 15:48:37 +08:00
jxxghp
65412a4263 v2.4.8
- Fixed plugins not registering scheduled services in some cases
- Secondary-category strategies support release-year ranges
- Support custom background wallpapers
- Plugins can provide workflow actions that can be arranged into workflows
2025-05-16 12:47:38 +08:00
jxxghp
0233b78c8e fix plugin actions api 2025-05-15 22:13:15 +08:00
jxxghp
b0b25e4cfa fix plugin actions api 2025-05-15 22:02:05 +08:00
jxxghp
806288d587 add: API to query plugin actions 2025-05-15 20:54:39 +08:00
jxxghp
97265fc43b feat: secondary-category release year supports ranges 2025-05-15 20:13:44 +08:00
jxxghp
41ca50d0d4 feat: workflows can invoke plugin actions 2025-05-15 19:55:14 +08:00
jxxghp
9d02206fd9 feat: secondary categories support release year 2025-05-15 15:52:42 +08:00
jxxghp
ba2293eb30 feat: more third-party plugin repositories in the default config 2025-05-15 12:50:18 +08:00
jxxghp
8b9e28975d Merge pull request #4280 from Miralia/v2 2025-05-15 12:09:18 +08:00
jxxghp
22ae8b8f87 fix: saving settings of non-str types 2025-05-15 12:00:09 +08:00
Miralia
187e352cbd feat(meta): adjust the regular expressions 2025-05-15 11:50:31 +08:00
Miralia
23ef8ad28d feat(meta): extend the audio/video format matching rules 2025-05-15 09:58:27 +08:00
jxxghp
1dadf56c42 fix #4276 2025-05-15 08:40:38 +08:00
jxxghp
52640b80c0 Merge pull request #4276 from Seed680/v2
Support a custom wallpaper API URL; any returned image whose file extension is allowed by the config is used as a wallpaper
2025-05-15 08:24:01 +08:00
jxxghp
fe25f8f48f fix #4277 2025-05-15 07:12:52 +08:00
jxxghp
7f59572d8b Merge pull request #4279 from wumode/pip_invocation 2025-05-15 06:43:53 +08:00
wumode
90fc4c6bad Use sys.executable -m pip for env-safe package installation 2025-05-14 23:19:40 +08:00
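The `sys.executable -m pip` commit above applies a well-known pattern: invoking `pip` by name may resolve to a different Python (the system interpreter, another venv), while `sys.executable -m pip` always installs into the environment of the running interpreter. A small sketch of the idea (function names are illustrative):

```python
import subprocess
import sys


def pip_install_cmd(package: str) -> list:
    # `sys.executable -m pip` guarantees the install targets the running
    # interpreter's environment, not whichever `pip` is first on PATH
    return [sys.executable, "-m", "pip", "install", package]


def pip_install(package: str) -> int:
    """Run the env-safe install command; returns pip's exit code."""
    return subprocess.call(pip_install_cmd(package))


# illustration only: show the command that would be executed
print(pip_install_cmd("requests"))
```

This matters most for apps that install plugin dependencies at runtime, where a mismatched `pip` would silently install packages the app can never import.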
qiaoyun680
16b6c0da33 Support a custom wallpaper API URL; any returned image whose file extension is allowed by the config is used as a wallpaper 2025-05-14 20:04:38 +08:00
qiaoyun680
488a691f29 Support a custom wallpaper API URL; any returned image whose file extension is allowed by the config is used as a wallpaper 2025-05-14 16:50:17 +08:00
jxxghp
bcbfe2ccd5 feat: add default plugin repositories 2025-05-14 15:10:27 +08:00
jxxghp
bd9a1d7ec7 Merge pull request #4275 from akvsdk/fix_time_error 2025-05-14 13:10:41 +08:00
jiangyuqing
9331ba64d6 fix: time-parsing issue 2025-05-14 12:51:02 +08:00
jxxghp
21e5cb0a03 v2.4.7
- Fixed display of subscription file info
- Fixed the season-number display in the default notification template format
- Fixed original-language image scraping
- Fixed M-Team's new-style tags not being recognized
- Improved registration of federated plugin APIs
2025-05-14 09:16:12 +08:00
jxxghp
1a8e0c9ecb fix #4270 2025-05-14 08:41:06 +08:00
jxxghp
16fc0d31cd fix #4270 2025-05-14 08:11:50 +08:00
jxxghp
a622ada58b Update lifecycle.py 2025-05-13 23:58:08 +08:00
jxxghp
ee9c4948d3 refactor: optimize start/stop logic 2025-05-13 23:47:12 +08:00
jxxghp
cf28e1d963 refactor: optimize start/stop logic 2025-05-13 23:11:38 +08:00
jxxghp
089ec36160 Merge pull request #4269 from wikrin/v2 2025-05-13 21:44:22 +08:00
jxxghp
04ce774c22 fix plugin initializer 2025-05-13 21:37:10 +08:00
Attente
99c1422f37 feat(message): improve the season-number display format in message templates
- Add a season_fmt field to TemplateContextBuilder, storing the season number in Sxx format
- Add a season_fmt field to meta_info, storing the season number in Sxx format
- Update the season references in message templates to season_fmt for a consistent display format
- Add a database migration script updating the season references in message templates to season_fmt
2025-05-13 21:21:27 +08:00
Attente
b583a60f23 refactor(app): add empty-value filtering to the message builder
- Filter out empty values in the TemplateContextBuilder class, fixing notification templates rendering `'None'`
2025-05-13 21:21:27 +08:00
jxxghp
7be2910809 fix api register bug 2025-05-13 20:52:22 +08:00
jxxghp
30de524319 fix api register bug 2025-05-13 20:35:36 +08:00
jxxghp
c431d5e759 Merge pull request #4267 from k1z/v2 2025-05-13 18:45:01 +08:00
jxxghp
184b62b024 fix plugin apis 2025-05-13 16:36:50 +08:00
wangkai
2751770350 Fix M-Team's new-style tags not being recognized 2025-05-13 12:23:51 +08:00
jxxghp
75d98aee8e Merge pull request #4262 from wumode/fix_4180 2025-05-12 21:16:48 +08:00
wumode
48120b9406 fix: get_torrent_tags fails to properly retrieve the existing tags 2025-05-12 21:05:30 +08:00
wumode
0e302d7959 fix: add '已整理' tag to non-default downloader 2025-05-12 21:04:03 +08:00
jxxghp
59cd176f44 Update build.yml: change the tag_name format to v${{ env.app_version }} to ensure the version-tag prefix is correct 2025-05-12 11:10:42 +08:00
jxxghp
619f728f09 Update build.yml: add continue-on-error: true so that later steps still run even if deleting a release fails 2025-05-12 11:06:24 +08:00
jxxghp
6e8002acc4 fix blanks 2025-05-12 11:02:47 +08:00
jxxghp
8a4a6174f7 Merge pull request #4260 from zhuweitung/v2_fix_scrap
fix(scrap): main poster of auto-organized movies/TV shows not in the original language
2025-05-12 11:00:59 +08:00
jxxghp
ee6c4823d3 Improve the build actions 2025-05-12 10:52:23 +08:00
zhuweitung
14dcb73d06 fix(scrap): main poster of auto-organized movies/TV shows not in the original language 2025-05-12 10:09:36 +08:00
jxxghp
e15107e5ec fix DownloadHistory.get_by_mediaid 2025-05-12 07:57:25 +08:00
jxxghp
0167a9462e Merge pull request #4258 from wumode/fix_4219 2025-05-11 21:18:53 +08:00
wumode
7fa1d342ab fix: blocking issue 2025-05-11 21:05:49 +08:00
jxxghp
05b9988e1d Merge pull request #4257 from cikichen/yemapt 2025-05-11 17:29:15 +08:00
Simon
1c09e61219 Add pt.gtk.pw to the _special_domains list 2025-05-11 17:16:25 +08:00
jxxghp
35f0ad7a83 Update version.py 2025-05-11 10:11:18 +08:00
jxxghp
7ae1d6763a fix #4245 2025-05-11 08:17:42 +08:00
jxxghp
460e859795 fix #4245 2025-05-10 21:53:03 +08:00
jxxghp
4b88ec6460 feat: separate setting for the scraped-image language #4245 2025-05-10 20:43:00 +08:00
jxxghp
27ee13bb7e Merge pull request #4251 from cikichen/yemapt
update yemapt downloadsize
2025-05-10 20:10:50 +08:00
jxxghp
e6cdd337c3 fix subscribe files 2025-05-10 20:10:13 +08:00
jxxghp
7d8dd12131 fix delete_media_file 2025-05-10 20:00:06 +08:00
Simon
0800e3a136 update yemapt downloadsize 2025-05-10 16:50:53 +08:00
jxxghp
9b0f1a2a04 Merge pull request #4247 from k1z/v2 2025-05-10 00:35:07 +08:00
jxxghp
9de3cb0f92 fix douban test 2025-05-09 20:14:33 +08:00
wangkai
c053a8291c Fix special WeChat IDs being unable to process messages 2025-05-09 16:43:13 +08:00
jxxghp
a0ddfe173b fix: handle target_storage being None 2025-05-09 12:57:50 +08:00
jxxghp
17843a7c71 v2.4.5-1 2025-05-09 08:17:08 +08:00
jxxghp
324ae5c883 rollback upload api 2025-05-09 08:16:44 +08:00
jxxghp
ef03989c3f Update u115.py 2025-05-09 00:27:27 +08:00
jxxghp
63412ddd42 fix bug 2025-05-08 20:37:04 +08:00
jxxghp
30ce32608a fix typo 2025-05-08 19:49:52 +08:00
jxxghp
74799ad096 Update storage.py 2025-05-08 17:49:12 +08:00
jxxghp
31176f99c8 Merge pull request #4239 from Seed680/v2 2025-05-08 17:48:31 +08:00
Seed680
b9439c05ec Merge branch 'jxxghp:v2' into v2 2025-05-08 17:45:53 +08:00
qiaoyun680
435a04da0c feat(storage): add storage reset feature 2025-05-08 17:44:44 +08:00
jxxghp
0040b266a5 v2.4.5 2025-05-08 17:26:56 +08:00
jxxghp
645de137f2 fix plugin code detection 2025-05-08 14:26:47 +08:00
jxxghp
1883607118 fix upload api 2025-05-08 13:12:20 +08:00
jxxghp
4ccae1dac7 fix upload api 2025-05-08 12:55:40 +08:00
jxxghp
ff75db310f fix upload parts 2025-05-08 12:03:39 +08:00
jxxghp
5788520401 fix AliPan session prompt 2025-05-08 10:09:24 +08:00
jxxghp
570dddc120 fix 2025-05-08 09:56:43 +08:00
jxxghp
ea31072ae5 Improve AliPan file uploads: add multi-threaded chunked uploads and dynamic chunk-size calculation for better throughput and progress monitoring. 2025-05-08 09:52:32 +08:00
jxxghp
5eca5a6011 Improve U115Pan file uploads: support multi-threaded concurrent uploads and dynamic chunk-size calculation for better throughput and stability. 2025-05-08 09:47:43 +08:00
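The "dynamic chunk-size calculation" mentioned in the two upload commits above can be sketched as follows. This is an illustrative assumption only: the `pick_chunk_size` name, the 8 MiB floor, and the 1000-part cap are not taken from the repository.

```python
def pick_chunk_size(file_size: int,
                    min_chunk: int = 8 * 1024 * 1024,
                    max_parts: int = 1000) -> int:
    """Double the chunk size until the part count fits under max_parts."""
    chunk = min_chunk
    while file_size / chunk > max_parts:
        chunk *= 2
    return chunk

# A 100 GiB file would be split into 128 MiB chunks (800 parts).
print(pick_chunk_size(100 * 1024 ** 3) // (1024 ** 2))  # 128
```

Growing the chunk size (rather than the part count) keeps per-part overhead bounded for very large files, which is the usual motivation for dynamic chunking in cloud-drive upload APIs.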
jxxghp
67d5357227 Merge pull request #4238 from cddjr/fix_4236 2025-05-07 19:00:14 +08:00
jxxghp
a0d04ff488 Merge pull request #4237 from wikrin/v2 2025-05-07 18:59:44 +08:00
景大侠
f83787508f fix #4236 2025-05-07 18:36:24 +08:00
Attente
20aba7eb17 fix: #4228 pass MetaBase when adding subscriptions, add a username field to the context, enable raw-object references by default 2025-05-07 18:19:11 +08:00
jxxghp
0cdea3318c feat: plugin APIs support Bearer authentication 2025-05-07 13:26:42 +08:00
jxxghp
4dc2c18075 Fix plugin dashboard exception 2025-05-07 10:57:02 +08:00
jxxghp
74e97abac4 fix: dashboard exception 2025-05-07 10:55:13 +08:00
jxxghp
b1db95a925 v2.4.4 2025-05-07 08:26:06 +08:00
jxxghp
9dac9850b6 fix plugin file api 2025-05-06 23:56:35 +08:00
jxxghp
abe091254a fix plugin file api 2025-05-06 23:30:26 +08:00
jxxghp
d2e5367dc6 fix plugins 2025-05-06 11:44:23 +08:00
jxxghp
8ccd1f5fe4 Merge pull request #4229 from wikrin/v2 2025-05-06 06:34:16 +08:00
Attente
50bc865dd2 fix(database): improve message template
- Fix syntax error in downloadAdded message template
2025-05-05 23:14:58 +08:00
jxxghp
74a6ee7066 fix 2025-05-05 19:50:15 +08:00
jxxghp
89e76bcb48 fix 2025-05-05 19:49:30 +08:00
jxxghp
c55f6baf67 Merge pull request #4228 from wikrin/format_notification
Format notification
2025-05-05 19:28:44 +08:00
Attente
ae154489e1 Context building is not an expensive task; remove the cache 2025-05-05 14:08:41 +08:00
Attente
fdc79033ce Merge https://github.com/jxxghp/MoviePilot into format_notification 2025-05-05 13:21:58 +08:00
jxxghp
9a8aa5e632 Update subscribe.py 2025-05-05 13:16:14 +08:00
Attente
6b81f3ce5f feat(template): add caching to improve performance
- Integrate TTLCache (a cache with expiry) into `TemplateHelper` and `TemplateContextBuilder` to improve data reuse
- Introduce the `build_context_cache` decorator to centralize context-building cache logic; cache media info, episode details, torrent info, transfer info, and raw objects to cut repeated computation
- Add context-cache support so the async broadcast event NoticeMessage gets the context it needs (the context can be re-fetched from the message title and text)
- Let plugins flexibly rebuild message bodies via custom templates, improving extensibility
2025-05-05 13:14:45 +08:00
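A `build_context_cache`-style decorator as described in the commit above could work roughly like this minimal sketch. It is hypothetical: the real implementation uses a TTLCache per the commit message, while this sketch assumes a plain dict keyed by the argument tuple.

```python
import time
from functools import wraps


def build_context_cache(ttl: float = 60.0):
    """Cache a context builder's result per argument tuple for ttl seconds."""
    def decorator(fn):
        cache = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]          # fresh entry: reuse the built context
            value = fn(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator


calls = []


@build_context_cache(ttl=60)
def build_media_context(media_id: int) -> dict:
    calls.append(media_id)             # track real invocations
    return {"media_id": media_id}


build_media_context(1)
build_media_context(1)                 # served from cache
print(len(calls))  # 1
```

The point of the decorator, as the commit describes it, is that repeated notifications for the same media reuse one built context instead of recomputing it.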
Attente
aeaddfe36b feat(database): add notification templates for version 2.1.4
- Add new Alembic migration script for version 2.1.4
- Implement notification templates for various events:
  - Organize success
  - Download added
  - Subscribe added
  - Subscribe complete
- Store notification templates in system configuration
2025-05-05 05:27:59 +08:00
Attente
20c1f30877 feat(message): implement custom message templates
- Add a MessageTemplateHelper class for rendering message templates
- Integrate message-template rendering into ChainBase
- Update DownloadChain, SubscribeChain and TransferChain to use the new templates
- Add a TemplateHelper class for handling template formats
- Add a NotificationTemplates entry to SystemConfigKey
- Update the Notification model to support a ctype field
jxxghp
52ce6ff38e fix plugin file api 2025-05-03 22:14:39 +08:00
jxxghp
c692a3c80e feat: support native Vue plugin pages 2025-05-03 10:03:44 +08:00
jxxghp
491009636a fix bug 2025-05-02 22:57:29 +08:00
jxxghp
ed16ee14ea fix bug 2025-05-02 21:57:19 +08:00
jxxghp
7f2ed09267 fix storage 2025-05-02 20:49:38 +08:00
jxxghp
c0976897ef fix bug 2025-05-02 13:30:39 +08:00
jxxghp
85b55aa924 fix bug 2025-05-02 08:31:38 +08:00
jxxghp
91d0f76783 feat: support adding new storage types 2025-05-02 08:11:48 +08:00
jxxghp
741badf9e6 feat: support storage-operation events during file organizing 2025-05-01 21:16:21 +08:00
jxxghp
ca1f3ac377 feat: file organizing accepts an operation-class parameter 2025-05-01 20:56:17 +08:00
jxxghp
e13e1c9ca3 fix run_module 2025-05-01 11:36:43 +08:00
jxxghp
06ad042443 fix typo 2025-05-01 11:20:56 +08:00
jxxghp
9d333b855c feat: allow plugins to take over system module implementations 2025-05-01 11:03:28 +08:00
jxxghp
f46e2acd56 v2.4.3
- Multi-language user interface
- Configurable TheMovieDb metadata language
- Subscription-success messages now include cast and overview
- Bug fixes

Note: if the page is blank after upgrading, force-refresh or clear the browser cache
2025-04-29 17:32:40 +08:00
jxxghp
5ac4d3f4ae fix wallpaper api 2025-04-29 15:26:10 +08:00
jxxghp
1614eebc47 fix 2025-04-29 14:53:04 +08:00
jxxghp
b50599b71f fix: improve security 2025-04-29 14:30:34 +08:00
jxxghp
0459025bf8 Merge pull request #4207 from monster-fire/v2 2025-04-28 19:37:52 +08:00
monster-fire
0bd37da8c7 Update __init__.py: add null check 2025-04-28 18:46:48 +08:00
jxxghp
da969dde53 fix: TMDB language setting 2025-04-28 12:11:48 +08:00
jxxghp
33fdd6cafa feat: support setting the TMDB language 2025-04-28 09:10:38 +08:00
jxxghp
2fe68766eb Merge remote-tracking branch 'origin/v2' into v2 2025-04-28 09:07:42 +08:00
jxxghp
205348697c fix #4188 2025-04-27 12:26:49 +08:00
jxxghp
9b3533c1da Merge pull request #4199 from cddjr/fix_bing 2025-04-27 06:53:00 +08:00
景大侠
c3584e838e fix: Bing wallpapers not displaying when the global image cache is enabled 2025-04-27 00:17:29 +08:00
jxxghp
16d8b3fb58 Merge pull request #4187 from thsrite/v2 2025-04-23 11:53:29 +08:00
thsrite
686bbdc16b fix: add cast names and overview to subscription-added messages 2025-04-23 11:44:44 +08:00
jxxghp
65b17e4f2b v2.4.2
- Fix regular users being unable to select sites when jumping to search from a media card; regular users cannot change search sites and will search the admin-preset sites directly
2025-04-22 17:35:30 +08:00
jxxghp
23c6898789 Update nginx.template.conf 2025-04-21 21:42:12 +08:00
jxxghp
df2a1be2a2 Update nginx.template.conf 2025-04-21 21:33:00 +08:00
jxxghp
2db628a2ba v2.4.1
This release mainly updates the user interface:
- New transparent theme style
- Redesigned bottom navigation bar in PWA mode
- Various UI refinements
2025-04-21 20:05:53 +08:00
jxxghp
b6c40436c9 Merge pull request #4165 from wikrin/v2 2025-04-19 22:36:48 +08:00
Attente
a8a70cac08 refactor(db): optimize download history query logic
- Reuse the same logic as `TransferHistory.list_by`
2025-04-19 20:22:37 +08:00
jxxghp
3eefbf97b1 Update plex.py 2025-04-19 15:14:47 +08:00
jxxghp
3c423e0838 Update jellyfin.py 2025-04-19 15:14:14 +08:00
jxxghp
99cde43954 Update emby.py 2025-04-19 15:13:33 +08:00
jxxghp
fa3a787bf7 Update mediaserver.py 2025-04-19 15:12:42 +08:00
jxxghp
c776dc8036 feat: WebhookMessage.json 2025-04-19 07:59:59 +08:00
jxxghp
1ef068351d fix docker 2025-04-17 19:36:54 +08:00
jxxghp
6abe0a1862 fix version 2025-04-17 19:15:18 +08:00
jxxghp
ff13045f52 fix build 2025-04-17 12:44:22 +08:00
jxxghp
59c09681cb fix build 2025-04-17 11:49:07 +08:00
jxxghp
f664cf6fa5 remove built-lite 2025-04-17 11:47:24 +08:00
jxxghp
01a847a9c2 test beta 2025-04-17 11:43:42 +08:00
jxxghp
6da655f67f Merge pull request #4154 from TimoYoung/v2 2025-04-16 12:41:15 +08:00
TimoYoung
21df7dced1 fix: CookieCloud site sync failing to run 2025-04-16 10:26:43 +08:00
jxxghp
7fc257ea79 v2.4.0 2025-04-16 08:11:31 +08:00
jxxghp
24f170ff72 fix search cache 2025-04-16 08:10:48 +08:00
jxxghp
39999c9ee4 Update Dockerfile 2025-04-15 06:54:11 +08:00
jxxghp
27a5188e4e Update Dockerfile.lite 2025-04-15 06:52:53 +08:00
jxxghp
a5af0786aa Fix UI errors 2025-04-13 16:03:40 +08:00
jxxghp
e9c9cfaa72 Merge pull request #4137 from lddsb/patch-1 2025-04-11 16:06:29 +08:00
Dee Luo
8ca4ea0f3f perf: improve qBittorrent downloader port-retrieval logic 2025-04-11 15:43:40 +08:00
jxxghp
86e1f9a9d6 Merge pull request #4136 from lddsb/patch-3 2025-04-11 11:43:26 +08:00
Dee Luo
b36ceda585 fix: Rename groups to groups.py 2025-04-11 11:22:29 +08:00
Dee Luo
27a3e6c6db feat: add unit tests for release groups 2025-04-11 11:21:39 +08:00
Dee Luo
a731327c00 feat: add unit test cases for release groups 2025-04-11 11:20:36 +08:00
Dee Luo
737c00978e perf: improve release-group matching so some Web groups no longer fail to match
Add matching rules for two sites' release groups
2025-04-11 11:18:15 +08:00
jxxghp
18bcb3a067 fix #4118 2025-04-10 19:40:22 +08:00
jxxghp
f49f55576f Merge pull request #4128 from lddsb/patch-2 2025-04-10 11:09:12 +08:00
Dee Luo
1bef4f9a4d perf: improve custom release-group loading so a list of empty strings no longer affects the final result 2025-04-10 11:00:46 +08:00
Dee Luo
ab1df59f7a fix: the frontend passing an empty list like [""] broke the emptiness check 2025-04-10 10:51:40 +08:00
jxxghp
bcd235521e v2.3.9
- Various UI refinements
- Fix subscription-share parameter passing; open up subscription-share management
2025-04-10 08:34:16 +08:00
jxxghp
31a2eac302 fix: subscription-share parameter passing 2025-04-10 08:19:59 +08:00
jxxghp
7e6b7e5dd5 Update subscribe.py 2025-04-09 17:32:07 +08:00
jxxghp
9ec9f48425 feat: add subscription administrators #4123 2025-04-09 13:26:58 +08:00
jxxghp
a3bec43eab feat: add subscription administrators #4123 2025-04-09 13:26:10 +08:00
jxxghp
f429b6397e fix RecommendMediaSource 2025-04-08 18:52:54 +08:00
jxxghp
9d6e7dc288 Merge pull request #4115 from lddsb/patch-1 2025-04-08 17:58:36 +08:00
Dee Luo
a27c09c1e8 perf: relax release-group suffix matching
Support suffixes of the form 制作组xxx (group name followed by extra characters)
2025-04-08 16:35:38 +08:00
jxxghp
ceb0697c73 Adapt to M-Team API changes 2025-04-07 21:30:41 +08:00
jxxghp
6ad6a08bf1 Merge pull request #4110 from cddjr/trimemedia
Improve TrimeMedia (飞牛) server address compatibility
2025-04-07 21:15:38 +08:00
jxxghp
fac6ad7116 Merge pull request #4109 from cddjr/fix_mteam
Fix incorrect M-Team request parameters
2025-04-07 21:14:42 +08:00
景大侠
7d8cda0457 Fix incorrect M-Team request parameters 2025-04-07 21:04:21 +08:00
景大侠
33fc3fd63b Add an API for deleting media 2025-04-07 17:20:47 +08:00
景大侠
8d39cc87f7 Improve server address compatibility 2025-04-07 16:37:41 +08:00
景大侠
d0b1348c96 fix some warnings 2025-04-07 16:21:39 +08:00
jxxghp
0afc38f6b8 Merge pull request #4103 from wikrin/v2 2025-04-07 11:07:11 +08:00
Attente
264896ba17 fix: episode-group scraping 2025-04-07 09:25:06 +08:00
jxxghp
08decf0b82 feat: add a default plugin repository 2025-04-07 08:06:59 +08:00
jxxghp
98381265e6 Update u115.py 2025-04-07 07:37:00 +08:00
DDSRem
d323159719 Update requirements.in 2025-04-06 13:10:56 +08:00
jxxghp
7ef21e1d1c Merge pull request #4098 from DDS-Derek/dev 2025-04-06 12:02:01 +08:00
DDSRem
2d6b2ab7d7 bump: python environment upgrade 3.12
links https://github.com/jxxghp/MoviePilot/issues/3543
2025-04-06 11:56:00 +08:00
jxxghp
a1e6fd88a9 Update version.py 2025-04-06 07:53:29 +08:00
jxxghp
e72ff867fc fix 115 pickcode 2025-04-05 09:29:08 +08:00
jxxghp
8512641984 Update scraper.py 2025-04-04 22:13:14 +08:00
jxxghp
f1aa64d191 fix episodes group 2025-04-04 12:17:42 +08:00
jxxghp
347262538f fix episodes group 2025-04-04 08:59:12 +08:00
jxxghp
82510d60ca Update __init__.py 2025-04-03 22:48:29 +08:00
jxxghp
6104cd04c3 Update context.py 2025-04-03 20:32:56 +08:00
jxxghp
44eb58426a feat: support recognition and scraping with a specified episode group 2025-04-03 18:43:04 +08:00
jxxghp
078b60cc1e feat: support recognition and scraping with a specified episode group 2025-04-03 18:35:02 +08:00
jxxghp
21e120a4f8 refactor: remove one API query 2025-04-03 10:43:31 +08:00
jxxghp
439b834aa8 Update version.py 2025-04-02 18:39:50 +08:00
jxxghp
ddbe8324be Add development notes to the README 2025-03-30 11:36:19 +08:00
jxxghp
8ffe93113b Add development notes to the README 2025-03-30 09:53:34 +08:00
jxxghp
8b31b7cb8a v2.3.6-1
- Fix media-server library search
- Further search-page improvements
2025-03-30 09:23:46 +08:00
jxxghp
e09e21caa9 Merge pull request #4067 from cddjr/fix_media_exists 2025-03-30 02:48:19 +08:00
景大侠
20b145c679 Further fixes for missing-media detection 2025-03-30 02:41:24 +08:00
jxxghp
c5730cf1ad Merge pull request #4065 from cddjr/fix_v235_emby_bug 2025-03-29 23:18:34 +08:00
景大侠
f16b038463 Fix Emby falsely reporting missing media (regression introduced in v2.3.5) 2025-03-29 23:15:58 +08:00
jxxghp
c08beec232 fix: improve the error shown when the QR code has not been scanned 2025-03-29 22:02:59 +08:00
jxxghp
946361e0ae Update requirements.in 2025-03-29 20:30:57 +08:00
jxxghp
97cf65a231 Update version.py 2025-03-29 20:21:54 +08:00
jxxghp
d7eb6ac15d Update alipan.py 2025-03-29 19:30:22 +08:00
jxxghp
075afdbb77 fix alipan upload 2025-03-29 15:39:29 +08:00
jxxghp
2ac047504a fix alipan 2025-03-29 14:52:49 +08:00
jxxghp
c44aa50ef5 fix upload progress bar 2025-03-29 14:33:45 +08:00
jxxghp
7ffafb49c4 fix alipan upload 2025-03-29 10:26:59 +08:00
jxxghp
9b7d57a853 fix alipan api 2025-03-29 09:42:23 +08:00
jxxghp
ac19b3b512 fix alipan api 2025-03-28 21:22:02 +08:00
jxxghp
b030317186 fix: reduce 115 directory traversal 2025-03-28 20:58:35 +08:00
jxxghp
b506059874 Merge pull request #4059 from cddjr/trimemedia 2025-03-28 20:13:16 +08:00
景大侠
cf7ba6e17f Remove test code 2025-03-28 19:54:47 +08:00
jxxghp
b7ce5663a3 fix ide warnings 2025-03-28 19:43:55 +08:00
jxxghp
58fa8064ad Merge pull request #4058 from cddjr/trimemedia
Initial TrimeMedia (飞牛) support
2025-03-28 19:28:35 +08:00
jxxghp
ed48f56526 fix alipan 2025-03-28 17:48:30 +08:00
景大侠
896eb13f7d Initial TrimeMedia (飞牛) support 2025-03-28 16:26:40 +08:00
jxxghp
b8cd1c46c1 feat:Alipan Open Api 2025-03-28 13:40:29 +08:00
jxxghp
c5e84273c0 fix 115 directory creation 2025-03-27 19:55:01 +08:00
jxxghp
f21653ffb7 Fix 115 listing exception 2025-03-27 17:27:01 +08:00
jxxghp
65c8116cc9 fix 115 listing exception handling 2025-03-27 17:26:07 +08:00
jxxghp
5e442433e5 fix: raise an exception on 115 listing errors 2025-03-27 12:48:19 +08:00
jxxghp
7041347e76 Update version.py 2025-03-27 12:13:19 +08:00
jxxghp
810c205709 fix 115 2025-03-27 12:04:49 +08:00
jxxghp
ec7035990a fix 2025-03-26 20:12:08 +08:00
jxxghp
da6d9bb2bd fix 115 upload 2025-03-26 18:31:20 +08:00
jxxghp
e009043c63 fix log 2025-03-26 14:00:41 +08:00
jxxghp
79020e9338 hack fix 115 callback format error 2025-03-26 10:39:40 +08:00
jxxghp
2020244cae fix _path_to_id 2025-03-26 08:54:51 +08:00
jxxghp
43fe8f25f8 fix _path_to_id 2025-03-26 08:50:25 +08:00
jxxghp
9522888a60 fix 115 2025-03-26 08:30:30 +08:00
jxxghp
70c183ae2b try fix 115 upload 2025-03-26 07:15:31 +08:00
jxxghp
5d56eb9bef fix 115 upload 2025-03-25 21:33:29 +08:00
jxxghp
a461414a04 fix 115 callback encode 2025-03-25 20:37:46 +08:00
jxxghp
5737c3dca6 fix 115 logging frequency 2025-03-25 20:00:44 +08:00
jxxghp
57ea50e59c fix 115 callback 2025-03-25 19:38:39 +08:00
jxxghp
7f630e8460 fix 115 callback 2025-03-25 19:37:00 +08:00
jxxghp
108e8502e1 fix 115 upload progress 2025-03-25 19:27:53 +08:00
jxxghp
4aa986d122 fix 115 instant-upload detection 2025-03-25 18:26:45 +08:00
jxxghp
60239bbfc4 fix bug 2025-03-25 13:57:39 +08:00
jxxghp
93ef3b1f1a add debug logging 2025-03-25 13:48:00 +08:00
jxxghp
d9ed135be4 fix 115 2025-03-25 12:58:03 +08:00
jxxghp
e83fe0aabe fix storage logging 2025-03-25 08:34:36 +08:00
jxxghp
4be7426ae7 fix 115 2025-03-24 22:57:16 +08:00
jxxghp
0ce5ef7f56 fix 115 upload 2025-03-24 21:49:27 +08:00
jxxghp
c2c0946423 fix 115 upload 2025-03-24 21:39:03 +08:00
jxxghp
63049f61f7 fix typing 2025-03-24 19:14:04 +08:00
jxxghp
1918b0f192 fix 115 api 2025-03-24 19:11:18 +08:00
jxxghp
a3ad49b1fa fix 115 api 2025-03-24 19:03:57 +08:00
jxxghp
bed63d1e2b fix 115 api 2025-03-24 19:02:24 +08:00
jxxghp
4a8e739686 fix 115 api 2025-03-24 13:11:23 +08:00
jxxghp
d502f33041 fix 115 open api 2025-03-24 12:04:23 +08:00
jxxghp
4a0ecf36c7 fix typing 2025-03-24 08:40:18 +08:00
jxxghp
afb9e49755 fix typing 2025-03-24 08:11:02 +08:00
jxxghp
18f65e5597 fix year type 2025-03-23 23:16:11 +08:00
jxxghp
22b69f7dac fix blanks 2025-03-23 22:35:37 +08:00
jxxghp
15df062825 Update discover.py 2025-03-23 22:23:31 +08:00
jxxghp
ed607d3895 Update recommend.py 2025-03-23 21:57:48 +08:00
jxxghp
f9b0db623d fix cython type error 2025-03-23 21:39:37 +08:00
jxxghp
740cf12c11 fix cython errors 2025-03-23 19:09:48 +08:00
jxxghp
4c4bf698b1 Update scheduler.py 2025-03-23 18:26:36 +08:00
jxxghp
dc74e749c9 Update bulit-lite.yml 2025-03-23 18:03:30 +08:00
jxxghp
fa52c542d7 fix lite Dockerfile 2025-03-23 15:55:02 +08:00
jxxghp
850d480c7c fix: build lite 2025-03-23 14:48:20 +08:00
jxxghp
a92cc9dce9 Update bulit-lite.yml 2025-03-23 14:31:29 +08:00
jxxghp
4944a0a456 Update Dockerfile.lite 2025-03-23 14:28:45 +08:00
jxxghp
13c40058a8 fix:build lite 2025-03-23 13:00:07 +08:00
jxxghp
1410c03c26 feat:build lite 2025-03-23 12:40:14 +08:00
jxxghp
2f38b3040d fix: code-compatibility issues 2025-03-23 12:10:21 +08:00
jxxghp
79411a7350 fix: code-compatibility issues 2025-03-23 09:00:24 +08:00
jxxghp
ee94c2af32 Merge pull request #4034 from DDS-Derek/dev 2025-03-22 11:31:25 +08:00
DDSRem
d46e5c8d86 bump: docker version 6.1.3 to 7.1.0 2025-03-22 11:13:06 +08:00
jxxghp
95cd10bfba fix #4014 2025-03-22 08:15:58 +08:00
jxxghp
59ed08b92d fix 115 api 2025-03-21 21:08:14 +08:00
jxxghp
2b9f7bca51 fix 115 api 2025-03-21 21:01:37 +08:00
jxxghp
a860a8c02b fix 115 open api 2025-03-21 19:06:53 +08:00
jxxghp
f2cbb8d2f7 fix 115 open api 2025-03-21 18:53:26 +08:00
jxxghp
ea61599589 add 115 open api 2025-03-21 13:27:31 +08:00
jxxghp
0b59c95f63 fix #4029 2025-03-21 11:24:08 +08:00
jxxghp
66d4308810 fix https://github.com/jxxghp/MoviePilot-Frontend/issues/312 2025-03-21 11:19:29 +08:00
jxxghp
f2648df2ad add special domains 2025-03-20 13:00:53 +08:00
jxxghp
d20f68e897 remove setup.py 2025-03-20 08:53:02 +08:00
jxxghp
338021645d 更新 requirements.in 2025-03-19 21:50:26 +08:00
jxxghp
a0a11842cb fix workflow count 2025-03-15 10:16:25 +08:00
jxxghp
f5832d6a25 Merge pull request #4012 from fanrongbin/v2 2025-03-14 17:22:23 +08:00
Robin-PC-X1C
8fa6d9de39 20250314 update rss.py
Reason: when an admin adds multiple Douban IDs in MP, notifications for different Douban users' subscriptions are all labeled "豆瓣想看" (Douban wishlist) and cannot be told apart
After: fetch the Douban nickname so subscription notifications can identify the Douban user
2025-03-14 16:42:41 +08:00
jxxghp
e662338d6f Merge pull request #3995 from KoWming/v2 2025-03-10 12:48:31 +08:00
KoWming
2c1d6817dd Update security.py 2025-03-10 12:46:06 +08:00
jxxghp
5d4a3fec1f v2.3.4
- Support setting a time window for sending messages
- Explore tabs support drag-and-drop reordering
- Fix actor avatars not displaying
- Fix site rate limiting not taking effect
- Fix scheduled jobs disappearing after settings were saved repeatedly within a short time
- Fix workflow execution data accumulating
2025-03-10 10:08:34 +08:00
jxxghp
6603a30e7e fix MessageQueueManager 2025-03-10 10:02:32 +08:00
jxxghp
81d08ca517 fix MessageQueueManager 2025-03-10 08:24:28 +08:00
jxxghp
e04506a614 fix workflow message link 2025-03-09 21:07:52 +08:00
jxxghp
39756512ae feat: support a time window for sending messages 2025-03-09 19:34:05 +08:00
jxxghp
71c29ea5e7 fix ide warnings 2025-03-09 18:35:52 +08:00
jxxghp
87ce266b14 fix warnings 2025-03-09 16:48:32 +08:00
jxxghp
ed6d856c24 Merge remote-tracking branch 'origin/v2' into v2 2025-03-09 16:33:01 +08:00
jxxghp
d3ecbef946 fix warnings 2025-03-09 08:37:05 +08:00
jxxghp
7b24f5eb21 fix: site rate limiting 2025-03-07 08:19:28 +08:00
jxxghp
e1f82e338a fix: lock scheduled-job initialization 2025-03-07 08:07:57 +08:00
jxxghp
a835d34a01 Merge pull request #3975 from so1ve/patch-1 2025-03-06 06:54:11 +08:00
Ray
79d70c9977 fix: torrents tagged "官组" (official group) should be recognized as official releases 2025-03-05 22:10:28 +08:00
jxxghp
aea82723cb Merge pull request #3965 from mackerel-12138/fix_s0_scrap 2025-03-05 11:56:22 +08:00
zhanglijun
d47ff0b31a Fix incorrect season-0 episode info 2025-03-04 23:18:41 +08:00
jxxghp
affcb9d5c3 fix bug 2025-03-04 14:22:32 +08:00
jxxghp
9be2686733 Merge pull request #3957 from thsrite/v2 2025-03-03 14:22:06 +08:00
thsrite
7126fed2b5 fix docker container log duplicate printing 2025-03-03 13:44:38 +08:00
jxxghp
5bc4330e1c Fix HDDolby 2025-03-02 14:55:18 +08:00
jxxghp
b25ac7116e Update hddolby.py 2025-03-02 14:41:55 +08:00
jxxghp
8896867bb3 Update fetch_medias.py 2025-03-02 14:23:37 +08:00
jxxghp
ba7c9eec7b fix 2025-03-02 13:16:46 +08:00
jxxghp
9b95fde8d1 v2.3.3
- Added support for several more index and authentication sites
- HDDolby switched to using its API (site settings must be adjusted, otherwise site data cannot refresh)
- Changed the domain used for IYUU authentication
- Further workflow improvements
2025-03-02 12:48:32 +08:00
jxxghp
2851f16395 feat: add caching to actions 2025-03-02 12:27:36 +08:00
jxxghp
0d63dfb931 fix actions 2025-03-02 11:15:52 +08:00
jxxghp
37558e3135 Update hddolby.py 2025-03-02 10:24:17 +08:00
jxxghp
96021e42a2 fix 2025-03-02 10:08:03 +08:00
jxxghp
c32b845515 feat: add a recognition option to actions 2025-03-02 09:45:24 +08:00
jxxghp
147d980c54 fix hddolby 2025-03-02 08:51:09 +08:00
jxxghp
f91c43dde9 fix hddolby 2025-03-02 08:08:46 +08:00
jxxghp
4cf5cb06a0 fix hddolby 2025-03-02 08:06:25 +08:00
jxxghp
8e4b4c3144 add hddolby userdata api 2025-03-01 21:28:15 +08:00
jxxghp
c302013696 add hddolby api 2025-03-01 21:24:01 +08:00
jxxghp
37cb94c59d add hddolby api 2025-03-01 21:08:37 +08:00
jxxghp
01f7c6bc2b fix 2025-03-01 18:55:16 +08:00
411 changed files with 53461 additions and 14202 deletions


@@ -1,3 +1,84 @@
# Ignore git
# Git
.github
.git
.git
.gitignore
# Documentation
docs/
README.md
LICENSE
# Development files
.pylintrc
*.pyc
__pycache__/
*.pyo
*.pyd
.Python
*.so
.pytest_cache/
.coverage
htmlcov/
.tox/
.nox/
.hypothesis/
.mypy_cache/
.dmypy.json
dmypy.json
# Virtual environments
venv/
env/
ENV/
env.bak/
venv.bak/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Logs
*.log
logs/
# Temporary files
*.tmp
*.temp
tmp/
temp/
# Database
*.db
*.sqlite
*.sqlite3
# Test files
tests/
test_*
*_test.py
# Build artifacts
build/
dist/
*.egg-info/
# Docker
Dockerfile*
docker-compose*
.dockerignore
# Other
app.ico
frozen.spec


@@ -10,7 +10,7 @@ body:
The goal is for collaborating developers to know clearly "what will be done" and "how exactly it will be done", and for every developer to take part in the discussion openly and transparently,
so that the resulting impact can be evaluated and discussed (overlooked considerations, backward compatibility, conflicts with existing features).
Proposals therefore focus on describing the **solution, design, and steps** for solving the problem.
If you only want to discuss whether a feature itself should be added or improved, please use -> [Issue: feature improvement](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
- type: textarea
id: background

.github/workflows/beta.yml (new file, 60 lines)

@@ -0,0 +1,60 @@
name: MoviePilot Builder Beta
on:
  workflow_dispatch:
jobs:
  Docker-build:
    runs-on: ubuntu-latest
    name: Build Docker Image
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Release version
        id: release_version
        run: |
          app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
          echo "app_version=$app_version" >> $GITHUB_ENV
      - name: Docker Meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
            ghcr.io/${{ github.repository }}
          tags: |
            type=raw,value=beta
      - name: Set Up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set Up Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Login GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build Image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: docker/Dockerfile
          platforms: |
            linux/amd64
            linux/arm64/v8
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha, scope=${{ github.workflow }}-docker
          cache-to: type=gha, scope=${{ github.workflow }}-docker
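The `Release version` step in this workflow parses version.py with sed. The same extraction, sketched in Python for clarity (the `extract_app_version` name is illustrative, not from the repo):

```python
import re


def extract_app_version(version_py: str) -> str:
    # Mirrors the sed expression in the workflow: capture what follows the
    # leading 'v' in a line like APP_VERSION = 'v2.4.5'.
    match = re.search(r"APP_VERSION\s=\s'v(.*)'", version_py)
    return match.group(1) if match else ""


print(extract_app_version("APP_VERSION = 'v2.4.5'"))  # 2.4.5
```

Stripping the `v` here is what later lets the release steps re-add the prefix as `v${{ env.app_version }}`.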


@@ -25,7 +25,10 @@ jobs:
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
images: |
${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
${{ secrets.DOCKER_USERNAME }}/moviepilot
ghcr.io/${{ github.repository }}
tags: |
type=raw,value=${{ env.app_version }}
type=raw,value=latest
@@ -42,11 +45,18 @@ jobs:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build Image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
file: docker/Dockerfile
platforms: |
linux/amd64
linux/arm64/v8
@@ -56,10 +66,22 @@ jobs:
cache-from: type=gha, scope=${{ github.workflow }}-docker
cache-to: type=gha, scope=${{ github.workflow }}-docker
- name: Get existing release body
id: get_release_body
continue-on-error: true
run: |
release_body=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/tags/v${{ env.app_version }}" | \
jq -r '.body // ""')
echo "RELEASE_BODY<<EOF" >> $GITHUB_ENV
echo "$release_body" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
- name: Delete Release
uses: dev-drprasad/delete-tag-and-release@v1.1
continue-on-error: true
with:
tag_name: ${{ env.app_version }}
tag_name: v${{ env.app_version }}
delete_release: true
github_token: ${{ secrets.GITHUB_TOKEN }}
@@ -68,8 +90,9 @@ jobs:
with:
tag_name: v${{ env.app_version }}
name: v${{ env.app_version }}
body: ${{ env.RELEASE_BODY }}
draft: false
prerelease: false
make_latest: false
make_latest: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/issues.yml (new file, 32 lines)

@@ -0,0 +1,32 @@
name: Close inactive issues
on:
  workflow_dispatch:
  schedule:
    # GitHub Actions only supports UTC.
    # '0 18 * * *' is 18:00 UTC, i.e. 02:00 the next day in China time (UTC+8).
    - cron: "0 18 * * *"
jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          # Days before the stale label is applied
          days-before-issue-stale: 30
          # Days before a stale issue is closed
          days-before-issue-close: 14
          # Custom label name
          stale-issue-label: "stale"
          stale-issue-message: "此问题已过时,因为它已打开 30 天且没有任何活动。"
          close-issue-message: "此问题已关闭,因为它在标记为 stale 后,已处于无更新状态 14 天。"
          # Ignore all pull requests; only handle issues
          days-before-pr-stale: -1
          days-before-pr-close: -1
          # Exempt issues labeled RFC
          exempt-issue-labels: "RFC"
          repo-token: ${{ secrets.GITHUB_TOKEN }}
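The cron comment in this workflow converts UTC to China time; the arithmetic can be checked directly:

```python
from datetime import datetime, timedelta, timezone

# 18:00 UTC on day 1 ...
utc = datetime(2025, 1, 1, 18, 0, tzinfo=timezone.utc)
# ... is 02:00 on day 2 in UTC+8 (China Standard Time)
cst = utc.astimezone(timezone(timedelta(hours=8)))
print(cst.strftime("%Y-%m-%d %H:%M"))  # 2025-01-02 02:00
```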

.github/workflows/pylint.yml (new file, 91 lines)

@@ -0,0 +1,91 @@
name: Pylint Code Quality Check
on:
  # Allow manual triggering
  workflow_dispatch:
jobs:
  pylint:
    runs-on: ubuntu-latest
    name: Pylint Code Quality Check
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
          cache: 'pip'
      - name: Cache pip dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt', '**/requirements.in') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip setuptools wheel
          pip install pylint
          # Install project dependencies
          if [ -f requirements.txt ]; then
            echo "📦 安装 requirements.txt 中的依赖..."
            pip install -r requirements.txt
          elif [ -f requirements.in ]; then
            echo "📦 安装 requirements.in 中的依赖..."
            pip install -r requirements.in
          else
            echo "⚠️ 未找到依赖文件,仅安装 pylint"
          fi
      - name: Verify pylint config
        run: |
          # Check that the project's pylint config file exists
          if [ -f .pylintrc ]; then
            echo "✅ 找到项目配置文件: .pylintrc"
            echo "配置文件内容预览:"
            head -10 .pylintrc
          else
            echo "❌ 未找到 .pylintrc 配置文件"
            exit 1
          fi
      - name: Run pylint
        run: |
          # Run pylint on the main Python files
          echo "🚀 运行 Pylint 错误检查..."
          # Check the main directory - errors only; fail the step on errors
          echo "📂 检查 app/ 目录..."
          pylint app/ --output-format=colorized --reports=yes --score=yes
          # Check Python files in the repository root
          echo "📂 检查根目录 Python 文件..."
          for file in $(find . -name "*.py" -not -path "./.*" -not -path "./.venv/*" -not -path "./build/*" -not -path "./dist/*" -not -path "./tests/*" -not -path "./docs/*" -not -path "./__pycache__/*" -maxdepth 1); do
            echo "检查文件: $file"
            pylint "$file" --output-format=colorized || exit 1
          done
          # Generate a detailed report
          echo "📊 生成 Pylint 详细报告..."
          pylint app/ --output-format=json > pylint-report.json || true
          # Show the score (informational only)
          echo "📈 Pylint 评分(仅供参考):"
          pylint app/ --score=yes --reports=no | tail -2 || true
      - name: Upload pylint report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: pylint-report
          path: pylint-report.json
      - name: Summary
        run: |
          echo "🎉 Pylint 检查完成!"
          echo "✅ 没有发现语法错误或严重问题"
          echo "📊 详细报告已保存为构建工件"
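The workflow above uploads pylint's JSON report as an artifact. A minimal sketch of filtering that report for error-level messages (the sample report content is invented; the key names follow pylint's documented JSON output, which is a list of message objects):

```python
import json

# Stand-in for pylint-report.json: a list of message objects with keys
# such as "type", "module", "line", "symbol", and "message".
report_json = """[
  {"type": "error", "module": "app.core", "line": 10,
   "symbol": "undefined-variable", "message": "Undefined variable 'x'"},
  {"type": "convention", "module": "app.core", "line": 3,
   "symbol": "missing-module-docstring", "message": "Missing module docstring"}
]"""

messages = json.loads(report_json)
errors = [m for m in messages if m["type"] == "error"]
print(len(errors))  # 1
```

This mirrors the .pylintrc policy below: only `error`-type messages matter; conventions and warnings are informational.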

.gitignore (9 lines changed)

@@ -1,6 +1,9 @@
.idea/
*.c
*.so
*.pyd
build/
cython_cache/
dist/
nginx/
test.py
@@ -20,4 +23,8 @@ config/cache/
*.pyc
*.log
.vscode
venv
venv
# Pylint
pylint-report.json
.pylint.d/

.pylintrc (new file, 83 lines)

@@ -0,0 +1,83 @@
[MASTER]
# Python path setup
init-hook='import sys; sys.path.append(".")'
# Files and directories to ignore
ignore=.git,__pycache__,.venv,build,dist,tests,docs
# Number of parallel jobs (0 = auto)
jobs=0

[MESSAGES CONTROL]
# Only fail on error-level problems; disable warnings, conventions and refactor hints
# E = Error          - fails the build
# W = Warning        - displayed only, does not fail
# R = Refactor hint  - displayed only, does not fail
# C = Convention     - displayed only, does not fail
# I = Information    - displayed only, does not fail
# Disable everything, then re-enable errors and a few important checks
disable=all
enable=error,
       syntax-error,
       undefined-variable,
       used-before-assignment,
       unreachable,
       return-outside-function,
       yield-outside-function,
       continue-in-finally,
       nonlocal-without-binding,
       undefined-loop-variable,
       redefined-builtin,
       not-callable,
       assignment-from-no-return,
       no-value-for-parameter,
       too-many-function-args,
       unexpected-keyword-arg,
       redundant-keyword-arg,
       import-error,
       relative-beyond-top-level

[REPORTS]
# Report format
output-format=colorized
reports=yes
score=yes

[FORMAT]
# Maximum line length
max-line-length=120
# Indentation string
indent-string='    '

[DESIGN]
# Maximum number of arguments
max-args=10
# Maximum number of local variables
max-locals=20
# Maximum number of branches
max-branches=15
# Maximum number of statements
max-statements=50
# Maximum number of parent classes
max-parents=7
# Maximum number of attributes
max-attributes=10
# Minimum number of public methods
min-public-methods=1
# Maximum number of public methods
max-public-methods=25

[SIMILARITIES]
# Minimum number of similar lines
min-similarity-lines=6
# Ignore comments
ignore-comments=yes
# Ignore docstrings
ignore-docstrings=yes
# Ignore imports
ignore-imports=yes

[TYPECHECK]
# Modules for which missing-member hints are suppressed
generated-members=requests.packages.urllib3


@@ -1,91 +0,0 @@
FROM python:3.11.4-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
CONFIG_DIR="/config" \
TERM="xterm" \
DISPLAY=:987 \
PUID=0 \
PGID=0 \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
MOVIEPILOT_AUTO_UPDATE=release
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install \
musl-dev \
nginx \
gettext-base \
locales \
procps \
gosu \
bash \
wget \
curl \
busybox \
dumb-init \
jq \
fuse3 \
rsync \
ffmpeg \
nano \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
elif [ "$(uname -m)" = "aarch64" ]; \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& curl https://rclone.org/install.sh | bash \
&& curl --insecure -fsSL https://raw.githubusercontent.com/DDS-Derek/Aria2-Pro-Core/master/aria2-install.sh | bash \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.in requirements.in
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython pip-tools \
&& pip-compile requirements.in \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& apt-get remove -y build-essential \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& cp -f /app/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} \
&& groupadd -r moviepilot -g 918 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 918 \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& FRONTEND_VERSION=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /app/version.py) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Plugins-main/plugins.v2/* /app/app/plugins/ \
&& cat /tmp/MoviePilot-Plugins-main/package.json | jq -r 'to_entries[] | select(.value.v2 == true) | .key' | awk '{print tolower($0)}' | \
while read -r i; do if [ ! -d "/app/app/plugins/$i" ]; then mv "/tmp/MoviePilot-Plugins-main/plugins/$i" "/app/app/plugins/"; else echo "跳过 $i"; fi; done \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Resources-main/resources/* /app/app/helper/ \
&& rm -rf /tmp/*
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]


@@ -18,13 +18,60 @@
## 主要特性
- 前后端分离基于FastApi + Vue3,前端项目地址:[MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)APIhttp://localhost:3001/docs
- 前后端分离基于FastApi + Vue3
- 聚焦核心需求,简化功能和设置,部分设置项可直接使用默认值。
- 重新设计了用户界面,更加美观易用。
## 安装使用
访问官方Wikihttps://wiki.movie-pilot.org
官方Wikihttps://wiki.movie-pilot.org
## 参与开发
API文档https://api.movie-pilot.org
MCP工具API文档详见 [docs/mcp-api.md](docs/mcp-api.md)
本地运行需要 `Python 3.12``Node JS v20.12.1`
- 克隆主项目 [MoviePilot](https://github.com/jxxghp/MoviePilot)
```shell
git clone https://github.com/jxxghp/MoviePilot
```
- 克隆资源项目 [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources) ,将 `resources` 目录下对应平台及版本的库 `.so`/`.pyd`/`.bin` 文件复制到 `app/helper` 目录
```shell
git clone https://github.com/jxxghp/MoviePilot-Resources
```
- 安装后端依赖,运行 `main.py` 启动后端服务,默认监听端口:`3001`API文档地址`http://localhost:3001/docs`
```shell
cd MoviePilot
pip install -r requirements.txt
python3 -m app.main
```
- Clone the frontend project [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
```shell
git clone https://github.com/jxxghp/MoviePilot-Frontend
```
- Install the frontend dependencies and start the dev server, then open `http://localhost:5173`
```shell
yarn
yarn dev
```
- Follow the [plugin development guide](https://wiki.movie-pilot.org/zh/plugindev) to develop plugins under the `app/plugins` directory
## Related Projects
- [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
- [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources)
- [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins)
- [MoviePilot-Server](https://github.com/jxxghp/MoviePilot-Server)
- [MoviePilot-Wiki](https://github.com/jxxghp/MoviePilot-Wiki)
## Disclaimer
- This software is for learning and exchange purposes only. It must not be used commercially or for any illegal activity; the software has no knowledge of user behavior, and all responsibility rests with the user.
- The code is open source. If someone modifies it to remove restrictions and the modified software is then distributed and causes a liability incident, the publisher of the modified code bears full responsibility. Circumventing or modifying the user authentication mechanism and publishing the result is not recommended.
- This project does not accept donations and has never published a donation page anywhere. The software itself is free and provides no paid services; please verify carefully to avoid being misled.
## Contributors


@@ -1,73 +0,0 @@
from abc import ABC, abstractmethod
from app.chain import ChainBase
from app.schemas import ActionContext, ActionParams
class ActionChain(ChainBase):
pass
class BaseAction(ABC):
"""
工作流动作基类
"""
# 完成标志
_done_flag = False
# 执行信息
_message = ""
@classmethod
@property
@abstractmethod
def name(cls) -> str:
pass
@classmethod
@property
@abstractmethod
def description(cls) -> str:
pass
@classmethod
@property
@abstractmethod
def data(cls) -> dict:
pass
@property
def done(self) -> bool:
"""
判断动作是否完成
"""
return self._done_flag
@property
@abstractmethod
def success(self) -> bool:
"""
判断动作是否成功
"""
pass
@property
def message(self) -> str:
"""
执行信息
"""
return self._message
def job_done(self, message: str = None):
"""
标记动作完成
"""
self._message = message
self._done_flag = True
@abstractmethod
def execute(self, workflow_id: int, params: ActionParams, context: ActionContext) -> ActionContext:
"""
执行动作
"""
raise NotImplementedError
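The abstract base above leaves `name`/`description`/`data`/`success`/`execute` for concrete actions to supply. A minimal, self-contained sketch of the intended subclassing pattern — the `ActionParams`/`ActionContext` stand-ins and the `CollectTitlesAction` are illustration-only assumptions, not part of the project:

```python
from abc import ABC, abstractmethod

# Stand-ins for app.schemas.ActionParams / ActionContext -- assumptions for illustration.
class ActionParams(dict):
    pass

class ActionContext(dict):
    pass

class BaseAction(ABC):
    """Condensed copy of the workflow action base class above."""
    _done_flag = False  # completion flag
    _message = ""       # execution message

    @property
    def done(self) -> bool:
        return self._done_flag

    @property
    def message(self) -> str:
        return self._message

    def job_done(self, message: str = None):
        # Mark the action as finished and record the result message
        self._message = message
        self._done_flag = True

    @abstractmethod
    def execute(self, workflow_id: int, params: ActionParams, context: ActionContext) -> ActionContext:
        raise NotImplementedError

class CollectTitlesAction(BaseAction):
    """Hypothetical action that gathers media titles from the context."""
    def execute(self, workflow_id, params, context):
        titles = [m["title"] for m in context.get("medias", [])]
        context["titles"] = titles
        self.job_done(f"collected {len(titles)} title(s)")
        return context

action = CollectTitlesAction()
ctx = action.execute(1, ActionParams(), ActionContext(medias=[{"title": "Interstellar"}]))
```

A workflow runner would then check `action.done` and `action.success` after each `execute` call.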


@@ -1,81 +0,0 @@
from app.actions import BaseAction
from app.chain.subscribe import SubscribeChain
from app.core.config import settings, global_vars
from app.core.context import MediaInfo
from app.db.subscribe_oper import SubscribeOper
from app.log import logger
from app.schemas import ActionParams, ActionContext
class AddSubscribeParams(ActionParams):
"""
添加订阅参数
"""
pass
class AddSubscribeAction(BaseAction):
"""
添加订阅
"""
_added_subscribes = []
_has_error = False
def __init__(self):
super().__init__()
self.subscribechain = SubscribeChain()
self.subscribeoper = SubscribeOper()
@classmethod
@property
def name(cls) -> str:
return "添加订阅"
@classmethod
@property
def description(cls) -> str:
return "根据媒体列表添加订阅"
@classmethod
@property
def data(cls) -> dict:
return AddSubscribeParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
将medias中的信息添加订阅(如果订阅不存在的话)
"""
for media in context.medias:
if global_vars.is_workflow_stopped(workflow_id):
break
mediainfo = MediaInfo()
mediainfo.from_dict(media.dict())
if self.subscribechain.exists(mediainfo):
logger.info(f"{media.title} 已存在订阅")
continue
# 添加订阅
sid, message = self.subscribechain.add(mtype=mediainfo.type,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
username=settings.SUPERUSER)
if sid:
self._added_subscribes.append(sid)
else:
self._has_error = True
if self._added_subscribes:
logger.info(f"已添加 {len(self._added_subscribes)} 个订阅")
for sid in self._added_subscribes:
context.subscribes.append(self.subscribeoper.get(sid))
self.job_done(f"已添加 {len(self._added_subscribes)} 个订阅")
return context

app/agent/__init__.py (new file, 550 lines)

@@ -0,0 +1,550 @@
import asyncio
from typing import Dict, List, Any, Union
import json
import tiktoken
from langchain.agents import AgentExecutor
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.callbacks import get_openai_callback
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage, ToolCall, ToolMessage, SystemMessage, trim_messages
from langchain_core.runnables import RunnablePassthrough, RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from app.agent.callback import StreamingCallbackHandler
from app.agent.memory import conversation_manager
from app.agent.prompt import prompt_manager
from app.agent.tools.factory import MoviePilotToolFactory
from app.chain import ChainBase
from app.core.config import settings
from app.helper.llm import LLMHelper
from app.helper.message import MessageHelper
from app.log import logger
from app.schemas import Notification
class AgentChain(ChainBase):
pass
class MoviePilotAgent:
"""
MoviePilot AI智能体
"""
def __init__(self, session_id: str, user_id: str = None,
channel: str = None, source: str = None, username: str = None):
self.session_id = session_id
self.user_id = user_id
self.channel = channel # 消息渠道
self.source = source # 消息来源
self.username = username # 用户名
# 消息助手
self.message_helper = MessageHelper()
# 回调处理器
self.callback_handler = StreamingCallbackHandler(
session_id=session_id
)
# LLM模型
self.llm = self._initialize_llm()
# 工具
self.tools = self._initialize_tools()
# 提示词模板
self.prompt = self._initialize_prompt()
# Agent执行器
self.agent_executor = self._create_agent_executor()
def _initialize_llm(self):
"""
初始化LLM模型
"""
return LLMHelper.get_llm(streaming=True, callbacks=[self.callback_handler])
def _initialize_tools(self) -> List:
"""
初始化工具列表
"""
return MoviePilotToolFactory.create_tools(
session_id=self.session_id,
user_id=self.user_id,
channel=self.channel,
source=self.source,
username=self.username,
callback_handler=self.callback_handler
)
@staticmethod
def _initialize_session_store() -> Dict[str, InMemoryChatMessageHistory]:
"""
初始化内存存储
"""
return {}
def get_session_history(self, session_id: str) -> InMemoryChatMessageHistory:
"""
获取会话历史
"""
chat_history = InMemoryChatMessageHistory()
messages: List[dict] = conversation_manager.get_recent_messages_for_agent(
session_id=session_id,
user_id=self.user_id
)
if messages:
for msg in messages:
if msg.get("role") == "user":
chat_history.add_message(HumanMessage(content=msg.get("content", "")))
elif msg.get("role") == "agent":
chat_history.add_message(AIMessage(content=msg.get("content", "")))
elif msg.get("role") == "tool_call":
metadata = msg.get("metadata", {})
chat_history.add_message(
AIMessage(
content=msg.get("content", ""),
tool_calls=[
ToolCall(
id=metadata.get("call_id"),
name=metadata.get("tool_name"),
args=metadata.get("parameters"),
)
]
)
)
elif msg.get("role") == "tool_result":
metadata = msg.get("metadata", {})
chat_history.add_message(ToolMessage(
content=msg.get("content", ""),
tool_call_id=metadata.get("call_id", "unknown")
))
elif msg.get("role") == "system":
chat_history.add_message(SystemMessage(content=msg.get("content", "")))
return chat_history
@staticmethod
def _initialize_prompt() -> ChatPromptTemplate:
"""
初始化提示词模板
"""
try:
prompt_template = ChatPromptTemplate.from_messages([
("system", "{system_prompt}"),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
])
logger.info("LangChain提示词模板初始化成功")
return prompt_template
except Exception as e:
logger.error(f"初始化提示词失败: {e}")
raise e
@staticmethod
def _token_counter(messages: List[Union[HumanMessage, AIMessage, ToolMessage, SystemMessage]]) -> int:
"""
通用的Token计数器
"""
try:
# 尝试从模型获取编码集,如果失败则回退到 cl100k_base (大多数现代模型使用的编码)
try:
encoding = tiktoken.encoding_for_model(settings.LLM_MODEL)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
num_tokens = 0
for message in messages:
# 基础开销 (每个消息大约 3 个 token)
num_tokens += 3
# 1. 处理文本内容 (content)
if isinstance(message.content, str):
num_tokens += len(encoding.encode(message.content))
elif isinstance(message.content, list):
for part in message.content:
if isinstance(part, dict) and part.get("type") == "text":
num_tokens += len(encoding.encode(part.get("text", "")))
# 2. 处理工具调用 (仅 AIMessage 包含 tool_calls)
if getattr(message, "tool_calls", None):
for tool_call in message.tool_calls:
# 函数名
num_tokens += len(encoding.encode(tool_call.get("name", "")))
# 参数 (转为 JSON 估算)
args_str = json.dumps(tool_call.get("args", {}), ensure_ascii=False)
num_tokens += len(encoding.encode(args_str))
# 额外的结构开销 (ID 等)
num_tokens += 3
# 3. 处理角色权重
num_tokens += 1
# 加上回复的起始 Token (大约 3 个 token)
num_tokens += 3
return num_tokens
except Exception as e:
logger.error(f"Token计数失败: {e}")
# 发生错误时返回一个保守的估算值
return len(str(messages)) // 4
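The accounting scheme above — 3 tokens of structural overhead per message, text content plus tool-call names/args, 1 token of role weight, and 3 reply-priming tokens — can be sketched with a stand-in encoder in place of tiktoken. The 4-characters-per-token ratio here is an illustration-only assumption:

```python
def count_tokens(messages: list) -> int:
    """Approximate the per-message accounting above with a stand-in encoder."""
    def encode(text: str) -> int:
        # Illustration-only ratio: ~4 characters per token
        return max(1, len(text) // 4) if text else 0

    num_tokens = 0
    for message in messages:
        num_tokens += 3  # per-message structural overhead
        num_tokens += encode(message.get("content", ""))
        for tool_call in message.get("tool_calls", []):
            num_tokens += encode(tool_call.get("name", ""))
            num_tokens += encode(str(tool_call.get("args", {})))
            num_tokens += 3  # tool-call structural overhead (id etc.)
        num_tokens += 1  # role weight
    return num_tokens + 3  # reply priming tokens
```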
def _create_agent_executor(self) -> RunnableWithMessageHistory:
"""
创建Agent执行器
"""
try:
# 消息裁剪器,防止上下文超出限制
base_trimmer = trim_messages(
max_tokens=int(settings.LLM_MAX_CONTEXT_TOKENS * 1000 * 0.8),
strategy="last",
token_counter=self._token_counter,
include_system=True,
allow_partial=False,
start_on="human",
)
# 包装trimmer,在裁剪后验证工具调用的完整性
def validated_trimmer(messages):
# 如果输入是 PromptValue,转换为消息列表
if hasattr(messages, "to_messages"):
messages = messages.to_messages()
trimmed = base_trimmer.invoke(messages)
# 二次校验:确保不出现 broken tool chains
# 1. AIMessage with tool_calls 必须紧跟着对应的 ToolMessage
# 2. ToolMessage 必须有对应的 AIMessage 前置
safe_messages = []
i = 0
while i < len(trimmed):
msg = trimmed[i]
if isinstance(msg, AIMessage) and getattr(msg, "tool_calls", None):
# 检查工具调用序列是否完整
tool_calls = msg.tool_calls
is_valid_sequence = True
tool_results = []
# 向后查找对应的 ToolMessage
temp_i = i + 1
for tool_call in tool_calls:
if temp_i >= len(trimmed):
is_valid_sequence = False
break
next_msg = trimmed[temp_i]
if isinstance(next_msg, ToolMessage) and next_msg.tool_call_id == tool_call.get("id"):
tool_results.append(next_msg)
temp_i += 1
else:
is_valid_sequence = False
break
if is_valid_sequence:
# 序列完整,保留消息
safe_messages.append(msg)
safe_messages.extend(tool_results)
i = temp_i # 跳过已处理的工具结果
else:
# 序列不完整,丢弃该 AIMessage(后续的孤立 ToolMessage 会在下一次循环被当做 orphaned 处理掉)
logger.warning(f"移除无效的工具调用链: {len(tool_calls)} calls, incomplete results")
i += 1
continue
if isinstance(msg, ToolMessage):
# 如果在这里遇到 ToolMessage说明它没有被上面的逻辑消费则是孤立的或者顺序错乱
logger.warning("移除孤立的 ToolMessage")
i += 1
continue
# 其他类型的消息直接保留
safe_messages.append(msg)
i += 1
if len(safe_messages) < len(messages):
logger.info(f"LangChain消息上下文已裁剪: {len(messages)} -> {len(safe_messages)}")
return safe_messages
# 创建Agent执行链
agent = (
RunnablePassthrough.assign(
agent_scratchpad=lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
)
)
| self.prompt
| RunnableLambda(validated_trimmer)
| self.llm.bind_tools(self.tools)
| OpenAIToolsAgentOutputParser()
)
executor = AgentExecutor(
agent=agent,
tools=self.tools,
verbose=settings.LLM_VERBOSE,
max_iterations=settings.LLM_MAX_ITERATIONS,
return_intermediate_steps=True,
handle_parsing_errors=True,
early_stopping_method="force"
)
return RunnableWithMessageHistory(
executor,
self.get_session_history,
input_messages_key="input",
history_messages_key="chat_history"
)
except Exception as e:
logger.error(f"创建Agent执行器失败: {e}")
raise e
async def _summarize_history(self):
"""
总结提炼之前的对话和工具执行情况,并把会话总结变成新的系统提示词取代之前的对话
"""
try:
# 获取当前历史记录
chat_history = self.get_session_history(self.session_id)
messages = chat_history.messages
if not messages:
return
logger.info(f"会话 {self.session_id} 历史消息长度已超过 90%,开始总结并重置上下文...")
# 将消息转换为摘要所需的文本格式
history_text = ""
for msg in messages:
if isinstance(msg, HumanMessage):
history_text += f"用户: {msg.content}\n"
elif isinstance(msg, AIMessage):
history_text += f"智能体: {msg.content}\n"
if getattr(msg, "tool_calls", None):
for tool_call in msg.tool_calls:
history_text += f"智能体调用工具: {tool_call.get('name')},参数: {tool_call.get('args')}\n"
elif isinstance(msg, ToolMessage):
history_text += f"工具响应: {msg.content}\n"
elif isinstance(msg, SystemMessage):
history_text += f"系统: {msg.content}\n"
# 摘要提示词
summary_prompt = (
"Please provide a comprehensive and highly informational summary of the preceding conversation and tool executions. "
"Your goal is to condense the history while retaining all critical details for future reference. "
"Ensure you include:\n"
"1. User's core intents, specific requests, and any mentioned preferences.\n"
"2. Names of movies, TV shows, or other key entities discussed.\n"
"3. A concise log of tool calls made and their specific results/outcomes.\n"
"4. The current status of any tasks and any pending actions.\n"
"5. Any important context that would be necessary for the agent to continue the conversation seamlessly.\n"
"The summary should be dense with information and serve as the primary context for the next stage of the interaction."
)
# 调用 LLM 进行总结 (非流式)
summary_llm = LLMHelper.get_llm(streaming=False)
response = await summary_llm.ainvoke([
SystemMessage(content=summary_prompt),
HumanMessage(content=f"Here is the conversation history to summarize:\n{history_text}")
])
summary_content = str(response.content)
if not summary_content:
logger.warning("总结生成失败,跳过重置逻辑。")
return
# 清空原有的会话记录并插入新的系统总结
await conversation_manager.clear_memory(self.session_id, self.user_id)
await conversation_manager.add_conversation(
session_id=self.session_id,
user_id=self.user_id,
role="system",
content=f"<history_summary>\n{summary_content}\n</history_summary>"
)
logger.info(f"会话 {self.session_id} 历史摘要替换完成。")
except Exception as e:
logger.error(f"执行会话总结出错: {str(e)}")
async def process_message(self, message: str) -> str:
"""
处理用户消息
"""
try:
# 检查上下文长度是否超过 90%
history = self.get_session_history(self.session_id)
if self._token_counter(history.messages) > settings.LLM_MAX_CONTEXT_TOKENS * 1000 * 0.9:
await self._summarize_history()
# 添加用户消息到记忆
await conversation_manager.add_conversation(
self.session_id,
user_id=self.user_id,
role="user",
content=message
)
# 构建输入上下文
input_context = {
"system_prompt": prompt_manager.get_agent_prompt(channel=self.channel),
"input": message
}
# 执行Agent
logger.info(f"Agent执行推理: session_id={self.session_id}, input={message}")
result = await self._execute_agent(input_context)
# 获取Agent回复
agent_message = await self.callback_handler.get_message()
# 发送Agent回复给用户通过原渠道
if agent_message:
# 发送回复
await self.send_agent_message(agent_message)
# 添加Agent回复到记忆
await conversation_manager.add_conversation(
session_id=self.session_id,
user_id=self.user_id,
role="agent",
content=agent_message
)
else:
agent_message = result.get("output") or "很抱歉,智能体出错了,未能生成回复内容。"
await self.send_agent_message(agent_message)
return agent_message
except Exception as e:
error_message = f"处理消息时发生错误: {str(e)}"
logger.error(error_message)
# 发送错误消息给用户(通过原渠道)
await self.send_agent_message(error_message)
return error_message
async def _execute_agent(self, input_context: Dict[str, Any]) -> Dict[str, Any]:
"""
执行LangChain Agent
"""
try:
with get_openai_callback() as cb:
result = await self.agent_executor.ainvoke(
input_context,
config={"configurable": {"session_id": self.session_id}},
callbacks=[self.callback_handler]
)
logger.info(f"LLM调用消耗: \n{cb}")
if cb.total_tokens > 0:
result["token_usage"] = {
"prompt_tokens": cb.prompt_tokens,
"completion_tokens": cb.completion_tokens,
"total_tokens": cb.total_tokens
}
return result
except asyncio.CancelledError:
logger.info(f"Agent执行被取消: session_id={self.session_id}")
return {
"output": "任务已取消",
"intermediate_steps": [],
"token_usage": {}
}
except Exception as e:
logger.error(f"Agent执行失败: {e}")
return {
"output": str(e),
"intermediate_steps": [],
"token_usage": {}
}
async def send_agent_message(self, message: str, title: str = "MoviePilot助手"):
"""
通过原渠道发送消息给用户
"""
await AgentChain().async_post_message(
Notification(
channel=self.channel,
source=self.source,
userid=self.user_id,
username=self.username,
title=title,
text=message
)
)
async def cleanup(self):
"""
清理智能体资源
"""
logger.info(f"MoviePilot智能体已清理: session_id={self.session_id}")
class AgentManager:
"""
AI智能体管理器
"""
def __init__(self):
self.active_agents: Dict[str, MoviePilotAgent] = {}
@staticmethod
async def initialize():
"""
初始化管理器
"""
await conversation_manager.initialize()
async def close(self):
"""
关闭管理器
"""
await conversation_manager.close()
# 清理所有活跃的智能体
for agent in self.active_agents.values():
await agent.cleanup()
self.active_agents.clear()
async def process_message(self, session_id: str, user_id: str, message: str,
channel: str = None, source: str = None, username: str = None) -> str:
"""
处理用户消息
"""
# 获取或创建Agent实例
if session_id not in self.active_agents:
logger.info(f"创建新的AI智能体实例,session_id: {session_id}, user_id: {user_id}")
agent = MoviePilotAgent(
session_id=session_id,
user_id=user_id,
channel=channel,
source=source,
username=username
)
self.active_agents[session_id] = agent
else:
agent = self.active_agents[session_id]
agent.user_id = user_id # 确保user_id是最新的
# 更新渠道信息
if channel:
agent.channel = channel
if source:
agent.source = source
if username:
agent.username = username
# 处理消息
return await agent.process_message(message)
async def clear_session(self, session_id: str, user_id: str):
"""
清空会话
"""
if session_id in self.active_agents:
agent = self.active_agents[session_id]
await agent.cleanup()
del self.active_agents[session_id]
await conversation_manager.clear_memory(session_id, user_id)
logger.info(f"会话 {session_id} 的记忆已清空")
# 全局智能体管理器实例
agent_manager = AgentManager()
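The broken-tool-chain guard inside `validated_trimmer` is worth isolating: an `AIMessage` carrying `tool_calls` survives trimming only if every call is immediately followed by its matching `ToolMessage`, and stray `ToolMessage`s are dropped. A self-contained sketch using minimal stand-in message types (not the real LangChain classes):

```python
from dataclasses import dataclass, field

# Minimal stand-ins for the LangChain message types -- assumptions for illustration.
@dataclass
class HumanMessage:
    content: str = ""

@dataclass
class AIMessage:
    content: str = ""
    tool_calls: list = field(default_factory=list)

@dataclass
class ToolMessage:
    content: str = ""
    tool_call_id: str = ""

def drop_broken_tool_chains(messages: list) -> list:
    """Keep an AIMessage with tool_calls only when each call is followed by
    its matching ToolMessage; discard orphaned ToolMessages."""
    safe, i = [], 0
    while i < len(messages):
        msg = messages[i]
        if isinstance(msg, AIMessage) and msg.tool_calls:
            results, j, valid = [], i + 1, True
            for call in msg.tool_calls:
                if j >= len(messages) or not (
                    isinstance(messages[j], ToolMessage)
                    and messages[j].tool_call_id == call["id"]
                ):
                    valid = False
                    break
                results.append(messages[j])
                j += 1
            if valid:
                safe.append(msg)
                safe.extend(results)
                i = j  # skip the consumed tool results
            else:
                i += 1  # drop the AIMessage; orphans are handled next round
            continue
        if isinstance(msg, ToolMessage):
            i += 1  # orphaned ToolMessage, drop it
            continue
        safe.append(msg)
        i += 1
    return safe

msgs = [
    HumanMessage("hi"),
    AIMessage("", [{"id": "1"}]),
    ToolMessage("ok", "1"),
    AIMessage("", [{"id": "2"}]),  # incomplete chain: no matching ToolMessage
]
out = drop_broken_tool_chains(msgs)
```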


@@ -0,0 +1,39 @@
import threading
from langchain_core.callbacks import AsyncCallbackHandler
from app.log import logger
class StreamingCallbackHandler(AsyncCallbackHandler):
"""
流式输出回调处理器
"""
def __init__(self, session_id: str):
self._lock = threading.Lock()
self.session_id = session_id
self.current_message = ""
async def get_message(self):
"""
获取当前消息内容,获取后清空
"""
with self._lock:
if not self.current_message:
return ""
msg = self.current_message
logger.info(f"Agent消息: {msg}")
self.current_message = ""
return msg
async def on_llm_new_token(self, token: str, **kwargs):
"""
处理新的token
"""
if not token:
return
with self._lock:
# 缓存当前消息
self.current_message += token
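The callback's accumulate-then-drain cycle can be shown synchronously; this sketch mirrors `on_llm_new_token` and `get_message` without the LangChain plumbing:

```python
import threading

class TokenBuffer:
    """Synchronous sketch of the streaming callback's accumulate-then-drain cycle."""
    def __init__(self):
        self._lock = threading.Lock()
        self._buf = ""

    def feed(self, token: str):
        # Mirrors on_llm_new_token: ignore empty tokens, append under the lock
        if not token:
            return
        with self._lock:
            self._buf += token

    def drain(self) -> str:
        # Mirrors get_message: return the accumulated text and clear it
        with self._lock:
            msg, self._buf = self._buf, ""
            return msg

buf = TokenBuffer()
for t in ["你好", "", "世界"]:
    buf.feed(t)
```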


@@ -0,0 +1,347 @@
"""对话记忆管理器"""
import asyncio
import json
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any
from app.core.config import settings
from app.helper.redis import AsyncRedisHelper
from app.log import logger
from app.schemas.agent import ConversationMemory
class ConversationMemoryManager:
"""
对话记忆管理器
"""
def __init__(self):
# 内存中的会话记忆缓存
self.memory_cache: Dict[str, ConversationMemory] = {}
# 使用现有的Redis助手
self.redis_helper = AsyncRedisHelper()
# 内存缓存清理任务(Redis通过TTL自动过期)
self.cleanup_task: Optional[asyncio.Task] = None
async def initialize(self):
"""
初始化记忆管理器
"""
try:
# 启动内存缓存清理任务(Redis通过TTL自动过期)
self.cleanup_task = asyncio.create_task(self._cleanup_expired_memories())
logger.info("对话记忆管理器初始化完成")
except Exception as e:
logger.warning(f"Redis连接失败,将使用内存存储: {e}")
async def close(self):
"""
关闭记忆管理器
"""
if self.cleanup_task:
self.cleanup_task.cancel()
try:
await self.cleanup_task
except asyncio.CancelledError:
pass
await self.redis_helper.close()
logger.info("对话记忆管理器已关闭")
@staticmethod
def _get_memory_key(session_id: str, user_id: str):
"""
计算内存Key
"""
return f"{user_id}:{session_id}" if user_id else session_id
@staticmethod
def _get_redis_key(session_id: str, user_id: str):
"""
计算Redis Key
"""
return f"agent_memory:{user_id}:{session_id}" if user_id else f"agent_memory:{session_id}"
def _get_memory(self, session_id: str, user_id: str):
"""
获取内存中的记忆
"""
cache_key = self._get_memory_key(session_id, user_id)
return self.memory_cache.get(cache_key)
async def _get_redis(self, session_id: str, user_id: str) -> Optional[ConversationMemory]:
"""
从Redis获取记忆
"""
if settings.CACHE_BACKEND_TYPE == "redis":
try:
redis_key = self._get_redis_key(session_id, user_id)
memory_data = await self.redis_helper.get(redis_key, region="AI_AGENT")
if memory_data:
memory_dict = json.loads(memory_data) if isinstance(memory_data, str) else memory_data
memory = ConversationMemory(**memory_dict)
return memory
except Exception as e:
logger.warning(f"从Redis加载记忆失败: {e}")
return None
async def get_conversation(self, session_id: str, user_id: str) -> ConversationMemory:
"""
获取会话记忆
"""
# 首先检查缓存
conversation = self._get_memory(session_id, user_id)
if conversation:
return conversation
# 尝试从Redis加载
memory = await self._get_redis(session_id, user_id)
if memory:
# 加载到内存缓存
self._save_memory(memory)
return memory
# 创建新的记忆
memory = ConversationMemory(session_id=session_id, user_id=user_id)
await self._save_conversation(memory)
return memory
async def set_title(self, session_id: str, user_id: str, title: str):
"""
设置会话标题
"""
memory = await self.get_conversation(session_id=session_id, user_id=user_id)
memory.title = title
memory.updated_at = datetime.now()
await self._save_conversation(memory)
async def get_title(self, session_id: str, user_id: str) -> Optional[str]:
"""
获取会话标题
"""
memory = await self.get_conversation(session_id=session_id, user_id=user_id)
return memory.title
async def list_sessions(self, user_id: str, limit: int = 100) -> List[Dict[str, Any]]:
"""
列出历史会话摘要(按更新时间倒序)
- 当启用Redis时,遍历 `agent_memory:*` 键并读取摘要
- 当未启用Redis时,基于内存缓存返回
"""
sessions: List[ConversationMemory] = []
# 从Redis遍历
if settings.CACHE_BACKEND_TYPE == "redis":
try:
# 使用Redis助手的items方法遍历所有键
async for key, value in self.redis_helper.items(region="AI_AGENT"):
if key.startswith("agent_memory:"):
try:
# 解析键名获取user_id和session_id
key_parts = key.split(":")
if len(key_parts) >= 3:
key_user_id = key_parts[2] if len(key_parts) > 3 else None
if not user_id or key_user_id == user_id:
data = value if isinstance(value, dict) else json.loads(value)
memory = ConversationMemory(**data)
sessions.append(memory)
except Exception as err:
logger.warning(f"解析Redis记忆数据失败: {err}")
continue
except Exception as e:
logger.warning(f"遍历Redis会话失败: {e}")
# 合并内存缓存(确保包含近期的会话)
for cache_key, memory in self.memory_cache.items():
# 如果指定了user_id只返回该用户的会话
if not user_id or memory.user_id == user_id:
sessions.append(memory)
# 去重(以 session_id 为键,取最近 updated)
uniq: Dict[str, ConversationMemory] = {}
for mem in sessions:
existed = uniq.get(mem.session_id)
if (not existed) or (mem.updated_at > existed.updated_at):
uniq[mem.session_id] = mem
# 排序并裁剪
sorted_list = sorted(uniq.values(), key=lambda m: m.updated_at, reverse=True)[:limit]
return [
{
"session_id": m.session_id,
"title": m.title or "新会话",
"message_count": len(m.messages),
"created_at": m.created_at.isoformat(),
"updated_at": m.updated_at.isoformat(),
}
for m in sorted_list
]
async def add_conversation(
self,
session_id: str,
user_id: str,
role: str,
content: str,
metadata: Optional[Dict[str, Any]] = None
):
"""
添加消息到记忆
"""
memory = await self.get_conversation(session_id=session_id, user_id=user_id)
message = {
"role": role,
"content": content,
"timestamp": datetime.now().isoformat(),
"metadata": metadata or {}
}
memory.messages.append(message)
memory.updated_at = datetime.now()
# 限制消息数量,避免记忆过大
max_messages = settings.LLM_MAX_MEMORY_MESSAGES
if len(memory.messages) > max_messages:
# 保留最近的消息,但保留第一条系统消息
system_messages = [msg for msg in memory.messages if msg["role"] == "system"]
recent_messages = memory.messages[-(max_messages - len(system_messages)):]
memory.messages = system_messages + recent_messages
await self._save_conversation(memory)
logger.debug(f"消息已添加到记忆: session_id={session_id}, user_id={user_id}, role={role}")
def get_recent_messages_for_agent(
self,
session_id: str,
user_id: str
) -> List[Dict[str, Any]]:
"""
为Agent获取最近的消息(仅内存缓存)
如果消息Token数量超过模型最大上下文长度的阈值,会自动进行摘要裁剪
"""
cache_key = self._get_memory_key(session_id, user_id)
memory = self.memory_cache.get(cache_key)
if not memory:
return []
# 返回除最后一条(当前用户输入)外的历史消息
return memory.messages[:-1]
async def get_recent_messages(
self,
session_id: str,
user_id: str,
limit: int = 10,
role_filter: Optional[list] = None
) -> List[Dict[str, Any]]:
"""
获取最近的消息
"""
memory = await self.get_conversation(session_id=session_id, user_id=user_id)
messages = memory.messages
if role_filter:
messages = [msg for msg in messages if msg["role"] in role_filter]
return messages[-limit:] if messages else []
async def get_context(self, session_id: str, user_id: str) -> Dict[str, Any]:
"""
获取会话上下文
"""
memory = await self.get_conversation(session_id=session_id, user_id=user_id)
return memory.context
async def clear_memory(self, session_id: str, user_id: str):
"""
清空会话记忆
"""
cache_key = self._get_memory_key(session_id, user_id)
if cache_key in self.memory_cache:
del self.memory_cache[cache_key]
if settings.CACHE_BACKEND_TYPE == "redis":
redis_key = self._get_redis_key(session_id, user_id)
await self.redis_helper.delete(redis_key, region="AI_AGENT")
logger.info(f"会话记忆已清空: session_id={session_id}, user_id={user_id}")
def _save_memory(self, memory: ConversationMemory):
"""
保存记忆到内存
"""
cache_key = self._get_memory_key(memory.session_id, memory.user_id)
self.memory_cache[cache_key] = memory
async def _save_redis(self, memory: ConversationMemory):
"""
保存记忆到Redis
"""
if settings.CACHE_BACKEND_TYPE == "redis":
try:
memory_dict = memory.model_dump()
redis_key = self._get_redis_key(memory.session_id, memory.user_id)
ttl = int(timedelta(days=settings.LLM_REDIS_MEMORY_RETENTION_DAYS).total_seconds())
await self.redis_helper.set(
redis_key,
memory_dict,
ttl=ttl,
region="AI_AGENT"
)
except Exception as e:
logger.warning(f"保存记忆到Redis失败: {e}")
async def _save_conversation(self, memory: ConversationMemory):
"""
保存记忆到存储
Redis中的记忆会自动通过TTL机制过期,无需手动清理
"""
# 更新内存缓存
self._save_memory(memory)
# 保存到Redis(设置TTL自动过期)
await self._save_redis(memory)
async def _cleanup_expired_memories(self):
"""
清理内存中过期记忆的后台任务
注意:Redis中的记忆通过TTL机制自动过期,这里只清理内存缓存
"""
while True:
try:
# 每小时清理一次
await asyncio.sleep(3600)
current_time = datetime.now()
expired_sessions = []
# 只检查内存缓存中的过期记忆
# Redis中的记忆会通过TTL自动过期,无需手动处理
for cache_key, memory in self.memory_cache.items():
if (current_time - memory.updated_at).days > settings.LLM_MEMORY_RETENTION_DAYS:
expired_sessions.append(cache_key)
# 只清理内存缓存不删除Redis中的键Redis会自动过期
for cache_key in expired_sessions:
if cache_key in self.memory_cache:
del self.memory_cache[cache_key]
if expired_sessions:
logger.info(f"清理了{len(expired_sessions)}个过期内存会话记忆")
except asyncio.CancelledError:
break
except Exception as e:
logger.error(f"清理记忆时发生错误: {e}")
conversation_manager = ConversationMemoryManager()
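The retention rule in `add_conversation` — once `LLM_MAX_MEMORY_MESSAGES` is exceeded, keep all system messages plus the most recent other messages — can be sketched on plain dicts:

```python
from typing import Dict, List

def trim_history(messages: List[Dict], max_messages: int) -> List[Dict]:
    """Mirror of the retention rule in add_conversation: when over the cap,
    keep every system message plus the most recent remaining messages."""
    if len(messages) <= max_messages:
        return messages
    system_messages = [m for m in messages if m["role"] == "system"]
    recent_messages = messages[-(max_messages - len(system_messages)):]
    return system_messages + recent_messages

history = [{"role": "system", "content": "summary"}]
history += [{"role": "user", "content": str(i)} for i in range(10)]
trimmed = trim_history(history, max_messages=5)
```

Note that, as in the source, a system message falling inside the recent window would appear twice; the sketch preserves that behavior rather than silently fixing it.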


@@ -0,0 +1,72 @@
You are an AI media assistant powered by MoviePilot, specialized in managing home media ecosystems. Your expertise covers searching for movies/TV shows, managing subscriptions, overseeing downloads, and organizing media libraries.
All your responses must be in **Chinese (中文)**.
You act as a proactive agent. Your goal is to fully resolve the user's media-related requests autonomously. Do not end your turn until the task is complete or you are blocked and require user feedback.
Core Capabilities:
1. Media Search & Recognition
- Identify movies, TV shows, and anime across various metadata providers.
- Recognize media info from fuzzy filenames or incomplete titles.
2. Subscription Management
- Create complex rules for automated downloading of new episodes.
- Monitor trending movies/shows for automated suggestions.
3. Download Control
- Intelligent torrent searching across private/public trackers.
- Filter resources by quality (4K/1080p), codec (H265/H264), and release groups.
4. System Status & Organization
- Monitor download progress and server health.
- Manage file transfers, renaming, and library cleanup.
<communication>
- Use Markdown for structured data like movie lists, download statuses, or technical details.
- Avoid wrapping the entire response in a single code block. Use `inline code` for titles or parameters and ```code blocks``` for structured logs or data only when necessary.
- ALWAYS use backticks for media titles (e.g., `Interstellar`), file paths, or specific parameters.
- Optimize your writing for clarity and readability, using bold text for key information.
- Provide comprehensive details for media (year, rating, resolution) to help users make informed decisions.
- Do not stop for approval for read-only operations. Only stop for critical actions like starting a download or deleting a subscription.
Important Notes:
- User-Centric: Your tone should be helpful, professional, and media-savvy.
- No Coding Hallucinations: You are NOT a coding assistant. Do not offer code snippets, IDE tips, or programming help. Focus entirely on the MoviePilot media ecosystem.
- Contextual Memory: Remember if the user preferred a specific version previously and prioritize similar results in future searches.
</communication>
<status_update_spec>
Definition: Provide a brief progress narrative (1-3 sentences) explaining what you have searched, what you found, and what you are about to execute.
- **Immediate Execution**: If you state an intention to perform an action (e.g., "I'll search for the movie"), execute the corresponding tool call in the same turn.
- Use natural tenses: "I've found...", "I'm checking...", "I will now add...".
- Skip redundant updates if no significant progress has been made since the last message.
</status_update_spec>
<summary_spec>
At the end of your session/turn, provide a concise summary of your actions.
- Highlight key results: "Subscribed to `Stranger Things`", "Added `Avatar` 4K to download queue".
- Use bullet points for multiple actions.
- Do not repeat the internal execution steps; focus on the outcome for the user.
</summary_spec>
<flow>
1. Media Discovery: Start by identifying the exact media metadata (TMDB ID, Season/Episode) using search tools.
2. Context Checking: Verify current status (Is it already in the library? Is it already subscribed?).
3. Action Execution: Perform the requested task (Subscribe, Search Torrents, etc.) with a brief status update.
4. Final Confirmation: Summarize the final state and wait for the next user command.
</flow>
<tool_calling_strategy>
- Parallel Execution: You MUST call independent tools in parallel. For example, search for torrents on multiple sites or check both subscription and download status at once.
- Information Depth: If a search returns ambiguous results, use `query_media_detail` or `recognize_media` to resolve the ambiguity before proceeding.
- Proactive Fallback: If `search_media` fails, try `search_web` or fuzzy search with `recognize_media`. Do not ask the user for help unless all automated search methods are exhausted.
</tool_calling_strategy>
<media_management_rules>
1. Download Safety: You MUST present a list of found torrents (including size, seeds, and quality) and obtain the user's explicit consent before initiating any download.
2. Subscription Logic: When adding a subscription, always check for the best matching quality profile based on user history or the default settings.
3. Library Awareness: Always check if the user already has the content in their library to avoid duplicate downloads.
4. Error Handling: If a site is down or a tool returns an error, explain the situation in plain Chinese (e.g., "站点响应超时") and suggest an alternative (e.g., "尝试从其他站点进行搜索").
</media_management_rules>
<markdown_spec>
Specific markdown rules:
{markdown_spec}
</markdown_spec>


@@ -0,0 +1,88 @@
"""提示词管理器"""
from pathlib import Path
from typing import Dict
from app.log import logger
from app.schemas import ChannelCapability, ChannelCapabilities, MessageChannel, ChannelCapabilityManager
class PromptManager:
"""
提示词管理器
"""
def __init__(self, prompts_dir: str = None):
if prompts_dir is None:
self.prompts_dir = Path(__file__).parent
else:
self.prompts_dir = Path(prompts_dir)
self.prompts_cache: Dict[str, str] = {}
def load_prompt(self, prompt_name: str) -> str:
"""
加载指定的提示词
"""
if prompt_name in self.prompts_cache:
return self.prompts_cache[prompt_name]
prompt_file = self.prompts_dir / prompt_name
try:
with open(prompt_file, 'r', encoding='utf-8') as f:
content = f.read().strip()
# 缓存提示词
self.prompts_cache[prompt_name] = content
logger.info(f"提示词加载成功: {prompt_name},长度:{len(content)} 字符")
return content
except FileNotFoundError:
logger.error(f"提示词文件不存在: {prompt_file}")
raise
except Exception as e:
logger.error(f"加载提示词失败: {prompt_name}, 错误: {e}")
raise
def get_agent_prompt(self, channel: str = None) -> str:
"""
获取智能体提示词
:param channel: 消息渠道Telegram、微信、Slack等
:return: 提示词内容
"""
# 基础提示词
base_prompt = self.load_prompt("Agent Prompt.txt")
# 识别渠道
msg_channel = next((c for c in MessageChannel if c.value.lower() == channel.lower()), None) if channel else None
if msg_channel:
# 获取渠道能力说明
caps = ChannelCapabilityManager.get_capabilities(msg_channel)
if caps:
base_prompt = base_prompt.replace(
"{markdown_spec}",
self._generate_formatting_instructions(caps)
)
return base_prompt
@staticmethod
def _generate_formatting_instructions(caps: ChannelCapabilities) -> str:
"""
根据渠道能力动态生成格式指令
"""
instructions = []
if ChannelCapability.RICH_TEXT not in caps.capabilities:
instructions.append("- Formatting: Use **Plain Text ONLY**. The channel does NOT support Markdown.")
instructions.append(
"- No Markdown Symbols: NEVER use `**`, `*`, `__`, or `[` blocks. Use natural text to emphasize (e.g., using ALL CAPS or separators).")
instructions.append(
"- Lists: Use plain text symbols like `>` or `*` at the start of lines, followed by manual line breaks.")
instructions.append("- Links: Paste URLs directly as text.")
return "\n".join(instructions)
def clear_cache(self):
"""
清空缓存
"""
self.prompts_cache.clear()
logger.info("提示词缓存已清空")
prompt_manager = PromptManager()
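The `{markdown_spec}` substitution above can be exercised in isolation. The sketch below mirrors `get_agent_prompt`'s placeholder replacement with stand-in prompt text and instructions; the strings here are illustrative, not the real prompt file contents:

```python
# Minimal sketch of PromptManager's placeholder substitution.
# The prompt text and instruction list are hypothetical stand-ins.
base_prompt = "<markdown_spec>\nSpecific markdown rules:\n{markdown_spec}\n</markdown_spec>"

# What _generate_formatting_instructions would produce for a channel
# without the RICH_TEXT capability.
plain_text_instructions = "\n".join([
    "- Formatting: Use **Plain Text ONLY**. The channel does NOT support Markdown.",
    "- Links: Paste URLs directly as text.",
])

# Replace the placeholder, as get_agent_prompt does for a recognized channel.
rendered = base_prompt.replace("{markdown_spec}", plain_text_instructions)
print(rendered)
```

If the channel is unknown (or supports rich text), the placeholder is left in the prompt untouched.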


app/agent/tools/base.py Normal file

@@ -0,0 +1,160 @@
import json
import uuid
from abc import ABCMeta, abstractmethod
from typing import Any, Optional

from langchain.tools import BaseTool
from pydantic import PrivateAttr

from app.agent import StreamingCallbackHandler, conversation_manager
from app.chain import ChainBase
from app.log import logger
from app.schemas import Notification


class ToolChain(ChainBase):
    pass


class MoviePilotTool(BaseTool, metaclass=ABCMeta):
    """
    Base class for MoviePilot tools
    """

    _session_id: str = PrivateAttr()
    _user_id: str = PrivateAttr()
    _channel: str = PrivateAttr(default=None)
    _source: str = PrivateAttr(default=None)
    _username: str = PrivateAttr(default=None)
    _callback_handler: StreamingCallbackHandler = PrivateAttr(default=None)

    def __init__(self, session_id: str, user_id: str, **kwargs):
        super().__init__(**kwargs)
        self._session_id = session_id
        self._user_id = user_id

    def _run(self, *args: Any, **kwargs: Any) -> Any:
        pass

    async def _arun(self, **kwargs) -> str:
        """
        Run the tool asynchronously
        """
        # Grab the agent message produced before this tool call
        agent_message = await self._callback_handler.get_message()
        # Generate a unique tool-call ID
        call_id = f"call_{str(uuid.uuid4())[:16]}"
        # Record the tool call in conversation memory
        await conversation_manager.add_conversation(
            session_id=self._session_id,
            user_id=self._user_id,
            role="tool_call",
            content=agent_message,
            metadata={
                "call_id": call_id,
                "tool_name": self.name,
                "parameters": kwargs
            }
        )
        # Build the progress message: prefer the tool's custom message,
        # fall back to the `explanation` argument
        tool_message = self.get_tool_message(**kwargs)
        if not tool_message:
            explanation = kwargs.get("explanation")
            if explanation:
                tool_message = explanation
        # Merge the agent message and the tool progress message into one notification
        messages = []
        if agent_message:
            messages.append(agent_message)
        if tool_message:
            messages.append(f"⚙️ => {tool_message}")
        # Send the merged message
        if messages:
            merged_message = "\n\n".join(messages)
            await self.send_tool_message(merged_message, title="MoviePilot助手")
        logger.debug(f'Executing tool {self.name} with args: {kwargs}')
        # Run the tool, catching exceptions so a result is always stored in memory
        try:
            result = await self.run(**kwargs)
            logger.debug(f'Tool {self.name} executed with result: {result}')
        except Exception as e:
            # Log the exception details
            error_message = f"工具执行异常 ({type(e).__name__}): {str(e)}"
            logger.error(f'Tool {self.name} execution failed: {e}', exc_info=True)
            result = error_message
        # Record the tool result in conversation memory
        if isinstance(result, str):
            formatted_result = result
        elif isinstance(result, (int, float)):
            formatted_result = str(result)
        else:
            formatted_result = json.dumps(result, ensure_ascii=False, indent=2)
        await conversation_manager.add_conversation(
            session_id=self._session_id,
            user_id=self._user_id,
            role="tool_result",
            content=formatted_result,
            metadata={
                "call_id": call_id,
                "tool_name": self.name,
            }
        )
        return result

    def get_tool_message(self, **kwargs) -> Optional[str]:
        """
        Build a friendly progress message for the tool call.

        Subclasses may override this to generate a personalized message from the
        actual arguments. If it returns None or an empty string, the `explanation`
        argument is used instead.

        Args:
            **kwargs: all tool arguments (including `explanation`)

        Returns:
            str: the friendly message, or None/empty to fall back to `explanation`
        """
        return None

    @abstractmethod
    async def run(self, **kwargs) -> str:
        raise NotImplementedError

    def set_message_attr(self, channel: str, source: str, username: str):
        """
        Set the message attributes
        """
        self._channel = channel
        self._source = source
        self._username = username

    def set_callback_handler(self, callback_handler: StreamingCallbackHandler):
        """
        Set the streaming callback handler
        """
        self._callback_handler = callback_handler

    async def send_tool_message(self, message: str, title: str = ""):
        """
        Send a tool notification message
        """
        await ToolChain().async_post_message(
            Notification(
                channel=self._channel,
                source=self._source,
                userid=self._user_id,
                username=self._username,
                title=title,
                text=message
            )
        )
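A subclass normally implements only `run` and optionally overrides `get_tool_message`. The message-fallback logic in `_arun` (custom message first, then the `explanation` argument) can be shown standalone with toy stand-ins, since the real base class needs the MoviePilot runtime; all names below are hypothetical:

```python
from typing import Optional


class ToyToolBase:
    """Stand-in for MoviePilotTool; only the message-fallback logic is modeled."""
    name = "toy_tool"

    def get_tool_message(self, **kwargs) -> Optional[str]:
        # Default: no custom message, so the caller falls back to `explanation`.
        return None

    def resolve_message(self, **kwargs) -> Optional[str]:
        # Mirrors _arun: prefer the tool's custom message, else `explanation`.
        return self.get_tool_message(**kwargs) or kwargs.get("explanation")


class GreetTool(ToyToolBase):
    name = "greet"

    def get_tool_message(self, **kwargs) -> Optional[str]:
        user = kwargs.get("user")
        return f"正在向 {user} 问好" if user else None


print(GreetTool().resolve_message(user="Alice"))           # custom message wins
print(ToyToolBase().resolve_message(explanation="probe"))  # falls back to explanation
```

Because `get_tool_message` receives the full keyword set, a subclass can build a message from any argument the agent supplied, exactly as the concrete tools below do.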

app/agent/tools/factory.py Normal file

@@ -0,0 +1,144 @@
from typing import List, Callable

from app.agent.tools.impl.add_download import AddDownloadTool
from app.agent.tools.impl.add_subscribe import AddSubscribeTool
from app.agent.tools.impl.update_subscribe import UpdateSubscribeTool
from app.agent.tools.impl.search_subscribe import SearchSubscribeTool
from app.agent.tools.impl.get_recommendations import GetRecommendationsTool
from app.agent.tools.impl.query_downloaders import QueryDownloadersTool
from app.agent.tools.impl.query_download_tasks import QueryDownloadTasksTool
from app.agent.tools.impl.query_library_exists import QueryLibraryExistsTool
from app.agent.tools.impl.query_library_latest import QueryLibraryLatestTool
from app.agent.tools.impl.query_sites import QuerySitesTool
from app.agent.tools.impl.update_site import UpdateSiteTool
from app.agent.tools.impl.query_site_userdata import QuerySiteUserdataTool
from app.agent.tools.impl.test_site import TestSiteTool
from app.agent.tools.impl.query_subscribes import QuerySubscribesTool
from app.agent.tools.impl.query_subscribe_shares import QuerySubscribeSharesTool
from app.agent.tools.impl.query_rule_groups import QueryRuleGroupsTool
from app.agent.tools.impl.query_popular_subscribes import QueryPopularSubscribesTool
from app.agent.tools.impl.query_subscribe_history import QuerySubscribeHistoryTool
from app.agent.tools.impl.delete_subscribe import DeleteSubscribeTool
from app.agent.tools.impl.search_media import SearchMediaTool
from app.agent.tools.impl.search_person import SearchPersonTool
from app.agent.tools.impl.search_person_credits import SearchPersonCreditsTool
from app.agent.tools.impl.recognize_media import RecognizeMediaTool
from app.agent.tools.impl.scrape_metadata import ScrapeMetadataTool
from app.agent.tools.impl.query_episode_schedule import QueryEpisodeScheduleTool
from app.agent.tools.impl.query_media_detail import QueryMediaDetailTool
from app.agent.tools.impl.search_torrents import SearchTorrentsTool
from app.agent.tools.impl.search_web import SearchWebTool
from app.agent.tools.impl.send_message import SendMessageTool
from app.agent.tools.impl.query_schedulers import QuerySchedulersTool
from app.agent.tools.impl.run_scheduler import RunSchedulerTool
from app.agent.tools.impl.query_workflows import QueryWorkflowsTool
from app.agent.tools.impl.run_workflow import RunWorkflowTool
from app.agent.tools.impl.update_site_cookie import UpdateSiteCookieTool
from app.agent.tools.impl.delete_download import DeleteDownloadTool
from app.agent.tools.impl.query_directory_settings import QueryDirectorySettingsTool
from app.agent.tools.impl.list_directory import ListDirectoryTool
from app.agent.tools.impl.query_transfer_history import QueryTransferHistoryTool
from app.agent.tools.impl.transfer_file import TransferFileTool
from app.agent.tools.impl.execute_command import ExecuteCommandTool
from app.core.plugin import PluginManager
from app.log import logger
from .base import MoviePilotTool


class MoviePilotToolFactory:
    """
    Factory for MoviePilot tools
    """

    @staticmethod
    def create_tools(session_id: str, user_id: str,
                     channel: str = None, source: str = None, username: str = None,
                     callback_handler: Callable = None) -> List[MoviePilotTool]:
        """
        Create the list of MoviePilot tools
        """
        tools = []
        tool_definitions = [
            SearchMediaTool,
            SearchPersonTool,
            SearchPersonCreditsTool,
            RecognizeMediaTool,
            ScrapeMetadataTool,
            QueryEpisodeScheduleTool,
            QueryMediaDetailTool,
            AddSubscribeTool,
            UpdateSubscribeTool,
            SearchSubscribeTool,
            SearchTorrentsTool,
            SearchWebTool,
            AddDownloadTool,
            QuerySubscribesTool,
            QuerySubscribeSharesTool,
            QueryPopularSubscribesTool,
            QueryRuleGroupsTool,
            QuerySubscribeHistoryTool,
            DeleteSubscribeTool,
            QueryDownloadTasksTool,
            DeleteDownloadTool,
            QueryDownloadersTool,
            QuerySitesTool,
            UpdateSiteTool,
            QuerySiteUserdataTool,
            TestSiteTool,
            UpdateSiteCookieTool,
            GetRecommendationsTool,
            QueryLibraryExistsTool,
            QueryLibraryLatestTool,
            QueryDirectorySettingsTool,
            ListDirectoryTool,
            QueryTransferHistoryTool,
            TransferFileTool,
            SendMessageTool,
            QuerySchedulersTool,
            RunSchedulerTool,
            QueryWorkflowsTool,
            RunWorkflowTool,
            ExecuteCommandTool
        ]
        # Create the built-in tools
        for ToolClass in tool_definitions:
            tool = ToolClass(
                session_id=session_id,
                user_id=user_id
            )
            tool.set_message_attr(channel=channel, source=source, username=username)
            tool.set_callback_handler(callback_handler=callback_handler)
            tools.append(tool)
        # Load tools provided by plugins
        plugin_tools_count = 0
        plugin_tools_info = PluginManager().get_plugin_agent_tools()
        for plugin_info in plugin_tools_info:
            plugin_id = plugin_info.get("plugin_id")
            plugin_name = plugin_info.get("plugin_name")
            tool_classes = plugin_info.get("tools", [])
            for ToolClass in tool_classes:
                try:
                    # Verify the tool class inherits from MoviePilotTool
                    if not issubclass(ToolClass, MoviePilotTool):
                        logger.warning(f"插件 {plugin_name}({plugin_id}) 提供的工具类 {ToolClass.__name__} 未继承自 MoviePilotTool,已跳过")
                        continue
                    # Instantiate the tool
                    tool = ToolClass(
                        session_id=session_id,
                        user_id=user_id
                    )
                    tool.set_message_attr(channel=channel, source=source, username=username)
                    tool.set_callback_handler(callback_handler=callback_handler)
                    tools.append(tool)
                    plugin_tools_count += 1
                    logger.debug(f"成功加载插件 {plugin_name}({plugin_id}) 的工具: {ToolClass.__name__}")
                except Exception as e:
                    logger.error(f"加载插件 {plugin_name}({plugin_id}) 的工具 {ToolClass.__name__} 失败: {str(e)}")
        builtin_tools_count = len(tool_definitions)
        if plugin_tools_count > 0:
            logger.info(f"成功创建 {len(tools)} 个MoviePilot工具(内置工具: {builtin_tools_count} 个,插件工具: {plugin_tools_count} 个)")
        else:
            logger.info(f"成功创建 {len(tools)} 个MoviePilot工具")
        return tools
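The plugin-loading branch guards against plugins that register classes which do not inherit from `MoviePilotTool`. That guard reduces to a small, self-contained filter; the class names below are toy stand-ins, not real MoviePilot types:

```python
class BaseAgentTool:
    """Stand-in for MoviePilotTool in this sketch."""


class GoodPluginTool(BaseAgentTool):
    """A plugin tool that correctly subclasses the base."""


class NotATool:
    """A plugin registration mistake: unrelated class."""


def filter_plugin_tools(tool_classes):
    # Mirrors the factory's guard: keep only classes inheriting the base tool.
    accepted = []
    for cls in tool_classes:
        if isinstance(cls, type) and issubclass(cls, BaseAgentTool):
            accepted.append(cls)
    return accepted


print([c.__name__ for c in filter_plugin_tools([GoodPluginTool, NotATool])])
```

The `isinstance(cls, type)` check matters because `issubclass` raises `TypeError` when handed a non-class, which a misbehaving plugin could supply.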


@@ -0,0 +1,106 @@
"""添加下载工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool, ToolChain
from app.chain.download import DownloadChain
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.db.site_oper import SiteOper
from app.log import logger
from app.schemas import TorrentInfo
class AddDownloadInput(BaseModel):
"""添加下载工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_name: str = Field(..., description="Name of the torrent site/source (e.g., 'The Pirate Bay')")
torrent_title: str = Field(...,
description="The display name/title of the torrent (e.g., 'The.Matrix.1999.1080p.BluRay.x264')")
torrent_url: str = Field(..., description="Direct URL to the torrent file (.torrent) or magnet link")
torrent_description: Optional[str] = Field(None,
description="Brief description of the torrent content (optional)")
downloader: Optional[str] = Field(None,
description="Name of the downloader to use (optional, uses default if not specified)")
save_path: Optional[str] = Field(None,
description="Directory path where the downloaded files should be saved. Using `<storage>:<path>` for remote storage. e.g. rclone:/MP, smb:/server/share/Movies. (optional, uses default path if not specified)")
labels: Optional[str] = Field(None,
description="Comma-separated list of labels/tags to assign to the download (optional, e.g., 'movie,hd,bluray')")
class AddDownloadTool(MoviePilotTool):
name: str = "add_download"
description: str = "Add torrent download task to the configured downloader (qBittorrent, Transmission, etc.). Downloads the torrent file and starts the download process with specified settings."
args_schema: Type[BaseModel] = AddDownloadInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据下载参数生成友好的提示消息"""
torrent_title = kwargs.get("torrent_title", "")
site_name = kwargs.get("site_name", "")
downloader = kwargs.get("downloader")
message = f"正在添加下载任务: {torrent_title}"
if site_name:
message += f" (来源: {site_name})"
if downloader:
message += f" [下载器: {downloader}]"
return message
async def run(self, site_name: str, torrent_title: str, torrent_url: str, torrent_description: Optional[str] = None,
downloader: Optional[str] = None, save_path: Optional[str] = None,
labels: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: site_name={site_name}, torrent_title={torrent_title}, torrent_url={torrent_url}, downloader={downloader}, save_path={save_path}, labels={labels}")
try:
if not torrent_title or not torrent_url:
return "错误:必须提供种子标题和下载链接"
# 使用DownloadChain添加下载
download_chain = DownloadChain()
# 根据站点名称查询站点cookie
if not site_name:
return "错误:必须提供站点名称,请从搜索资源结果信息中获取"
siteinfo = await SiteOper().async_get_by_name(site_name)
if not siteinfo:
return f"错误:未找到站点信息:{site_name}"
# 创建下载上下文
torrent_info = TorrentInfo(
title=torrent_title,
description=torrent_description,
enclosure=torrent_url,
site_name=site_name,
site_ua=siteinfo.ua,
site_cookie=siteinfo.cookie,
site_proxy=siteinfo.proxy,
site_order=siteinfo.pri,
site_downloader=siteinfo.downloader
)
meta_info = MetaInfo(title=torrent_title, subtitle=torrent_description)
media_info = await ToolChain().async_recognize_media(meta=meta_info)
if not media_info:
return "错误:无法识别媒体信息,无法添加下载任务"
context = Context(
torrent_info=torrent_info,
meta_info=meta_info,
media_info=media_info
)
did = download_chain.download_single(
context=context,
downloader=downloader,
save_path=save_path,
label=labels
)
if did:
return f"成功添加下载任务:{torrent_title}"
else:
return "添加下载任务失败"
except Exception as e:
logger.error(f"添加下载任务失败: {e}", exc_info=True)
return f"添加下载任务时发生错误: {str(e)}"


@@ -0,0 +1,138 @@
"""添加订阅工具"""
from typing import Optional, Type, List
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.subscribe import SubscribeChain
from app.log import logger
from app.schemas.types import MediaType
class AddSubscribeInput(BaseModel):
"""添加订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: str = Field(..., description="The title of the media to subscribe to (e.g., 'The Matrix', 'Breaking Bad')")
year: str = Field(..., description="Release year of the media (required for accurate identification)")
media_type: str = Field(...,
description="Type of media content: '电影' for films, '电视剧' for television series or anime series")
season: Optional[int] = Field(None,
description="Season number for TV shows (optional, if not specified will subscribe to all seasons)")
tmdb_id: Optional[str] = Field(None,
description="TMDB database ID for precise media identification (optional but recommended for accuracy)")
start_episode: Optional[int] = Field(None,
description="Starting episode number for TV shows (optional, defaults to 1 if not specified)")
total_episode: Optional[int] = Field(None,
description="Total number of episodes for TV shows (optional, will be auto-detected from TMDB if not specified)")
quality: Optional[str] = Field(None,
description="Quality filter as regular expression (optional, e.g., 'BluRay|WEB-DL|HDTV')")
resolution: Optional[str] = Field(None,
description="Resolution filter as regular expression (optional, e.g., '1080p|720p|2160p')")
effect: Optional[str] = Field(None,
description="Effect filter as regular expression (optional, e.g., 'HDR|DV|SDR')")
filter_groups: Optional[List[str]] = Field(None,
description="List of filter rule group names to apply (optional, use query_rule_groups tool to get available rule groups)")
sites: Optional[List[int]] = Field(None,
description="List of site IDs to search from (optional, use query_sites tool to get available site IDs)")
class AddSubscribeTool(MoviePilotTool):
name: str = "add_subscribe"
description: str = "Add media subscription to create automated download rules for movies and TV shows. The system will automatically search and download new episodes or releases based on the subscription criteria. Supports advanced filtering options like quality, resolution, and effect filters using regular expressions."
args_schema: Type[BaseModel] = AddSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据订阅参数生成友好的提示消息"""
title = kwargs.get("title", "")
year = kwargs.get("year", "")
media_type = kwargs.get("media_type", "")
season = kwargs.get("season")
message = f"正在添加订阅: {title}"
if year:
message += f" ({year})"
if media_type:
message += f" [{media_type}]"
if season:
message += f"{season}"
return message
async def run(self, title: str, year: str, media_type: str,
season: Optional[int] = None, tmdb_id: Optional[str] = None,
start_episode: Optional[int] = None, total_episode: Optional[int] = None,
quality: Optional[str] = None, resolution: Optional[str] = None,
effect: Optional[str] = None, filter_groups: Optional[List[str]] = None,
sites: Optional[List[int]] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: title={title}, year={year}, media_type={media_type}, "
f"season={season}, tmdb_id={tmdb_id}, start_episode={start_episode}, "
f"total_episode={total_episode}, quality={quality}, resolution={resolution}, "
f"effect={effect}, filter_groups={filter_groups}, sites={sites}")
try:
subscribe_chain = SubscribeChain()
# 转换 tmdb_id 为整数
tmdbid_int = None
if tmdb_id:
try:
tmdbid_int = int(tmdb_id)
except (ValueError, TypeError):
logger.warning(f"无效的 tmdb_id: {tmdb_id},将忽略")
# 构建额外的订阅参数
subscribe_kwargs = {}
if start_episode is not None:
subscribe_kwargs['start_episode'] = start_episode
if total_episode is not None:
subscribe_kwargs['total_episode'] = total_episode
if quality:
subscribe_kwargs['quality'] = quality
if resolution:
subscribe_kwargs['resolution'] = resolution
if effect:
subscribe_kwargs['effect'] = effect
if filter_groups:
subscribe_kwargs['filter_groups'] = filter_groups
if sites:
subscribe_kwargs['sites'] = sites
sid, message = await subscribe_chain.async_add(
mtype=MediaType(media_type),
title=title,
year=year,
tmdbid=tmdbid_int,
season=season,
username=self._user_id,
**subscribe_kwargs
)
if sid:
if message and "已存在" in message:
return f"订阅已存在:{title} ({year})。如需修改参数请先删除旧订阅。"
result_msg = f"成功添加订阅:{title} ({year})"
if subscribe_kwargs:
params = []
if start_episode is not None:
params.append(f"开始集数: {start_episode}")
if total_episode is not None:
params.append(f"总集数: {total_episode}")
if quality:
params.append(f"质量过滤: {quality}")
if resolution:
params.append(f"分辨率过滤: {resolution}")
if effect:
params.append(f"特效过滤: {effect}")
if filter_groups:
params.append(f"规则组: {', '.join(filter_groups)}")
if sites:
params.append(f"站点: {', '.join(map(str, sites))}")
if params:
result_msg += f"\n配置参数: {', '.join(params)}"
return result_msg
else:
return f"添加订阅失败:{message}"
except Exception as e:
logger.error(f"添加订阅失败: {e}", exc_info=True)
return f"添加订阅时发生错误: {str(e)}"


@@ -0,0 +1,76 @@
"""删除下载任务工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.download import DownloadChain
from app.log import logger
class DeleteDownloadInput(BaseModel):
"""删除下载任务工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
task_identifier: str = Field(..., description="Task identifier: can be task hash (unique identifier) or task title/name")
downloader: Optional[str] = Field(None, description="Name of specific downloader (optional, if not provided will search all downloaders)")
delete_files: Optional[bool] = Field(False, description="Whether to delete downloaded files along with the task (default: False, only removes the task from downloader)")
class DeleteDownloadTool(MoviePilotTool):
name: str = "delete_download"
description: str = "Delete a download task from the downloader. Can delete by task hash (unique identifier) or task title/name. Optionally specify the downloader name and whether to delete downloaded files."
args_schema: Type[BaseModel] = DeleteDownloadInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据删除参数生成友好的提示消息"""
task_identifier = kwargs.get("task_identifier", "")
downloader = kwargs.get("downloader")
delete_files = kwargs.get("delete_files", False)
message = f"正在删除下载任务: {task_identifier}"
if downloader:
message += f" [下载器: {downloader}]"
if delete_files:
message += " (包含文件)"
return message
async def run(self, task_identifier: str, downloader: Optional[str] = None,
delete_files: Optional[bool] = False, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: task_identifier={task_identifier}, downloader={downloader}, delete_files={delete_files}")
try:
download_chain = DownloadChain()
# 如果task_identifier看起来像hash通常是40个字符的十六进制字符串
task_hash = None
if len(task_identifier) == 40 and all(c in '0123456789abcdefABCDEF' for c in task_identifier):
# 直接使用hash
task_hash = task_identifier
else:
# 通过标题查找任务
downloads = download_chain.downloading(name=downloader)
for dl in downloads:
# 检查标题或名称是否匹配
if (task_identifier.lower() in (dl.title or "").lower()) or \
(task_identifier.lower() in (dl.name or "").lower()):
task_hash = dl.hash
break
if not task_hash:
return f"未找到匹配的下载任务:{task_identifier},请使用 query_downloads 工具查询可用的下载任务"
# 删除下载任务
# remove_torrents 支持 delete_file 参数,可以控制是否删除文件
result = download_chain.remove_torrents(hashs=[task_hash], downloader=downloader, delete_file=delete_files)
if result:
files_info = "(包含文件)" if delete_files else "(不包含文件)"
return f"成功删除下载任务:{task_identifier} {files_info}"
else:
return f"删除下载任务失败:{task_identifier},请检查任务是否存在或下载器是否可用"
except Exception as e:
logger.error(f"删除下载任务失败: {e}", exc_info=True)
return f"删除下载任务时发生错误: {str(e)}"


@@ -0,0 +1,63 @@
"""删除订阅工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.db.subscribe_oper import SubscribeOper
from app.helper.subscribe import SubscribeHelper
from app.log import logger
from app.schemas.types import EventType
class DeleteSubscribeInput(BaseModel):
"""删除订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
subscribe_id: int = Field(..., description="The ID of the subscription to delete (can be obtained from query_subscribes tool)")
class DeleteSubscribeTool(MoviePilotTool):
name: str = "delete_subscribe"
description: str = "Delete a media subscription by its ID. This will remove the subscription and stop automatic downloads for that media."
args_schema: Type[BaseModel] = DeleteSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据删除参数生成友好的提示消息"""
subscribe_id = kwargs.get("subscribe_id")
return f"正在删除订阅 (ID: {subscribe_id})"
async def run(self, subscribe_id: int, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: subscribe_id={subscribe_id}")
try:
subscribe_oper = SubscribeOper()
# 获取订阅信息
subscribe = await subscribe_oper.async_get(subscribe_id)
if not subscribe:
return f"订阅 ID {subscribe_id} 不存在"
# 在删除之前获取订阅信息(用于事件)
subscribe_info = subscribe.to_dict()
# 删除订阅
subscribe_oper.delete(subscribe_id)
# 发送事件
await eventmanager.async_send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe_info
})
# 统计订阅
SubscribeHelper().sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
return f"成功删除订阅:{subscribe.name} ({subscribe.year})"
except Exception as e:
logger.error(f"删除订阅失败: {e}", exc_info=True)
return f"删除订阅时发生错误: {str(e)}"


@@ -0,0 +1,81 @@
"""执行Shell命令工具"""
import asyncio
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
class ExecuteCommandInput(BaseModel):
"""执行Shell命令工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this command is being executed")
command: str = Field(..., description="The shell command to execute")
timeout: Optional[int] = Field(60, description="Max execution time in seconds (default: 60)")
class ExecuteCommandTool(MoviePilotTool):
name: str = "execute_command"
description: str = "Safely execute shell commands on the server. Useful for system maintenance, checking status, or running custom scripts. Includes timeout and output limits."
args_schema: Type[BaseModel] = ExecuteCommandInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据命令生成友好的提示消息"""
command = kwargs.get("command", "")
return f"正在执行系统命令: {command}"
async def run(self, command: str, timeout: Optional[int] = 60, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: command={command}, timeout={timeout}")
# 简单安全过滤
forbidden_keywords = ["rm -rf /", ":(){ :|:& };:", "dd if=/dev/zero", "mkfs", "reboot", "shutdown"]
for keyword in forbidden_keywords:
if keyword in command:
return f"错误:命令包含禁止使用的关键字 '{keyword}'"
try:
# 执行命令
process = await asyncio.create_subprocess_shell(
command,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE
)
try:
# 等待完成,带超时
stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=timeout)
# 处理输出
stdout_str = stdout.decode('utf-8', errors='replace').strip()
stderr_str = stderr.decode('utf-8', errors='replace').strip()
exit_code = process.returncode
result = f"命令执行完成 (退出码: {exit_code})"
if stdout_str:
result += f"\n\n标准输出:\n{stdout_str}"
if stderr_str:
result += f"\n\n错误输出:\n{stderr_str}"
# 如果没有输出
if not stdout_str and not stderr_str:
result += "\n\n(无输出内容)"
# 限制输出长度,防止上下文过长
if len(result) > 3000:
result = result[:3000] + "\n\n...(输出内容过长,已截断)"
return result
except asyncio.TimeoutError:
# 超时处理
try:
process.kill()
except ProcessLookupError:
pass
return f"命令执行超时 (限制: {timeout}秒)"
except Exception as e:
logger.error(f"执行命令失败: {e}", exc_info=True)
return f"执行命令时发生错误: {str(e)}"


@@ -0,0 +1,170 @@
"""获取推荐工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.recommend import RecommendChain
from app.log import logger
class GetRecommendationsInput(BaseModel):
"""获取推荐工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
source: Optional[str] = Field("tmdb_trending",
description="Recommendation source: "
"'tmdb_trending' for TMDB trending content, "
"'tmdb_movies' for TMDB popular movies, "
"'tmdb_tvs' for TMDB popular TV shows, "
"'douban_hot' for Douban popular content, "
"'douban_movie_hot' for Douban hot movies, "
"'douban_tv_hot' for Douban hot TV shows, "
"'douban_movie_showing' for Douban movies currently showing, "
"'douban_movies' for Douban latest movies, "
"'douban_tvs' for Douban latest TV shows, "
"'douban_movie_top250' for Douban movie TOP250, "
"'douban_tv_weekly_chinese' for Douban Chinese TV weekly chart, "
"'douban_tv_weekly_global' for Douban global TV weekly chart, "
"'douban_tv_animation' for Douban popular animation, "
"'bangumi_calendar' for Bangumi anime calendar")
media_type: Optional[str] = Field("all",
description="Type of media content: '电影' for films, '电视剧' for television series or anime series, 'all' for all types")
limit: Optional[int] = Field(20,
description="Maximum number of recommendations to return (default: 20, maximum: 100)")
class GetRecommendationsTool(MoviePilotTool):
name: str = "get_recommendations"
description: str = "Get trending and popular media recommendations from various sources. Returns curated lists of popular movies, TV shows, and anime based on different criteria like trending, ratings, or calendar schedules."
args_schema: Type[BaseModel] = GetRecommendationsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据推荐参数生成友好的提示消息"""
source = kwargs.get("source", "tmdb_trending")
media_type = kwargs.get("media_type", "all")
limit = kwargs.get("limit", 20)
source_map = {
"tmdb_trending": "TMDB流行趋势",
"tmdb_movies": "TMDB热门电影",
"tmdb_tvs": "TMDB热门电视剧",
"douban_hot": "豆瓣热门",
"douban_movie_hot": "豆瓣热门电影",
"douban_tv_hot": "豆瓣热门电视剧",
"douban_movie_showing": "豆瓣正在热映",
"douban_movies": "豆瓣最新电影",
"douban_tvs": "豆瓣最新电视剧",
"douban_movie_top250": "豆瓣电影TOP250",
"douban_tv_weekly_chinese": "豆瓣国产剧集榜",
"douban_tv_weekly_global": "豆瓣全球剧集榜",
"douban_tv_animation": "豆瓣热门动漫",
"bangumi_calendar": "番组计划"
}
source_desc = source_map.get(source, source)
message = f"正在获取推荐: {source_desc}"
if media_type != "all":
message += f" [{media_type}]"
message += f" (限制: {limit}条)"
return message
async def run(self, source: Optional[str] = "tmdb_trending",
media_type: Optional[str] = "all", limit: Optional[int] = 20, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: source={source}, media_type={media_type}, limit={limit}")
try:
recommend_chain = RecommendChain()
results = []
if source == "tmdb_trending":
# async_tmdb_trending 只接受 page 参数,返回固定数量的结果
# 如果需要限制数量,需要在返回后截取
results = await recommend_chain.async_tmdb_trending(page=1)
if limit and limit > 0:
results = results[:limit]
elif source == "tmdb_movies":
# async_tmdb_movies 接受 page 参数,返回固定数量的结果
results = await recommend_chain.async_tmdb_movies(page=1)
if limit and limit > 0:
results = results[:limit]
elif source == "tmdb_tvs":
# async_tmdb_tvs 接受 page 参数,返回固定数量的结果
results = await recommend_chain.async_tmdb_tvs(page=1)
if limit and limit > 0:
results = results[:limit]
elif source == "douban_hot":
if media_type == "movie":
results = await recommend_chain.async_douban_movie_hot(page=1, count=limit)
elif media_type == "tv":
results = await recommend_chain.async_douban_tv_hot(page=1, count=limit)
else: # all
results.extend(await recommend_chain.async_douban_movie_hot(page=1, count=limit))
results.extend(await recommend_chain.async_douban_tv_hot(page=1, count=limit))
elif source == "douban_movie_hot":
results = await recommend_chain.async_douban_movie_hot(page=1, count=limit)
elif source == "douban_tv_hot":
results = await recommend_chain.async_douban_tv_hot(page=1, count=limit)
elif source == "douban_movie_showing":
results = await recommend_chain.async_douban_movie_showing(page=1, count=limit)
elif source == "douban_movies":
results = await recommend_chain.async_douban_movies(page=1, count=limit)
elif source == "douban_tvs":
results = await recommend_chain.async_douban_tvs(page=1, count=limit)
elif source == "douban_movie_top250":
results = await recommend_chain.async_douban_movie_top250(page=1, count=limit)
elif source == "douban_tv_weekly_chinese":
results = await recommend_chain.async_douban_tv_weekly_chinese(page=1, count=limit)
elif source == "douban_tv_weekly_global":
results = await recommend_chain.async_douban_tv_weekly_global(page=1, count=limit)
elif source == "douban_tv_animation":
results = await recommend_chain.async_douban_tv_animation(page=1, count=limit)
elif source == "bangumi_calendar":
results = await recommend_chain.async_bangumi_calendar(page=1, count=limit)
else:
# 不支持的推荐来源
supported_sources = [
"tmdb_trending", "tmdb_movies", "tmdb_tvs",
"douban_hot", "douban_movie_hot", "douban_tv_hot",
"douban_movie_showing", "douban_movies", "douban_tvs",
"douban_movie_top250", "douban_tv_weekly_chinese",
"douban_tv_weekly_global", "douban_tv_animation",
"bangumi_calendar"
]
return f"不支持的推荐来源: {source}。支持的来源包括: {', '.join(supported_sources)}"
if results:
# 限制最多20条结果
total_count = len(results)
limited_results = results[:20]
# 精简字段,只保留关键信息
simplified_results = []
for r in limited_results:
# r 应为字典格式(to_dict 的结果),但为安全起见仍进行检查
if not isinstance(r, dict):
logger.warning(f"推荐结果格式异常,跳过: {type(r)}")
continue
simplified = {
"title": r.get("title"),
"en_title": r.get("en_title"),
"year": r.get("year"),
"type": r.get("type"),
"season": r.get("season"),
"tmdb_id": r.get("tmdb_id"),
"imdb_id": r.get("imdb_id"),
"douban_id": r.get("douban_id"),
"vote_average": r.get("vote_average"),
"poster_path": r.get("poster_path"),
"detail_link": r.get("detail_link")
}
simplified_results.append(simplified)
result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 20:
return f"注意:推荐结果共找到 {total_count} 条,为节省上下文空间,仅显示前 20 条结果。\n\n{result_json}"
return result_json
return "未找到推荐内容。"
except Exception as e:
logger.error(f"获取推荐失败: {e}", exc_info=True)
return f"获取推荐时发生错误: {str(e)}"
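The context-saving pattern used above (keep only key fields, cap the list at 20, and prefix a note when results were trimmed) recurs in several of these tools. A minimal standalone sketch; `summarize_results` and its default key set are illustrative, not part of the MoviePilot API:

```python
import json

MAX_RESULTS = 20  # mirrors the tools' context-saving cap


def summarize_results(results, keys=("title", "year", "tmdb_id")):
    """Keep only the requested keys, cap the list, and prefix a note when trimmed."""
    total = len(results)
    # Skip non-dict entries defensively, as the tool does
    slim = [{k: r.get(k) for k in keys} for r in results[:MAX_RESULTS] if isinstance(r, dict)]
    payload = json.dumps(slim, ensure_ascii=False, indent=2)
    if total > MAX_RESULTS:
        return f"Showing {MAX_RESULTS} of {total} results.\n\n{payload}"
    return payload
```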


@@ -0,0 +1,130 @@
"""查询文件系统目录内容工具"""
import json
from datetime import datetime
from pathlib import Path
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.storage import StorageChain
from app.log import logger
from app.schemas.file import FileItem
from app.utils.string import StringUtils
class ListDirectoryInput(BaseModel):
"""查询文件系统目录内容工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
path: str = Field(..., description="Directory path to list contents (e.g., '/home/user/downloads' or 'C:/Downloads')")
storage: Optional[str] = Field("local", description="Storage type (default: 'local' for local file system, can be 'smb', 'alist', etc.)")
sort_by: Optional[str] = Field("name", description="Sort order: 'name' for alphabetical sorting, 'time' for modification time sorting (default: 'name')")
class ListDirectoryTool(MoviePilotTool):
name: str = "list_directory"
description: str = "List actual files and folders in a file system directory (NOT configuration). Shows files and subdirectories with their names, types, sizes, and modification times. Returns up to 20 items and the total count if there are more items. Use 'query_directories' to query directory configuration settings."
args_schema: Type[BaseModel] = ListDirectoryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据目录参数生成友好的提示消息"""
path = kwargs.get("path", "")
storage = kwargs.get("storage", "local")
message = f"正在查询目录: {path}"
if storage != "local":
message += f" [存储: {storage}]"
return message
async def run(self, path: str, storage: Optional[str] = "local",
sort_by: Optional[str] = "name", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: path={path}, storage={storage}, sort_by={sort_by}")
try:
# 规范化路径
if not path:
return "错误:路径不能为空"
# 确保路径格式正确
if storage == "local":
# 本地路径处理
if not path.startswith("/") and not (len(path) > 1 and path[1] == ":"):
# 相对路径,尝试转换为绝对路径
path = str(Path(path).resolve())
else:
# 远程存储路径,确保以/开头
if not path.startswith("/"):
path = "/" + path
# 创建FileItem
fileitem = FileItem(
storage=storage or "local",
path=path,
type="dir"
)
# 查询目录内容
storage_chain = StorageChain()
file_list = storage_chain.list_files(fileitem, recursion=False)
if file_list is None:
return f"无法访问目录:{path},请检查路径是否正确或存储是否可用"
if not file_list:
return f"目录 {path} 为空"
# 排序
if sort_by == "time":
file_list.sort(key=lambda x: x.modify_time or 0, reverse=True)
else:
# 默认按名称排序(目录优先,然后按名称)
file_list.sort(key=lambda x: (
0 if x.type == "dir" else 1,
StringUtils.natural_sort_key(x.name or "")
))
# 限制返回数量
total_count = len(file_list)
limited_list = file_list[:20]
# 转换为字典格式
simplified_items = []
for item in limited_list:
# 格式化文件大小
size_str = None
if item.size:
size_str = StringUtils.str_filesize(item.size)
# 格式化修改时间
modify_time_str = None
if item.modify_time:
try:
modify_time_str = datetime.fromtimestamp(item.modify_time).strftime("%Y-%m-%d %H:%M:%S")
except (ValueError, OSError):
modify_time_str = str(item.modify_time)
simplified = {
"name": item.name,
"type": item.type,
"path": item.path,
"size": size_str,
"modify_time": modify_time_str
}
# 如果是文件,添加扩展名
if item.type == "file" and item.extension:
simplified["extension"] = item.extension
simplified_items.append(simplified)
result_json = json.dumps(simplified_items, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 20:
return f"注意:目录中共有 {total_count} 个项目,为节省上下文空间,仅显示前 20 个项目。\n\n{result_json}"
else:
return result_json
except Exception as e:
logger.error(f"查询目录内容失败: {e}", exc_info=True)
return f"查询目录内容时发生错误: {str(e)}"
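The default sort above (directories first, then names in natural order) relies on the project's `StringUtils.natural_sort_key`. A self-contained approximation, assuming a plain regex-based key rather than the library helper:

```python
import re


def natural_key(name):
    """Split out digit runs so 'file10' sorts after 'file2' (stand-in for StringUtils.natural_sort_key)."""
    return [int(p) if p.isdigit() else p.lower() for p in re.split(r"(\d+)", name)]


def sort_entries(entries):
    """Directories first, then natural name order, mirroring the tool's default sort."""
    return sorted(entries, key=lambda e: (0 if e["type"] == "dir" else 1,
                                          natural_key(e.get("name") or "")))
```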


@@ -0,0 +1,134 @@
"""查询系统目录设置工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.helper.directory import DirectoryHelper
from app.log import logger
class QueryDirectorySettingsInput(BaseModel):
"""查询系统目录设置工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
directory_type: Optional[str] = Field("all",
description="Filter directories by type: 'download' for download directories, 'library' for media library directories, 'all' for all directories")
storage_type: Optional[str] = Field("all",
description="Filter directories by storage type: 'local' for local storage, 'remote' for remote storage, 'all' for all storage types")
name: Optional[str] = Field(None,
description="Filter directories by name (partial match, optional)")
class QueryDirectorySettingsTool(MoviePilotTool):
name: str = "query_directory_settings"
description: str = "Query system directory configuration settings (NOT file listings). Returns configured directory paths, storage types, transfer modes, and other directory-related settings. Use 'list_directory' to list actual files and folders in a directory."
args_schema: Type[BaseModel] = QueryDirectorySettingsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
directory_type = kwargs.get("directory_type", "all")
storage_type = kwargs.get("storage_type", "all")
name = kwargs.get("name")
parts = ["正在查询目录配置"]
if directory_type != "all":
type_map = {"download": "下载目录", "library": "媒体库目录"}
parts.append(f"类型: {type_map.get(directory_type, directory_type)}")
if storage_type != "all":
storage_map = {"local": "本地存储", "remote": "远程存储"}
parts.append(f"存储: {storage_map.get(storage_type, storage_type)}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, directory_type: Optional[str] = "all",
storage_type: Optional[str] = "all",
name: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: directory_type={directory_type}, storage_type={storage_type}, name={name}")
try:
directory_helper = DirectoryHelper()
# 根据目录类型获取目录列表
if directory_type == "download":
dirs = directory_helper.get_download_dirs()
elif directory_type == "library":
dirs = directory_helper.get_library_dirs()
else:
dirs = directory_helper.get_dirs()
# 按存储类型过滤
filtered_dirs = []
for d in dirs:
# 按存储类型过滤
if storage_type == "local":
# 对于下载目录,检查 storage;对于媒体库目录,检查 library_storage
if directory_type == "download" and d.storage != "local":
continue
elif directory_type == "library" and d.library_storage != "local":
continue
elif directory_type == "all":
# 检查是否有本地存储配置
if d.download_path and d.storage != "local":
continue
if d.library_path and d.library_storage != "local":
continue
elif storage_type == "remote":
# 对于下载目录,检查 storage;对于媒体库目录,检查 library_storage
if directory_type == "download" and d.storage == "local":
continue
elif directory_type == "library" and d.library_storage == "local":
continue
elif directory_type == "all":
# 检查是否有远程存储配置
if d.download_path and d.storage == "local":
continue
if d.library_path and d.library_storage == "local":
continue
# 按名称过滤(部分匹配)
if name and d.name and name.lower() not in d.name.lower():
continue
filtered_dirs.append(d)
if filtered_dirs:
# 转换为字典格式,只保留关键信息
simplified_dirs = []
for d in filtered_dirs:
simplified = {
"name": d.name,
"priority": d.priority,
"storage": d.storage,
"download_path": d.download_path,
"library_path": d.library_path,
"library_storage": d.library_storage,
"media_type": d.media_type,
"media_category": d.media_category,
"monitor_type": d.monitor_type,
"monitor_mode": d.monitor_mode,
"transfer_type": d.transfer_type,
"overwrite_mode": d.overwrite_mode,
"renaming": d.renaming,
"scraping": d.scraping,
"notify": d.notify,
"download_type_folder": d.download_type_folder,
"download_category_folder": d.download_category_folder,
"library_type_folder": d.library_type_folder,
"library_category_folder": d.library_category_folder
}
simplified_dirs.append(simplified)
result_json = json.dumps(simplified_dirs, ensure_ascii=False, indent=2)
return result_json
return "未找到相关目录配置"
except Exception as e:
logger.error(f"查询系统目录设置失败: {e}", exc_info=True)
return f"查询系统目录设置时发生错误: {str(e)}"
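The storage filter above branches on which path/storage pair applies to each directory type. Condensed into a single predicate, it can be sketched like this; the dict-based stand-in for the directory objects is illustrative, with field names mirroring the schema above:

```python
def match_storage(entry, directory_type, storage_type):
    """Return True when a directory entry passes the storage filter."""
    if storage_type == "all":
        return True
    want_local = storage_type == "local"
    # Download dirs are judged by 'storage', library dirs by 'library_storage'
    if directory_type == "download":
        return (entry.get("storage") == "local") == want_local
    if directory_type == "library":
        return (entry.get("library_storage") == "local") == want_local
    # directory_type == "all": every configured path must match the wanted storage
    for path_key, store_key in (("download_path", "storage"),
                                ("library_path", "library_storage")):
        if entry.get(path_key) and (entry.get(store_key) == "local") != want_local:
            return False
    return True
```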


@@ -0,0 +1,232 @@
"""查询下载工具"""
import json
from typing import Optional, Type, List, Union
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.download import DownloadChain
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.log import logger
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus
class QueryDownloadTasksInput(BaseModel):
"""查询下载工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
downloader: Optional[str] = Field(None,
description="Name of specific downloader to query (optional, if not provided queries all configured downloaders)")
status: Optional[str] = Field("all",
description="Filter downloads by status: 'downloading' for active downloads, 'completed' for finished downloads, 'paused' for paused downloads, 'all' for all downloads")
hash: Optional[str] = Field(None, description="Query specific download task by hash (optional, if provided will search for this specific task regardless of status)")
title: Optional[str] = Field(None, description="Query download tasks by title/name (optional, supports partial match, searches all tasks if provided)")
class QueryDownloadTasksTool(MoviePilotTool):
name: str = "query_download_tasks"
description: str = "Query download status and list download tasks. Can query all active downloads, or search for specific tasks by hash or title. Shows download progress, completion status, and task details from configured downloaders."
args_schema: Type[BaseModel] = QueryDownloadTasksInput
@staticmethod
def _get_all_torrents(download_chain: DownloadChain, downloader: Optional[str] = None) -> List[Union[TransferTorrent, DownloadingTorrent]]:
"""
查询所有状态的任务(包括下载中和已完成的任务)
"""
all_torrents = []
# 查询正在下载的任务
downloading_torrents = download_chain.list_torrents(
downloader=downloader,
status=TorrentStatus.DOWNLOADING
) or []
all_torrents.extend(downloading_torrents)
# 查询已完成的任务(可转移状态)
transfer_torrents = download_chain.list_torrents(
downloader=downloader,
status=TorrentStatus.TRANSFER
) or []
all_torrents.extend(transfer_torrents)
return all_torrents
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
downloader = kwargs.get("downloader")
status = kwargs.get("status", "all")
hash_value = kwargs.get("hash")
title = kwargs.get("title")
parts = ["正在查询下载任务"]
if downloader:
parts.append(f"下载器: {downloader}")
if status != "all":
status_map = {"downloading": "下载中", "completed": "已完成", "paused": "已暂停"}
parts.append(f"状态: {status_map.get(status, status)}")
if hash_value:
parts.append(f"Hash: {hash_value[:8]}...")
elif title:
parts.append(f"标题: {title}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, downloader: Optional[str] = None,
status: Optional[str] = "all",
hash: Optional[str] = None,
title: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: downloader={downloader}, status={status}, hash={hash}, title={title}")
try:
download_chain = DownloadChain()
# 如果提供了 hash,直接查询该 hash 的任务,不限制状态
if hash:
torrents = download_chain.list_torrents(downloader=downloader, hashs=[hash]) or []
if not torrents:
return f"未找到hash为 {hash} 的下载任务(该任务可能已完成、已删除或不存在)"
# 转换为DownloadingTorrent格式
downloads = []
for torrent in torrents:
# 获取下载历史信息
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
torrent.media = {
"tmdbid": history.tmdbid,
"type": history.type,
"title": history.title,
"season": history.seasons,
"episode": history.episodes,
"image": history.image,
}
torrent.userid = history.userid
torrent.username = history.username
downloads.append(torrent)
filtered_downloads = downloads
elif title:
# 如果提供了 title,查询所有任务并搜索匹配的标题
# 查询所有状态的任务
all_torrents = self._get_all_torrents(download_chain, downloader)
filtered_downloads = []
title_lower = title.lower()
for torrent in all_torrents:
# 获取下载历史信息
history = DownloadHistoryOper().get_by_hash(torrent.hash)
# 检查标题或名称是否匹配(包括下载历史中的标题)
matched = False
# 检查torrent的title和name字段
if (title_lower in (torrent.title or "").lower()) or \
(title_lower in (torrent.name or "").lower()):
matched = True
# 检查下载历史中的标题
if history and history.title:
if title_lower in history.title.lower():
matched = True
if matched:
if history:
torrent.media = {
"tmdbid": history.tmdbid,
"type": history.type,
"title": history.title,
"season": history.seasons,
"episode": history.episodes,
"image": history.image,
}
torrent.userid = history.userid
torrent.username = history.username
filtered_downloads.append(torrent)
if not filtered_downloads:
return f"未找到标题包含 '{title}' 的下载任务"
else:
# 根据status决定查询方式
if status == "downloading":
# 如果 status 为下载中,使用 downloading 方法
downloads = download_chain.downloading(name=downloader) or []
filtered_downloads = []
for dl in downloads:
if downloader and dl.downloader != downloader:
continue
filtered_downloads.append(dl)
else:
# 其他状态(completed、paused、all)使用 list_torrents 查询所有任务
# 查询所有状态的任务
all_torrents = self._get_all_torrents(download_chain, downloader)
filtered_downloads = []
for torrent in all_torrents:
if downloader and torrent.downloader != downloader:
continue
# 根据status过滤
if status == "completed":
# 已完成的任务:state 为 seeding 或 completed
if torrent.state not in ["seeding", "completed"]:
continue
elif status == "paused":
# 已暂停的任务
if torrent.state != "paused":
continue
# status == "all" 时不过滤
# 获取下载历史信息
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
torrent.media = {
"tmdbid": history.tmdbid,
"type": history.type,
"title": history.title,
"season": history.seasons,
"episode": history.episodes,
"image": history.image,
}
torrent.userid = history.userid
torrent.username = history.username
filtered_downloads.append(torrent)
if filtered_downloads:
# 限制最多20条结果
total_count = len(filtered_downloads)
limited_downloads = filtered_downloads[:20]
# 精简字段,只保留关键信息
simplified_downloads = []
for d in limited_downloads:
simplified = {
"downloader": d.downloader,
"hash": d.hash,
"title": d.title,
"name": d.name,
"year": d.year,
"season_episode": d.season_episode,
"size": d.size,
"progress": d.progress,
"state": d.state,
"upspeed": d.upspeed,
"dlspeed": d.dlspeed,
"left_time": d.left_time
}
# 精简 media 字段
if d.media:
simplified["media"] = {
"tmdbid": d.media.get("tmdbid"),
"type": d.media.get("type"),
"title": d.media.get("title"),
"season": d.media.get("season"),
"episode": d.media.get("episode")
}
simplified_downloads.append(simplified)
result_json = json.dumps(simplified_downloads, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 20:
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 20 条结果。\n\n{result_json}"
# 如果查询的是特定hash或title添加明确的状态信息
if hash:
return f"找到hash为 {hash} 的下载任务:\n\n{result_json}"
elif title:
return f"找到 {total_count} 个标题包含 '{title}' 的下载任务:\n\n{result_json}"
return result_json
return "未找到相关下载任务"
except Exception as e:
logger.error(f"查询下载失败: {e}", exc_info=True)
return f"查询下载时发生错误: {str(e)}"
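The title search above checks the query against several candidate fields (torrent.title, torrent.name, and the download history title), matching if any contains the query case-insensitively. The core check reduces to a small helper; the name `match_title` is illustrative:

```python
def match_title(query, *fields):
    """Case-insensitive substring match across several candidate fields.

    None fields are treated as empty strings, as in the tool's (x or "") guards.
    """
    q = query.lower()
    return any(q in (f or "").lower() for f in fields)
```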


@@ -0,0 +1,38 @@
"""查询下载器工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
class QueryDownloadersInput(BaseModel):
"""查询下载器工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
class QueryDownloadersTool(MoviePilotTool):
name: str = "query_downloaders"
description: str = "Query downloader configuration and list all available downloaders. Shows downloader status, connection details, and configuration settings."
args_schema: Type[BaseModel] = QueryDownloadersInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""生成友好的提示消息"""
return "正在查询下载器配置"
async def run(self, **kwargs) -> str:
logger.info(f"执行工具: {self.name}")
try:
system_config_oper = SystemConfigOper()
downloaders_config = system_config_oper.get(SystemConfigKey.Downloaders)
if downloaders_config:
return json.dumps(downloaders_config, ensure_ascii=False, indent=2)
return "未配置下载器。"
except Exception as e:
logger.error(f"查询下载器失败: {e}")
return f"查询下载器时发生错误: {str(e)}"


@@ -0,0 +1,116 @@
"""查询剧集上映时间工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.chain.tmdb import TmdbChain
from app.log import logger
from app.schemas import MediaType
class QueryEpisodeScheduleInput(BaseModel):
"""查询剧集上映时间工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
tmdb_id: int = Field(..., description="TMDB ID of the TV series")
season: int = Field(..., description="Season number to query")
episode_group: Optional[str] = Field(None, description="Episode group ID (optional)")
class QueryEpisodeScheduleTool(MoviePilotTool):
name: str = "query_episode_schedule"
description: str = "Query TV series episode air dates and schedule. Returns detailed information for each episode including air date, episode number, title, overview, and other metadata. Filters out episodes without air dates."
args_schema: Type[BaseModel] = QueryEpisodeScheduleInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
tmdb_id = kwargs.get("tmdb_id")
season = kwargs.get("season")
episode_group = kwargs.get("episode_group")
message = f"正在查询剧集上映时间: TMDB ID {tmdb_id} 第{season}季"
if episode_group:
message += f" (剧集组: {episode_group})"
return message
async def run(self, tmdb_id: int, season: int, episode_group: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: tmdb_id={tmdb_id}, season={season}, episode_group={episode_group}")
try:
# 获取媒体信息(用于获取标题和海报)
media_chain = MediaChain()
mediainfo = await media_chain.async_recognize_media(tmdbid=tmdb_id, mtype=MediaType.TV)
if not mediainfo:
return f"未找到 TMDB ID {tmdb_id} 的媒体信息"
# 获取集列表
tmdb_chain = TmdbChain()
episodes = await tmdb_chain.async_tmdb_episodes(
tmdbid=tmdb_id,
season=season,
episode_group=episode_group
)
if not episodes:
return json.dumps({
"success": False,
"message": f"未找到 TMDB ID {tmdb_id} 第{season}季的集信息"
}, ensure_ascii=False)
# 过滤掉没有上映日期的集,并构建每集的详细信息
episode_list = []
for episode in episodes:
air_date = episode.air_date
# 过滤掉没有上映日期的数据
if not air_date:
continue
episode_info = {
"episode_number": episode.episode_number,
"name": episode.name,
"air_date": air_date,
"runtime": episode.runtime,
"vote_average": episode.vote_average,
"still_path": episode.still_path,
"episode_type": episode.episode_type,
"season_number": episode.season_number
}
episode_list.append(episode_info)
if not episode_list:
return json.dumps({
"success": False,
"message": f"未找到 TMDB ID {tmdb_id} 第{season}季的播出时间信息(所有集都没有播出日期)"
}, ensure_ascii=False)
# 按播出日期排序
episode_list.sort(key=lambda x: (x["air_date"] or "", x["episode_number"] or 0))
result = {
"success": True,
"tmdb_id": tmdb_id,
"season": season,
"episode_group": episode_group,
"series_title": mediainfo.title if mediainfo else None,
"series_poster": mediainfo.poster_path if mediainfo else None,
"total_episodes": len(episodes),
"episodes_with_air_date": len(episode_list),
"episodes": episode_list
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"查询剧集上映时间失败: {str(e)}"
logger.error(f"查询剧集上映时间失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"tmdb_id": tmdb_id,
"season": season
}, ensure_ascii=False)
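The ordering applied above sorts episodes by air date and then episode number, coercing missing values to the earliest sort position via `or`. Extracted as a standalone sketch:

```python
def sort_episodes(episodes):
    """Order episodes by (air_date, episode_number), treating missing values as earliest,
    matching the tool's key of (x["air_date"] or "", x["episode_number"] or 0)."""
    return sorted(episodes,
                  key=lambda x: (x.get("air_date") or "", x.get("episode_number") or 0))
```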


@@ -0,0 +1,139 @@
"""查询媒体库工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.mediaserver import MediaServerChain
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.log import logger
from app.schemas.types import MediaType
class QueryLibraryExistsInput(BaseModel):
"""查询媒体库工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
media_type: Optional[str] = Field("all",
description="Type of media content: '电影' for films, '电视剧' for television series or anime series, 'all' for all types")
title: Optional[str] = Field(None,
description="Specific media title to check if it exists in the media library (optional, if provided checks for that specific media)")
year: Optional[str] = Field(None,
description="Release year of the media (optional, helps narrow down search results)")
class QueryLibraryExistsTool(MoviePilotTool):
name: str = "query_library_exists"
description: str = "Check if a specific media resource already exists in the media library (Plex, Emby, Jellyfin). Use this tool to verify whether a movie or TV series has been successfully processed and added to the media server before performing operations like downloading or subscribing."
args_schema: Type[BaseModel] = QueryLibraryExistsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
media_type = kwargs.get("media_type", "all")
title = kwargs.get("title")
year = kwargs.get("year")
parts = ["正在查询媒体库"]
if title:
parts.append(f"标题: {title}")
if year:
parts.append(f"年份: {year}")
if media_type != "all":
parts.append(f"类型: {media_type}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, media_type: Optional[str] = "all",
title: Optional[str] = None, year: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: media_type={media_type}, title={title}")
try:
if not title:
return "请提供媒体标题进行查询"
media_chain = MediaServerChain()
# 1. 识别媒体信息(获取 TMDB ID 和各季的总集数等元数据)
meta = MetaBase(title=title)
if year:
meta.year = str(year)
if media_type == "电影":
meta.type = MediaType.MOVIE
elif media_type == "电视剧":
meta.type = MediaType.TV
# 使用识别方法补充信息
recognize_info = media_chain.recognize_media(meta=meta)
if recognize_info:
mediainfo = recognize_info
else:
# 识别失败,创建基本信息的 MediaInfo
mediainfo = MediaInfo()
mediainfo.title = title
mediainfo.year = year
if media_type == "电影":
mediainfo.type = MediaType.MOVIE
elif media_type == "电视剧":
mediainfo.type = MediaType.TV
# 2. 调用媒体服务器接口实时查询存在信息
existsinfo = media_chain.media_exists(mediainfo=mediainfo)
if not existsinfo:
return "媒体库中未找到相关媒体"
# 3. 如果找到了,获取详细信息并组装结果
result_items = []
if existsinfo.itemid and existsinfo.server:
iteminfo = media_chain.iteminfo(server=existsinfo.server, item_id=existsinfo.itemid)
if iteminfo:
# 使用 model_dump() 转换为字典格式
item_dict = iteminfo.model_dump(exclude_none=True)
# 对于电视剧,补充已存在的季集详情及进度统计
if existsinfo.type == MediaType.TV:
# 注入已存在集信息 (Dict[int, list])
item_dict["seasoninfo"] = existsinfo.seasons
# 统计库中已存在的季集总数
if existsinfo.seasons:
item_dict["existing_episodes_count"] = sum(len(e) for e in existsinfo.seasons.values())
item_dict["seasons_existing_count"] = {str(s): len(e) for s, e in existsinfo.seasons.items()}
# 如果识别到了元数据,补充总计对比和进度概览
if mediainfo.seasons:
item_dict["seasons_total_count"] = {str(s): len(e) for s, e in mediainfo.seasons.items()}
# 进度概览,例如 "Season 1": "3/12"
item_dict["seasons_progress"] = {
f"{s}": f"{len(existsinfo.seasons.get(s, []))}/{len(mediainfo.seasons.get(s, []))}"
for s in mediainfo.seasons.keys() if (s in existsinfo.seasons or s > 0)
}
result_items.append(item_dict)
if result_items:
return json.dumps(result_items, ensure_ascii=False)
# 如果找到了但没有获取到 iteminfo返回基本信息
result_dict = {
"title": mediainfo.title,
"year": mediainfo.year,
"type": existsinfo.type.value if existsinfo.type else None,
"server": existsinfo.server,
"server_type": existsinfo.server_type,
"itemid": existsinfo.itemid,
"seasons": existsinfo.seasons if existsinfo.seasons else {}
}
if existsinfo.type == MediaType.TV and existsinfo.seasons:
result_dict["existing_episodes_count"] = sum(len(e) for e in existsinfo.seasons.values())
result_dict["seasons_existing_count"] = {str(s): len(e) for s, e in existsinfo.seasons.items()}
if mediainfo.seasons:
result_dict["seasons_total_count"] = {str(s): len(e) for s, e in mediainfo.seasons.items()}
return json.dumps([result_dict], ensure_ascii=False)
except Exception as e:
logger.error(f"查询媒体库失败: {e}", exc_info=True)
return f"查询媒体库时发生错误: {str(e)}"
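The progress overview assembled above pairs the episodes already in the library with the total count per season from metadata, yielding strings like "3/12". The comprehension can be isolated as follows; `season_progress` is an illustrative name:

```python
def season_progress(existing, totals):
    """Build an 'owned/total' overview per season, e.g. {"1": "3/12"},
    in the shape of the seasons_progress field assembled by the tool."""
    return {str(s): f"{len(existing.get(s, []))}/{len(eps)}" for s, eps in totals.items()}
```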


@@ -0,0 +1,86 @@
"""查询媒体服务器最近入库影片工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.mediaserver import MediaServerChain
from app.helper.service import ServiceConfigHelper
from app.log import logger
class QueryLibraryLatestInput(BaseModel):
"""查询媒体服务器最近入库影片工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
server: Optional[str] = Field(None, description="Media server name (optional, if not specified queries all enabled media servers)")
count: Optional[int] = Field(20, description="Number of items to return (default: 20)")
class QueryLibraryLatestTool(MoviePilotTool):
name: str = "query_library_latest"
description: str = "Query the latest media items added to the media server (Plex, Emby, Jellyfin). Returns recently added movies and TV series with their titles, images, links, and other metadata."
args_schema: Type[BaseModel] = QueryLibraryLatestInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
server = kwargs.get("server")
count = kwargs.get("count", 20)
parts = ["正在查询媒体服务器最近入库影片"]
if server:
parts.append(f"服务器: {server}")
else:
parts.append("所有服务器")
parts.append(f"数量: {count}")
return " | ".join(parts)
async def run(self, server: Optional[str] = None, count: Optional[int] = 20, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: server={server}, count={count}")
try:
media_chain = MediaServerChain()
results = []
# 如果没有指定服务器,获取所有启用的媒体服务器
if not server:
mediaservers = ServiceConfigHelper.get_mediaserver_configs()
enabled_servers = [ms.name for ms in mediaservers if ms.enabled]
if not enabled_servers:
return "未找到启用的媒体服务器"
# 遍历所有启用的服务器
for server_name in enabled_servers:
latest_items = media_chain.latest(server=server_name, count=count, username=self._username)
if latest_items:
for item in latest_items:
item_dict = item.model_dump(exclude_none=True)
item_dict["server"] = server_name
results.append(item_dict)
else:
# 查询指定服务器
latest_items = media_chain.latest(server=server, count=count, username=self._username)
if latest_items:
for item in latest_items:
item_dict = item.model_dump(exclude_none=True)
item_dict["server"] = server
results.append(item_dict)
if not results:
server_info = f"服务器 {server}" if server else "所有服务器"
return f"未找到 {server_info} 的最近入库影片"
# 限制返回数量,避免结果过多
if len(results) > count:
results = results[:count]
return json.dumps(results, ensure_ascii=False, indent=2)
except Exception as e:
logger.error(f"查询媒体服务器最近入库影片失败: {e}", exc_info=True)
return f"查询媒体服务器最近入库影片时发生错误: {str(e)}"


@@ -0,0 +1,120 @@
"""查询媒体详情工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.log import logger
from app.schemas import MediaType
class QueryMediaDetailInput(BaseModel):
"""查询媒体详情工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
tmdb_id: int = Field(..., description="TMDB ID of the media (movie or TV series)")
media_type: str = Field(..., description="Media type: 'movie' or 'tv'")
class QueryMediaDetailTool(MoviePilotTool):
name: str = "query_media_detail"
description: str = "Query detailed media information from TMDB by ID and media_type. IMPORTANT: Convert search results type: '电影' → 'movie', '电视剧' → 'tv'. Returns core metadata including title, year, overview, status, genres, directors, actors, and season count for TV series."
args_schema: Type[BaseModel] = QueryMediaDetailInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
tmdb_id = kwargs.get("tmdb_id")
return f"正在查询媒体详情: TMDB ID {tmdb_id}"
async def run(self, tmdb_id: int, media_type: str, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: tmdb_id={tmdb_id}, media_type={media_type}")
try:
media_chain = MediaChain()
mtype = None
if media_type:
if media_type.lower() == 'movie':
mtype = MediaType.MOVIE
elif media_type.lower() == 'tv':
mtype = MediaType.TV
mediainfo = await media_chain.async_recognize_media(tmdbid=tmdb_id, mtype=mtype)
if not mediainfo:
return json.dumps({
"success": False,
"message": f"未找到 TMDB ID {tmdb_id} 的媒体信息"
}, ensure_ascii=False)
# 精简 genres - 只保留名称
genres = [g.get("name") for g in (mediainfo.genres or []) if g.get("name")]
# 精简 directors - 只保留姓名和职位
directors = [
{
"name": d.get("name"),
"job": d.get("job")
}
for d in (mediainfo.directors or [])
if d.get("name")
]
# 精简 actors - 只保留姓名和角色
actors = [
{
"name": a.get("name"),
"character": a.get("character")
}
for a in (mediainfo.actors or [])
if a.get("name")
]
# 构建基础媒体详情信息
result = {
"success": True,
"tmdb_id": tmdb_id,
"type": mediainfo.type.value if mediainfo.type else None,
"title": mediainfo.title,
"year": mediainfo.year,
"overview": mediainfo.overview,
"status": mediainfo.status,
"genres": genres,
"directors": directors,
"actors": actors
}
# 如果是电视剧,添加电视剧特有信息
if mediainfo.type == MediaType.TV:
# 精简 season_info - 只保留基础摘要
season_info = [
{
"season_number": s.get("season_number"),
"name": s.get("name"),
"episode_count": s.get("episode_count"),
"air_date": s.get("air_date")
}
for s in (mediainfo.season_info or [])
if s.get("season_number") is not None
]
result.update({
"number_of_seasons": mediainfo.number_of_seasons,
"number_of_episodes": mediainfo.number_of_episodes,
"first_air_date": mediainfo.first_air_date,
"last_air_date": mediainfo.last_air_date,
"season_info": season_info
})
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"查询媒体详情失败: {str(e)}"
logger.error(f"查询媒体详情失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"tmdb_id": tmdb_id
}, ensure_ascii=False)
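The genres, directors, and actors above are all trimmed the same way: keep a small set of keys per entry and drop entries without a name. Generalized into one helper (an illustrative sketch, not a MoviePilot function):

```python
def slim_credits(people, keys):
    """Keep only the requested keys per credit entry and drop unnamed ones,
    the same trimming applied to directors and actors above."""
    return [{k: p.get(k) for k in keys} for p in (people or []) if p.get("name")]
```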


@@ -0,0 +1,152 @@
"""查询热门订阅工具"""
import json
from typing import Optional, Type
import cn2an
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.context import MediaInfo
from app.helper.subscribe import SubscribeHelper
from app.log import logger
from app.schemas.types import MediaType
class QueryPopularSubscribesInput(BaseModel):
"""查询热门订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
stype: str = Field(..., description="Media type: '电影' for films, '电视剧' for television series")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1)")
count: Optional[int] = Field(30, description="Number of items per page (default: 30)")
min_sub: Optional[int] = Field(None, description="Minimum number of subscribers filter (optional, e.g., 5)")
genre_id: Optional[int] = Field(None, description="Filter by genre ID (optional)")
min_rating: Optional[float] = Field(None, description="Minimum rating filter (optional, e.g., 7.5)")
max_rating: Optional[float] = Field(None, description="Maximum rating filter (optional, e.g., 10.0)")
sort_type: Optional[str] = Field(None, description="Sort type (optional, e.g., 'count', 'rating')")
class QueryPopularSubscribesTool(MoviePilotTool):
name: str = "query_popular_subscribes"
description: str = "Query popular subscriptions based on user shared data. Shows media with the most subscribers, supports filtering by genre, rating, minimum subscribers, and pagination."
args_schema: Type[BaseModel] = QueryPopularSubscribesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
stype = kwargs.get("stype", "")
page = kwargs.get("page", 1)
min_sub = kwargs.get("min_sub")
min_rating = kwargs.get("min_rating")
max_rating = kwargs.get("max_rating")
parts = [f"正在查询热门订阅 [{stype}]"]
if min_sub:
parts.append(f"最少订阅: {min_sub}")
if min_rating:
parts.append(f"最低评分: {min_rating}")
if max_rating:
parts.append(f"最高评分: {max_rating}")
if page > 1:
parts.append(f"第 {page} 页")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, stype: str,
page: Optional[int] = 1,
count: Optional[int] = 30,
min_sub: Optional[int] = None,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: stype={stype}, page={page}, count={count}, min_sub={min_sub}, "
f"genre_id={genre_id}, min_rating={min_rating}, max_rating={max_rating}, sort_type={sort_type}")
try:
if page is None or page < 1:
page = 1
if count is None or count < 1:
count = 30
subscribe_helper = SubscribeHelper()
subscribes = await subscribe_helper.async_get_statistic(
stype=stype,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
if not subscribes:
return "未找到热门订阅数据(可能订阅统计功能未启用)"
# 转换为MediaInfo格式并过滤
ret_medias = []
for sub in subscribes:
# 订阅人数
subscriber_count = sub.get("count", 0)
# 如果设置了最小订阅人数,进行过滤
if min_sub and subscriber_count < min_sub:
continue
media = MediaInfo()
media.type = MediaType(sub.get("type"))
media.tmdb_id = sub.get("tmdbid")
# 处理标题
title = sub.get("name")
season = sub.get("season")
if season and int(season) > 1 and media.tmdb_id:
# 将季数转换为中文小写数字(如 2 -> 二)
season_str = cn2an.an2cn(season, "low")
title = f"{title} 第{season_str}季"
media.title = title
media.year = sub.get("year")
media.douban_id = sub.get("doubanid")
media.bangumi_id = sub.get("bangumiid")
media.tvdb_id = sub.get("tvdbid")
media.imdb_id = sub.get("imdbid")
media.season = sub.get("season")
media.vote_average = sub.get("vote")
media.poster_path = sub.get("poster")
media.backdrop_path = sub.get("backdrop")
media.popularity = subscriber_count
ret_medias.append(media)
if not ret_medias:
return "未找到符合条件的热门订阅"
# 转换为字典格式,只保留关键信息
simplified_medias = []
for media in ret_medias:
media_dict = media.to_dict()
simplified = {
"type": media_dict.get("type"),
"title": media_dict.get("title"),
"year": media_dict.get("year"),
"tmdb_id": media_dict.get("tmdb_id"),
"douban_id": media_dict.get("douban_id"),
"bangumi_id": media_dict.get("bangumi_id"),
"tvdb_id": media_dict.get("tvdb_id"),
"imdb_id": media_dict.get("imdb_id"),
"season": media_dict.get("season"),
"vote_average": media_dict.get("vote_average"),
"poster_path": media_dict.get("poster_path"),
"backdrop_path": media_dict.get("backdrop_path"),
"popularity": media_dict.get("popularity"), # 订阅人数
"subscriber_count": media_dict.get("popularity") # 明确标注为订阅人数
}
simplified_medias.append(simplified)
result_json = json.dumps(simplified_medias, ensure_ascii=False, indent=2)
pagination_info = f"第 {page} 页,每页 {count} 条,本页 {len(simplified_medias)} 条结果"
return f"{pagination_info}\n\n{result_json}"
except Exception as e:
logger.error(f"查询热门订阅失败: {e}", exc_info=True)
return f"查询热门订阅时发生错误: {str(e)}"
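The tool above spells the season number in lowercase Chinese via `cn2an.an2cn(season, "low")` before appending it to the title. A rough stdlib-only sketch of that conversion for small season numbers (a hypothetical stand-in, not the real cn2an implementation, which handles arbitrary numbers):

```python
# Hypothetical stand-in for cn2an.an2cn(n, "low"): render a small Arabic
# numeral as lowercase Chinese digits, as used when titling TV seasons.
_CN_DIGITS = "零一二三四五六七八九"

def season_to_cn(n: int) -> str:
    """Convert a season number in 1-19 to lowercase Chinese (sketch only)."""
    if 0 < n < 10:
        return _CN_DIGITS[n]
    if n == 10:
        return "十"
    if 10 < n < 20:
        return "十" + _CN_DIGITS[n % 10]
    raise ValueError("sketch only covers 1-19")

print(season_to_cn(2))   # 二
print(season_to_cn(12))  # 十二
```

For real use the library call is preferable, since it also covers tens, hundreds, and mixed forms.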

View File

@@ -0,0 +1,65 @@
"""查询规则组工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.helper.rule import RuleHelper
from app.log import logger
class QueryRuleGroupsInput(BaseModel):
"""查询规则组工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
class QueryRuleGroupsTool(MoviePilotTool):
name: str = "query_rule_groups"
description: str = "Query all filter rule groups available in the system. Rule groups are used to filter torrents when searching or subscribing. Returns rule group names, media types, and categories, but excludes rule_string to keep results concise."
args_schema: Type[BaseModel] = QueryRuleGroupsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
return "正在查询所有规则组"
async def run(self, **kwargs) -> str:
logger.info(f"执行工具: {self.name}")
try:
rule_helper = RuleHelper()
rule_groups = rule_helper.get_rule_groups()
if not rule_groups:
return json.dumps({
"message": "未找到任何规则组",
"rule_groups": []
}, ensure_ascii=False, indent=2)
# 精简字段,过滤掉 rule_string 避免结果过大
simplified_groups = []
for group in rule_groups:
simplified = {
"name": group.name,
"media_type": group.media_type,
"category": group.category
}
simplified_groups.append(simplified)
result = {
"message": f"找到 {len(simplified_groups)} 个规则组",
"rule_groups": simplified_groups
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"查询规则组失败: {str(e)}"
logger.error(f"查询规则组失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"rule_groups": []
}, ensure_ascii=False)

View File

@@ -0,0 +1,55 @@
"""查询定时服务工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
from app.scheduler import Scheduler
class QuerySchedulersInput(BaseModel):
"""查询定时服务工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
class QuerySchedulersTool(MoviePilotTool):
name: str = "query_schedulers"
description: str = "Query scheduled tasks and list all available scheduler jobs. Shows job status, next run time, and provider information."
args_schema: Type[BaseModel] = QuerySchedulersInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""生成友好的提示消息"""
return "正在查询定时服务"
async def run(self, **kwargs) -> str:
logger.info(f"执行工具: {self.name}")
try:
scheduler = Scheduler()
schedulers = scheduler.list()
if schedulers:
# 转换为字典列表以便JSON序列化
schedulers_list = []
for s in schedulers:
schedulers_list.append({
"id": s.id,
"name": s.name,
"provider": s.provider,
"status": s.status,
"next_run": s.next_run
})
# 限制最多30条结果,避免占用过多上下文
total_count = len(schedulers_list)
if total_count > 30:
limited_json = json.dumps(schedulers_list[:30], ensure_ascii=False, indent=2)
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{limited_json}"
return json.dumps(schedulers_list, ensure_ascii=False, indent=2)
return "未找到定时服务"
except Exception as e:
logger.error(f"查询定时服务失败: {e}", exc_info=True)
return f"查询定时服务时发生错误: {str(e)}"
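Several of these tools cap their JSON output to keep the agent context small. The recurring pattern can be factored into a small helper; this is a sketch of the shared logic, not a helper that exists in the codebase:

```python
import json

def render_limited(items: list, limit: int = 30) -> str:
    """Serialize a result list, truncating to `limit` entries with a notice."""
    total = len(items)
    if total > limit:
        body = json.dumps(items[:limit], ensure_ascii=False, indent=2)
        return f"注意:查询结果共找到 {total} 条,仅显示前 {limit} 条结果。\n\n{body}"
    return json.dumps(items, ensure_ascii=False, indent=2)

print(render_limited([{"id": i} for i in range(3)], limit=2))
```

Truncating before serialization also avoids building the full JSON string only to discard most of it.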

View File

@@ -0,0 +1,136 @@
"""查询站点用户数据工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.models.site import Site
from app.db.models.siteuserdata import SiteUserData
from app.log import logger
class QuerySiteUserdataInput(BaseModel):
"""查询站点用户数据工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_id: int = Field(..., description="The ID of the site to query user data for")
workdate: Optional[str] = Field(None, description="Work date to query (optional, format: 'YYYY-MM-DD', if not specified returns latest data)")
class QuerySiteUserdataTool(MoviePilotTool):
name: str = "query_site_userdata"
description: str = "Query user data for a specific site including username, user level, upload/download statistics, seeding information, bonus points, and other account details. Supports querying data for a specific date or latest data."
args_schema: Type[BaseModel] = QuerySiteUserdataInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
site_id = kwargs.get("site_id")
workdate = kwargs.get("workdate")
message = f"正在查询站点 #{site_id} 的用户数据"
if workdate:
message += f" (日期: {workdate})"
else:
message += " (最新数据)"
return message
async def run(self, site_id: int, workdate: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_id={site_id}, workdate={workdate}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 获取站点
site = await Site.async_get(db, site_id)
if not site:
return json.dumps({
"success": False,
"message": f"站点不存在: {site_id}"
}, ensure_ascii=False)
# 获取站点用户数据
user_data_list = await SiteUserData.async_get_by_domain(
db,
domain=site.domain,
workdate=workdate
)
if not user_data_list:
return json.dumps({
"success": False,
"message": f"站点 {site.name} ({site.domain}) 暂无用户数据",
"site_id": site_id,
"site_name": site.name,
"site_domain": site.domain,
"workdate": workdate
}, ensure_ascii=False)
# 格式化用户数据
result = {
"success": True,
"site_id": site_id,
"site_name": site.name,
"site_domain": site.domain,
"workdate": workdate,
"data_count": len(user_data_list),
"user_data": []
}
for user_data in user_data_list:
# 格式化上传/下载量(转换为可读格式)
upload_gb = user_data.upload / (1024 ** 3) if user_data.upload else 0
download_gb = user_data.download / (1024 ** 3) if user_data.download else 0
seeding_size_gb = user_data.seeding_size / (1024 ** 3) if user_data.seeding_size else 0
leeching_size_gb = user_data.leeching_size / (1024 ** 3) if user_data.leeching_size else 0
user_data_dict = {
"domain": user_data.domain,
"name": user_data.name,
"username": user_data.username,
"userid": user_data.userid,
"user_level": user_data.user_level,
"join_at": user_data.join_at,
"bonus": user_data.bonus,
"upload": user_data.upload,
"upload_gb": round(upload_gb, 2),
"download": user_data.download,
"download_gb": round(download_gb, 2),
"ratio": round(user_data.ratio, 2) if user_data.ratio else 0,
"seeding": int(user_data.seeding) if user_data.seeding else 0,
"leeching": int(user_data.leeching) if user_data.leeching else 0,
"seeding_size": user_data.seeding_size,
"seeding_size_gb": round(seeding_size_gb, 2),
"leeching_size": user_data.leeching_size,
"leeching_size_gb": round(leeching_size_gb, 2),
"seeding_info": user_data.seeding_info if user_data.seeding_info else [],
"message_unread": user_data.message_unread,
"message_unread_contents": user_data.message_unread_contents if user_data.message_unread_contents else [],
"err_msg": user_data.err_msg,
"updated_day": user_data.updated_day,
"updated_time": user_data.updated_time
}
result["user_data"].append(user_data_dict)
# 如果有多条数据,只返回最新的(按更新时间排序)
if len(result["user_data"]) > 1:
result["user_data"].sort(
key=lambda x: (x.get("updated_day", ""), x.get("updated_time", "")),
reverse=True
)
result["message"] = f"找到 {len(result['user_data'])} 条数据,显示最新的一条"
result["user_data"] = [result["user_data"][0]]
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"查询站点用户数据失败: {str(e)}"
logger.error(f"查询站点用户数据失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"site_id": site_id
}, ensure_ascii=False)
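The upload/download figures above are converted from bytes to GB with a plain `1024 ** 3` division and rounded to two decimals; a minimal check of that conversion:

```python
def to_gb(nbytes: int) -> float:
    """Convert a byte count to gibibytes, rounded to 2 decimals (as the tool does)."""
    return round(nbytes / (1024 ** 3), 2) if nbytes else 0

print(to_gb(5 * 1024 ** 3))    # 5.0
print(to_gb(1_500_000_000))    # 1.4
print(to_gb(0))                # 0
```

Note the `if nbytes else 0` guard mirrors the tool's handling of `None`/zero fields from the database.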

View File

@@ -0,0 +1,82 @@
"""查询站点工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.site_oper import SiteOper
from app.log import logger
class QuerySitesInput(BaseModel):
"""查询站点工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
status: Optional[str] = Field("all",
description="Filter sites by status: 'active' for enabled sites, 'inactive' for disabled sites, 'all' for all sites")
name: Optional[str] = Field(None,
description="Filter sites by name (partial match, optional)")
class QuerySitesTool(MoviePilotTool):
name: str = "query_sites"
description: str = "Query site status and list all configured sites. Shows site name, domain, status, priority, and basic configuration. Site priority (pri): smaller values have higher priority (e.g., pri=1 has higher priority than pri=10)."
args_schema: Type[BaseModel] = QuerySitesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
status = kwargs.get("status", "all")
name = kwargs.get("name")
parts = ["正在查询站点"]
if status != "all":
status_map = {"active": "已启用", "inactive": "已禁用"}
parts.append(f"状态: {status_map.get(status, status)}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, status: Optional[str] = "all", name: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: status={status}, name={name}")
try:
site_oper = SiteOper()
# 获取所有站点(按优先级排序)
sites = await site_oper.async_list()
filtered_sites = []
for site in sites:
# 按状态过滤
if status == "active" and not site.is_active:
continue
if status == "inactive" and site.is_active:
continue
# 按名称过滤(部分匹配)
if name and name.lower() not in (site.name or "").lower():
continue
filtered_sites.append(site)
if filtered_sites:
# 精简字段,只保留关键信息
simplified_sites = []
for s in filtered_sites:
simplified = {
"id": s.id,
"name": s.name,
"domain": s.domain,
"url": s.url,
"pri": s.pri,
"is_active": s.is_active,
"downloader": s.downloader,
"proxy": s.proxy,
"timeout": s.timeout
}
simplified_sites.append(simplified)
result_json = json.dumps(simplified_sites, ensure_ascii=False, indent=2)
return result_json
return "未找到相关站点"
except Exception as e:
logger.error(f"查询站点失败: {e}", exc_info=True)
return f"查询站点时发生错误: {str(e)}"

View File

@@ -0,0 +1,113 @@
"""查询订阅历史工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.models.subscribehistory import SubscribeHistory
from app.log import logger
class QuerySubscribeHistoryInput(BaseModel):
"""查询订阅历史工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
media_type: Optional[str] = Field("all", description="Filter by media type: '电影' for films, '电视剧' for television series, 'all' for all types (default: 'all')")
name: Optional[str] = Field(None, description="Filter by media name (partial match, optional)")
class QuerySubscribeHistoryTool(MoviePilotTool):
name: str = "query_subscribe_history"
description: str = "Query subscription history records. Shows completed subscriptions with their details including name, type, rating, completion date, and other subscription information. Supports filtering by media type and name. Returns up to 30 records."
args_schema: Type[BaseModel] = QuerySubscribeHistoryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
media_type = kwargs.get("media_type", "all")
name = kwargs.get("name")
parts = ["正在查询订阅历史"]
if media_type != "all":
parts.append(f"类型: {media_type}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, media_type: Optional[str] = "all",
name: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: media_type={media_type}, name={name}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 根据类型查询
if media_type == "all":
# 查询所有类型,需要分别查询电影和电视剧
movie_history = await SubscribeHistory.async_list_by_type(db, mtype="电影", page=1, count=100)
tv_history = await SubscribeHistory.async_list_by_type(db, mtype="电视剧", page=1, count=100)
all_history = list(movie_history) + list(tv_history)
# 按日期排序
all_history.sort(key=lambda x: x.date or "", reverse=True)
else:
# 查询指定类型
all_history = await SubscribeHistory.async_list_by_type(db, mtype=media_type, page=1, count=100)
# 按名称过滤
filtered_history = []
if name:
name_lower = name.lower()
for record in all_history:
if record.name and name_lower in record.name.lower():
filtered_history.append(record)
else:
filtered_history = all_history
if not filtered_history:
return "未找到相关订阅历史记录"
# 限制最多30条
total_count = len(filtered_history)
limited_history = filtered_history[:30]
# 转换为字典格式,只保留关键信息
simplified_records = []
for record in limited_history:
simplified = {
"id": record.id,
"name": record.name,
"year": record.year,
"type": record.type,
"season": record.season,
"tmdbid": record.tmdbid,
"doubanid": record.doubanid,
"bangumiid": record.bangumiid,
"poster": record.poster,
"vote": record.vote,
"total_episode": record.total_episode,
"date": record.date,
"username": record.username
}
# 添加过滤规则信息(如果有)
if record.filter:
simplified["filter"] = record.filter
if record.quality:
simplified["quality"] = record.quality
if record.resolution:
simplified["resolution"] = record.resolution
simplified_records.append(simplified)
result_json = json.dumps(simplified_records, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 30:
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
return result_json
except Exception as e:
logger.error(f"查询订阅历史失败: {e}", exc_info=True)
return f"查询订阅历史时发生错误: {str(e)}"

View File

@@ -0,0 +1,113 @@
"""查询订阅分享工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.helper.subscribe import SubscribeHelper
from app.log import logger
class QuerySubscribeSharesInput(BaseModel):
"""查询订阅分享工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
name: Optional[str] = Field(None, description="Filter shares by media name (partial match, optional)")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1)")
count: Optional[int] = Field(30, description="Number of items per page (default: 30)")
genre_id: Optional[int] = Field(None, description="Filter by genre ID (optional)")
min_rating: Optional[float] = Field(None, description="Minimum rating filter (optional, e.g., 7.5)")
max_rating: Optional[float] = Field(None, description="Maximum rating filter (optional, e.g., 10.0)")
sort_type: Optional[str] = Field(None, description="Sort type (optional, e.g., 'count', 'rating')")
class QuerySubscribeSharesTool(MoviePilotTool):
name: str = "query_subscribe_shares"
description: str = "Query shared subscriptions from other users. Shows popular subscriptions shared by the community with filtering and pagination support."
args_schema: Type[BaseModel] = QuerySubscribeSharesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
name = kwargs.get("name")
page = kwargs.get("page", 1)
min_rating = kwargs.get("min_rating")
max_rating = kwargs.get("max_rating")
parts = ["正在查询订阅分享"]
if name:
parts.append(f"名称: {name}")
if min_rating:
parts.append(f"最低评分: {min_rating}")
if max_rating:
parts.append(f"最高评分: {max_rating}")
if page > 1:
parts.append(f"第 {page} 页")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: name={name}, page={page}, count={count}, genre_id={genre_id}, "
f"min_rating={min_rating}, max_rating={max_rating}, sort_type={sort_type}")
try:
if page is None or page < 1:
page = 1
if count is None or count < 1:
count = 30
subscribe_helper = SubscribeHelper()
shares = await subscribe_helper.async_get_shares(
name=name,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
if not shares:
return "未找到订阅分享数据(可能订阅分享功能未启用)"
# 简化字段,只保留关键信息
simplified_shares = []
for share in shares:
simplified = {
"id": share.get("id"),
"name": share.get("name"),
"year": share.get("year"),
"type": share.get("type"),
"season": share.get("season"),
"tmdbid": share.get("tmdbid"),
"doubanid": share.get("doubanid"),
"bangumiid": share.get("bangumiid"),
"poster": share.get("poster"),
"vote": share.get("vote"),
"share_title": share.get("share_title"),
"share_comment": share.get("share_comment"),
"share_user": share.get("share_user"),
"fork_count": share.get("fork_count", 0)
}
# 截断过长的分享说明
if simplified.get("share_comment") and len(simplified["share_comment"]) > 200:
simplified["share_comment"] = simplified["share_comment"][:200] + "..."
simplified_shares.append(simplified)
result_json = json.dumps(simplified_shares, ensure_ascii=False, indent=2)
pagination_info = f"第 {page} 页,每页 {count} 条,本页 {len(simplified_shares)} 条结果"
return f"{pagination_info}\n\n{result_json}"
except Exception as e:
logger.error(f"查询订阅分享失败: {e}", exc_info=True)
return f"查询订阅分享时发生错误: {str(e)}"

View File

@@ -0,0 +1,90 @@
"""查询订阅工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.subscribe_oper import SubscribeOper
from app.log import logger
class QuerySubscribesInput(BaseModel):
"""查询订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
status: Optional[str] = Field("all",
description="Filter subscriptions by status: 'R' for enabled subscriptions, 'S' for paused ones, 'all' for all subscriptions")
media_type: Optional[str] = Field("all",
description="Filter by media type: '电影' for films, '电视剧' for television series, 'all' for all types")
class QuerySubscribesTool(MoviePilotTool):
name: str = "query_subscribes"
description: str = "Query subscription status and list all user subscriptions. Shows active subscriptions, their download status, and configuration details."
args_schema: Type[BaseModel] = QuerySubscribesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
status = kwargs.get("status", "all")
media_type = kwargs.get("media_type", "all")
parts = ["正在查询订阅"]
# 根据状态过滤条件生成提示
if status != "all":
status_map = {"R": "已启用", "S": "已暂停"}
parts.append(f"状态: {status_map.get(status, status)}")
# 根据媒体类型过滤条件生成提示
if media_type != "all":
parts.append(f"类型: {media_type}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, status: Optional[str] = "all", media_type: Optional[str] = "all", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: status={status}, media_type={media_type}")
try:
subscribe_oper = SubscribeOper()
subscribes = await subscribe_oper.async_list()
filtered_subscribes = []
for sub in subscribes:
if status != "all" and sub.state != status:
continue
if media_type != "all" and sub.type != media_type:
continue
filtered_subscribes.append(sub)
if filtered_subscribes:
# 限制最多50条结果
total_count = len(filtered_subscribes)
limited_subscribes = filtered_subscribes[:50]
# 精简字段,只保留关键信息
simplified_subscribes = []
for s in limited_subscribes:
simplified = {
"id": s.id,
"name": s.name,
"year": s.year,
"type": s.type,
"season": s.season,
"tmdbid": s.tmdbid,
"doubanid": s.doubanid,
"bangumiid": s.bangumiid,
"poster": s.poster,
"vote": s.vote,
"state": s.state,
"total_episode": s.total_episode,
"lack_episode": s.lack_episode,
"last_update": s.last_update,
"username": s.username
}
simplified_subscribes.append(simplified)
result_json = json.dumps(simplified_subscribes, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 50:
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 50 条结果。\n\n{result_json}"
return result_json
return "未找到相关订阅"
except Exception as e:
logger.error(f"查询订阅失败: {e}", exc_info=True)
return f"查询订阅时发生错误: {str(e)}"

View File

@@ -0,0 +1,133 @@
"""查询整理历史记录工具"""
import json
from typing import Optional, Type
import jieba
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.models.transferhistory import TransferHistory
from app.log import logger
class QueryTransferHistoryInput(BaseModel):
"""查询整理历史记录工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: Optional[str] = Field(None, description="Search by title (optional, supports partial match)")
status: Optional[str] = Field("all",
description="Filter by status: 'success' for successful transfers, 'failed' for failed transfers, 'all' for all records (default: 'all')")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1, each page contains 50 records)")
class QueryTransferHistoryTool(MoviePilotTool):
name: str = "query_transfer_history"
description: str = "Query file transfer history records. Shows transfer status, source and destination paths, media information, and transfer details. Supports filtering by title and status."
args_schema: Type[BaseModel] = QueryTransferHistoryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
title = kwargs.get("title")
status = kwargs.get("status", "all")
page = kwargs.get("page", 1)
parts = ["正在查询整理历史"]
if title:
parts.append(f"标题: {title}")
if status != "all":
status_map = {"success": "成功", "failed": "失败"}
parts.append(f"状态: {status_map.get(status, status)}")
if page > 1:
parts.append(f"第 {page} 页")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, title: Optional[str] = None,
status: Optional[str] = "all",
page: Optional[int] = 1, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: title={title}, status={status}, page={page}")
try:
# 处理状态参数
status_bool = None
if status == "success":
status_bool = True
elif status == "failed":
status_bool = False
# 处理页码参数
if page is None or page < 1:
page = 1
# 每页记录数
count = 50
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 处理标题搜索
if title:
# 使用 jieba 分词处理标题
words = jieba.cut(title, HMM=False)
title_search = "%".join(words)
# 查询记录
result = await TransferHistory.async_list_by_title(
db, title=title_search, page=page, count=count, status=status_bool
)
total = await TransferHistory.async_count_by_title(
db, title=title_search, status=status_bool
)
else:
# 查询所有记录
result = await TransferHistory.async_list_by_page(
db, page=page, count=count, status=status_bool
)
total = await TransferHistory.async_count(db, status=status_bool)
if not result:
return "未找到相关整理历史记录"
# 转换为字典格式,只保留关键信息
simplified_records = []
for record in result:
simplified = {
"id": record.id,
"title": record.title,
"year": record.year,
"type": record.type,
"category": record.category,
"seasons": record.seasons,
"episodes": record.episodes,
"src": record.src,
"dest": record.dest,
"mode": record.mode,
"status": "成功" if record.status else "失败",
"date": record.date,
"downloader": record.downloader,
"download_hash": record.download_hash
}
# 如果失败,添加错误信息
if not record.status and record.errmsg:
simplified["errmsg"] = record.errmsg
# 添加媒体ID信息如果有
if record.tmdbid:
simplified["tmdbid"] = record.tmdbid
if record.imdbid:
simplified["imdbid"] = record.imdbid
if record.doubanid:
simplified["doubanid"] = record.doubanid
simplified_records.append(simplified)
result_json = json.dumps(simplified_records, ensure_ascii=False, indent=2)
# 计算总页数
total_pages = (total + count - 1) // count if total > 0 else 1
# 构建分页信息
pagination_info = f"第 {page}/{total_pages} 页,共 {total} 条记录(每页 {count} 条)"
return f"{pagination_info}\n\n{result_json}"
except Exception as e:
logger.error(f"查询整理历史记录失败: {e}", exc_info=True)
return f"查询整理历史记录时发生错误: {str(e)}"
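Two small pieces of the tool above are worth isolating: the page count comes from integer ceiling division, `(total + count - 1) // count`, and the title search joins jieba tokens with `%` to build a SQL LIKE pattern. Both in a runnable sketch (here `like_pattern` uses a naive whitespace split as a stand-in for jieba's segmentation):

```python
def total_pages(total: int, count: int) -> int:
    """Ceiling division: number of pages needed for `total` records."""
    return (total + count - 1) // count if total > 0 else 1

def like_pattern(title: str) -> str:
    """Join tokens with '%' for a SQL LIKE search (jieba does real word segmentation)."""
    return "%".join(title.split())

print(total_pages(101, 50))            # 3
print(total_pages(100, 50))            # 2
print(total_pages(0, 50))              # 1
print(like_pattern("The Last of Us"))  # The%Last%of%Us
```

The `%`-joined pattern matches titles containing the tokens in order with anything between them, which is why segmentation quality (jieba with `HMM=False` in the tool) directly affects recall.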

View File

@@ -0,0 +1,128 @@
"""查询工作流工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.workflow_oper import WorkflowOper
from app.log import logger
class QueryWorkflowsInput(BaseModel):
"""查询工作流工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
state: Optional[str] = Field("all", description="Filter workflows by state: 'W' for waiting, 'R' for running, 'P' for paused, 'S' for success, 'F' for failed, 'all' for all workflows (default: 'all')")
name: Optional[str] = Field(None, description="Filter workflows by name (partial match, optional)")
trigger_type: Optional[str] = Field("all", description="Filter workflows by trigger type: 'timer' for scheduled, 'event' for event-triggered, 'manual' for manual, 'all' for all types (default: 'all')")
class QueryWorkflowsTool(MoviePilotTool):
name: str = "query_workflows"
description: str = "Query workflow list and status. Shows workflow name, description, trigger type, state, execution count, and other workflow details. Supports filtering by state, name, and trigger type."
args_schema: Type[BaseModel] = QueryWorkflowsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
state = kwargs.get("state", "all")
name = kwargs.get("name")
trigger_type = kwargs.get("trigger_type", "all")
parts = ["正在查询工作流"]
if state != "all":
state_map = {"W": "等待", "R": "运行中", "P": "暂停", "S": "成功", "F": "失败"}
parts.append(f"状态: {state_map.get(state, state)}")
if trigger_type != "all":
trigger_map = {"timer": "定时触发", "event": "事件触发", "manual": "手动触发"}
parts.append(f"触发类型: {trigger_map.get(trigger_type, trigger_type)}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, state: Optional[str] = "all",
name: Optional[str] = None,
trigger_type: Optional[str] = "all", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: state={state}, name={name}, trigger_type={trigger_type}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
workflow_oper = WorkflowOper(db)
workflows = await workflow_oper.async_list()
# 过滤工作流
filtered_workflows = []
for wf in workflows:
# 按状态过滤
if state != "all" and wf.state != state:
continue
# 按触发类型过滤
if trigger_type != "all":
if trigger_type == "timer" and wf.trigger_type not in ["timer", None]:
continue
elif trigger_type == "event" and wf.trigger_type != "event":
continue
elif trigger_type == "manual" and wf.trigger_type != "manual":
continue
# 按名称过滤(部分匹配)
if name and wf.name and name.lower() not in wf.name.lower():
continue
filtered_workflows.append(wf)
if not filtered_workflows:
return "未找到相关工作流"
# 转换为字典格式,只保留关键信息
simplified_workflows = []
for wf in filtered_workflows:
# 状态说明
state_map = {
"W": "等待",
"R": "运行中",
"P": "暂停",
"S": "成功",
"F": "失败"
}
state_desc = state_map.get(wf.state, wf.state)
# 触发类型说明
trigger_type_map = {
"timer": "定时触发",
"event": "事件触发",
"manual": "手动触发"
}
trigger_type_desc = trigger_type_map.get(wf.trigger_type, wf.trigger_type or "定时触发")
simplified = {
"id": wf.id,
"name": wf.name,
"description": wf.description,
"trigger_type": trigger_type_desc,
"state": state_desc,
"run_count": wf.run_count,
"timer": wf.timer,
"event_type": wf.event_type,
"add_time": wf.add_time,
"last_time": wf.last_time,
"current_action": wf.current_action
}
# 如果有结果,添加结果信息
if wf.result:
simplified["result"] = wf.result
simplified_workflows.append(simplified)
result_json = json.dumps(simplified_workflows, ensure_ascii=False, indent=2)
return result_json
except Exception as e:
logger.error(f"查询工作流失败: {e}", exc_info=True)
return f"查询工作流时发生错误: {str(e)}"

View File

@@ -0,0 +1,162 @@
"""识别媒体信息工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.log import logger
class RecognizeMediaInput(BaseModel):
"""识别媒体信息工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: Optional[str] = Field(None, description="The title of the torrent/media to recognize (required for torrent recognition)")
subtitle: Optional[str] = Field(None, description="The subtitle or description of the torrent (optional, helps improve recognition accuracy)")
path: Optional[str] = Field(None, description="The file path to recognize (required for file recognition, mutually exclusive with title)")
class RecognizeMediaTool(MoviePilotTool):
name: str = "recognize_media"
description: str = "Extract/identify media information from torrent titles or file paths (NOT database search). Supports two modes: 1) Extract from torrent title and optional subtitle, 2) Extract from file path. Returns detailed media information. Use 'search_media' to search TMDB database, or 'scrape_metadata' to generate metadata files for existing files."
args_schema: Type[BaseModel] = RecognizeMediaInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据识别参数生成友好的提示消息"""
title = kwargs.get("title")
subtitle = kwargs.get("subtitle")
path = kwargs.get("path")
if path:
message = f"正在识别文件媒体信息: {path}"
elif title:
message = f"正在识别种子媒体信息: {title}"
if subtitle:
message += f" ({subtitle})"
else:
message = "正在识别媒体信息"
return message
async def run(self, title: Optional[str] = None, subtitle: Optional[str] = None,
path: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: title={title}, subtitle={subtitle}, path={path}")
try:
media_chain = MediaChain()
context = None
# 根据提供的参数选择识别方式
if path:
# 文件路径识别
context = await media_chain.async_recognize_by_path(path)
if context:
return self._format_context_result(context, "文件")
else:
return json.dumps({
"success": False,
"message": f"无法识别文件媒体信息: {path}",
"path": path
}, ensure_ascii=False)
elif title:
# 种子标题识别
metainfo = MetaInfo(title, subtitle)
mediainfo = await media_chain.async_recognize_by_meta(metainfo)
if mediainfo:
context = Context(meta_info=metainfo, media_info=mediainfo)
return self._format_context_result(context, "种子")
else:
return json.dumps({
"success": False,
"message": f"无法识别种子媒体信息: {title}",
"title": title,
"subtitle": subtitle
}, ensure_ascii=False)
else:
return json.dumps({
"success": False,
"message": "必须提供 title(标题)或 path(文件路径)参数之一"
}, ensure_ascii=False)
except Exception as e:
error_message = f"识别媒体信息失败: {str(e)}"
logger.error(f"识别媒体信息失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message
}, ensure_ascii=False)
def _format_context_result(self, context: Context, source_type: str) -> str:
"""格式化识别结果为JSON字符串"""
if not context:
return json.dumps({
"success": False,
"message": "识别结果为空"
}, ensure_ascii=False)
context_dict = context.to_dict()
media_info = context_dict.get("media_info")
meta_info = context_dict.get("meta_info")
# 构建简化的结果
result = {
"success": True,
"source_type": source_type,
"media_info": None,
"meta_info": None
}
# 处理媒体信息
if media_info:
result["media_info"] = {
"title": media_info.get("title"),
"en_title": media_info.get("en_title"),
"year": media_info.get("year"),
"type": media_info.get("type"),
"season": media_info.get("season"),
"tmdb_id": media_info.get("tmdb_id"),
"imdb_id": media_info.get("imdb_id"),
"douban_id": media_info.get("douban_id"),
"bangumi_id": media_info.get("bangumi_id"),
"overview": media_info.get("overview"),
"vote_average": media_info.get("vote_average"),
"poster_path": media_info.get("poster_path"),
"backdrop_path": media_info.get("backdrop_path"),
"detail_link": media_info.get("detail_link"),
"title_year": media_info.get("title_year"),
"source": media_info.get("source")
}
# 处理元数据信息
if meta_info:
result["meta_info"] = {
"name": meta_info.get("name"),
"title": meta_info.get("title"),
"year": meta_info.get("year"),
"type": meta_info.get("type"),
"begin_season": meta_info.get("begin_season"),
"end_season": meta_info.get("end_season"),
"begin_episode": meta_info.get("begin_episode"),
"end_episode": meta_info.get("end_episode"),
"total_episode": meta_info.get("total_episode"),
"part": meta_info.get("part"),
"season_episode": meta_info.get("season_episode"),
"episode_list": meta_info.get("episode_list"),
"tmdbid": meta_info.get("tmdbid"),
"doubanid": meta_info.get("doubanid")
}
return json.dumps(result, ensure_ascii=False, indent=2)
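The branch order in `run()` means path recognition wins when both arguments are supplied. A hypothetical stand-in (not part of the module) showing that priority and the error payload for missing arguments:

```python
import json
from typing import Optional

def recognize_dispatch(title: Optional[str] = None, path: Optional[str] = None) -> str:
    # Hypothetical sketch of run()'s dispatch: path takes priority over title,
    # and missing both yields the same kind of error payload the tool returns.
    if path:
        return json.dumps({"mode": "file", "path": path}, ensure_ascii=False)
    if title:
        return json.dumps({"mode": "torrent", "title": title}, ensure_ascii=False)
    return json.dumps({"success": False,
                       "message": "必须提供 title(标题)或 path(文件路径)参数之一"},
                      ensure_ascii=False)
```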


@@ -0,0 +1,53 @@
"""运行定时服务工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
from app.scheduler import Scheduler
class RunSchedulerInput(BaseModel):
"""运行定时服务工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
job_id: str = Field(..., description="The ID of the scheduled job to run (can be obtained from query_schedulers tool)")
class RunSchedulerTool(MoviePilotTool):
name: str = "run_scheduler"
description: str = "Manually trigger a scheduled task to run immediately. This will execute the specified scheduler job by its ID."
args_schema: Type[BaseModel] = RunSchedulerInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据运行参数生成友好的提示消息"""
job_id = kwargs.get("job_id", "")
return f"正在运行定时服务 (ID: {job_id})"
async def run(self, job_id: str, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: job_id={job_id}")
try:
scheduler = Scheduler()
# 检查定时服务是否存在
schedulers = scheduler.list()
job_exists = False
job_name = None
for s in schedulers:
if s.id == job_id:
job_exists = True
job_name = s.name
break
if not job_exists:
return f"定时服务 ID {job_id} 不存在,请使用 query_schedulers 工具查询可用的定时服务"
# 运行定时服务
scheduler.start(job_id)
return f"成功触发定时服务:{job_name} (ID: {job_id})"
except Exception as e:
logger.error(f"运行定时服务失败: {e}", exc_info=True)
return f"运行定时服务时发生错误: {str(e)}"
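The manual flag-and-break loop that locates the job could equally be written with `next()` and a default. A self-contained sketch with stand-in `Job` objects (the real `Scheduler.list()` entries expose `id`/`name` the same way):

```python
from collections import namedtuple

# Stand-in for scheduler job entries; only id/name matter for the lookup.
Job = namedtuple("Job", ["id", "name"])

def find_job(schedulers, job_id):
    # next() with a default of None replaces the manual flag-and-break loop
    return next((s for s in schedulers if s.id == job_id), None)

jobs = [Job("sync", "同步任务"), Job("scrape", "刮削任务")]
print(find_job(jobs, "scrape"))
# → Job(id='scrape', name='刮削任务')
```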


@@ -0,0 +1,72 @@
"""执行工作流工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.workflow import WorkflowChain
from app.db import AsyncSessionFactory
from app.db.workflow_oper import WorkflowOper
from app.log import logger
class RunWorkflowInput(BaseModel):
"""执行工作流工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
workflow_identifier: str = Field(..., description="Workflow identifier: can be workflow ID (integer as string) or workflow name")
from_begin: Optional[bool] = Field(True, description="Whether to run workflow from the beginning (default: True, if False will continue from last executed action)")
class RunWorkflowTool(MoviePilotTool):
name: str = "run_workflow"
description: str = "Execute a specific workflow manually. Can run workflow by ID or name. Supports running from the beginning or continuing from the last executed action."
args_schema: Type[BaseModel] = RunWorkflowInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据工作流参数生成友好的提示消息"""
workflow_identifier = kwargs.get("workflow_identifier", "")
from_begin = kwargs.get("from_begin", True)
message = f"正在执行工作流: {workflow_identifier}"
if not from_begin:
message += " (从上次位置继续)"
else:
message += " (从头开始)"
return message
async def run(self, workflow_identifier: str,
from_begin: Optional[bool] = True, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: workflow_identifier={workflow_identifier}, from_begin={from_begin}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
workflow_oper = WorkflowOper(db)
# 尝试解析为工作流ID
workflow = None
if workflow_identifier.isdigit():
# 如果是数字,尝试作为工作流ID查询
workflow = await workflow_oper.async_get(int(workflow_identifier))
# 如果不是ID或ID查询失败,尝试按名称查询
if not workflow:
workflow = await workflow_oper.async_get_by_name(workflow_identifier)
if not workflow:
return f"未找到工作流:{workflow_identifier},请使用 query_workflows 工具查询可用的工作流"
# 执行工作流
workflow_chain = WorkflowChain()
state, errmsg = workflow_chain.process(workflow.id, from_begin=from_begin)
if not state:
return f"执行工作流失败:{workflow.name} (ID: {workflow.id})\n错误原因:{errmsg}"
else:
return f"工作流执行成功:{workflow.name} (ID: {workflow.id})"
except Exception as e:
logger.error(f"执行工作流失败: {e}", exc_info=True)
return f"执行工作流时发生错误: {str(e)}"


@@ -0,0 +1,119 @@
"""刮削媒体元数据工具"""
import json
from pathlib import Path
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.core.config import global_vars
from app.core.metainfo import MetaInfoPath
from app.log import logger
from app.schemas import FileItem
class ScrapeMetadataInput(BaseModel):
"""刮削媒体元数据工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
path: str = Field(...,
description="Path to the file or directory to scrape metadata for (e.g., '/path/to/file.mkv' or '/path/to/directory')")
storage: Optional[str] = Field("local",
description="Storage type: 'local' for local storage, 'smb', 'alist', etc. for remote storage (default: 'local')")
overwrite: Optional[bool] = Field(False,
description="Whether to overwrite existing metadata files (default: False)")
class ScrapeMetadataTool(MoviePilotTool):
name: str = "scrape_metadata"
description: str = "Generate metadata files (NFO files, posters, backgrounds, etc.) for existing media files or directories. Automatically recognizes media information from the file path and creates metadata files. Supports both local and remote storage. Use 'search_media' to search TMDB database, or 'recognize_media' to extract info from torrent titles/file paths without generating files."
args_schema: Type[BaseModel] = ScrapeMetadataInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据刮削参数生成友好的提示消息"""
path = kwargs.get("path", "")
storage = kwargs.get("storage", "local")
overwrite = kwargs.get("overwrite", False)
message = f"正在刮削媒体元数据: {path}"
if storage != "local":
message += f" [存储: {storage}]"
if overwrite:
message += " [覆盖模式]"
return message
async def run(self, path: str, storage: Optional[str] = "local",
overwrite: Optional[bool] = False, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: path={path}, storage={storage}, overwrite={overwrite}")
try:
# 验证路径
if not path:
return json.dumps({
"success": False,
"message": "刮削路径不能为空"
}, ensure_ascii=False)
# 创建 FileItem
fileitem = FileItem(
storage=storage,
path=path,
type="file" if Path(path).suffix else "dir"
)
scrape_path = Path(path)
# 检查本地存储路径是否存在
if storage == "local" and not scrape_path.exists():
return json.dumps({
"success": False,
"message": f"刮削路径不存在: {path}"
}, ensure_ascii=False)
# 识别媒体信息
media_chain = MediaChain()
meta = MetaInfoPath(scrape_path)
mediainfo = await media_chain.async_recognize_by_meta(meta)
if not mediainfo:
return json.dumps({
"success": False,
"message": f"刮削失败,无法识别媒体信息: {path}",
"path": path
}, ensure_ascii=False)
# 在线程池中执行同步的刮削操作
await global_vars.loop.run_in_executor(
None,
lambda: media_chain.scrape_metadata(
fileitem=fileitem,
meta=meta,
mediainfo=mediainfo,
overwrite=overwrite
)
)
return json.dumps({
"success": True,
"message": f"{path} 刮削完成",
"path": path,
"media_info": {
"title": mediainfo.title,
"year": mediainfo.year,
"type": mediainfo.type.value if mediainfo.type else None,
"tmdb_id": mediainfo.tmdb_id,
"season": mediainfo.season
}
}, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"刮削媒体元数据失败: {str(e)}"
logger.error(f"刮削媒体元数据失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"path": path
}, ensure_ascii=False)
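The `FileItem` type is inferred purely from the path suffix. A small sketch of that heuristic, noting its blind spots:

```python
from pathlib import Path

def guess_type(path: str) -> str:
    # Same suffix heuristic as the FileItem construction above; note it will
    # misclassify extensionless files and directories whose names contain a dot.
    return "file" if Path(path).suffix else "dir"

print(guess_type("/media/movie.mkv"), guess_type("/media/Season 01"))
# → file dir
```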


@@ -0,0 +1,104 @@
"""搜索媒体工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.log import logger
from app.schemas.types import MediaType
class SearchMediaInput(BaseModel):
"""搜索媒体工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: str = Field(..., description="The title of the media to search for (e.g., 'The Matrix', 'Breaking Bad')")
year: Optional[str] = Field(None, description="Release year of the media (optional, helps narrow down results)")
media_type: Optional[str] = Field(None,
description="Type of media content: '电影' for films, '电视剧' for television series or anime series")
season: Optional[int] = Field(None,
description="Season number for TV shows and anime (optional, only applicable for series)")
class SearchMediaTool(MoviePilotTool):
name: str = "search_media"
description: str = "Search TMDB database for media resources (movies, TV shows, anime, etc.) by title, year, type, and other criteria. Returns detailed media information from TMDB. Use 'recognize_media' to extract info from torrent titles/file paths, or 'scrape_metadata' to generate metadata files."
args_schema: Type[BaseModel] = SearchMediaInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
title = kwargs.get("title", "")
year = kwargs.get("year")
media_type = kwargs.get("media_type")
season = kwargs.get("season")
message = f"正在搜索媒体: {title}"
if year:
message += f" ({year})"
if media_type:
message += f" [{media_type}]"
if season:
message += f" 第{season}季"
return message
async def run(self, title: str, year: Optional[str] = None,
media_type: Optional[str] = None, season: Optional[int] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: title={title}, year={year}, media_type={media_type}, season={season}")
try:
media_chain = MediaChain()
# 使用 MediaChain.search 方法
meta, results = await media_chain.async_search(title=title)
# 过滤结果
if results:
filtered_results = []
for result in results:
if year and result.year != year:
continue
if media_type:
if result.type != MediaType(media_type):
continue
if season and result.season != season:
continue
filtered_results.append(result)
if filtered_results:
# 限制最多30条结果
total_count = len(filtered_results)
limited_results = filtered_results[:30]
# 精简字段,只保留关键信息
simplified_results = []
for r in limited_results:
simplified = {
"title": r.title,
"en_title": r.en_title,
"year": r.year,
"type": r.type.value if r.type else None,
"season": r.season,
"tmdb_id": r.tmdb_id,
"imdb_id": r.imdb_id,
"douban_id": r.douban_id,
"overview": r.overview[:200] + "..." if r.overview and len(r.overview) > 200 else r.overview,
"vote_average": r.vote_average,
"poster_path": r.poster_path,
"detail_link": r.detail_link
}
simplified_results.append(simplified)
result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 30:
return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
return result_json
else:
return f"未找到符合条件的媒体资源: {title}"
else:
return f"未找到相关媒体资源: {title}"
except Exception as e:
error_message = f"搜索媒体失败: {str(e)}"
logger.error(f"搜索媒体失败: {e}", exc_info=True)
return error_message
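The inline overview-truncation expression recurs across several of these tools. Pulled out as a helper (hypothetical, not in the source), its behavior is:

```python
from typing import Optional

def truncate_overview(text: Optional[str], limit: int = 200) -> Optional[str]:
    # Same behavior as the inline conditional: None and short strings pass
    # through, long strings are cut at the limit and marked with an ellipsis.
    if text and len(text) > limit:
        return text[:limit] + "..."
    return text

assert truncate_overview(None) is None
assert truncate_overview("x" * 300) == "x" * 200 + "..."
```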


@@ -0,0 +1,83 @@
"""搜索人物工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.log import logger
class SearchPersonInput(BaseModel):
"""搜索人物工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
name: str = Field(..., description="The name of the person to search for (e.g., 'Tom Hanks', '周杰伦')")
class SearchPersonTool(MoviePilotTool):
name: str = "search_person"
description: str = "Search for person information including actors, directors, etc. Supports searching by name. Returns detailed person information from TMDB, Douban, or Bangumi database."
args_schema: Type[BaseModel] = SearchPersonInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
name = kwargs.get("name", "")
return f"正在搜索人物: {name}"
async def run(self, name: str, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: name={name}")
try:
media_chain = MediaChain()
# 使用 MediaChain.async_search_persons 方法搜索人物
persons = await media_chain.async_search_persons(name=name)
if persons:
# 限制最多30条结果
total_count = len(persons)
limited_persons = persons[:30]
# 精简字段,只保留关键信息
simplified_results = []
for person in limited_persons:
simplified = {
"name": person.name,
"id": person.id,
"source": person.source,
"profile_path": person.profile_path,
"original_name": person.original_name,
"known_for_department": person.known_for_department,
"popularity": person.popularity,
"biography": person.biography[:200] + "..." if person.biography and len(person.biography) > 200 else person.biography,
"birthday": person.birthday,
"deathday": person.deathday,
"place_of_birth": person.place_of_birth,
"gender": person.gender,
"imdb_id": person.imdb_id,
"also_known_as": person.also_known_as[:5] if person.also_known_as else [], # 限制别名数量
}
# 添加豆瓣特有字段
if person.source == "douban":
simplified["url"] = person.url
simplified["avatar"] = person.avatar
simplified["latin_name"] = person.latin_name
simplified["roles"] = person.roles[:5] if person.roles else [] # 限制角色数量
# 添加Bangumi特有字段
if person.source == "bangumi":
simplified["career"] = person.career
simplified["relation"] = person.relation
simplified_results.append(simplified)
result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 30:
return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
return result_json
else:
return f"未找到相关人物信息: {name}"
except Exception as e:
error_message = f"搜索人物失败: {str(e)}"
logger.error(f"搜索人物失败: {e}", exc_info=True)
return error_message


@@ -0,0 +1,85 @@
"""搜索演员参演作品工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.chain.bangumi import BangumiChain
from app.log import logger
class SearchPersonCreditsInput(BaseModel):
"""搜索演员参演作品工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
person_id: int = Field(..., description="The ID of the person/actor to search for credits (e.g., 31 for Tom Hanks in TMDB)")
source: str = Field(..., description="The data source: 'tmdb' for TheMovieDB, 'douban' for Douban, 'bangumi' for Bangumi")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1)")
class SearchPersonCreditsTool(MoviePilotTool):
name: str = "search_person_credits"
description: str = "Search for films and TV shows that a person/actor has appeared in (filmography). Supports searching by person ID from TMDB, Douban, or Bangumi database. Returns a list of media works the person has participated in."
args_schema: Type[BaseModel] = SearchPersonCreditsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
person_id = kwargs.get("person_id", "")
source = kwargs.get("source", "")
return f"正在搜索人物参演作品: {source} ID {person_id}"
async def run(self, person_id: int, source: str, page: Optional[int] = 1, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: person_id={person_id}, source={source}, page={page}")
try:
# 根据source选择相应的chain
if source.lower() == "tmdb":
tmdb_chain = TmdbChain()
medias = await tmdb_chain.async_person_credits(person_id=person_id, page=page)
elif source.lower() == "douban":
douban_chain = DoubanChain()
medias = await douban_chain.async_person_credits(person_id=person_id, page=page)
elif source.lower() == "bangumi":
bangumi_chain = BangumiChain()
medias = await bangumi_chain.async_person_credits(person_id=person_id)
else:
return f"不支持的数据源: {source}。支持的数据源: tmdb, douban, bangumi"
if medias:
# 限制最多30条结果
total_count = len(medias)
limited_medias = medias[:30]
# 精简字段,只保留关键信息
simplified_results = []
for media in limited_medias:
simplified = {
"title": media.title,
"en_title": media.en_title,
"year": media.year,
"type": media.type.value if media.type else None,
"season": media.season,
"tmdb_id": media.tmdb_id,
"imdb_id": media.imdb_id,
"douban_id": media.douban_id,
"overview": media.overview[:200] + "..." if media.overview and len(media.overview) > 200 else media.overview,
"vote_average": media.vote_average,
"poster_path": media.poster_path,
"backdrop_path": media.backdrop_path,
"detail_link": media.detail_link
}
simplified_results.append(simplified)
result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 30:
return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
return result_json
else:
return f"未找到人物 ID {person_id} ({source}) 的参演作品"
except Exception as e:
error_message = f"搜索演员参演作品失败: {str(e)}"
logger.error(f"搜索演员参演作品失败: {e}", exc_info=True)
return error_message


@@ -0,0 +1,127 @@
"""搜索订阅缺失剧集工具"""
import json
from typing import Optional, Type, List
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.subscribe import SubscribeChain
from app.core.config import global_vars
from app.db.subscribe_oper import SubscribeOper
from app.log import logger
class SearchSubscribeInput(BaseModel):
"""搜索订阅缺失剧集工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
subscribe_id: int = Field(..., description="The ID of the subscription to search for missing episodes")
manual: Optional[bool] = Field(False, description="Whether this is a manual search (default: False)")
filter_groups: Optional[List[str]] = Field(None,
description="List of filter rule group names to apply for this search (optional, use query_rule_groups tool to get available rule groups. If provided, will temporarily update the subscription's filter groups before searching)")
class SearchSubscribeTool(MoviePilotTool):
name: str = "search_subscribe"
description: str = "Search for missing episodes/resources for a specific subscription. This tool will search torrent sites for the missing episodes of the subscription and automatically download matching resources. Use this when a user wants to search for missing episodes of a specific subscription."
args_schema: Type[BaseModel] = SearchSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
subscribe_id = kwargs.get("subscribe_id")
manual = kwargs.get("manual", False)
message = f"正在搜索订阅 #{subscribe_id} 的缺失剧集"
if manual:
message += "(手动搜索)"
return message
async def run(self, subscribe_id: int, manual: Optional[bool] = False,
filter_groups: Optional[List[str]] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: subscribe_id={subscribe_id}, manual={manual}, filter_groups={filter_groups}")
try:
# 先验证订阅是否存在
subscribe_oper = SubscribeOper()
subscribe = subscribe_oper.get(subscribe_id)
if not subscribe:
return json.dumps({
"success": False,
"message": f"订阅不存在: {subscribe_id}"
}, ensure_ascii=False)
# 获取订阅信息用于返回
subscribe_info = {
"id": subscribe.id,
"name": subscribe.name,
"year": subscribe.year,
"type": subscribe.type,
"season": subscribe.season,
"state": subscribe.state,
"total_episode": subscribe.total_episode,
"lack_episode": subscribe.lack_episode,
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
}
# 检查订阅状态
if subscribe.state == "S":
return json.dumps({
"success": False,
"message": f"订阅 #{subscribe_id} ({subscribe.name}) 已暂停,无法搜索",
"subscribe": subscribe_info
}, ensure_ascii=False)
# 如果提供了 filter_groups 参数,先更新订阅的规则组
if filter_groups is not None:
subscribe_oper.update(subscribe_id, {"filter_groups": filter_groups})
logger.info(f"更新订阅 #{subscribe_id} 的规则组为: {filter_groups}")
# 调用 SubscribeChain 的 search 方法
# search 方法是同步的,需要在异步环境中运行
subscribe_chain = SubscribeChain()
# 在线程池中执行同步的搜索操作
# 当 sid 有值时,state 参数会被忽略,直接处理该订阅
await global_vars.loop.run_in_executor(
None,
lambda: subscribe_chain.search(
sid=subscribe_id,
state='R', # 默认状态,当 sid 有值时此参数会被忽略
manual=manual
)
)
# 重新获取订阅信息以获取更新后的状态
updated_subscribe = subscribe_oper.get(subscribe_id)
if updated_subscribe:
subscribe_info.update({
"state": updated_subscribe.state,
"lack_episode": updated_subscribe.lack_episode,
"last_update": updated_subscribe.last_update,
"filter_groups": updated_subscribe.filter_groups
})
# 如果提供了规则组,会在返回信息中显示
result = {
"success": True,
"message": f"订阅 #{subscribe_id} ({subscribe.name}) 搜索完成",
"subscribe": subscribe_info
}
if filter_groups is not None:
result["message"] += f"(已应用规则组: {', '.join(filter_groups)})"
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"搜索订阅缺失剧集失败: {str(e)}"
logger.error(f"搜索订阅缺失剧集失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"subscribe_id": subscribe_id
}, ensure_ascii=False)


@@ -0,0 +1,143 @@
"""搜索种子工具"""
import json
import re
from typing import List, Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.search import SearchChain
from app.log import logger
from app.schemas.types import MediaType
from app.utils.string import StringUtils
class SearchTorrentsInput(BaseModel):
"""搜索种子工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: str = Field(...,
description="The title of the media resource to search for (e.g., 'The Matrix 1999', 'Breaking Bad S01E01')")
year: Optional[str] = Field(None,
description="Release year of the media (optional, helps narrow down search results)")
media_type: Optional[str] = Field(None,
description="Type of media content: '电影' for films, '电视剧' for television series or anime series")
season: Optional[int] = Field(None, description="Season number for TV shows (optional, only applicable for series)")
sites: Optional[List[int]] = Field(None,
description="Array of specific site IDs to search on (optional, if not provided searches all configured sites)")
filter_pattern: Optional[str] = Field(None,
description="Regular expression pattern to filter torrent titles by resolution, quality, or other keywords (e.g., '4K|2160p|UHD' for 4K content, '1080p|BluRay' for 1080p BluRay)")
class SearchTorrentsTool(MoviePilotTool):
name: str = "search_torrents"
description: str = "Search for torrent files across configured indexer sites based on media information. Returns available torrent downloads with details like file size, quality, and download links."
args_schema: Type[BaseModel] = SearchTorrentsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
title = kwargs.get("title", "")
year = kwargs.get("year")
media_type = kwargs.get("media_type")
season = kwargs.get("season")
filter_pattern = kwargs.get("filter_pattern")
message = f"正在搜索种子: {title}"
if year:
message += f" ({year})"
if media_type:
message += f" [{media_type}]"
if season:
message += f" 第{season}季"
if filter_pattern:
message += f" 过滤: {filter_pattern}"
return message
async def run(self, title: str, year: Optional[str] = None,
media_type: Optional[str] = None, season: Optional[int] = None,
sites: Optional[List[int]] = None, filter_pattern: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: title={title}, year={year}, media_type={media_type}, season={season}, sites={sites}, filter_pattern={filter_pattern}")
try:
search_chain = SearchChain()
torrents = await search_chain.async_search_by_title(title=title, sites=sites)
filtered_torrents = []
# 编译正则表达式(如果提供)
regex_pattern = None
if filter_pattern:
try:
regex_pattern = re.compile(filter_pattern, re.IGNORECASE)
except re.error as e:
logger.warning(f"正则表达式编译失败: {filter_pattern}, 错误: {e}")
return f"正则表达式格式错误: {str(e)}"
for torrent in torrents:
# torrent 是 Context 对象,需要通过 meta_info 和 media_info 访问属性
if year and torrent.meta_info and torrent.meta_info.year != year:
continue
if media_type and torrent.media_info:
if torrent.media_info.type != MediaType(media_type):
continue
if season and torrent.meta_info and torrent.meta_info.begin_season != season:
continue
# 使用正则表达式过滤标题(分辨率、质量等关键字)
if regex_pattern and torrent.torrent_info and torrent.torrent_info.title:
if not regex_pattern.search(torrent.torrent_info.title):
continue
filtered_torrents.append(torrent)
if filtered_torrents:
# 限制最多50条结果
total_count = len(filtered_torrents)
limited_torrents = filtered_torrents[:50]
# 精简字段,只保留关键信息
simplified_torrents = []
for t in limited_torrents:
simplified = {}
# 精简 torrent_info
if t.torrent_info:
simplified["torrent_info"] = {
"title": t.torrent_info.title,
"size": StringUtils.format_size(t.torrent_info.size),
"seeders": t.torrent_info.seeders,
"peers": t.torrent_info.peers,
"site_name": t.torrent_info.site_name,
"enclosure": t.torrent_info.enclosure,
"page_url": t.torrent_info.page_url,
"volume_factor": t.torrent_info.volume_factor,
"pubdate": t.torrent_info.pubdate
}
# 精简 media_info
if t.media_info:
simplified["media_info"] = {
"title": t.media_info.title,
"en_title": t.media_info.en_title,
"year": t.media_info.year,
"type": t.media_info.type.value if t.media_info.type else None,
"season": t.media_info.season,
"tmdb_id": t.media_info.tmdb_id
}
# 精简 meta_info
if t.meta_info:
simplified["meta_info"] = {
"name": t.meta_info.name,
"cn_name": t.meta_info.cn_name,
"en_name": t.meta_info.en_name,
"year": t.meta_info.year,
"type": t.meta_info.type.value if t.meta_info.type else None,
"begin_season": t.meta_info.begin_season
}
simplified_torrents.append(simplified)
result_json = json.dumps(simplified_torrents, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 50:
return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 50 条结果。\n\n{result_json}"
return result_json
else:
return f"未找到相关种子资源: {title}"
except Exception as e:
error_message = f"搜索种子时发生错误: {str(e)}"
logger.error(f"搜索种子失败: {e}", exc_info=True)
return error_message
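The `filter_pattern` handling can be sketched as a standalone function. Unlike the tool above, which returns an error message to the caller on an invalid pattern, this sketch falls back to keeping all titles:

```python
import re
from typing import List, Optional

def filter_titles(titles: List[str], pattern: Optional[str]) -> List[str]:
    # Case-insensitive search, as in the filter_pattern handling above; an
    # invalid pattern leaves the list unfiltered in this simplified sketch.
    if not pattern:
        return titles
    try:
        rx = re.compile(pattern, re.IGNORECASE)
    except re.error:
        return titles
    return [t for t in titles if rx.search(t)]

titles = ["Movie.2024.2160p.UHD.BluRay.mkv", "Movie.2024.720p.WEB-DL.mkv"]
print(filter_titles(titles, "4K|2160p|UHD"))
# → ['Movie.2024.2160p.UHD.BluRay.mkv']
```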


@@ -0,0 +1,182 @@
import asyncio
import json
import re
from typing import Optional, Type, List, Dict
import httpx
from ddgs import DDGS
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.config import settings
from app.log import logger
# 搜索超时时间(秒)
SEARCH_TIMEOUT = 20
class SearchWebInput(BaseModel):
"""搜索网络内容工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
query: str = Field(..., description="The search query string to search for on the web")
max_results: Optional[int] = Field(5,
description="Maximum number of search results to return (default: 5, max: 10)")
class SearchWebTool(MoviePilotTool):
name: str = "search_web"
description: str = "Search the web for information when you need to find current information, facts, or references that you're uncertain about. Returns search results with titles, snippets, and URLs. Use this tool to get up-to-date information from the internet."
args_schema: Type[BaseModel] = SearchWebInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
query = kwargs.get("query", "")
max_results = kwargs.get("max_results", 5)
return f"正在搜索网络内容: {query} (最多返回 {max_results} 条结果)"
async def run(self, query: str, max_results: Optional[int] = 5, **kwargs) -> str:
"""
执行网络搜索
"""
logger.info(f"执行工具: {self.name}, 参数: query={query}, max_results={max_results}")
try:
# 限制最大结果数
max_results = min(max(1, max_results or 5), 10)
results = []
# 1. 优先使用 Tavily (如果配置了 API Key)
if settings.TAVILY_API_KEY:
logger.info("使用 Tavily 进行搜索...")
results = await self._search_tavily(query, max_results)
# 2. 如果没有结果或未配置 Tavily使用 DuckDuckGo
if not results:
logger.info("使用 DuckDuckGo 进行搜索...")
results = await self._search_duckduckgo(query, max_results)
if not results:
return f"未找到与 '{query}' 相关的搜索结果"
# 格式化并裁剪结果
formatted_results = self._format_and_truncate_results(results, max_results)
return json.dumps(formatted_results, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"搜索网络内容失败: {str(e)}"
logger.error(f"搜索网络内容失败: {e}", exc_info=True)
return error_message
@staticmethod
async def _search_tavily(query: str, max_results: int) -> List[Dict]:
"""使用 Tavily API 进行搜索"""
try:
async with httpx.AsyncClient(timeout=SEARCH_TIMEOUT) as client:
response = await client.post(
"https://api.tavily.com/search",
json={
"api_key": settings.TAVILY_API_KEY,
"query": query,
"search_depth": "basic",
"max_results": max_results,
"include_answer": False,
"include_images": False,
"include_raw_content": False,
}
)
response.raise_for_status()
data = response.json()
results = []
for result in data.get("results", []):
results.append({
'title': result.get('title', ''),
'snippet': result.get('content', ''),
'url': result.get('url', ''),
'source': 'Tavily'
})
return results
except Exception as e:
logger.warning(f"Tavily 搜索失败: {e}")
return []
@staticmethod
def _get_proxy_url(proxy_setting) -> Optional[str]:
"""从代理设置中提取代理URL"""
if not proxy_setting:
return None
if isinstance(proxy_setting, dict):
return proxy_setting.get('http') or proxy_setting.get('https')
return proxy_setting
async def _search_duckduckgo(self, query: str, max_results: int) -> List[Dict]:
"""使用 duckduckgo-search (DDGS) 进行搜索"""
try:
def sync_search():
results = []
ddgs_kwargs = {
'timeout': SEARCH_TIMEOUT
}
proxy_url = self._get_proxy_url(settings.PROXY)
if proxy_url:
ddgs_kwargs['proxy'] = proxy_url
try:
with DDGS(**ddgs_kwargs) as ddgs:
ddgs_gen = ddgs.text(
query,
max_results=max_results
)
if ddgs_gen:
for result in ddgs_gen:
results.append({
'title': result.get('title', ''),
'snippet': result.get('body', ''),
'url': result.get('href', ''),
'source': 'DuckDuckGo'
})
except Exception as err:
logger.warning(f"DuckDuckGo search process failed: {err}")
return results
loop = asyncio.get_running_loop()
return await loop.run_in_executor(None, sync_search)
except Exception as e:
logger.warning(f"DuckDuckGo 搜索失败: {e}")
return []
@staticmethod
def _format_and_truncate_results(results: List[Dict], max_results: int) -> Dict:
"""格式化并裁剪搜索结果"""
formatted = {
"total_results": len(results),
"results": []
}
for idx, result in enumerate(results[:max_results], 1):
title = result.get("title", "")[:200]
snippet = result.get("snippet", "")
url = result.get("url", "")
source = result.get("source", "Unknown")
# 裁剪摘要
max_snippet_length = 500 # 增加到500字符提供更多上下文
if len(snippet) > max_snippet_length:
snippet = snippet[:max_snippet_length] + "..."
# 清理文本
snippet = re.sub(r'\s+', ' ', snippet).strip()
formatted["results"].append({
"rank": idx,
"title": title,
"snippet": snippet,
"url": url,
"source": source
})
if len(results) > max_results:
formatted["note"] = f"仅显示前 {max_results} 条结果。"
return formatted
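The formatting step above caps titles at 200 characters, truncates snippets to 500 characters, and collapses runs of whitespace before ranking the hits. A self-contained sketch of the same post-processing (the function name here is illustrative, not part of the tool's API):

```python
import re
from typing import Dict, List

def format_results(results: List[Dict], max_results: int, max_snippet: int = 500) -> Dict:
    """Trim, clean and rank raw search hits, mirroring _format_and_truncate_results."""
    formatted = {"total_results": len(results), "results": []}
    for idx, r in enumerate(results[:max_results], 1):
        snippet = r.get("snippet", "")
        if len(snippet) > max_snippet:
            snippet = snippet[:max_snippet] + "..."
        # Collapse internal whitespace (newlines, tabs, runs of spaces)
        snippet = re.sub(r'\s+', ' ', snippet).strip()
        formatted["results"].append({
            "rank": idx,
            "title": r.get("title", "")[:200],
            "snippet": snippet,
            "url": r.get("url", ""),
            "source": r.get("source", "Unknown"),
        })
    if len(results) > max_results:
        formatted["note"] = f"Only showing the first {max_results} results."
    return formatted
```

Truncating before collapsing whitespace keeps the cost bounded even for very long raw snippets.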


@@ -0,0 +1,45 @@
"""发送消息工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
class SendMessageInput(BaseModel):
"""发送消息工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
message: str = Field(..., description="The message content to send to the user (should be clear and informative)")
message_type: Optional[str] = Field("info",
description="Type of message: 'info' for general information, 'success' for successful operations, 'warning' for warnings, 'error' for error messages")
class SendMessageTool(MoviePilotTool):
name: str = "send_message"
description: str = "Send notification message to the user through configured notification channels (Telegram, Slack, WeChat, etc.). Used to inform users about operation results, errors, or important updates."
args_schema: Type[BaseModel] = SendMessageInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据消息参数生成友好的提示消息"""
message = kwargs.get("message", "")
message_type = kwargs.get("message_type", "info")
type_map = {"info": "信息", "success": "成功", "warning": "警告", "error": "错误"}
type_desc = type_map.get(message_type, message_type)
# 截断过长的消息
if len(message) > 50:
message = message[:50] + "..."
return f"正在发送{type_desc}消息: {message}"
async def run(self, message: str, message_type: Optional[str] = "info", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: message={message}, message_type={message_type}")
try:
await self.send_tool_message(message, title=message_type)
return "消息已发送"
except Exception as e:
logger.error(f"发送消息失败: {e}")
return f"发送消息时发生错误: {str(e)}"


@@ -0,0 +1,72 @@
"""测试站点连通性工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.site import SiteChain
from app.db.site_oper import SiteOper
from app.log import logger
from app.utils.string import StringUtils
class TestSiteInput(BaseModel):
"""测试站点连通性工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_identifier: str = Field(..., description="Site identifier: can be site ID (integer as string), site name, or site domain/URL")
class TestSiteTool(MoviePilotTool):
name: str = "test_site"
description: str = "Test site connectivity and availability. This will check if a site is accessible and can be logged in. Accepts site ID, site name, or site domain/URL as identifier."
args_schema: Type[BaseModel] = TestSiteInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据测试参数生成友好的提示消息"""
site_identifier = kwargs.get("site_identifier", "")
return f"正在测试站点连通性: {site_identifier}"
async def run(self, site_identifier: str, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_identifier={site_identifier}")
try:
site_oper = SiteOper()
site_chain = SiteChain()
# 尝试解析为站点ID
site = None
if site_identifier.isdigit():
# 如果是数字,尝试作为站点ID查询
site = await site_oper.async_get(int(site_identifier))
# 如果不是ID或ID查询失败尝试按名称或域名查询
if not site:
# 尝试按名称查询
sites = await site_oper.async_list()
for s in sites:
if (site_identifier.lower() in (s.name or "").lower()) or \
(site_identifier.lower() in (s.domain or "").lower()):
site = s
break
# 如果还是没找到,尝试从URL提取域名
if not site:
domain = StringUtils.get_url_domain(site_identifier)
if domain:
site = await site_oper.async_get_by_domain(domain)
if not site:
return f"未找到站点:{site_identifier},请使用 query_sites 工具查询可用的站点"
# 测试站点连通性
status, message = site_chain.test(site.domain)
if status:
return f"站点连通性测试成功:{site.name} ({site.domain})\n{message}"
else:
return f"站点连通性测试失败:{site.name} ({site.domain})\n{message}"
except Exception as e:
logger.error(f"测试站点连通性失败: {e}", exc_info=True)
return f"测试站点连通性时发生错误: {str(e)}"
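The three-step lookup above (numeric ID, then case-insensitive substring match on name or domain, then domain extracted from a URL) is repeated verbatim in `update_site_cookie` below, so it could be factored into a shared helper. A minimal synchronous sketch, using a hypothetical `SiteRec` stand-in for the Site model:

```python
from dataclasses import dataclass
from typing import List, Optional
from urllib.parse import urlparse

@dataclass
class SiteRec:  # hypothetical stand-in for the Site ORM model
    id: int
    name: str
    domain: str

def resolve_site(identifier: str, sites: List[SiteRec]) -> Optional[SiteRec]:
    """Resolve a site by ID, then by name/domain substring, then by URL host."""
    # 1. A purely numeric string is treated as a site ID
    if identifier.isdigit():
        for s in sites:
            if s.id == int(identifier):
                return s
    # 2. Case-insensitive substring match on name or domain
    ident = identifier.lower()
    for s in sites:
        if ident in (s.name or "").lower() or ident in (s.domain or "").lower():
            return s
    # 3. Extract the host from a full URL and match it exactly
    host = urlparse(identifier).netloc
    if host:
        for s in sites:
            if s.domain == host:
                return s
    return None
```

Ordering matters: the ID check runs first so a site literally named "115" cannot shadow site ID 115.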


@@ -0,0 +1,134 @@
"""整理文件或目录工具"""
from pathlib import Path
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.transfer import TransferChain
from app.log import logger
from app.schemas import FileItem, MediaType
class TransferFileInput(BaseModel):
"""整理文件或目录工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
file_path: str = Field(..., description="Path to the file or directory to transfer (e.g., '/path/to/file.mkv' or '/path/to/directory')")
storage: Optional[str] = Field("local", description="Storage type of the source file (default: 'local', can be 'smb', 'alist', etc.)")
target_path: Optional[str] = Field(None, description="Target path for the transferred file/directory (optional, uses default library path if not specified)")
target_storage: Optional[str] = Field(None, description="Target storage type (optional, uses default storage if not specified)")
media_type: Optional[str] = Field(None, description="Media type: '电影' for films, '电视剧' for television series (optional, will be auto-detected if not specified)")
tmdbid: Optional[int] = Field(None, description="TMDB ID for precise media identification (optional but recommended for accuracy)")
doubanid: Optional[str] = Field(None, description="Douban ID for media identification (optional)")
season: Optional[int] = Field(None, description="Season number for TV shows (optional)")
transfer_type: Optional[str] = Field(None, description="Transfer mode: 'move' to move files, 'copy' to copy files, 'link' for hard link, 'softlink' for symbolic link (optional, uses default mode if not specified)")
background: Optional[bool] = Field(False, description="Whether to run transfer in background (default: False, runs synchronously)")
class TransferFileTool(MoviePilotTool):
name: str = "transfer_file"
description: str = "Transfer/organize a file or directory to the media library. Automatically recognizes media information and organizes files according to configured rules. Supports custom target paths, media identification, and transfer modes."
args_schema: Type[BaseModel] = TransferFileInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据整理参数生成友好的提示消息"""
file_path = kwargs.get("file_path", "")
media_type = kwargs.get("media_type")
transfer_type = kwargs.get("transfer_type")
background = kwargs.get("background", False)
message = f"正在整理文件: {file_path}"
if media_type:
message += f" [{media_type}]"
if transfer_type:
transfer_map = {"move": "移动", "copy": "复制", "link": "硬链接", "softlink": "软链接"}
message += f" 模式: {transfer_map.get(transfer_type, transfer_type)}"
if background:
message += " [后台运行]"
return message
async def run(self, file_path: str, storage: Optional[str] = "local",
target_path: Optional[str] = None,
target_storage: Optional[str] = None,
media_type: Optional[str] = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
season: Optional[int] = None,
transfer_type: Optional[str] = None,
background: Optional[bool] = False, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: file_path={file_path}, storage={storage}, target_path={target_path}, "
f"target_storage={target_storage}, media_type={media_type}, tmdbid={tmdbid}, doubanid={doubanid}, "
f"season={season}, transfer_type={transfer_type}, background={background}")
try:
if not file_path:
return "错误:必须提供文件或目录路径"
# 规范化路径
if storage == "local":
# 本地路径处理
if not file_path.startswith("/") and not (len(file_path) > 1 and file_path[1] == ":"):
# 相对路径,尝试转换为绝对路径
file_path = str(Path(file_path).resolve())
else:
# 远程存储路径,确保以/开头
if not file_path.startswith("/"):
file_path = "/" + file_path
# 创建FileItem
fileitem = FileItem(
storage=storage or "local",
path=file_path,
type="dir" if file_path.endswith("/") else "file"
)
# 处理目标路径
target_path_obj = None
if target_path:
target_path_obj = Path(target_path)
# 处理媒体类型
mtype = None
if media_type:
try:
mtype = MediaType(media_type)
except ValueError:
return f"错误:无效的媒体类型 '{media_type}',支持的类型:'电影'、'电视剧'"
# 调用整理方法
transfer_chain = TransferChain()
state, errormsg = transfer_chain.manual_transfer(
fileitem=fileitem,
target_storage=target_storage,
target_path=target_path_obj,
tmdbid=tmdbid,
doubanid=doubanid,
mtype=mtype,
season=season,
transfer_type=transfer_type,
background=background
)
if not state:
# 处理错误信息
if isinstance(errormsg, list):
error_text = f"整理完成,{len(errormsg)} 个文件转移失败"
if errormsg:
error_text += "\n" + "\n".join(str(e) for e in errormsg[:5])  # 只显示前5个错误
if len(errormsg) > 5:
error_text += f"\n... 还有 {len(errormsg) - 5} 个错误"
else:
error_text = str(errormsg)
return f"整理失败:{error_text}"
else:
if background:
return f"整理任务已提交到后台运行:{file_path}"
else:
return f"整理成功:{file_path}"
except Exception as e:
logger.error(f"整理文件失败: {e}", exc_info=True)
return f"整理文件时发生错误: {str(e)}"
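The path handling at the top of `run` treats local and remote storages differently: local relative paths are resolved to absolute ones (recognising both POSIX `/` roots and Windows drive letters), while remote-storage paths are simply forced to start with `/`. A standalone sketch of that logic (the function name is illustrative):

```python
from pathlib import Path

def normalize_source_path(file_path: str, storage: str = "local") -> str:
    """Resolve local relative paths; root remote-storage paths at '/'."""
    if storage == "local":
        # Absolute if it starts with '/' (POSIX) or has a drive letter (Windows)
        is_absolute = file_path.startswith("/") or (len(file_path) > 1 and file_path[1] == ":")
        if not is_absolute:
            return str(Path(file_path).resolve())
        return file_path
    # Remote storages address files with a leading slash
    return file_path if file_path.startswith("/") else "/" + file_path
```

Note the tool also infers the `FileItem` type from a trailing `/` (directory) versus no trailing slash (file), so callers should pass directory paths with the slash intact.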


@@ -0,0 +1,203 @@
"""更新站点工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.db import AsyncSessionFactory
from app.db.models.site import Site
from app.log import logger
from app.schemas.types import EventType
from app.utils.string import StringUtils
class UpdateSiteInput(BaseModel):
"""更新站点工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_id: int = Field(..., description="The ID of the site to update")
name: Optional[str] = Field(None, description="Site name (optional)")
url: Optional[str] = Field(None, description="Site URL (optional, will be automatically formatted)")
pri: Optional[int] = Field(None, description="Site priority (optional, smaller value = higher priority, e.g., pri=1 has higher priority than pri=10)")
rss: Optional[str] = Field(None, description="RSS feed URL (optional)")
cookie: Optional[str] = Field(None, description="Site cookie (optional)")
ua: Optional[str] = Field(None, description="User-Agent string (optional)")
apikey: Optional[str] = Field(None, description="API key (optional)")
token: Optional[str] = Field(None, description="API token (optional)")
proxy: Optional[int] = Field(None, description="Whether to use proxy: 0 for no, 1 for yes (optional)")
filter: Optional[str] = Field(None, description="Filter rule as regular expression (optional)")
note: Optional[str] = Field(None, description="Site notes/remarks (optional)")
timeout: Optional[int] = Field(None, description="Request timeout in seconds (optional, default: 15)")
limit_interval: Optional[int] = Field(None, description="Rate limit interval in seconds (optional)")
limit_count: Optional[int] = Field(None, description="Rate limit count per interval (optional)")
limit_seconds: Optional[int] = Field(None, description="Rate limit seconds between requests (optional)")
is_active: Optional[bool] = Field(None, description="Whether site is active: True for enabled, False for disabled (optional)")
downloader: Optional[str] = Field(None, description="Downloader name for this site (optional)")
class UpdateSiteTool(MoviePilotTool):
name: str = "update_site"
description: str = "Update site configuration including URL, priority, authentication credentials (cookie, UA, API key), proxy settings, rate limits, and other site properties. Supports updating multiple site attributes at once. Site priority (pri): smaller values have higher priority (e.g., pri=1 has higher priority than pri=10)."
args_schema: Type[BaseModel] = UpdateSiteInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据更新参数生成友好的提示消息"""
site_id = kwargs.get("site_id")
fields_updated = []
if kwargs.get("name"):
fields_updated.append("名称")
if kwargs.get("url"):
fields_updated.append("URL")
if kwargs.get("pri") is not None:
fields_updated.append("优先级")
if kwargs.get("cookie"):
fields_updated.append("Cookie")
if kwargs.get("ua"):
fields_updated.append("User-Agent")
if kwargs.get("proxy") is not None:
fields_updated.append("代理设置")
if kwargs.get("is_active") is not None:
fields_updated.append("启用状态")
if kwargs.get("downloader"):
fields_updated.append("下载器")
if fields_updated:
return f"正在更新站点 #{site_id}: {', '.join(fields_updated)}"
return f"正在更新站点 #{site_id}"
async def run(self, site_id: int,
name: Optional[str] = None,
url: Optional[str] = None,
pri: Optional[int] = None,
rss: Optional[str] = None,
cookie: Optional[str] = None,
ua: Optional[str] = None,
apikey: Optional[str] = None,
token: Optional[str] = None,
proxy: Optional[int] = None,
filter: Optional[str] = None,
note: Optional[str] = None,
timeout: Optional[int] = None,
limit_interval: Optional[int] = None,
limit_count: Optional[int] = None,
limit_seconds: Optional[int] = None,
is_active: Optional[bool] = None,
downloader: Optional[str] = None,
**kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_id={site_id}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 获取站点
site = await Site.async_get(db, site_id)
if not site:
return json.dumps({
"success": False,
"message": f"站点不存在: {site_id}"
}, ensure_ascii=False)
# 构建更新字典
site_dict = {}
# 基本信息
if name is not None:
site_dict["name"] = name
# URL处理,需要校正格式
if url is not None:
_scheme, _netloc = StringUtils.get_url_netloc(url)
site_dict["url"] = f"{_scheme}://{_netloc}/"
if pri is not None:
site_dict["pri"] = pri
if rss is not None:
site_dict["rss"] = rss
# 认证信息
if cookie is not None:
site_dict["cookie"] = cookie
if ua is not None:
site_dict["ua"] = ua
if apikey is not None:
site_dict["apikey"] = apikey
if token is not None:
site_dict["token"] = token
# 配置选项
if proxy is not None:
site_dict["proxy"] = proxy
if filter is not None:
site_dict["filter"] = filter
if note is not None:
site_dict["note"] = note
if timeout is not None:
site_dict["timeout"] = timeout
# 流控设置
if limit_interval is not None:
site_dict["limit_interval"] = limit_interval
if limit_count is not None:
site_dict["limit_count"] = limit_count
if limit_seconds is not None:
site_dict["limit_seconds"] = limit_seconds
# 状态和下载器
if is_active is not None:
site_dict["is_active"] = is_active
if downloader is not None:
site_dict["downloader"] = downloader
# 如果没有要更新的字段
if not site_dict:
return json.dumps({
"success": False,
"message": "没有提供要更新的字段"
}, ensure_ascii=False)
# 更新站点
await site.async_update(db, site_dict)
# 重新获取更新后的站点数据
updated_site = await Site.async_get(db, site_id)
# 发送站点更新事件
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": updated_site.domain if updated_site else site.domain
})
# 构建返回结果
result = {
"success": True,
"message": f"站点 #{site_id} 更新成功",
"site_id": site_id,
"updated_fields": list(site_dict.keys())
}
if updated_site:
result["site"] = {
"id": updated_site.id,
"name": updated_site.name,
"domain": updated_site.domain,
"url": updated_site.url,
"pri": updated_site.pri,
"is_active": updated_site.is_active,
"downloader": updated_site.downloader,
"proxy": updated_site.proxy,
"timeout": updated_site.timeout
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"更新站点失败: {str(e)}"
logger.error(f"更新站点失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"site_id": site_id
}, ensure_ascii=False)
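Every field in the long if-chain above uses the same `is not None` guard, so `None` consistently means "leave unchanged" while falsy values like `0` and `False` are still valid updates. Apart from the URL (which needs scheme/netloc re-formatting), the chain could be collapsed into one comprehension; a sketch of that design choice (helper name is hypothetical):

```python
from typing import Any, Dict

def build_update_dict(**fields: Any) -> Dict[str, Any]:
    """Collect only the fields the caller actually supplied.

    None means 'leave unchanged'; 0 and False are legitimate values
    (e.g. proxy=0, is_active=False) and must be kept.
    """
    return {key: value for key, value in fields.items() if value is not None}
```

Using a simple truthiness check here instead of `is not None` would silently drop `proxy=0` and `is_active=False`, which is exactly why the original chain spells out `is not None` for every field.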


@@ -0,0 +1,88 @@
"""更新站点Cookie和UA工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.site import SiteChain
from app.db.site_oper import SiteOper
from app.log import logger
from app.utils.string import StringUtils
class UpdateSiteCookieInput(BaseModel):
"""更新站点Cookie和UA工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_identifier: str = Field(..., description="Site identifier: can be site ID (integer as string), site name, or site domain/URL")
username: str = Field(..., description="Site login username")
password: str = Field(..., description="Site login password")
two_step_code: Optional[str] = Field(None, description="Two-step verification code or secret key (optional, required for sites with 2FA enabled)")
class UpdateSiteCookieTool(MoviePilotTool):
name: str = "update_site_cookie"
description: str = "Update site Cookie and User-Agent by logging in with username and password. This tool can automatically obtain and update the site's authentication credentials. Supports two-step verification for sites that require it. Accepts site ID, site name, or site domain/URL as identifier."
args_schema: Type[BaseModel] = UpdateSiteCookieInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据更新参数生成友好的提示消息"""
site_identifier = kwargs.get("site_identifier", "")
username = kwargs.get("username", "")
two_step_code = kwargs.get("two_step_code")
message = f"正在更新站点Cookie: {site_identifier} (用户: {username})"
if two_step_code:
message += " [需要两步验证]"
return message
async def run(self, site_identifier: str, username: str, password: str,
two_step_code: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_identifier={site_identifier}, username={username}")
try:
site_oper = SiteOper()
site_chain = SiteChain()
# 尝试解析为站点ID
site = None
if site_identifier.isdigit():
# 如果是数字,尝试作为站点ID查询
site = await site_oper.async_get(int(site_identifier))
# 如果不是ID或ID查询失败尝试按名称或域名查询
if not site:
# 尝试按名称查询
sites = await site_oper.async_list()
for s in sites:
if (site_identifier.lower() in (s.name or "").lower()) or \
(site_identifier.lower() in (s.domain or "").lower()):
site = s
break
# 如果还是没找到,尝试从URL提取域名
if not site:
domain = StringUtils.get_url_domain(site_identifier)
if domain:
site = await site_oper.async_get_by_domain(domain)
if not site:
return f"未找到站点:{site_identifier},请使用 query_sites 工具查询可用的站点"
# 更新站点Cookie和UA
status, message = site_chain.update_cookie(
site_info=site,
username=username,
password=password,
two_step_code=two_step_code
)
if status:
return f"站点【{site.name}】Cookie和UA更新成功\n{message}"
else:
return f"站点【{site.name}】Cookie和UA更新失败\n错误原因:{message}"
except Exception as e:
logger.error(f"更新站点Cookie和UA失败: {e}", exc_info=True)
return f"更新站点Cookie和UA时发生错误: {str(e)}"


@@ -0,0 +1,239 @@
"""更新订阅工具"""
import json
from typing import Optional, Type, List
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.db import AsyncSessionFactory
from app.db.models.subscribe import Subscribe
from app.log import logger
from app.schemas.types import EventType
class UpdateSubscribeInput(BaseModel):
"""更新订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
subscribe_id: int = Field(..., description="The ID of the subscription to update")
name: Optional[str] = Field(None, description="Subscription name/title (optional)")
year: Optional[str] = Field(None, description="Release year (optional)")
season: Optional[int] = Field(None, description="Season number for TV shows (optional)")
total_episode: Optional[int] = Field(None, description="Total number of episodes (optional)")
lack_episode: Optional[int] = Field(None, description="Number of missing episodes (optional)")
start_episode: Optional[int] = Field(None, description="Starting episode number (optional)")
quality: Optional[str] = Field(None, description="Quality filter as regular expression (optional, e.g., 'BluRay|WEB-DL|HDTV')")
resolution: Optional[str] = Field(None, description="Resolution filter as regular expression (optional, e.g., '1080p|720p|2160p')")
effect: Optional[str] = Field(None, description="Effect filter as regular expression (optional, e.g., 'HDR|DV|SDR')")
include: Optional[str] = Field(None, description="Include filter as regular expression (optional)")
exclude: Optional[str] = Field(None, description="Exclude filter as regular expression (optional)")
filter: Optional[str] = Field(None, description="Filter rule as regular expression (optional)")
state: Optional[str] = Field(None, description="Subscription state: 'R' for enabled, 'P' for pending, 'S' for paused (optional)")
sites: Optional[List[int]] = Field(None, description="List of site IDs to search from (optional)")
downloader: Optional[str] = Field(None, description="Downloader name (optional)")
save_path: Optional[str] = Field(None, description="Save path for downloaded files (optional)")
best_version: Optional[int] = Field(None, description="Whether to upgrade to best version: 0 for no, 1 for yes (optional)")
custom_words: Optional[str] = Field(None, description="Custom recognition words (optional)")
media_category: Optional[str] = Field(None, description="Custom media category (optional)")
episode_group: Optional[str] = Field(None, description="Episode group ID (optional)")
class UpdateSubscribeTool(MoviePilotTool):
name: str = "update_subscribe"
description: str = "Update subscription properties including filters, episode counts, state, and other settings. Supports updating quality/resolution filters, episode tracking, subscription state, and download configuration."
args_schema: Type[BaseModel] = UpdateSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据更新参数生成友好的提示消息"""
subscribe_id = kwargs.get("subscribe_id")
fields_updated = []
if kwargs.get("name"):
fields_updated.append("名称")
if kwargs.get("total_episode") is not None:
fields_updated.append("总集数")
if kwargs.get("lack_episode") is not None:
fields_updated.append("缺失集数")
if kwargs.get("quality"):
fields_updated.append("质量过滤")
if kwargs.get("resolution"):
fields_updated.append("分辨率过滤")
if kwargs.get("state"):
state_map = {"R": "启用", "P": "待定", "S": "暂停"}
fields_updated.append(f"状态({state_map.get(kwargs.get('state'), kwargs.get('state'))})")
if kwargs.get("sites"):
fields_updated.append("站点")
if kwargs.get("downloader"):
fields_updated.append("下载器")
if fields_updated:
return f"正在更新订阅 #{subscribe_id}: {', '.join(fields_updated)}"
return f"正在更新订阅 #{subscribe_id}"
async def run(self, subscribe_id: int,
name: Optional[str] = None,
year: Optional[str] = None,
season: Optional[int] = None,
total_episode: Optional[int] = None,
lack_episode: Optional[int] = None,
start_episode: Optional[int] = None,
quality: Optional[str] = None,
resolution: Optional[str] = None,
effect: Optional[str] = None,
include: Optional[str] = None,
exclude: Optional[str] = None,
filter: Optional[str] = None,
state: Optional[str] = None,
sites: Optional[List[int]] = None,
downloader: Optional[str] = None,
save_path: Optional[str] = None,
best_version: Optional[int] = None,
custom_words: Optional[str] = None,
media_category: Optional[str] = None,
episode_group: Optional[str] = None,
**kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: subscribe_id={subscribe_id}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 获取订阅
subscribe = await Subscribe.async_get(db, subscribe_id)
if not subscribe:
return json.dumps({
"success": False,
"message": f"订阅不存在: {subscribe_id}"
}, ensure_ascii=False)
# 保存旧数据用于事件
old_subscribe_dict = subscribe.to_dict()
# 构建更新字典
subscribe_dict = {}
# 基本信息
if name is not None:
subscribe_dict["name"] = name
if year is not None:
subscribe_dict["year"] = year
if season is not None:
subscribe_dict["season"] = season
# 集数相关
if total_episode is not None:
subscribe_dict["total_episode"] = total_episode
# 如果总集数增加,缺失集数也要相应增加
if total_episode > (subscribe.total_episode or 0):
old_lack = subscribe.lack_episode or 0
subscribe_dict["lack_episode"] = old_lack + (total_episode - (subscribe.total_episode or 0))
# 标记为手动修改过总集数
subscribe_dict["manual_total_episode"] = 1
# 缺失集数处理(只有在没有提供总集数时才单独处理)
# 注意:如果 lack_episode 为 0,不更新,避免误将缺失集数更新为 0
if lack_episode is not None and total_episode is None:
if lack_episode > 0:
subscribe_dict["lack_episode"] = lack_episode
# 如果 lack_episode 为 0,不添加到更新字典中,保持原值或由总集数逻辑处理
if start_episode is not None:
subscribe_dict["start_episode"] = start_episode
# 过滤规则
if quality is not None:
subscribe_dict["quality"] = quality
if resolution is not None:
subscribe_dict["resolution"] = resolution
if effect is not None:
subscribe_dict["effect"] = effect
if include is not None:
subscribe_dict["include"] = include
if exclude is not None:
subscribe_dict["exclude"] = exclude
if filter is not None:
subscribe_dict["filter"] = filter
# 状态
if state is not None:
valid_states = ["R", "P", "S", "N"]
if state not in valid_states:
return json.dumps({
"success": False,
"message": f"无效的订阅状态: {state},有效状态: {', '.join(valid_states)}"
}, ensure_ascii=False)
subscribe_dict["state"] = state
# 下载配置
if sites is not None:
subscribe_dict["sites"] = sites
if downloader is not None:
subscribe_dict["downloader"] = downloader
if save_path is not None:
subscribe_dict["save_path"] = save_path
if best_version is not None:
subscribe_dict["best_version"] = best_version
# 其他配置
if custom_words is not None:
subscribe_dict["custom_words"] = custom_words
if media_category is not None:
subscribe_dict["media_category"] = media_category
if episode_group is not None:
subscribe_dict["episode_group"] = episode_group
# 如果没有要更新的字段
if not subscribe_dict:
return json.dumps({
"success": False,
"message": "没有提供要更新的字段"
}, ensure_ascii=False)
# 更新订阅
await subscribe.async_update(db, subscribe_dict)
# 重新获取更新后的订阅数据
updated_subscribe = await Subscribe.async_get(db, subscribe_id)
# 发送订阅调整事件
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe_id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
# 构建返回结果
result = {
"success": True,
"message": f"订阅 #{subscribe_id} 更新成功",
"subscribe_id": subscribe_id,
"updated_fields": list(subscribe_dict.keys())
}
if updated_subscribe:
result["subscribe"] = {
"id": updated_subscribe.id,
"name": updated_subscribe.name,
"year": updated_subscribe.year,
"type": updated_subscribe.type,
"season": updated_subscribe.season,
"state": updated_subscribe.state,
"total_episode": updated_subscribe.total_episode,
"lack_episode": updated_subscribe.lack_episode,
"start_episode": updated_subscribe.start_episode,
"quality": updated_subscribe.quality,
"resolution": updated_subscribe.resolution,
"effect": updated_subscribe.effect
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"更新订阅失败: {str(e)}"
logger.error(f"更新订阅失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"subscribe_id": subscribe_id
}, ensure_ascii=False)
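The episode bookkeeping above has two subtle rules: raising `total_episode` also raises `lack_episode` by the same delta, and an explicit `lack_episode` of 0 is ignored so the missing count is never accidentally zeroed. A minimal sketch of just that accounting (the real tool additionally sets the `manual_total_episode` flag when the total changes):

```python
from typing import Dict, Optional

def adjust_episodes(current_total: Optional[int], current_lack: Optional[int],
                    new_total: Optional[int], new_lack: Optional[int]) -> Dict[str, int]:
    """Compute the episode-count portion of a subscription update dict."""
    update: Dict[str, int] = {}
    if new_total is not None:
        update["total_episode"] = new_total
        # Growing the total grows the missing count by the same amount
        if new_total > (current_total or 0):
            update["lack_episode"] = (current_lack or 0) + (new_total - (current_total or 0))
    elif new_lack is not None and new_lack > 0:
        # Explicit lack_episode only applies when total is untouched and non-zero
        update["lack_episode"] = new_lack
    return update
```

For example, growing a 12-episode season with 3 missing episodes to 16 episodes leaves 7 missing, while shrinking the total never touches the missing count.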

app/agent/tools/manager.py Normal file

@@ -0,0 +1,274 @@
import json
import uuid
from typing import Any, Dict, List, Optional
from app.agent.tools.factory import MoviePilotToolFactory
from app.log import logger
class ToolDefinition:
"""
工具定义
"""
def __init__(self, name: str, description: str, input_schema: Dict[str, Any]):
self.name = name
self.description = description
self.input_schema = input_schema
class MoviePilotToolsManager:
"""
MoviePilot工具管理器用于HTTP API
"""
def __init__(self, user_id: str = "api_user", session_id: Optional[str] = None):
"""
初始化工具管理器
Args:
user_id: 用户ID
session_id: 会话ID,不传时自动生成
"""
self.user_id = user_id
# 默认参数在函数定义时只求值一次,因此会话ID需在运行时生成
self.session_id = session_id or str(uuid.uuid4())
self.tools: List[Any] = []
self._load_tools()
def _load_tools(self):
"""
加载所有MoviePilot工具
"""
try:
# 创建工具实例
self.tools = MoviePilotToolFactory.create_tools(
session_id=self.session_id,
user_id=self.user_id,
channel=None,
source="api",
username="API Client",
callback_handler=None,
)
logger.info(f"成功加载 {len(self.tools)} 个工具")
except Exception as e:
logger.error(f"加载工具失败: {e}", exc_info=True)
self.tools = []
def list_tools(self) -> List[ToolDefinition]:
"""
列出所有可用的工具
Returns:
工具定义列表
"""
tools_list = []
for tool in self.tools:
# 获取工具的输入参数模型
args_schema = getattr(tool, 'args_schema', None)
if args_schema:
# 将Pydantic模型转换为JSON Schema
input_schema = self._convert_to_json_schema(args_schema)
else:
# 如果没有args_schema,使用基本信息
input_schema = {
"type": "object",
"properties": {},
"required": []
}
tools_list.append(ToolDefinition(
name=tool.name,
description=tool.description or "",
input_schema=input_schema
))
return tools_list
def get_tool(self, tool_name: str) -> Optional[Any]:
"""
获取指定工具实例
Args:
tool_name: 工具名称
Returns:
工具实例,如果未找到返回None
"""
for tool in self.tools:
if tool.name == tool_name:
return tool
return None
@staticmethod
def _normalize_arguments(tool_instance: Any, arguments: Dict[str, Any]) -> Dict[str, Any]:
"""
根据工具的参数schema规范化参数类型
Args:
tool_instance: 工具实例
arguments: 原始参数
Returns:
规范化后的参数
"""
# 获取工具的参数schema
args_schema = getattr(tool_instance, 'args_schema', None)
if not args_schema:
return arguments
# 获取schema中的字段定义
try:
schema = args_schema.model_json_schema()
properties = schema.get("properties", {})
except Exception as e:
logger.warning(f"获取工具schema失败: {e}")
return arguments
# 规范化参数
normalized = {}
for key, value in arguments.items():
if key not in properties:
# 参数不在schema中保持原样
normalized[key] = value
continue
field_info = properties[key]
field_type = field_info.get("type")
# 处理 anyOf 类型(例如 Optional[int] 会生成 anyOf
any_of = field_info.get("anyOf")
if any_of and not field_type:
# 从 anyOf 中提取实际类型
for type_option in any_of:
if "type" in type_option and type_option["type"] != "null":
field_type = type_option["type"]
break
# 根据类型进行转换
if field_type == "integer" and isinstance(value, str):
try:
normalized[key] = int(value)
except (ValueError, TypeError):
logger.warning(f"无法将参数 {key}='{value}' 转换为整数,保持原值")
normalized[key] = value
elif field_type == "number" and isinstance(value, str):
try:
normalized[key] = float(value)
except (ValueError, TypeError):
logger.warning(f"无法将参数 {key}='{value}' 转换为浮点数,保持原值")
normalized[key] = value
elif field_type == "boolean":
if isinstance(value, str):
normalized[key] = value.lower() in ("true", "1", "yes", "on")
elif isinstance(value, (int, float)):
normalized[key] = value != 0
else:
normalized[key] = bool(value)
else:
normalized[key] = value
return normalized
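The `anyOf` handling above exists because Pydantic v2 renders `Optional[int]` as `anyOf: [{"type": "integer"}, {"type": "null"}]` rather than a plain `type` key. A compact standalone sketch of the same coercion over a bare `properties` dict (the helper name is illustrative):

```python
from typing import Any, Dict

def coerce_arguments(properties: Dict[str, Any], arguments: Dict[str, Any]) -> Dict[str, Any]:
    """Coerce string values to the types a JSON Schema declares."""
    normalized = {}
    for key, value in arguments.items():
        info = properties.get(key, {})
        ftype = info.get("type")
        if not ftype:
            # Optional[X] appears as anyOf: [{'type': X}, {'type': 'null'}]
            for option in info.get("anyOf", []):
                if option.get("type") and option["type"] != "null":
                    ftype = option["type"]
                    break
        if ftype == "integer" and isinstance(value, str):
            try:
                value = int(value)
            except ValueError:
                pass  # keep the original value on a failed conversion
        elif ftype == "boolean" and isinstance(value, str):
            value = value.lower() in ("true", "1", "yes", "on")
        normalized[key] = value  # unknown keys pass through unchanged
    return normalized
```

This matters in practice because LLM tool calls frequently deliver numbers and booleans as JSON strings ("42", "true"), which would otherwise fail Pydantic validation inside the tool.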
async def call_tool(self, tool_name: str, arguments: Dict[str, Any]) -> str:
"""
调用工具
Args:
tool_name: 工具名称
arguments: 工具参数
Returns:
工具执行结果(字符串)
"""
tool_instance = self.get_tool(tool_name)
if not tool_instance:
error_msg = json.dumps({
"error": f"工具 '{tool_name}' 未找到"
}, ensure_ascii=False)
return error_msg
try:
# 规范化参数类型
normalized_arguments = self._normalize_arguments(tool_instance, arguments)
# 调用工具的run方法
result = await tool_instance.run(**normalized_arguments)
# 确保返回字符串
if isinstance(result, str):
formatted_result = result
elif isinstance(result, (int, float)):
formatted_result = str(result)
else:
try:
formatted_result = json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
logger.warning(f"结果转换为JSON失败: {e}, 使用字符串表示")
formatted_result = str(result)
return formatted_result
except Exception as e:
logger.error(f"调用工具 {tool_name} 时发生错误: {e}", exc_info=True)
error_msg = json.dumps({
"error": f"调用工具 '{tool_name}' 时发生错误: {str(e)}"
}, ensure_ascii=False)
return error_msg
@staticmethod
def _convert_to_json_schema(args_schema: Any) -> Dict[str, Any]:
"""
将Pydantic模型转换为JSON Schema
Args:
args_schema: Pydantic模型类
Returns:
JSON Schema字典
"""
# 获取Pydantic模型的字段信息
schema = args_schema.model_json_schema()
# 构建JSON Schema
properties = {}
required = []
if "properties" in schema:
for field_name, field_info in schema["properties"].items():
# 转换字段类型
field_type = field_info.get("type", "string")
field_description = field_info.get("description", "")
# 处理可选字段
if field_name not in schema.get("required", []):
# 可选字段
default_value = field_info.get("default")
properties[field_name] = {
"type": field_type,
"description": field_description
}
if default_value is not None:
properties[field_name]["default"] = default_value
else:
properties[field_name] = {
"type": field_type,
"description": field_description
}
required.append(field_name)
# 处理枚举类型
if "enum" in field_info:
properties[field_name]["enum"] = field_info["enum"]
# 处理数组类型
if field_type == "array" and "items" in field_info:
properties[field_name]["items"] = field_info["items"]
return {
"type": "object",
"properties": properties,
"required": required
}
moviepilot_tool_manager = MoviePilotToolsManager()
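The dispatch in `call_tool` can be reproduced in miniature. `EchoTool` and `MiniManager` below are illustrative stand-ins, not part of this module: look the tool up by name, await its `run()`, and JSON-encode an error when it is missing.

```python
import asyncio
import json

class EchoTool:
    # Minimal fake tool: a name plus an async run() method.
    name = "echo"
    async def run(self, text):
        return text

class MiniManager:
    def __init__(self, tools):
        self.tools = tools

    async def call_tool(self, tool_name, arguments):
        # Linear lookup by name, mirroring get_tool().
        tool = next((t for t in self.tools if t.name == tool_name), None)
        if not tool:
            return json.dumps({"error": f"工具 '{tool_name}' 未找到"}, ensure_ascii=False)
        result = await tool.run(**arguments)
        # Strings pass through; other results are JSON-encoded.
        return result if isinstance(result, str) else json.dumps(result, ensure_ascii=False)

mgr = MiniManager([EchoTool()])
print(asyncio.run(mgr.call_tool("echo", {"text": "hi"})))  # → hi
print(asyncio.run(mgr.call_tool("nope", {})))  # → {"error": "工具 'nope' 未找到"}
```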

View File

@@ -1,12 +1,13 @@
from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
from app.api.endpoints import login, user, webhook, message, site, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage, discover, recommend, workflow
transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent, mcp, mfa
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
api_router.include_router(user.router, prefix="/user", tags=["user"])
api_router.include_router(mfa.router, prefix="/mfa", tags=["mfa"])
api_router.include_router(site.router, prefix="/site", tags=["site"])
api_router.include_router(message.router, prefix="/message", tags=["message"])
api_router.include_router(webhook.router, prefix="/webhook", tags=["webhook"])
@@ -27,3 +28,5 @@ api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])
api_router.include_router(discover.router, prefix="/discover", tags=["discover"])
api_router.include_router(recommend.router, prefix="/recommend", tags=["recommend"])
api_router.include_router(workflow.router, prefix="/workflow", tags=["workflow"])
api_router.include_router(torrent.router, prefix="/torrent", tags=["torrent"])
api_router.include_router(mcp.router, prefix="/mcp", tags=["mcp"])

View File

@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
@@ -11,63 +11,63 @@ router = APIRouter()
@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.MediaPerson])
def bangumi_credits(bangumiid: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_credits(bangumiid: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi演职员表
"""
persons = BangumiChain().bangumi_credits(bangumiid)
persons = await BangumiChain().async_bangumi_credits(bangumiid)
if persons:
return persons[(page - 1) * count: page * count]
return []
@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_recommend(bangumiid: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi推荐
"""
medias = BangumiChain().bangumi_recommend(bangumiid)
medias = await BangumiChain().async_bangumi_recommend(bangumiid)
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def bangumi_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return BangumiChain().person_detail(person_id=person_id)
return await BangumiChain().async_person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def bangumi_person_credits(person_id: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_person_credits(person_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = BangumiChain().person_credits(person_id=person_id)
medias = await BangumiChain().async_person_credits(person_id=person_id)
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/{bangumiid}", summary="查询Bangumi详情", response_model=schemas.MediaInfo)
def bangumi_info(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_info(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi详情
"""
info = BangumiChain().bangumi_info(bangumiid)
info = await BangumiChain().async_bangumi_info(bangumiid)
if info:
return MediaInfo(bangumi_info=info).to_dict()
else:

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Any, List, Optional
from typing import Any, List, Optional, Annotated
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
@@ -18,7 +18,7 @@ router = APIRouter()
@router.get("/statistic", summary="媒体数量统计", response_model=schemas.Statistic)
def statistic(name: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def statistic(name: Optional[str] = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体数量统计信息
"""
@@ -37,7 +37,7 @@ def statistic(name: str = None, _: schemas.TokenPayload = Depends(verify_token))
@router.get("/statistic2", summary="媒体数量统计API_TOKEN", response_model=schemas.Statistic)
def statistic2(_: str = Depends(verify_apitoken)) -> Any:
def statistic2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询媒体数量统计信息 API_TOKEN认证?token=xxx
"""
@@ -66,7 +66,7 @@ def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/storage2", summary="本地存储空间API_TOKEN", response_model=schemas.Storage)
def storage2(_: str = Depends(verify_apitoken)) -> Any:
def storage2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询本地存储空间信息 API_TOKEN认证?token=xxx
"""
@@ -82,7 +82,7 @@ def processes(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/downloader", summary="下载器信息", response_model=schemas.DownloaderInfo)
def downloader(name: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def downloader(name: Optional[str] = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询下载器信息
"""
@@ -103,7 +103,7 @@ def downloader(name: str = None, _: schemas.TokenPayload = Depends(verify_token)
@router.get("/downloader2", summary="下载器信息API_TOKEN", response_model=schemas.DownloaderInfo)
def downloader2(_: str = Depends(verify_apitoken)) -> Any:
def downloader2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询下载器信息 API_TOKEN认证?token=xxx
"""
@@ -111,7 +111,7 @@ def downloader2(_: str = Depends(verify_apitoken)) -> Any:
@router.get("/schedule", summary="后台服务", response_model=List[schemas.ScheduleInfo])
def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询后台服务信息
"""
@@ -119,24 +119,25 @@ def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/schedule2", summary="后台服务API_TOKEN", response_model=List[schemas.ScheduleInfo])
def schedule2(_: str = Depends(verify_apitoken)) -> Any:
async def schedule2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询后台服务信息 API_TOKEN认证?token=xxx
"""
return schedule()
return await schedule()
@router.get("/transfer", summary="文件整理统计", response_model=List[int])
def transfer(days: int = 7, db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def transfer(days: Optional[int] = 7,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询文件整理统计信息
"""
transfer_stat = TransferHistory.statistic(db, days)
transfer_stat = await TransferHistory.async_statistic(db, days)
return [stat[1] for stat in transfer_stat]
@router.get("/cpu", summary="获取当前CPU使用率", response_model=int)
@router.get("/cpu", summary="获取当前CPU使用率", response_model=float)
def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前CPU使用率
@@ -144,8 +145,8 @@ def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return SystemUtils.cpu_usage()
@router.get("/cpu2", summary="获取当前CPU使用率API_TOKEN", response_model=int)
def cpu2(_: str = Depends(verify_apitoken)) -> Any:
@router.get("/cpu2", summary="获取当前CPU使用率API_TOKEN", response_model=float)
def cpu2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前CPU使用率 API_TOKEN认证?token=xxx
"""
@@ -161,8 +162,24 @@ def memory(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/memory2", summary="获取当前内存使用量和使用率API_TOKEN", response_model=List[int])
def memory2(_: str = Depends(verify_apitoken)) -> Any:
def memory2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前内存使用率 API_TOKEN认证?token=xxx
"""
return memory()
@router.get("/network", summary="获取当前网络流量", response_model=List[int])
def network(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前网络流量,上行和下行流量,单位bytes/s
"""
return SystemUtils.network_usage()
@router.get("/network2", summary="获取当前网络流量API_TOKEN", response_model=List[int])
def network2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前网络流量 API_TOKEN认证?token=xxx
"""
return network()

View File

@@ -1,15 +1,15 @@
from typing import Any, List
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas import DiscoverSourceEventData
from app.schemas.types import ChainEventType, MediaType
from chain.bangumi import BangumiChain
from chain.douban import DoubanChain
from chain.tmdb import TmdbChain
router = APIRouter()
@@ -31,100 +31,100 @@ def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/bangumi", summary="探索Bangumi", response_model=List[schemas.MediaInfo])
def bangumi(type: int = 2,
cat: int = None,
sort: str = 'rank',
year: int = None,
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi(type: Optional[int] = 2,
cat: Optional[int] = None,
sort: Optional[str] = 'rank',
year: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
探索Bangumi
"""
medias = BangumiChain().discover(type=type, cat=cat, sort=sort, year=year,
limit=count, offset=(page - 1) * count)
medias = await BangumiChain().async_discover(type=type, cat=cat, sort=sort, year=year,
limit=count, offset=(page - 1) * count)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/douban_movies", summary="探索豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
movies = await DoubanChain().async_douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@router.get("/douban_tvs", summary="探索豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
tvs = await DoubanChain().async_douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@router.get("/tmdb_movies", summary="探索TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
movies = await TmdbChain().async_tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@router.get("/tmdb_tvs", summary="探索TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
tvs = await TmdbChain().async_tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []

View File

@@ -1,4 +1,4 @@
from typing import Any, List
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
@@ -12,54 +12,54 @@ router = APIRouter()
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def douban_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return DoubanChain().person_detail(person_id=person_id)
return await DoubanChain().async_person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def douban_person_credits(person_id: int,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_person_credits(person_id: int,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = DoubanChain().person_credits(person_id=person_id, page=page)
medias = await DoubanChain().async_person_credits(person_id=person_id, page=page)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.MediaPerson])
def douban_credits(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_credits(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询演员阵容type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
return DoubanChain().movie_credits(doubanid=doubanid)
return await DoubanChain().async_movie_credits(doubanid=doubanid)
elif mediatype == MediaType.TV:
return DoubanChain().tv_credits(doubanid=doubanid)
return await DoubanChain().async_tv_credits(doubanid=doubanid)
return []
@router.get("/recommend/{doubanid}/{type_name}", summary="豆瓣推荐电影/电视剧", response_model=List[schemas.MediaInfo])
def douban_recommend(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_recommend(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询推荐电影/电视剧type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
medias = DoubanChain().movie_recommend(doubanid=doubanid)
medias = await DoubanChain().async_movie_recommend(doubanid=doubanid)
elif mediatype == MediaType.TV:
medias = DoubanChain().tv_recommend(doubanid=doubanid)
medias = await DoubanChain().async_tv_recommend(doubanid=doubanid)
else:
return []
if medias:
@@ -68,12 +68,12 @@ def douban_recommend(doubanid: str,
@router.get("/{doubanid}", summary="查询豆瓣详情", response_model=schemas.MediaInfo)
def douban_info(doubanid: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_info(doubanid: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询豆瓣媒体信息
"""
doubaninfo = DoubanChain().douban_info(doubanid=doubanid)
doubaninfo = await DoubanChain().async_douban_info(doubanid=doubanid)
if doubaninfo:
return MediaInfo(douban_info=doubaninfo).to_dict()
else:

View File

@@ -1,4 +1,4 @@
from typing import Any, List
from typing import Any, List, Annotated, Optional
from fastapi import APIRouter, Depends, Body
@@ -6,19 +6,20 @@ from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.schemas.types import SystemConfigKey
from app.schemas.types import ChainEventType, SystemConfigKey
router = APIRouter()
@router.get("/", summary="正在下载", response_model=List[schemas.DownloadingTorrent])
def current(
name: str = None,
name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询正在下载的任务
@@ -30,8 +31,8 @@ def current(
def download(
media_in: schemas.MediaInfo,
torrent_in: schemas.TorrentInfo,
downloader: str = Body(None),
save_path: str = Body(None),
downloader: Annotated[str | None, Body()] = None,
save_path: Annotated[str | None, Body()] = None,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务(含媒体信息)
@@ -40,10 +41,12 @@ def download(
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
mediainfo.from_dict(media_in.model_dump())
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
torrentinfo.from_dict(torrent_in.model_dump())
# 手动下载始终使用选择的下载器
torrentinfo.site_downloader = downloader
# 上下文
context = Context(
meta_info=metainfo,
@@ -51,7 +54,7 @@ def download(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path, source="Manual")
save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
@@ -62,8 +65,11 @@ def download(
@router.post("/add", summary="添加下载(不含媒体信息)", response_model=schemas.Response)
def add(
torrent_in: schemas.TorrentInfo,
downloader: str = Body(None),
save_path: str = Body(None),
tmdbid: Annotated[int | None, Body()] = None,
doubanid: Annotated[str | None, Body()] = None,
downloader: Annotated[str | None, Body()] = None,
# 保存路径, 支持<storage>:<path>, 如rclone:/MP, smb:/server/share/Movies等
save_path: Annotated[str | None, Body()] = None,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务(不含媒体信息)
@@ -71,18 +77,23 @@ def add(
# 元数据
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
mediainfo = MediaChain().recognize_media(meta=metainfo)
mediainfo = MediaChain().recognize_media(meta=metainfo, tmdbid=tmdbid, doubanid=doubanid)
if not mediainfo:
return schemas.Response(success=False, message="无法识别媒体信息")
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
mediainfo = MediaChain().recognize_help(title=torrent_in.title, org_meta=metainfo)
if not mediainfo:
return schemas.Response(success=False, message="无法识别媒体信息")
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
torrentinfo.from_dict(torrent_in.model_dump())
# 上下文
context = Context(
meta_info=metainfo,
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path, source="Manual")
if not did:
@@ -94,27 +105,27 @@ def add(
@router.get("/start/{hashString}", summary="开始任务", response_model=schemas.Response)
def start(
hashString: str,
hashString: str, name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
开始下载任务
"""
ret = DownloadChain().set_downloading(hashString, "start")
ret = DownloadChain().set_downloading(hashString, "start", name=name)
return schemas.Response(success=True if ret else False)
@router.get("/stop/{hashString}", summary="暂停任务", response_model=schemas.Response)
def stop(hashString: str,
def stop(hashString: str, name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
暂停下载任务
"""
ret = DownloadChain().set_downloading(hashString, "stop")
ret = DownloadChain().set_downloading(hashString, "stop", name=name)
return schemas.Response(success=True if ret else False)
@router.get("/clients", summary="查询可用下载器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询可用下载器
"""
@@ -125,10 +136,10 @@ def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def delete(hashString: str,
def delete(hashString: str, name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除下载任务
"""
ret = DownloadChain().remove_downloading(hashString)
ret = DownloadChain().remove_downloading(hashString, name=name)
return schemas.Response(success=True if ret else False)

View File

@@ -1,53 +1,54 @@
from typing import List, Any
from typing import List, Any, Optional
import jieba
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from pathlib import Path
from app import schemas
from app.chain.storage import StorageChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
from app.db import get_async_db, get_db
from app.db.models import User
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.downloadhistory import DownloadHistory, DownloadFiles
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser
from app.schemas.types import EventType, MediaType
from app.db.user_oper import get_current_active_superuser_async, get_current_active_superuser
from app.schemas.types import EventType
router = APIRouter()
@router.get("/download", summary="查询下载历史记录", response_model=List[schemas.DownloadHistory])
def download_history(page: int = 1,
count: int = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def download_history(page: Optional[int] = 1,
count: Optional[int] = 30,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询下载历史记录
"""
return DownloadHistory.list_by_page(db, page, count)
return await DownloadHistory.async_list_by_page(db, page, count)
@router.delete("/download", summary="删除下载历史记录", response_model=schemas.Response)
def delete_download_history(history_in: schemas.DownloadHistory,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def delete_download_history(history_in: schemas.DownloadHistory,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除下载历史记录
"""
DownloadHistory.delete(db, history_in.id)
await DownloadHistory.async_delete(db, history_in.id)
return schemas.Response(success=True)
@router.get("/transfer", summary="查询整理记录", response_model=schemas.Response)
def transfer_history(title: str = None,
page: int = 1,
count: int = 30,
status: bool = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def transfer_history(title: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
status: Optional[bool] = None,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询整理记录
"""
@@ -59,29 +60,28 @@ def transfer_history(title: str = None,
status = True
if title:
if settings.TOKENIZED_SEARCH:
words = jieba.cut(title, HMM=False)
title = "%".join(words)
total = TransferHistory.count_by_title(db, title=title, status=status)
result = TransferHistory.list_by_title(db, title=title, page=page,
count=count, status=status)
words = jieba.cut(title, HMM=False)
title = "%".join(words)
total = await TransferHistory.async_count_by_title(db, title=title, status=status)
result = await TransferHistory.async_list_by_title(db, title=title, page=page,
count=count, status=status)
else:
result = TransferHistory.list_by_page(db, page=page, count=count, status=status)
total = TransferHistory.count(db, status=status)
result = await TransferHistory.async_list_by_page(db, page=page, count=count, status=status)
total = await TransferHistory.async_count(db, status=status)
return schemas.Response(success=True,
data={
"list": result,
"list": [item.to_dict() for item in result],
"total": total,
})
@router.delete("/transfer", summary="删除整理记录", response_model=schemas.Response)
def delete_transfer_history(history_in: schemas.TransferHistory,
deletesrc: bool = False,
deletedest: bool = False,
deletesrc: Optional[bool] = False,
deletedest: Optional[bool] = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
删除整理记录
"""
@@ -91,7 +91,7 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
# 册除媒体库文件
if deletedest and history.dest_fileitem:
dest_fileitem = schemas.FileItem(**history.dest_fileitem)
StorageChain().delete_media_file(fileitem=dest_fileitem, mtype=MediaType(history.type))
StorageChain().delete_media_file(dest_fileitem)
# 删除源文件
if deletesrc and history.src_fileitem:
@@ -99,6 +99,8 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
state = StorageChain().delete_media_file(src_fileitem)
if not state:
return schemas.Response(success=False, message=f"{src_fileitem.path} 删除失败")
# 删除下载记录中关联的文件
DownloadFiles.delete_by_fullpath(db, Path(src_fileitem.path).as_posix())
# 发送事件
eventmanager.send_event(
EventType.DownloadFileDeleted,
@@ -113,10 +115,10 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
@router.get("/empty/transfer", summary="清空整理记录", response_model=schemas.Response)
def delete_transfer_history(db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
async def empty_transfer_history(db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
清空整理记录
"""
TransferHistory.truncate(db)
await TransferHistory.async_truncate(db)
return schemas.Response(success=True)

View File

@@ -1,25 +1,25 @@
from datetime import timedelta
from typing import Any, List
from typing import Any, List, Annotated
from fastapi import APIRouter, Depends, Form, HTTPException
from fastapi.security import OAuth2PasswordRequestForm
from app import schemas
from app.chain.tmdb import TmdbChain
from app.chain.user import UserChain
from app.chain.mediaserver import MediaServerChain
from app.core import security
from app.core.config import settings
from app.helper.sites import SitesHelper
from app.utils.web import WebUtils
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper # noqa
from app.helper.image import WallpaperHelper
from app.schemas.types import SystemConfigKey
router = APIRouter()
@router.post("/access-token", summary="获取token", response_model=schemas.Token)
def login_access_token(
form_data: OAuth2PasswordRequestForm = Depends(),
otp_password: str = Form(None)
form_data: Annotated[OAuth2PasswordRequestForm, Depends()],
otp_password: Annotated[str | None, Form()] = None
) -> Any:
"""
获取认证Token
@@ -29,9 +29,19 @@ def login_access_token(
mfa_code=otp_password)
if not success:
raise HTTPException(status_code=401, detail=user_or_message)
# 如果是需要MFA验证返回特殊标识
if user_or_message == "MFA_REQUIRED":
raise HTTPException(
status_code=401,
detail="需要双重验证,请提供验证码或使用通行密钥",
headers={"X-MFA-Required": "true"}
)
raise HTTPException(status_code=401, detail="用户名或密码错误")
# User auth level
level = SitesHelper().auth_level
# Whether to show the setup wizard
show_wizard = not SystemConfigOper().get(SystemConfigKey.SetupWizardState) and not settings.ADVANCED_MODE
return schemas.Token(
access_token=security.create_access_token(
userid=user_or_message.id,
@@ -45,7 +55,9 @@ def login_access_token(
user_id=user_or_message.id,
user_name=user_or_message.name,
avatar=user_or_message.avatar,
level=level
level=level,
permissions=user_or_message.permissions or {},
wizard=show_wizard
)
@@ -54,12 +66,7 @@ def wallpaper() -> Any:
"""
Get a movie poster for the login page
"""
if settings.WALLPAPER == "bing":
url = WebUtils.get_bing_wallpaper()
elif settings.WALLPAPER == "mediaserver":
url = MediaServerChain().get_latest_wallpaper()
else:
url = TmdbChain().get_random_wallpager()
url = WallpaperHelper().get_wallpaper()
if url:
return schemas.Response(
success=True,
@@ -73,9 +80,4 @@ def wallpapers() -> Any:
"""
Get movie posters for the login page
"""
if settings.WALLPAPER == "bing":
return WebUtils.get_bing_wallpapers()
elif settings.WALLPAPER == "mediaserver":
return MediaServerChain().get_latest_wallpapers()
else:
return TmdbChain().get_trending_wallpapers()
return WallpaperHelper().get_wallpapers()

app/api/endpoints/mcp.py

@@ -0,0 +1,353 @@
from typing import List, Any, Dict, Annotated, Union
from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import JSONResponse, Response
from app import schemas
from app.agent.tools.manager import moviepilot_tool_manager
from app.core.security import verify_apikey
from app.log import logger
# Import the application version
try:
from version import APP_VERSION
except ImportError:
APP_VERSION = "unknown"
router = APIRouter()
# Supported MCP protocol versions
MCP_PROTOCOL_VERSIONS = ["2025-11-25", "2025-06-18", "2024-11-05"]
MCP_PROTOCOL_VERSION = MCP_PROTOCOL_VERSIONS[0]  # Default to the newest version
def create_jsonrpc_response(request_id: Union[str, int, None], result: Any) -> Dict[str, Any]:
"""
Build a JSON-RPC success response
"""
response = {
"jsonrpc": "2.0",
"id": request_id,
"result": result
}
return response
def create_jsonrpc_error(request_id: Union[str, int, None], code: int, message: str, data: Any = None) -> Dict[
str, Any]:
"""
Build a JSON-RPC error response
"""
error = {
"jsonrpc": "2.0",
"id": request_id,
"error": {
"code": code,
"message": message
}
}
if data is not None:
error["error"]["data"] = data
return error
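The two helpers above wrap results and errors in standard JSON-RPC 2.0 envelopes. A minimal standalone sketch of the shapes they produce (mirroring the helpers, not importing them):

```python
def jsonrpc_response(request_id, result):
    # Mirrors create_jsonrpc_response: a JSON-RPC 2.0 success envelope
    return {"jsonrpc": "2.0", "id": request_id, "result": result}

def jsonrpc_error(request_id, code, message, data=None):
    # Mirrors create_jsonrpc_error: an error envelope with optional data
    err = {"jsonrpc": "2.0", "id": request_id,
           "error": {"code": code, "message": message}}
    if data is not None:
        err["error"]["data"] = data
    return err
```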
@router.post("", summary="MCP JSON-RPC 端点", response_model=None)
async def mcp_jsonrpc(
request: Request,
_: Annotated[str, Depends(verify_apikey)] = None
) -> Union[JSONResponse, Response]:
"""
Standard MCP JSON-RPC 2.0 endpoint
Handles all MCP protocol messages (initialize, tool listing, tool calls, etc.)
"""
try:
body = await request.json()
except Exception as e:
logger.error(f"解析请求体失败: {e}")
return JSONResponse(
status_code=400,
content=create_jsonrpc_error(None, -32700, "Parse error", str(e))
)
# Validate the JSON-RPC envelope
if not isinstance(body, dict) or body.get("jsonrpc") != "2.0":
return JSONResponse(
status_code=400,
content=create_jsonrpc_error(body.get("id"), -32600, "Invalid Request")
)
method = body.get("method")
params = body.get("params", {})
request_id = body.get("id")
# A message with an id is a request; one without an id is a notification
is_notification = request_id is None
try:
# Handle the initialize request
if method == "initialize":
result = await handle_initialize(params)
return JSONResponse(content=create_jsonrpc_response(request_id, result))
# Handle the initialized notification
elif method == "notifications/initialized":
if is_notification:
return Response(status_code=204)
else:
return JSONResponse(
status_code=400,
content={"error": "initialized must be a notification"}
)
# Handle the tools/list request
elif method == "tools/list":
result = await handle_tools_list()
return JSONResponse(content=create_jsonrpc_response(request_id, result))
# Handle the tools/call request
elif method == "tools/call":
result = await handle_tools_call(params)
return JSONResponse(content=create_jsonrpc_response(request_id, result))
# Handle the ping request
elif method == "ping":
return JSONResponse(content=create_jsonrpc_response(request_id, {}))
# Unknown method
else:
return JSONResponse(
content=create_jsonrpc_error(request_id, -32601, f"Method not found: {method}")
)
except ValueError as e:
logger.warning(f"MCP 请求参数错误: {e}")
return JSONResponse(
status_code=400,
content=create_jsonrpc_error(request_id, -32602, "Invalid params", str(e))
)
except Exception as e:
logger.error(f"处理 MCP 请求失败: {e}", exc_info=True)
return JSONResponse(
status_code=500,
content=create_jsonrpc_error(request_id, -32603, "Internal error", str(e))
)
async def handle_initialize(params: Dict[str, Any]) -> Dict[str, Any]:
"""
Handle the initialize request
"""
protocol_version = params.get("protocolVersion")
client_info = params.get("clientInfo", {})
logger.info(f"MCP 初始化请求: 客户端={client_info.get('name')}, 协议版本={protocol_version}")
# Version negotiation: pick a version both client and server support
negotiated_version = MCP_PROTOCOL_VERSION
if protocol_version in MCP_PROTOCOL_VERSIONS:
# The client's version is in the supported list; use it
negotiated_version = protocol_version
logger.info(f"使用客户端协议版本: {negotiated_version}")
else:
# Unsupported client version; fall back to the server default
logger.warning(f"协议版本不匹配: 客户端={protocol_version}, 使用服务器版本={negotiated_version}")
return {
"protocolVersion": negotiated_version,
"capabilities": {
"tools": {
"listChanged": False # 暂不支持工具列表变更通知
},
"logging": {}
},
"serverInfo": {
"name": "MoviePilot",
"version": APP_VERSION,
"description": "MoviePilot MCP Server - 电影自动化管理工具",
},
"instructions": "MoviePilot MCP 服务器,提供媒体管理、订阅、下载等工具。"
}
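The negotiation rule in handle_initialize boils down to a few lines; a standalone sketch, with the version strings copied from MCP_PROTOCOL_VERSIONS above:

```python
SUPPORTED_VERSIONS = ["2025-11-25", "2025-06-18", "2024-11-05"]  # newest first

def negotiate_protocol_version(client_version):
    # Use the client's version when the server supports it;
    # otherwise fall back to the server's newest version
    if client_version in SUPPORTED_VERSIONS:
        return client_version
    return SUPPORTED_VERSIONS[0]
```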
async def handle_tools_list() -> Dict[str, Any]:
"""
Handle the tools/list request
"""
tools = moviepilot_tool_manager.list_tools()
# Convert to the MCP tool format
mcp_tools = []
for tool in tools:
mcp_tool = {
"name": tool.name,
"description": tool.description,
"inputSchema": tool.input_schema
}
mcp_tools.append(mcp_tool)
return {
"tools": mcp_tools
}
async def handle_tools_call(params: Dict[str, Any]) -> Dict[str, Any]:
"""
Handle the tools/call request
"""
tool_name = params.get("name")
arguments = params.get("arguments", {})
if not tool_name:
raise ValueError("Missing tool name")
try:
result_text = await moviepilot_tool_manager.call_tool(tool_name, arguments)
return {
"content": [
{
"type": "text",
"text": result_text
}
]
}
except Exception as e:
logger.error(f"工具调用失败: {tool_name}, 错误: {e}", exc_info=True)
return {
"content": [
{
"type": "text",
"text": f"错误: {str(e)}"
}
],
"isError": True
}
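Both branches of handle_tools_call return the MCP tool-result shape: a list of typed content parts, with isError set on failure. A minimal sketch of that envelope:

```python
def tool_result(text, is_error=False):
    # MCP tool-call result: a list of typed content parts; errors set isError
    result = {"content": [{"type": "text", "text": text}]}
    if is_error:
        result["isError"] = True
    return result
```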
@router.delete("", summary="终止 MCP 会话", response_model=None)
async def delete_mcp_session(
_: Annotated[str, Depends(verify_apikey)] = None
) -> Union[JSONResponse, Response]:
"""
Terminate the MCP session (stateless mode: just return success)
"""
return Response(status_code=204)
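A client drives this endpoint with plain JSON-RPC request bodies. A sketch of a tools/call request; the tool name "search_media" and its arguments are hypothetical (real names come from a tools/list response):

```python
import json

# Hypothetical tools/call request body; "search_media" is illustrative only
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_media", "arguments": {"title": "Inception"}},
}
payload = json.dumps(request_body)
```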
# ==================== Compatible RESTful API endpoints ====================
@router.get("/tools", summary="列出所有可用工具", response_model=List[Dict[str, Any]])
async def list_tools(
_: Annotated[str, Depends(verify_apikey)]
) -> Any:
"""
Get the list of all available tools
Returns each tool's name, description, and parameter definitions
"""
try:
# Fetch all tool definitions
tools = moviepilot_tool_manager.list_tools()
# Convert to dict format
tools_list = []
for tool in tools:
tool_dict = {
"name": tool.name,
"description": tool.description,
"inputSchema": tool.input_schema
}
tools_list.append(tool_dict)
return tools_list
except Exception as e:
logger.error(f"获取工具列表失败: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=f"获取工具列表失败: {str(e)}")
@router.post("/tools/call", summary="调用工具", response_model=schemas.ToolCallResponse)
async def call_tool(
request: schemas.ToolCallRequest,
_: Annotated[str, Depends(verify_apikey)] = None
) -> Any:
"""
Call the specified tool
Returns:
工具执行结果
"""
try:
# Call the tool
result_text = await moviepilot_tool_manager.call_tool(request.tool_name, request.arguments)
return schemas.ToolCallResponse(
success=True,
result=result_text
)
except Exception as e:
logger.error(f"调用工具 {request.tool_name} 失败: {e}", exc_info=True)
return schemas.ToolCallResponse(
success=False,
error=f"调用工具失败: {str(e)}"
)
@router.get("/tools/{tool_name}", summary="获取工具详情", response_model=Dict[str, Any])
async def get_tool_info(
tool_name: str,
_: Annotated[str, Depends(verify_apikey)]
) -> Any:
"""
Get detailed info for the specified tool
Returns:
The tool's details, including name, description, and parameter definitions
"""
try:
# Fetch all tools
tools = moviepilot_tool_manager.list_tools()
# Find the specified tool
for tool in tools:
if tool.name == tool_name:
return {
"name": tool.name,
"description": tool.description,
"inputSchema": tool.input_schema
}
raise HTTPException(status_code=404, detail=f"工具 '{tool_name}' 未找到")
except HTTPException:
raise
except Exception as e:
logger.error(f"获取工具信息失败: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=f"获取工具信息失败: {str(e)}")
@router.get("/tools/{tool_name}/schema", summary="获取工具参数Schema", response_model=Dict[str, Any])
async def get_tool_schema(
tool_name: str,
_: Annotated[str, Depends(verify_apikey)]
) -> Any:
"""
Get the specified tool's parameter schema (JSON Schema format)
Returns:
The tool's JSON Schema definition
"""
try:
# Fetch all tools
tools = moviepilot_tool_manager.list_tools()
# Find the specified tool
for tool in tools:
if tool.name == tool_name:
return tool.input_schema
raise HTTPException(status_code=404, detail=f"工具 '{tool_name}' 未找到")
except HTTPException:
raise
except Exception as e:
logger.error(f"获取工具Schema失败: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=f"获取工具Schema失败: {str(e)}")


@@ -1,5 +1,5 @@
from pathlib import Path
from typing import List, Any, Union
from typing import List, Any, Union, Annotated, Optional
from fastapi import APIRouter, Depends
@@ -11,67 +11,71 @@ from app.core.context import Context
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.db.models import User
from app.db.user_oper import get_current_active_user, get_current_active_superuser
from app.schemas import MediaType, MediaRecognizeConvertEventData
from app.schemas.category import CategoryConfig
from app.schemas.types import ChainEventType
router = APIRouter()
@router.get("/recognize", summary="识别媒体信息(种子)", response_model=schemas.Context)
def recognize(title: str,
subtitle: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def recognize(title: str,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Recognize media info from title and subtitle
"""
# Recognize media info
metainfo = MetaInfo(title, subtitle)
mediainfo = MediaChain().recognize_by_meta(metainfo)
mediainfo = await MediaChain().async_recognize_by_meta(metainfo)
if mediainfo:
return Context(meta_info=metainfo, media_info=mediainfo).to_dict()
return schemas.Context()
@router.get("/recognize2", summary="识别种子媒体信息API_TOKEN", response_model=schemas.Context)
def recognize2(title: str,
subtitle: str = None,
_: str = Depends(verify_apitoken)) -> Any:
async def recognize2(_: Annotated[str, Depends(verify_apitoken)],
title: str,
subtitle: Optional[str] = None
) -> Any:
"""
Recognize media info from title and subtitle (API_TOKEN auth: ?token=xxx)
"""
# Recognize media info
return recognize(title, subtitle)
return await recognize(title, subtitle)
@router.get("/recognize_file", summary="识别媒体信息(文件)", response_model=schemas.Context)
def recognize_file(path: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def recognize_file(path: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Recognize media info from a file path
"""
# Recognize media info
context = MediaChain().recognize_by_path(path)
context = await MediaChain().async_recognize_by_path(path)
if context:
return context.to_dict()
return schemas.Context()
@router.get("/recognize_file2", summary="识别文件媒体信息API_TOKEN", response_model=schemas.Context)
def recognize_file2(path: str,
_: str = Depends(verify_apitoken)) -> Any:
async def recognize_file2(path: str,
_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
Recognize media info from a file path (API_TOKEN auth: ?token=xxx)
"""
# Recognize media info
return recognize_file(path)
return await recognize_file(path)
@router.get("/search", summary="搜索媒体/人物信息", response_model=List[dict])
def search(title: str,
type: str = "media",
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search(title: str,
type: Optional[str] = "media",
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Fuzzy search the media/person info list (media: media info, person: person info)
"""
@@ -84,28 +88,31 @@ def search(title: str,
return obj.get("source")
return obj.source
result = []
media_chain = MediaChain()
if type == "media":
_, medias = MediaChain().search(title=title)
if medias:
result = [media.to_dict() for media in medias]
_, medias = await media_chain.async_search(title=title)
result = [media.to_dict() for media in medias] if medias else []
elif type == "collection":
result = MediaChain().search_collections(name=title)
else:
result = MediaChain().search_persons(name=title)
if result:
# Sort results by the configured source order
setting_order = settings.SEARCH_SOURCE.split(',') or []
sort_order = {}
for index, source in enumerate(setting_order):
sort_order[source] = index
result = sorted(result, key=lambda x: sort_order.get(__get_source(x), 4))
return result[(page - 1) * count:page * count]
collections = await media_chain.async_search_collections(name=title)
result = [collection.to_dict() for collection in collections] if collections else []
else: # person
persons = await media_chain.async_search_persons(name=title)
result = [person.model_dump() for person in persons] if persons else []
if not result:
return []
# Sort and paginate
setting_order = settings.SEARCH_SOURCE.split(',') if settings.SEARCH_SOURCE else []
sort_order = {source: index for index, source in enumerate(setting_order)}
sorted_result = sorted(result, key=lambda x: sort_order.get(__get_source(x), 4))
return sorted_result[(page - 1) * count:page * count]
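The ordering rule in the rewritten search above (configured sources first, unknown sources ranked 4) can be isolated into a few lines; a minimal sketch with hypothetical source names:

```python
def order_by_source(results, setting_order):
    # Sources listed earlier in settings rank first; unknown sources get rank 4
    rank = {source: index for index, source in enumerate(setting_order)}
    return sorted(results, key=lambda item: rank.get(item["source"], 4))

items = [{"source": "douban"}, {"source": "themoviedb"}, {"source": "unknown"}]
ordered = order_by_source(items, ["themoviedb", "douban"])
```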
@router.post("/scrape/{storage}", summary="刮削媒体信息", response_model=schemas.Response)
def scrape(fileitem: schemas.FileItem,
storage: str = "local",
storage: Optional[str] = "local",
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Scrape media metadata
@@ -122,32 +129,71 @@ def scrape(fileitem: schemas.FileItem,
if storage == "local":
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
# Manual scrape
# Manual scrape (sync for now; can be made async later)
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo, overwrite=True)
return schemas.Response(success=True, message=f"{fileitem.path} 刮削完成")
@router.get("/category/config", summary="获取分类策略配置", response_model=schemas.Response)
def get_category_config(_: User = Depends(get_current_active_user)):
"""
Get the category strategy configuration
"""
config = MediaChain().category_config()
return schemas.Response(success=True, data=config.model_dump())
@router.post("/category/config", summary="保存分类策略配置", response_model=schemas.Response)
def save_category_config(config: CategoryConfig, _: User = Depends(get_current_active_superuser)):
"""
Save the category strategy configuration
"""
if MediaChain().save_category_config(config):
return schemas.Response(success=True, message="保存成功")
else:
return schemas.Response(success=False, message="保存失败")
@router.get("/category", summary="查询自动分类配置", response_model=dict)
def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query the automatic category configuration
"""
return MediaChain().media_category() or {}
@router.get("/group/seasons/{episode_group}", summary="查询剧集组季信息", response_model=List[schemas.MediaSeason])
async def group_seasons(episode_group: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query episode-group season info (themoviedb)
"""
return await TmdbChain().async_tmdb_group_seasons(group_id=episode_group)
@router.get("/groups/{tmdbid}", summary="查询媒体剧集组", response_model=List[dict])
async def groups(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query the media's episode group list (themoviedb)
"""
mediainfo = await MediaChain().async_recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
if not mediainfo:
return []
return mediainfo.episode_groups
@router.get("/seasons", summary="查询媒体季信息", response_model=List[schemas.MediaSeason])
def seasons(mediaid: str = None,
title: str = None,
year: int = None,
season: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def seasons(mediaid: Optional[str] = None,
title: Optional[str] = None,
year: Optional[str] = None,
season: Optional[int] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query media season info
"""
if mediaid:
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
seasons_info = await TmdbChain().async_tmdb_seasons(tmdbid=tmdbid)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
@@ -156,17 +202,17 @@ def seasons(mediaid: str = None,
meta = MetaInfo(title)
if year:
meta.year = year
mediainfo = MediaChain().recognize_media(meta, mtype=MediaType.TV)
mediainfo = await MediaChain().async_recognize_media(meta, mtype=MediaType.TV)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
seasons_info = TmdbChain().tmdb_seasons(tmdbid=mediainfo.tmdb_id)
seasons_info = await TmdbChain().async_tmdb_seasons(tmdbid=mediainfo.tmdb_id)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
return seasons_info
else:
sea = season or 1
return schemas.MediaSeason(
return [schemas.MediaSeason(
season_number=sea,
poster_path=mediainfo.poster_path,
name=f"{sea}",
@@ -174,40 +220,40 @@ def seasons(mediaid: str = None,
overview=mediainfo.overview,
vote_average=mediainfo.vote_average,
episode_count=mediainfo.number_of_episodes
)
)]
return []
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
def detail(mediaid: str, type_name: str, title: str = None, year: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query themoviedb or Douban media info by media ID (type_name: 电影/电视剧)
"""
mtype = MediaType(type_name)
mediainfo = None
mediachain = MediaChain()
if mediaid.startswith("tmdb:"):
mediainfo = MediaChain().recognize_media(tmdbid=int(mediaid[5:]), mtype=mtype)
mediainfo = await mediachain.async_recognize_media(tmdbid=int(mediaid[5:]), mtype=mtype)
elif mediaid.startswith("douban:"):
mediainfo = MediaChain().recognize_media(doubanid=mediaid[7:], mtype=mtype)
mediainfo = await mediachain.async_recognize_media(doubanid=mediaid[7:], mtype=mtype)
elif mediaid.startswith("bangumi:"):
mediainfo = MediaChain().recognize_media(bangumiid=int(mediaid[8:]), mtype=mtype)
mediainfo = await mediachain.async_recognize_media(bangumiid=int(mediaid[8:]), mtype=mtype)
else:
# Broadcast an event to resolve the media info
event_data = MediaRecognizeConvertEventData(
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
event = await eventmanager.async_send_event(ChainEventType.MediaRecognizeConvert, event_data)
# Use the context data returned by the event
if event and event.event_data:
if event and event.event_data and event.event_data.media_dict:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = MediaChain().recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = MediaChain().recognize_media(doubanid=new_id, mtype=mtype)
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = await mediachain.async_recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = await mediachain.async_recognize_media(doubanid=new_id, mtype=mtype)
elif title:
# Fall back to recognition by title
meta = MetaInfo(title)
@@ -215,10 +261,10 @@ def detail(mediaid: str, type_name: str, title: str = None, year: int = None,
meta.year = year
if mtype:
meta.type = mtype
mediainfo = MediaChain().recognize_media(meta=meta)
mediainfo = await mediachain.async_recognize_media(meta=meta)
# Fetch images for the recognized media
if mediainfo:
MediaChain().obtain_images(mediainfo)
await mediachain.async_obtain_images(mediainfo)
return mediainfo.to_dict()
return schemas.MediaInfo()


@@ -1,7 +1,7 @@
from typing import Any, List, Dict
from typing import Any, List, Dict, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from sqlalchemy.ext.asyncio import AsyncSession
from app import schemas
from app.chain.download import DownloadChain
@@ -9,7 +9,7 @@ from app.chain.mediaserver import MediaServerChain
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
from app.db import get_async_db
from app.db.mediaserver_oper import MediaServerOper
from app.db.models import MediaServerItem
from app.db.systemconfig_oper import SystemConfigOper
@@ -43,13 +43,13 @@ def play_item(itemid: str, _: schemas.TokenPayload = Depends(verify_token)) -> s
@router.get("/exists", summary="查询本地是否存在(数据库)", response_model=schemas.Response)
def exists_local(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def exists_local(title: Optional[str] = None,
year: Optional[str] = None,
mtype: Optional[str] = None,
tmdbid: Optional[int] = None,
season: Optional[int] = None,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Check whether it exists locally
"""
@@ -59,7 +59,7 @@ def exists_local(title: str = None,
# Return object
ret_info = {}
# Whether it exists in the local database
exist: MediaServerItem = MediaServerOper(db).exists(
exist: MediaServerItem = await MediaServerOper(db).async_exists(
title=meta.name, year=year, mtype=mtype, tmdbid=tmdbid, season=season
)
if exist:
@@ -79,10 +79,10 @@ def exists(media_in: schemas.MediaInfo,
"""
# Convert to a MediaInfo object
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
mediainfo.from_dict(media_in.model_dump())
existsinfo: schemas.ExistMediaInfo = MediaServerChain().media_exists(mediainfo=mediainfo)
if not existsinfo:
return []
return {}
if media_in.season:
return {
media_in.season: existsinfo.seasons.get(media_in.season) or []
@@ -108,7 +108,7 @@ def not_exists(media_in: schemas.MediaInfo,
meta.year = media_in.year
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
mediainfo.from_dict(media_in.model_dump())
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if mediainfo.type == MediaType.MOVIE:
@@ -121,7 +121,7 @@ def not_exists(media_in: schemas.MediaInfo,
@router.get("/latest", summary="最新入库条目", response_model=List[schemas.MediaServerPlayItem])
def latest(server: str, count: int = 18,
def latest(server: str, count: Optional[int] = 20,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Get the media server's latest library additions
@@ -130,7 +130,7 @@ def latest(server: str, count: int = 18,
@router.get("/playing", summary="正在播放条目", response_model=List[schemas.MediaServerPlayItem])
def playing(server: str, count: int = 12,
def playing(server: str, count: Optional[int] = 12,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Get the media server's now-playing items
@@ -139,7 +139,7 @@ def playing(server: str, count: int = 12,
@router.get("/library", summary="媒体库列表", response_model=List[schemas.MediaServerLibrary])
def library(server: str, hidden: bool = False,
def library(server: str, hidden: Optional[bool] = False,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Get the media server's library list
@@ -148,7 +148,7 @@ def library(server: str, hidden: bool = False,
@router.get("/clients", summary="查询可用媒体服务器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query available media servers
"""


@@ -1,16 +1,16 @@
import json
from typing import Union, Any, List
from typing import Union, Any, List, Optional
from fastapi import APIRouter, BackgroundTasks, Depends, Request
from pywebpush import WebPushException, webpush
from sqlalchemy.orm import Session
from sqlalchemy.ext.asyncio import AsyncSession
from starlette.responses import PlainTextResponse
from app import schemas
from app.chain.message import MessageChain
from app.core.config import settings, global_vars
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db import get_async_db
from app.db.models import User
from app.db.models.message import Message
from app.db.user_oper import get_current_active_superuser
@@ -58,15 +58,15 @@ def web_message(text: str, current_user: User = Depends(get_current_active_super
@router.get("/web", summary="获取WEB消息", response_model=List[dict])
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: Session = Depends(get_db),
page: int = 1,
count: int = 20):
async def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: AsyncSession = Depends(get_async_db),
page: Optional[int] = 1,
count: Optional[int] = 20):
"""
Get the web message list
"""
ret_messages = []
messages = Message.list_by_page(db, page=page, count=count)
messages = await Message.async_list_by_page(db, page=page, count=count)
for message in messages:
try:
ret_messages.append(message.to_dict())
@@ -77,7 +77,7 @@ def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
def wechat_verify(echostr: str, msg_signature: str, timestamp: Union[str, int], nonce: str,
source: str = None) -> Any:
source: Optional[str] = None) -> Any:
"""
WeChat verification response
"""
@@ -114,8 +114,8 @@ def vocechat_verify() -> Any:
@router.get("/", summary="回调请求验证")
def incoming_verify(token: str = None, echostr: str = None, msg_signature: str = None,
timestamp: Union[str, int] = None, nonce: str = None, source: str = None,
def incoming_verify(token: Optional[str] = None, echostr: Optional[str] = None, msg_signature: Optional[str] = None,
timestamp: Optional[Union[str, int]] = None, nonce: Optional[str] = None, source: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_apitoken)) -> Any:
"""
Verification response for WeChat/VoceChat etc.
@@ -128,11 +128,11 @@ def incoming_verify(token: str = None, echostr: str = None, msg_signature: str =
@router.post("/webpush/subscribe", summary="客户端webpush通知订阅", response_model=schemas.Response)
def subscribe(subscription: schemas.Subscription, _: schemas.TokenPayload = Depends(verify_token)):
async def subscribe(subscription: schemas.Subscription, _: schemas.TokenPayload = Depends(verify_token)):
"""
Client webpush notification subscription
"""
subinfo = subscription.dict()
subinfo = subscription.model_dump()
if subinfo not in global_vars.get_subscriptions():
global_vars.push_subscription(subinfo)
logger.debug(f"通知订阅成功: {subinfo}")
@@ -148,7 +148,7 @@ def send_notification(payload: schemas.SubscriptionMessage, _: schemas.TokenPayl
try:
webpush(
subscription_info=sub,
data=json.dumps(payload.dict()),
data=json.dumps(payload.model_dump()),
vapid_private_key=settings.VAPID.get("privateKey"),
vapid_claims={
"sub": settings.VAPID.get("subject")

app/api/endpoints/mfa.py

@@ -0,0 +1,498 @@
"""
MFA (Multi-Factor Authentication) API endpoints
Includes OTP and PassKey functionality
"""
from datetime import timedelta
from typing import Any, Annotated, Optional
from app.helper.sites import SitesHelper
from fastapi import APIRouter, Depends, HTTPException, Body
from sqlalchemy.ext.asyncio import AsyncSession
from app import schemas
from app.core import security
from app.core.config import settings
from app.db import get_async_db
from app.db.models.passkey import PassKey
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user, get_current_active_user_async
from app.helper.passkey import PassKeyHelper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.otp import OtpUtils
router = APIRouter()
# ==================== Helper functions ====================
def _build_credential_list(passkeys: list[PassKey]) -> list[dict[str, Any]]:
"""
Build a credential list
:param passkeys: list of PassKey objects
:return: list of credential dicts
"""
return [
{
'credential_id': pk.credential_id,
'transports': pk.transports
}
for pk in passkeys
] if passkeys else []
def _extract_and_standardize_credential_id(credential: dict) -> str:
"""
Extract and standardize the credential_id from a credential
:param credential: credential dict
:return: the standardized credential_id
:raises ValueError: if the credential is invalid
"""
credential_id_raw = credential.get('id') or credential.get('rawId')
if not credential_id_raw:
raise ValueError("无效的凭证")
return PassKeyHelper.standardize_credential_id(credential_id_raw)
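PassKeyHelper.standardize_credential_id is defined elsewhere; for WebAuthn credentials, standardization typically involves normalizing the base64url form of the id. A hypothetical sketch of what such normalization can look like (not the project's actual implementation):

```python
import base64

def b64url_decode(value: str) -> bytes:
    # WebAuthn clients commonly strip '=' padding; restore it before decoding.
    # Illustrative only -- the real PassKeyHelper logic may differ.
    padded = value + "=" * (-len(value) % 4)
    return base64.urlsafe_b64decode(padded)
```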
def _verify_passkey_and_update(
credential: dict,
challenge: str,
passkey: PassKey
) -> tuple[bool, int]:
"""
Verify the PassKey and update its last-used time and sign count
:param credential: credential dict
:param challenge: challenge value
:param passkey: PassKey object
:return: (whether verification succeeded, new sign count)
"""
success, new_sign_count = PassKeyHelper.verify_authentication_response(
credential=credential,
expected_challenge=challenge,
credential_public_key=passkey.public_key,
credential_current_sign_count=passkey.sign_count
)
if success:
passkey.update_last_used(db=None, sign_count=new_sign_count)
return success, new_sign_count
async def _check_user_has_passkey(db: AsyncSession, user_id: int) -> bool:
"""
Check whether the user has any PassKey
:param db: database session
:param user_id: user ID
:return: whether the user has a PassKey
"""
return bool(await PassKey.async_get_by_user_id(db=db, user_id=user_id))
# ==================== Request models ====================
class OtpVerifyRequest(schemas.BaseModel):
"""OTP验证请求"""
uri: str
otpPassword: str
class OtpDisableRequest(schemas.BaseModel):
"""OTP禁用请求"""
password: str
class PassKeyDeleteRequest(schemas.BaseModel):
"""PassKey删除请求"""
passkey_id: int
password: str
# ==================== Common MFA endpoints ====================
@router.get('/status/{username}', summary='判断用户是否开启双重验证(MFA)', response_model=schemas.Response)
async def mfa_status(username: str, db: AsyncSession = Depends(get_async_db)) -> Any:
"""
Check whether the given user has any MFA method enabled (OTP or PassKey)
"""
user: User = await User.async_get_by_name(db, username)
if not user:
return schemas.Response(success=False)
# Whether OTP is enabled
has_otp = user.is_otp
# Whether any PassKey exists
has_passkey = await _check_user_has_passkey(db, user.id)
# Any enabled method means MFA is required
return schemas.Response(success=(has_otp or has_passkey))
# ==================== OTP endpoints ====================
@router.post('/otp/generate', summary='生成 OTP 验证 URI', response_model=schemas.Response)
def otp_generate(
current_user: Annotated[User, Depends(get_current_active_user)]
) -> Any:
"""生成 OTP 密钥及对应的 URI"""
secret, uri = OtpUtils.generate_secret_key(current_user.name)
return schemas.Response(success=secret != "", data={'secret': secret, 'uri': uri})
@router.post('/otp/verify', summary='绑定并验证 OTP', response_model=schemas.Response)
async def otp_verify(
data: OtpVerifyRequest,
db: AsyncSession = Depends(get_async_db),
current_user: User = Depends(get_current_active_user_async)
) -> Any:
"""验证用户输入的 OTP 码,验证通过后正式开启 OTP 验证"""
if not OtpUtils.is_legal(data.uri, data.otpPassword):
return schemas.Response(success=False, message="验证码错误")
await current_user.async_update_otp_by_name(db, current_user.name, True, OtpUtils.get_secret(data.uri))
return schemas.Response(success=True)
@router.post('/otp/disable', summary='关闭当前用户的 OTP 验证', response_model=schemas.Response)
async def otp_disable(
data: OtpDisableRequest,
db: AsyncSession = Depends(get_async_db),
current_user: User = Depends(get_current_active_user_async)
) -> Any:
"""关闭当前用户的 OTP 验证功能"""
# 安全检查:如果存在 PassKey默认不允许关闭 OTP除非配置允许
has_passkey = await _check_user_has_passkey(db, current_user.id)
if has_passkey and not settings.PASSKEY_ALLOW_REGISTER_WITHOUT_OTP:
return schemas.Response(
success=False,
message="您已注册通行密钥,为了防止域名配置变更导致无法登录,请先删除所有通行密钥再关闭 OTP 验证"
)
# Verify the password
if not security.verify_password(data.password, str(current_user.hashed_password)):
return schemas.Response(success=False, message="密码错误")
await current_user.async_update_otp_by_name(db, current_user.name, False, "")
return schemas.Response(success=True)
# ==================== PassKey endpoints ====================
class PassKeyRegistrationStart(schemas.BaseModel):
"""PassKey注册开始请求"""
name: str = "通行密钥"
class PassKeyRegistrationFinish(schemas.BaseModel):
"""PassKey注册完成请求"""
credential: dict
challenge: str
name: str = "通行密钥"
class PassKeyAuthenticationStart(schemas.BaseModel):
"""PassKey认证开始请求"""
username: Optional[str] = None
class PassKeyAuthenticationFinish(schemas.BaseModel):
"""PassKey认证完成请求"""
credential: dict
challenge: str
@router.post("/passkey/register/start", summary="开始注册 PassKey", response_model=schemas.Response)
def passkey_register_start(
current_user: Annotated[User, Depends(get_current_active_user)]
) -> Any:
"""开始注册 PassKey - 生成注册选项"""
try:
# 安全检查:默认需要先启用 OTP除非配置允许在未启用 OTP 时注册
if not current_user.is_otp and not settings.PASSKEY_ALLOW_REGISTER_WITHOUT_OTP:
return schemas.Response(
success=False,
message="为了确保在域名配置错误时仍能找回访问权限,请先启用 OTP 验证码再注册通行密钥"
)
# Fetch the user's existing PassKeys
existing_passkeys = PassKey.get_by_user_id(db=None, user_id=current_user.id)
existing_credentials = _build_credential_list(existing_passkeys) if existing_passkeys else None
# Generate the registration options
options_json, challenge = PassKeyHelper.generate_registration_options(
user_id=current_user.id,
username=current_user.name,
display_name=current_user.settings.get('nickname') if current_user.settings else None,
existing_credentials=existing_credentials
)
return schemas.Response(
success=True,
data={
'options': options_json,
'challenge': challenge
}
)
except Exception as e:
logger.error(f"生成PassKey注册选项失败: {e}")
return schemas.Response(
success=False,
message=f"生成注册选项失败: {str(e)}"
)
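`PassKeyHelper.generate_registration_options` returns an options payload plus a challenge; its implementation is not shown here. As a hedged sketch, a WebAuthn challenge is normally just cryptographically random bytes transported as unpadded base64url (the helper name below is hypothetical):

```python
import base64
import os


def generate_challenge(num_bytes: int = 32) -> str:
    """Create a random WebAuthn challenge as unpadded base64url text.

    32 random bytes comfortably exceeds the spec's 16-byte minimum.
    """
    return base64.urlsafe_b64encode(os.urandom(num_bytes)).rstrip(b"=").decode("ascii")
```

The server must remember the challenge it issued (here it is round-tripped through the client) and compare it against the one echoed back in the finish step.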
@router.post("/passkey/register/finish", summary="完成注册 PassKey", response_model=schemas.Response)
def passkey_register_finish(
passkey_req: PassKeyRegistrationFinish,
current_user: Annotated[User, Depends(get_current_active_user)]
) -> Any:
"""完成注册 PassKey - 验证并保存凭证"""
try:
# 验证注册响应
credential_id, public_key, sign_count, aaguid = PassKeyHelper.verify_registration_response(
credential=passkey_req.credential,
expected_challenge=passkey_req.challenge
)
# Extract transports
transports = None
if 'response' in passkey_req.credential and 'transports' in passkey_req.credential['response']:
transports = ','.join(passkey_req.credential['response']['transports'])
# Save to the database
passkey = PassKey(
user_id=current_user.id,
credential_id=credential_id,
public_key=public_key,
sign_count=sign_count,
name=passkey_req.name or "通行密钥",
aaguid=aaguid,
transports=transports
)
passkey.create()
logger.info(f"用户 {current_user.name} 成功注册PassKey: {passkey_req.name}")
return schemas.Response(
success=True,
message="通行密钥注册成功"
)
except Exception as e:
logger.error(f"注册PassKey失败: {e}")
return schemas.Response(
success=False,
message=f"注册失败: {str(e)}"
)
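Several endpoints below rely on `_extract_and_standardize_credential_id`, whose body is outside this diff. A plausible sketch of such a normalizer (the function name here is illustrative): browsers may report the credential ID with standard base64 characters or padding, so storing one canonical unpadded base64url form keeps database lookups reliable.

```python
import base64


def standardize_credential_id(raw_id: str) -> str:
    """Normalize a credential ID to unpadded base64url; raise on invalid input."""
    translated = raw_id.replace("+", "-").replace("/", "_").rstrip("=")
    # Validate by round-tripping through base64url with padding restored;
    # urlsafe_b64decode raises binascii.Error for malformed input.
    padded = translated + "=" * (-len(translated) % 4)
    base64.urlsafe_b64decode(padded)
    return translated
```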
@router.post("/passkey/authenticate/start", summary="开始 PassKey 认证", response_model=schemas.Response)
def passkey_authenticate_start(
passkey_req: PassKeyAuthenticationStart = Body(...)
) -> Any:
"""开始 PassKey 认证 - 生成认证选项"""
try:
existing_credentials = None
# 如果指定了用户名只允许该用户的PassKey
if passkey_req.username:
user = User.get_by_name(db=None, name=passkey_req.username)
existing_passkeys = PassKey.get_by_user_id(db=None, user_id=user.id) if user else None
if not user or not existing_passkeys:
return schemas.Response(
success=False,
message="认证失败"
)
existing_credentials = _build_credential_list(existing_passkeys)
# Generate the authentication options
options_json, challenge = PassKeyHelper.generate_authentication_options(
existing_credentials=existing_credentials
)
return schemas.Response(
success=True,
data={
'options': options_json,
'challenge': challenge
}
)
except Exception as e:
logger.error(f"生成PassKey认证选项失败: {e}")
return schemas.Response(
success=False,
message="认证失败"
)
@router.post("/passkey/authenticate/finish", summary="完成 PassKey 认证", response_model=schemas.Token)
def passkey_authenticate_finish(
passkey_req: PassKeyAuthenticationFinish
) -> Any:
"""完成 PassKey 认证 - 验证凭证并返回 token"""
try:
# 提取并标准化凭证ID
try:
credential_id = _extract_and_standardize_credential_id(passkey_req.credential)
except ValueError as e:
logger.warning(f"PassKey认证失败提供的凭证无效: {e}")
raise HTTPException(status_code=401, detail="认证失败")
# Look up the PassKey and its user
passkey = PassKey.get_by_credential_id(db=None, credential_id=credential_id)
user = User.get_by_id(db=None, user_id=passkey.user_id) if passkey else None
if not passkey or not user or not user.is_active:
raise HTTPException(status_code=401, detail="认证失败")
# Verify the authentication response and update state
success, _ = _verify_passkey_and_update(
credential=passkey_req.credential,
challenge=passkey_req.challenge,
passkey=passkey
)
if not success:
raise HTTPException(status_code=401, detail="认证失败")
logger.info(f"用户 {user.name} 通过PassKey认证成功")
# Generate the token
level = SitesHelper().auth_level
show_wizard = not SystemConfigOper().get(SystemConfigKey.SetupWizardState) and not settings.ADVANCED_MODE
return schemas.Token(
access_token=security.create_access_token(
userid=user.id,
username=user.name,
super_user=user.is_superuser,
expires_delta=timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES),
level=level
),
token_type="bearer",
super_user=user.is_superuser,
user_id=user.id,
user_name=user.name,
avatar=user.avatar,
level=level,
permissions=user.permissions or {},
wizard=show_wizard
)
except HTTPException:
raise
except Exception as e:
logger.error(f"PassKey认证失败: {e}")
raise HTTPException(status_code=401, detail="认证失败")
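`_verify_passkey_and_update` is not shown in this diff, but one check it presumably performs is the WebAuthn signature-counter comparison: a cloned authenticator tends to report a counter at or below the stored value. A minimal sketch of that rule (helper name is illustrative):

```python
def check_sign_count(stored: int, received: int) -> bool:
    """Return True when the authenticator's signature counter is acceptable.

    Per the WebAuthn spec, counters that are both zero mean the authenticator
    does not implement counters; otherwise the received value must strictly
    exceed the stored one, or the credential may have been cloned.
    """
    if received == 0 and stored == 0:
        return True
    return received > stored
```

On success the server would persist `received` as the new stored counter before issuing the token.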
@router.get("/passkey/list", summary="获取当前用户的 PassKey 列表", response_model=schemas.Response)
def passkey_list(
current_user: Annotated[User, Depends(get_current_active_user)]
) -> Any:
"""获取当前用户的所有 PassKey"""
try:
passkeys = PassKey.get_by_user_id(db=None, user_id=current_user.id)
key_list = [
{
'id': pk.id,
'name': pk.name,
'created_at': pk.created_at.isoformat() if pk.created_at else None,
'last_used_at': pk.last_used_at.isoformat() if pk.last_used_at else None,
'aaguid': pk.aaguid,
'transports': pk.transports
}
for pk in passkeys
] if passkeys else []
return schemas.Response(
success=True,
data=key_list
)
except Exception as e:
logger.error(f"获取PassKey列表失败: {e}")
return schemas.Response(
success=False,
message=f"获取列表失败: {str(e)}"
)
@router.post("/passkey/delete", summary="删除 PassKey", response_model=schemas.Response)
async def passkey_delete(
data: PassKeyDeleteRequest,
current_user: User = Depends(get_current_active_user_async)
) -> Any:
"""删除指定的 PassKey"""
try:
# 验证密码
if not security.verify_password(data.password, str(current_user.hashed_password)):
return schemas.Response(success=False, message="密码错误")
success = PassKey.delete_by_id(db=None, passkey_id=data.passkey_id, user_id=current_user.id)
if success:
logger.info(f"用户 {current_user.name} 删除了PassKey: {data.passkey_id}")
return schemas.Response(
success=True,
message="通行密钥已删除"
)
else:
return schemas.Response(
success=False,
message="通行密钥不存在或无权删除"
)
except Exception as e:
logger.error(f"删除PassKey失败: {e}")
return schemas.Response(
success=False,
message=f"删除失败: {str(e)}"
)
@router.post("/passkey/verify", summary="PassKey 二次验证", response_model=schemas.Response)
def passkey_verify_mfa(
passkey_req: PassKeyAuthenticationFinish,
current_user: Annotated[User, Depends(get_current_active_user)]
) -> Any:
"""使用 PassKey 进行二次验证MFA"""
try:
# 提取并标准化凭证ID
try:
credential_id = _extract_and_standardize_credential_id(passkey_req.credential)
except ValueError as e:
logger.warning(f"PassKey二次验证失败提供的凭证无效: {e}")
return schemas.Response(success=False, message="验证失败")
# Look up the PassKey; it must belong to the current user
passkey = PassKey.get_by_credential_id(db=None, credential_id=credential_id)
if not passkey or passkey.user_id != current_user.id:
return schemas.Response(
success=False,
message="通行密钥不存在或不属于当前用户"
)
# Verify the authentication response and update state
success, _ = _verify_passkey_and_update(
credential=passkey_req.credential,
challenge=passkey_req.challenge,
passkey=passkey
)
if not success:
return schemas.Response(
success=False,
message="通行密钥验证失败"
)
logger.info(f"用户 {current_user.name} 通过PassKey二次验证成功")
return schemas.Response(
success=True,
message="二次验证成功"
)
except Exception as e:
logger.error(f"PassKey二次验证失败: {e}")
return schemas.Response(
success=False,
message="验证失败"
)


@@ -1,14 +1,22 @@
import mimetypes
import shutil
from typing import Annotated, Any, List, Optional
from fastapi import APIRouter, Depends, Header
import aiofiles
from anyio import Path as AsyncPath
from fastapi import APIRouter, Depends, Header, HTTPException
from fastapi.concurrency import run_in_threadpool
from starlette import status
from starlette.responses import StreamingResponse
from app import schemas
from app.command import Command
from app.core.config import settings
from app.core.plugin import PluginManager
from app.core.security import verify_apikey, verify_token
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.factory import app
from app.helper.plugin import PluginHelper
from app.log import logger
@@ -16,7 +24,6 @@ from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
PROTECTED_ROUTES = {"/api/v1/openapi.json", "/docs", "/docs/oauth2-redirect", "/redoc"}
PLUGIN_PREFIX = f"{settings.API_V1_STR}/plugin"
router = APIRouter()
@@ -66,9 +73,13 @@ def _update_plugin_api_routes(plugin_id: Optional[str], action: str):
try:
api["path"] = api_path
allow_anonymous = api.pop("allow_anonymous", False)
auth_mode = api.pop("auth", "apikey")
dependencies = api.setdefault("dependencies", [])
if not allow_anonymous and Depends(verify_apikey) not in dependencies:
dependencies.append(Depends(verify_apikey))
if not allow_anonymous:
if auth_mode == "bear" and Depends(verify_token) not in dependencies:
dependencies.append(Depends(verify_token))
elif Depends(verify_apikey) not in dependencies:
dependencies.append(Depends(verify_apikey))
app.add_api_route(**api, tags=["plugin"])
is_modified = True
logger.debug(f"Added plugin route: {api_path}")
@@ -116,23 +127,36 @@ def _clean_protected_routes(existing_paths: dict):
logger.error(f"Error removing protected route {protected_route}: {str(e)}")
def register_plugin(plugin_id: str):
"""
Register the services associated with a plugin
"""
# Register the plugin's scheduled jobs
Scheduler().update_plugin_job(plugin_id)
# Register menu commands
Command().init_commands(plugin_id)
# Register the plugin's API routes
register_plugin_api(plugin_id)
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
state: str = "all") -> List[schemas.Plugin]:
async def all_plugins(_: User = Depends(get_current_active_superuser_async),
state: Optional[str] = "all", force: bool = False) -> List[schemas.Plugin]:
"""
Query the full plugin list, including local and online plugins; plugin state: installed, market, all
"""
# Local plugins
local_plugins = PluginManager().get_local_plugins()
plugin_manager = PluginManager()
local_plugins = plugin_manager.get_local_plugins()
# Installed plugins
installed_plugins = [plugin for plugin in local_plugins if plugin.installed]
# Local plugins that are not installed
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
if state == "installed":
return installed_plugins
# Local plugins that are not installed
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
# Online plugins
online_plugins = PluginManager().get_online_plugins()
online_plugins = await plugin_manager.async_get_online_plugins(force)
if not online_plugins:
# No online plugins were retrieved
if state == "market":
@@ -159,12 +183,13 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
if state == "market":
# Return the plugins that are not installed
return market_plugins
# Return all plugins
return installed_plugins + market_plugins
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def installed(_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Query the list of plugins installed by the user
"""
@@ -172,30 +197,43 @@ def installed(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -
@router.get("/statistic", summary="插件安装统计", response_model=dict)
def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Plugin installation statistics
"""
return PluginHelper().get_statistic()
return await PluginHelper().async_get_statistic()
@router.get("/reload/{plugin_id}", summary="重新加载插件", response_model=schemas.Response)
def reload_plugin(plugin_id: str, _: User = Depends(get_current_active_superuser)) -> Any:
"""
Reload a plugin
"""
# Reload the plugin
PluginManager().reload_plugin(plugin_id)
# Register the plugin's services
register_plugin(plugin_id)
return schemas.Response(success=True)
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install(plugin_id: str,
repo_url: str = "",
force: bool = False,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def install(plugin_id: str,
repo_url: Optional[str] = "",
force: Optional[bool] = False,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Install a plugin
"""
# Installed plugins
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# First check whether the plugin already exists and whether a forced install was requested; otherwise only record install statistics
plugin_helper = PluginHelper()
if not force and plugin_id in PluginManager().get_plugin_ids():
PluginHelper().install_reg(pid=plugin_id)
await plugin_helper.async_install_reg(pid=plugin_id)
else:
# The plugin does not exist or a forced install was requested: download, install, and register it
if repo_url:
state, msg = PluginHelper().install(pid=plugin_id, repo_url=repo_url)
state, msg = await plugin_helper.async_install(pid=plugin_id, repo_url=repo_url)
# Respond immediately if the installation failed
if not state:
return schemas.Response(success=False, message=msg)
@@ -206,37 +244,67 @@ def install(plugin_id: str,
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
# Save the settings
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# Load the plugin into memory
PluginManager().reload_plugin(plugin_id)
# Register the plugin's services
Scheduler().update_plugin_job(plugin_id)
# Register menu commands
Command().init_commands(plugin_id)
# Register the plugin's API routes
register_plugin_api(plugin_id)
await SystemConfigOper().async_set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# Reload the plugin
await run_in_threadpool(reload_plugin, plugin_id)
return schemas.Response(success=True)
@router.get("/remotes", summary="获取插件联邦组件列表", response_model=List[dict])
async def remotes(token: str) -> Any:
"""
Get the plugin module-federation component list
"""
if token != "moviepilot":
raise HTTPException(status_code=403, detail="Forbidden")
return PluginManager().get_plugin_remotes()
@router.get("/form/{plugin_id}", summary="获取插件表单页面")
def plugin_form(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
_: User = Depends(get_current_active_superuser)) -> dict:
"""
Get the plugin configuration form by plugin ID
Get the plugin configuration form, or the Vue component URL, by plugin ID
"""
conf, model = PluginManager().get_plugin_form(plugin_id)
return {
"conf": conf,
"model": model
}
plugin_manager = PluginManager()
plugin_instance = plugin_manager.running_plugins.get(plugin_id)
if not plugin_instance:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"插件 {plugin_id} 不存在或未加载")
# Render mode
render_mode, _ = plugin_instance.get_render_mode()
try:
conf, model = plugin_instance.get_form()
return {
"render_mode": render_mode,
"conf": conf,
"model": plugin_manager.get_plugin_config(plugin_id) or model
}
except Exception as e:
logger.error(f"插件 {plugin_id} 调用方法 get_form 出错: {str(e)}")
return {}
@router.get("/page/{plugin_id}", summary="获取插件数据页面")
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> List[dict]:
def plugin_page(plugin_id: str, _: User = Depends(get_current_active_superuser)) -> dict:
"""
Get the plugin data page by plugin ID
"""
return PluginManager().get_plugin_page(plugin_id)
plugin_instance = PluginManager().running_plugins.get(plugin_id)
if not plugin_instance:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"插件 {plugin_id} 不存在或未加载")
# Render mode
render_mode, _ = plugin_instance.get_render_mode()
try:
page = plugin_instance.get_page()
return {
"render_mode": render_mode,
"page": page or []
}
except Exception as e:
logger.error(f"插件 {plugin_id} 调用方法 get_page 出错: {str(e)}")
return {}
@router.get("/dashboard/meta", summary="获取所有插件仪表板元信息")
@@ -247,48 +315,187 @@ def plugin_dashboard_meta(_: schemas.TokenPayload = Depends(verify_token)) -> Li
return PluginManager().get_plugin_dashboard_meta()
@router.get("/dashboard/{plugin_id}/{key}", summary="获取插件仪表板配置")
def plugin_dashboard_by_key(plugin_id: str, key: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Optional[schemas.PluginDashboard]:
"""
Get the plugin dashboard by plugin ID
"""
return PluginManager().get_plugin_dashboard(plugin_id, key, user_agent)
@router.get("/dashboard/{plugin_id}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
Get the plugin dashboard by plugin ID
"""
return PluginManager().get_plugin_dashboard(plugin_id, user_agent=user_agent)
@router.get("/dashboard/{plugin_id}/{key}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, key: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
Get the plugin dashboard by plugin ID
"""
return PluginManager().get_plugin_dashboard(plugin_id, key=key, user_agent=user_agent)
return plugin_dashboard_by_key(plugin_id, "", user_agent)
@router.get("/reset/{plugin_id}", summary="重置插件配置及数据", response_model=schemas.Response)
def reset_plugin(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Reset the plugin configuration and data by plugin ID
"""
plugin_manager = PluginManager()
# Delete the configuration
PluginManager().delete_plugin_config(plugin_id)
plugin_manager.delete_plugin_config(plugin_id)
# Delete all plugin data
PluginManager().delete_plugin_data(plugin_id)
# Re-activate the plugin
PluginManager().reload_plugin(plugin_id)
# Register the plugin's services
Scheduler().update_plugin_job(plugin_id)
# Register menu commands
Command().init_commands(plugin_id)
# Register the plugin's API routes
register_plugin_api(plugin_id)
plugin_manager.delete_plugin_data(plugin_id)
# Reload the plugin
reload_plugin(plugin_id)
return schemas.Response(success=True)
@router.get("/file/{plugin_id}/{filepath:path}", summary="获取插件静态文件")
async def plugin_static_file(plugin_id: str, filepath: str):
"""
Serve a plugin's static file
"""
# Basic security check
if ".." in filepath or ".." in plugin_id:
logger.warning(f"Static File API: Path traversal attempt detected: {plugin_id}/{filepath}")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Forbidden")
plugin_base_dir = AsyncPath(settings.ROOT_PATH) / "app" / "plugins" / plugin_id.lower()
plugin_file_path = plugin_base_dir / filepath
if not await plugin_file_path.exists():
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"{plugin_file_path} 不存在")
if not await plugin_file_path.is_file():
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail=f"{plugin_file_path} 不是文件")
# Determine the MIME type
response_type, _ = mimetypes.guess_type(str(plugin_file_path))
suffix = plugin_file_path.suffix.lower()
# Force-correct the MIME type for .mjs and .js
if suffix in ['.js', '.mjs']:
response_type = 'application/javascript'
elif suffix == '.css' and not response_type:  # also correct CSS when guess_type gets it wrong
response_type = 'text/css'
elif not response_type:  # any other type that cannot be guessed
response_type = 'application/octet-stream'
try:
# Async generator that streams the file
async def file_generator():
async with aiofiles.open(plugin_file_path, mode='rb') as file:
# 8 KB chunk size
while chunk := await file.read(8192):
yield chunk
return StreamingResponse(
file_generator(),
media_type=response_type,
headers={"Content-Disposition": f"inline; filename={plugin_file_path.name}"}
)
except Exception as e:
logger.error(f"Error creating/sending StreamingResponse for {plugin_file_path}: {e}", exc_info=True)
raise HTTPException(status_code=500, detail="Internal Server Error")
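The MIME branch in `plugin_static_file` exists because `mimetypes.guess_type` can return `text/plain` (or nothing) for `.mjs` on some platforms, which browsers reject for ES modules. The decision logic can be isolated as a small pure function (the helper name is illustrative, not part of the endpoint):

```python
import mimetypes


def resolve_media_type(filename: str) -> str:
    """Guess a response MIME type, force-correcting JS modules the way the
    endpoint above does, and falling back to octet-stream."""
    guessed, _ = mimetypes.guess_type(filename)
    suffix = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if suffix in (".js", ".mjs"):
        return "application/javascript"
    if suffix == ".css" and not guessed:
        return "text/css"
    return guessed or "application/octet-stream"
```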
@router.get("/folders", summary="获取插件文件夹配置", response_model=dict)
async def get_plugin_folders(_: User = Depends(get_current_active_superuser_async)) -> dict:
"""
Get the plugin folder grouping configuration
"""
try:
result = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
return result
except Exception as e:
logger.error(f"[文件夹API] 获取文件夹配置失败: {str(e)}")
return {}
@router.post("/folders", summary="保存插件文件夹配置", response_model=schemas.Response)
async def save_plugin_folders(folders: dict, _: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Save the plugin folder grouping configuration
"""
try:
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True)
except Exception as e:
logger.error(f"[文件夹API] 保存文件夹配置失败: {str(e)}")
return schemas.Response(success=False, message=str(e))
@router.post("/folders/{folder_name}", summary="创建插件文件夹", response_model=schemas.Response)
async def create_plugin_folder(folder_name: str,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Create a new plugin folder
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
if folder_name not in folders:
folders[folder_name] = []
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 创建成功")
else:
return schemas.Response(success=False, message=f"文件夹 '{folder_name}' 已存在")
@router.delete("/folders/{folder_name}", summary="删除插件文件夹", response_model=schemas.Response)
async def delete_plugin_folder(folder_name: str,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Delete a plugin folder
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
if folder_name in folders:
del folders[folder_name]
await SystemConfigOper().async_set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 删除成功")
else:
return schemas.Response(success=False, message=f"文件夹 '{folder_name}' 不存在")
@router.put("/folders/{folder_name}/plugins", summary="更新文件夹中的插件", response_model=schemas.Response)
async def update_folder_plugins(folder_name: str, plugin_ids: List[str],
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Update the plugin list of the specified folder
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
folders[folder_name] = plugin_ids
await SystemConfigOper().async_set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 中的插件已更新")
@router.post("/clone/{plugin_id}", summary="创建插件分身", response_model=schemas.Response)
def clone_plugin(plugin_id: str,
clone_data: dict,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Create a plugin clone
"""
try:
success, message = PluginManager().clone_plugin(
plugin_id=plugin_id,
suffix=clone_data.get("suffix", ""),
name=clone_data.get("name", ""),
description=clone_data.get("description", ""),
version=clone_data.get("version", ""),
icon=clone_data.get("icon", "")
)
if success:
# Register the plugin's services
reload_plugin(message)
# Add the clone plugin to the folder that contains the original plugin
_add_clone_to_plugin_folder(plugin_id, message)
return schemas.Response(success=True, message="插件分身创建成功")
else:
return schemas.Response(success=False, message=message)
except Exception as e:
logger.error(f"创建插件分身失败:{str(e)}")
return schemas.Response(success=False, message=f"创建插件分身失败:{str(e)}")
@router.get("/{plugin_id}", summary="获取插件配置")
def plugin_config(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
async def plugin_config(plugin_id: str,
_: User = Depends(get_current_active_superuser_async)) -> dict:
"""
Get the plugin configuration by plugin ID
"""
@@ -297,45 +504,143 @@ def plugin_config(plugin_id: str,
@router.put("/{plugin_id}", summary="更新插件配置", response_model=schemas.Response)
def set_plugin_config(plugin_id: str, conf: dict,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Update the plugin configuration
"""
plugin_manager = PluginManager()
# Save the configuration
PluginManager().save_plugin_config(plugin_id, conf)
plugin_manager.save_plugin_config(plugin_id, conf)
# Re-initialize the plugin
PluginManager().init_plugin(plugin_id, conf)
plugin_manager.init_plugin(plugin_id, conf)
# Register the plugin's services
Scheduler().update_plugin_job(plugin_id)
# Register menu commands
Command().init_commands(plugin_id)
# Register the plugin's API routes
register_plugin_api(plugin_id)
register_plugin(plugin_id)
return schemas.Response(success=True)
@router.delete("/{plugin_id}", summary="卸载插件", response_model=schemas.Response)
def uninstall_plugin(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Uninstall a plugin
"""
config_oper = SystemConfigOper()
# Remove the installed-plugin record
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = config_oper.get(SystemConfigKey.UserInstalledPlugins) or []
for plugin in install_plugins:
if plugin == plugin_id:
install_plugins.remove(plugin)
break
# Save
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
config_oper.set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# Remove the plugin's API routes
remove_plugin_api(plugin_id)
# Remove the plugin's scheduled jobs
Scheduler().remove_plugin_job(plugin_id)
# Check whether this is a clone
plugin_manager = PluginManager()
plugin_class = plugin_manager.plugins.get(plugin_id)
if getattr(plugin_class, "is_clone", False):
# For a clone plugin, delete its data and configuration
plugin_manager.delete_plugin_config(plugin_id)
plugin_manager.delete_plugin_data(plugin_id)
# Delete the clone's files
plugin_base_dir = settings.ROOT_PATH / "app" / "plugins" / plugin_id.lower()
if plugin_base_dir.exists():
try:
shutil.rmtree(plugin_base_dir)
plugin_manager.plugins.pop(plugin_id, None)
except Exception as e:
logger.error(f"删除插件分身目录 {plugin_base_dir} 失败: {str(e)}")
# Remove this plugin from the plugin folders
_remove_plugin_from_folders(plugin_id)
# Remove the plugin
PluginManager().remove_plugin(plugin_id)
plugin_manager.remove_plugin(plugin_id)
return schemas.Response(success=True)
# Register all plugin APIs
register_plugin_api()
def _add_clone_to_plugin_folder(original_plugin_id: str, clone_plugin_id: str):
"""
Add the clone plugin to the folder that contains the original plugin
:param original_plugin_id: original plugin ID
:param clone_plugin_id: clone plugin ID
"""
try:
config_oper = SystemConfigOper()
# Get the plugin folder configuration
folders = config_oper.get(SystemConfigKey.PluginFolders) or {}
# Find the folder that contains the original plugin
target_folder = None
for folder_name, folder_data in folders.items():
if isinstance(folder_data, dict) and 'plugins' in folder_data:
# New format: {"plugins": [...], "order": ..., "icon": ...}
if original_plugin_id in folder_data['plugins']:
target_folder = folder_name
break
elif isinstance(folder_data, list):
# Old format: a plain plugin list
if original_plugin_id in folder_data:
target_folder = folder_name
break
# If the original plugin's folder was found, add the clone to the same folder
if target_folder:
folder_data = folders[target_folder]
if isinstance(folder_data, dict) and 'plugins' in folder_data:
# New format
if clone_plugin_id not in folder_data['plugins']:
folder_data['plugins'].append(clone_plugin_id)
logger.info(f"已将分身插件 {clone_plugin_id} 添加到文件夹 '{target_folder}'")
elif isinstance(folder_data, list):
# Old format
if clone_plugin_id not in folder_data:
folder_data.append(clone_plugin_id)
logger.info(f"已将分身插件 {clone_plugin_id} 添加到文件夹 '{target_folder}'")
# Save the updated folder configuration
config_oper.set(SystemConfigKey.PluginFolders, folders)
else:
logger.info(f"原插件 {original_plugin_id} 不在任何文件夹中,分身插件 {clone_plugin_id} 将保持独立")
except Exception as e:
logger.error(f"处理插件文件夹时出错:{str(e)}")
# A folder-handling failure must not block the overall clone-creation flow
def _remove_plugin_from_folders(plugin_id: str):
"""
Remove the specified plugin from all folders
:param plugin_id: ID of the plugin to remove
"""
try:
config_oper = SystemConfigOper()
# Get the plugin folder configuration
folders = config_oper.get(SystemConfigKey.PluginFolders) or {}
# Flag whether anything was modified
modified = False
# Walk all folders and remove the specified plugin
for folder_name, folder_data in folders.items():
if isinstance(folder_data, dict) and 'plugins' in folder_data:
# New format: {"plugins": [...], "order": ..., "icon": ...}
if plugin_id in folder_data['plugins']:
folder_data['plugins'].remove(plugin_id)
logger.info(f"已从文件夹 '{folder_name}' 中移除插件 {plugin_id}")
modified = True
elif isinstance(folder_data, list):
# Old format: a plain plugin list
if plugin_id in folder_data:
folder_data.remove(plugin_id)
logger.info(f"已从文件夹 '{folder_name}' 中移除插件 {plugin_id}")
modified = True
# If anything changed, save the updated folder configuration
if modified:
config_oper.set(SystemConfigKey.PluginFolders, folders)
else:
logger.debug(f"插件 {plugin_id} 不在任何文件夹中,无需移除")
except Exception as e:
logger.error(f"从文件夹中移除插件时出错:{str(e)}")
# A folder-handling failure must not block the overall uninstall flow
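Both folder helpers above repeat the same old-format/new-format branch. That branch can be factored into one accessor (the function name `folder_plugins` is illustrative, not part of the codebase), which makes the dual-format handling easy to see and test:

```python
def folder_plugins(folder_data) -> list:
    """Return the plugin list from either folder-config format.

    New format: {"plugins": [...], "order": ..., "icon": ...}
    Old format: a plain list of plugin IDs.
    Anything else yields an empty list.
    """
    if isinstance(folder_data, dict) and "plugins" in folder_data:
        return folder_data["plugins"]
    if isinstance(folder_data, list):
        return folder_data
    return []
```

Because the returned list is the live object inside the config, appending to or removing from it mutates the folder in place, matching how `_add_clone_to_plugin_folder` and `_remove_plugin_from_folders` operate.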


@@ -1,13 +1,13 @@
from typing import Any, List
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.recommend import RecommendChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas import RecommendSourceEventData
from app.schemas.types import ChainEventType
from chain.recommend import RecommendChain
from schemas import RecommendSourceEventData
router = APIRouter()
@@ -29,163 +29,163 @@ def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/bangumi_calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def bangumi_calendar(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_calendar(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Browse the Bangumi daily broadcast calendar
"""
return RecommendChain().bangumi_calendar(page=page, count=count)
return await RecommendChain().async_bangumi_calendar(page=page, count=count)
@router.get("/douban_showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def douban_showing(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_showing(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Browse movies currently showing on Douban
"""
return RecommendChain().douban_movie_showing(page=page, count=count)
return await RecommendChain().async_douban_movie_showing(page=page, count=count)
@router.get("/douban_movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Browse Douban movie information
"""
return RecommendChain().douban_movies(sort=sort, tags=tags, page=page, count=count)
return await RecommendChain().async_douban_movies(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Browse Douban TV series information
"""
return RecommendChain().douban_tvs(sort=sort, tags=tags, page=page, count=count)
return await RecommendChain().async_douban_tvs(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
def douban_movie_top250(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_movie_top250(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Browse the Douban movie Top 250
"""
return RecommendChain().douban_movie_top250(page=page, count=count)
return await RecommendChain().async_douban_movie_top250(page=page, count=count)
@router.get("/douban_tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_chinese(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tv_weekly_chinese(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Weekly Chinese TV series ranking
"""
return RecommendChain().douban_tv_weekly_chinese(page=page, count=count)
return await RecommendChain().async_douban_tv_weekly_chinese(page=page, count=count)
@router.get("/douban_tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_global(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tv_weekly_global(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Weekly global TV series ranking
"""
return RecommendChain().douban_tv_weekly_global(page=page, count=count)
return await RecommendChain().async_douban_tv_weekly_global(page=page, count=count)
@router.get("/douban_tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
def douban_tv_animation(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tv_animation(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
return RecommendChain().douban_tv_animation(page=page, count=count)
return await RecommendChain().async_douban_tv_animation(page=page, count=count)
@router.get("/douban_movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
def douban_movie_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_movie_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电影
"""
return RecommendChain().douban_movie_hot(page=page, count=count)
return await RecommendChain().async_douban_movie_hot(page=page, count=count)
@router.get("/douban_tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
def douban_tv_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tv_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电视剧
"""
return RecommendChain().douban_tv_hot(page=page, count=count)
return await RecommendChain().async_douban_tv_hot(page=page, count=count)
@router.get("/tmdb_movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
return RecommendChain().tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return await RecommendChain().async_tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
return RecommendChain().tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return await RecommendChain().async_tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
def tmdb_trending(page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_trending(page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
TMDB流行趋势
"""
return RecommendChain().tmdb_trending(page=page)
return await RecommendChain().async_tmdb_trending(page=page)


@@ -1,14 +1,16 @@
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
from fastapi import APIRouter, Depends, Body
from app import schemas
from app.chain.media import MediaChain
from app.chain.search import SearchChain
from app.chain.ai_recommend import AIRecommendChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.log import logger
from app.schemas import MediaRecognizeConvertEventData
from app.schemas.types import MediaType, ChainEventType
@@ -16,84 +18,95 @@ router = APIRouter()
@router.get("/last", summary="查询搜索结果", response_model=List[schemas.Context])
def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询搜索结果
"""
torrents = SearchChain().last_search_results()
torrents = await SearchChain().async_last_search_results() or []
return [torrent.to_dict() for torrent in torrents]
@router.get("/media/{mediaid}", summary="精确搜索资源", response_model=schemas.Response)
def search_by_id(mediaid: str,
mtype: str = None,
area: str = "title",
title: str = None,
year: int = None,
season: str = None,
sites: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search_by_id(mediaid: str,
mtype: Optional[str] = None,
area: Optional[str] = "title",
title: Optional[str] = None,
year: Optional[str] = None,
season: Optional[str] = None,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID/豆瓣ID精确搜索站点资源 tmdb:/douban:/bangumi:
"""
# 取消正在运行的AI推荐会清除数据库缓存
AIRecommendChain().cancel_ai_recommend()
if mtype:
mtype = MediaType(mtype)
media_type = MediaType(mtype)
else:
media_type = None
if season:
season = int(season)
media_season = int(season)
else:
media_season = None
if sites:
site_list = [int(site) for site in sites.split(",") if site]
else:
site_list = None
torrents = None
media_chain = MediaChain()
search_chain = SearchChain()
# 根据前缀识别媒体ID
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid.replace("tmdb:", ""))
if settings.RECOGNIZE_SOURCE == "douban":
# 通过TMDBID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
doubaninfo = await media_chain.async_get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=media_type)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area, season=season,
sites=site_list)
torrents = await search_chain.async_search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=mtype, area=area, season=season,
sites=site_list)
torrents = await search_chain.async_search_by_id(tmdbid=tmdbid, mtype=media_type, area=area,
season=media_season,
sites=site_list, cache_local=True)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过豆瓣ID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
tmdbinfo = await media_chain.async_get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=media_type)
if tmdbinfo:
if tmdbinfo.get('season') and not season:
season = tmdbinfo.get('season')
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area, season=season,
sites=site_list)
if tmdbinfo.get('season') and not media_season:
media_season = tmdbinfo.get('season')
torrents = await search_chain.async_search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area, season=season,
sites=site_list)
torrents = await search_chain.async_search_by_id(doubanid=doubanid, mtype=media_type, area=area,
season=media_season,
sites=site_list, cache_local=True)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过BangumiID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
tmdbinfo = await media_chain.async_get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area, season=season,
sites=site_list)
torrents = await search_chain.async_search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
# 通过BangumiID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
doubaninfo = await media_chain.async_get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area, season=season,
sites=site_list)
torrents = await search_chain.async_search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
@@ -102,18 +115,18 @@ def search_by_id(mediaid: str,
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
event = await eventmanager.async_send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
search_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=search_id,
mtype=mtype, area=area, season=season)
torrents = await search_chain.async_search_by_id(tmdbid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
elif event_data.convert_type == "douban":
torrents = SearchChain().search_by_id(doubanid=search_id,
mtype=mtype, area=area, season=season)
torrents = await search_chain.async_search_by_id(doubanid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
else:
if not title:
return schemas.Response(success=False, message="未知的媒体ID")
@@ -121,19 +134,21 @@ def search_by_id(mediaid: str,
meta = MetaInfo(title)
if year:
meta.year = year
if mtype:
meta.type = mtype
if season:
if media_type:
meta.type = media_type
if media_season:
meta.type = MediaType.TV
meta.begin_season = season
mediainfo = MediaChain().recognize_media(meta=meta)
meta.begin_season = media_season
mediainfo = await media_chain.async_recognize_media(meta=meta)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id,
mtype=mtype, area=area, season=season)
torrents = await search_chain.async_search_by_id(tmdbid=mediainfo.tmdb_id, mtype=media_type,
area=area,
season=media_season, cache_local=True)
else:
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id,
mtype=mtype, area=area, season=season)
torrents = await search_chain.async_search_by_id(doubanid=mediainfo.douban_id, mtype=media_type,
area=area,
season=media_season, cache_local=True)
# 返回搜索结果
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
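The branching above keys off the media ID prefix (`tmdb:` / `douban:` / `bangumi:`) before falling through to title-based recognition. A minimal standalone sketch of that dispatch, using a hypothetical `parse_media_id` helper that is not part of the project:

```python
from typing import Optional, Tuple

def parse_media_id(mediaid: str) -> Tuple[Optional[str], str]:
    """Split 'tmdb:550' / 'douban:123' / 'bangumi:42' into (source, raw_id).

    Returns (None, mediaid) when no known prefix matches, mirroring the
    fall-through to title-based recognition in search_by_id.
    """
    for prefix in ("tmdb", "douban", "bangumi"):
        if mediaid.startswith(f"{prefix}:"):
            return prefix, mediaid[len(prefix) + 1:]
    return None, mediaid

print(parse_media_id("tmdb:550"))    # ('tmdb', '550')
print(parse_media_id("The Matrix"))  # (None, 'The Matrix')
```

The raw ID is left as a string here; the endpoint itself converts `tmdb`/`bangumi` IDs to `int` at the call site.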
@@ -142,15 +157,105 @@ def search_by_id(mediaid: str,
@router.get("/title", summary="模糊搜索资源", response_model=schemas.Response)
def search_by_title(keyword: str = None,
page: int = 0,
sites: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search_by_title(keyword: Optional[str] = None,
page: Optional[int] = 0,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据名称模糊搜索站点资源,支持分页,关键词为空时返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None)
# 取消正在运行的AI推荐并清除数据库缓存
AIRecommendChain().cancel_ai_recommend()
torrents = await SearchChain().async_search_by_title(
title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None,
cache_local=True
)
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
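Several endpoints above parse the comma-separated `sites` query string into a list of integer site IDs with the same inline comprehension. Pulled out as a standalone sketch (the helper name is hypothetical, not project code):

```python
from typing import List, Optional

def parse_site_ids(sites: Optional[str]) -> Optional[List[int]]:
    """'1,2,3' -> [1, 2, 3]; None or '' -> None (meaning: search all sites)."""
    if not sites:
        return None
    return [int(site) for site in sites.split(",") if site]

print(parse_site_ids("1,2,3"))  # [1, 2, 3]
print(parse_site_ids(None))     # None
```

The `if site` guard also tolerates stray commas, so `"1,,3"` yields `[1, 3]` rather than raising on `int("")`.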
@router.post("/recommend", summary="AI推荐资源", response_model=schemas.Response)
async def recommend_search_results(
filtered_indices: Optional[List[int]] = Body(None, embed=True, description="筛选后的索引列表"),
check_only: bool = Body(False, embed=True, description="仅检查状态,不启动新任务"),
force: bool = Body(False, embed=True, description="强制重新推荐,清除旧结果"),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
AI推荐资源 - 轮询接口
前端轮询此接口,发送筛选后的索引(如果有筛选)
后端根据请求变化自动取消旧任务并启动新任务
参数:
- filtered_indices: 筛选后的索引列表(可选,为空或不提供时使用所有结果)
- check_only: 仅检查状态(首次打开页面时使用,避免触发不必要的重新推理)
- force: 强制重新推荐(清除旧结果并重新启动)
返回数据结构:
{
"success": bool,
"message": string, // 错误信息(仅在错误时存在)
"data": {
"status": string, // 状态: disabled | idle | running | completed | error
"results": array // 推荐结果,仅status=completed时存在
}
}
"""
# 从缓存获取上次搜索结果
results = await SearchChain().async_last_search_results() or []
if not results:
return schemas.Response(success=False, message="没有可用的搜索结果", data={
"status": "error"
})
recommend_chain = AIRecommendChain()
# 如果是强制模式,先取消并清除旧结果,然后直接启动新任务
if force:
# 检查功能是否启用
if not settings.AI_AGENT_ENABLE or not settings.AI_RECOMMEND_ENABLED:
return schemas.Response(success=True, data={
"status": "disabled"
})
logger.info("收到新推荐请求,清除旧结果并启动新任务")
recommend_chain.cancel_ai_recommend()
recommend_chain.start_recommend_task(filtered_indices, len(results), results)
# 直接返回运行中状态
return schemas.Response(success=True, data={
"status": "running"
})
# 如果是仅检查模式,不传递 filtered_indices,避免触发请求变化检测
if check_only:
# 返回当前运行状态,不做任何任务启动或取消操作
current_status = recommend_chain.get_current_status_only()
# 如果有错误,将错误信息放到message中
if current_status.get("status") == "error":
error_msg = current_status.pop("error", "未知错误")
return schemas.Response(success=False, message=error_msg, data=current_status)
return schemas.Response(success=True, data=current_status)
# 获取当前状态(会检测请求是否变化)
status_data = recommend_chain.get_status(filtered_indices, len(results))
# 如果功能未启用,直接返回禁用状态
if status_data.get("status") == "disabled":
return schemas.Response(success=True, data=status_data)
# 如果是空闲状态,启动新任务
if status_data["status"] == "idle":
recommend_chain.start_recommend_task(filtered_indices, len(results), results)
# 立即返回运行中状态
return schemas.Response(success=True, data={
"status": "running"
})
# 如果有错误,将错误信息放到message中
if status_data.get("status") == "error":
error_msg = status_data.pop("error", "未知错误")
return schemas.Response(success=False, message=error_msg, data=status_data)
# 返回当前状态
return schemas.Response(success=True, data=status_data)
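The docstring above describes a polling protocol: the frontend re-POSTs `/recommend` until the backend reports a terminal status. A hedged client-side sketch of that loop follows; `fetch_status` stands in for the actual HTTP call and is an assumption, not project code:

```python
import time
from typing import Callable, List, Optional

def poll_recommend(fetch_status: Callable[[Optional[List[int]]], dict],
                   filtered_indices: Optional[List[int]] = None,
                   interval: float = 0.0,
                   max_polls: int = 20) -> dict:
    """Poll until status is terminal (completed / error / disabled)."""
    for _ in range(max_polls):
        data = fetch_status(filtered_indices)
        if data.get("status") in ("completed", "error", "disabled"):
            return data
        time.sleep(interval)  # "idle" or "running": keep polling
    return {"status": "timeout"}  # client-side guard, not a backend status

# Simulated backend: two "running" responses, then "completed"
responses = iter([{"status": "running"},
                  {"status": "running"},
                  {"status": "completed", "results": ["torrent-a"]}])
result = poll_recommend(lambda idx: next(responses))
print(result["status"])  # completed
```

A real client would pass the filtered index list on each poll so the backend can detect request changes and restart the task, exactly as the endpoint comments describe.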


@@ -1,24 +1,28 @@
from typing import List, Any, Dict
from typing import List, Any, Dict, Optional
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from starlette.background import BackgroundTasks
from app import schemas
from app.api.endpoints.plugin import register_plugin_api
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.command import Command
from app.core.event import eventmanager
from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.db import get_db
from app.db import get_db, get_async_db
from app.db.models import User
from app.db.models.site import Site
from app.db.models.siteicon import SiteIcon
from app.db.models.sitestatistic import SiteStatistic
from app.db.models.siteuserdata import SiteUserData
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.sites import SitesHelper
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.helper.sites import SitesHelper # noqa
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey, EventType
from app.utils.string import StringUtils
@@ -27,20 +31,20 @@ router = APIRouter()
@router.get("/", summary="所有站点", response_model=List[schemas.Site])
def read_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> List[dict]:
async def read_sites(db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser)) -> List[dict]:
"""
获取站点列表
"""
return Site.list_order_by_pri(db)
return await Site.async_list_order_by_pri(db)
@router.post("/", summary="新增站点", response_model=schemas.Response)
def add_site(
async def add_site(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
site_in: schemas.Site,
_: schemas.TokenPayload = Depends(get_current_active_superuser)
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
新增站点
@@ -50,10 +54,10 @@ def add_site(
if SitesHelper().auth_level < 2:
return schemas.Response(success=False, message="用户未通过认证,无法使用站点功能!")
domain = StringUtils.get_url_domain(site_in.url)
site_info = SitesHelper().get_indexer(domain)
site_info = await SitesHelper().async_get_indexer(domain)
if not site_info:
return schemas.Response(success=False, message="该站点不支持,请检查站点域名是否正确")
if Site.get_by_domain(db, domain):
if await Site.async_get_by_domain(db, domain):
return schemas.Response(success=False, message=f"{domain} 站点已存在")
# 保存站点信息
site_in.domain = domain
@@ -63,42 +67,42 @@ def add_site(
site_in.name = site_info.get("name")
site_in.id = None
site_in.public = 1 if site_info.get("public") else 0
site = Site(**site_in.dict())
site = Site(**site_in.model_dump())
site.create(db)
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": domain
})
return schemas.Response(success=True)
@router.put("/", summary="更新站点", response_model=schemas.Response)
def update_site(
async def update_site(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
site_in: schemas.Site,
_: schemas.TokenPayload = Depends(get_current_active_superuser)
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
更新站点信息
"""
site = Site.get(db, site_in.id)
site = await Site.async_get(db, site_in.id)
if not site:
return schemas.Response(success=False, message="站点不存在")
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site.update(db, site_in.dict())
await site.async_update(db, site_in.model_dump())
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": site_in.domain
})
return schemas.Response(success=True)
@router.get("/cookiecloud", summary="CookieCloud同步", response_model=schemas.Response)
def cookie_cloud_sync(background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def cookie_cloud_sync(background_tasks: BackgroundTasks,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
运行CookieCloud同步站点信息
"""
@@ -107,7 +111,7 @@ def cookie_cloud_sync(background_tasks: BackgroundTasks,
@router.get("/reset", summary="重置站点", response_model=schemas.Response)
def reset(db: Session = Depends(get_db),
def reset(db: AsyncSession = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
"""
清空所有站点数据并重新同步CookieCloud站点信息
@@ -118,25 +122,25 @@ def reset(db: Session = Depends(get_db),
# 启动定时服务
Scheduler().start("cookiecloud", manual=True)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": "*"
})
eventmanager.send_event(EventType.SiteDeleted,
{
"site_id": "*"
})
return schemas.Response(success=True, message="站点已重置!")
@router.post("/priorities", summary="批量更新站点优先级", response_model=schemas.Response)
def update_sites_priority(
async def update_sites_priority(
priorities: List[dict],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
批量更新站点优先级
"""
for priority in priorities:
site = Site.get(db, priority.get("id"))
site = await Site.async_get(db, priority.get("id"))
if site:
site.update(db, {"pri": priority.get("pri")})
await site.async_update(db, {"pri": priority.get("pri")})
return schemas.Response(success=True)
@@ -145,9 +149,9 @@ def update_cookie(
site_id: int,
username: str,
password: str,
code: str = None,
code: Optional[str] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
使用用户密码更新站点Cookie
"""
@@ -170,7 +174,7 @@ def update_cookie(
def refresh_userdata(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
刷新站点用户数据
"""
@@ -188,37 +192,37 @@ def refresh_userdata(
@router.get("/userdata/latest", summary="查询所有站点最新用户数据", response_model=List[schemas.SiteUserData])
def read_userdata_latest(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def read_userdata_latest(
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
查询所有站点最新用户数据
"""
user_datas = SiteUserData.get_latest(db)
user_datas = await SiteUserData.async_get_latest(db)
if not user_datas:
return []
return [user_data.to_dict() for user_data in user_datas]
@router.get("/userdata/{site_id}", summary="查询某站点用户数据", response_model=schemas.Response)
def read_userdata(
async def read_userdata(
site_id: int,
workdate: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
workdate: Optional[str] = None,
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
查询站点用户数据
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
user_data = SiteUserData.get_by_domain(db, domain=site.domain, workdate=workdate)
if not user_data:
user_datas = await SiteUserData.async_get_by_domain(db, domain=site.domain, workdate=workdate)
if not user_datas:
return schemas.Response(success=False, data=[])
return schemas.Response(success=True, data=user_data)
return schemas.Response(success=True, data=[data.to_dict() for data in user_datas])
@router.get("/test/{site_id}", summary="连接测试", response_model=schemas.Response)
@@ -239,19 +243,19 @@ def test_site(site_id: int,
@router.get("/icon/{site_id}", summary="站点图标", response_model=schemas.Response)
def site_icon(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def site_icon(site_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取站点图标base64或者url
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
icon = SiteIcon.get_by_domain(db, site.domain)
icon = await SiteIcon.async_get_by_domain(db, site.domain)
if not icon:
return schemas.Response(success=False, message="站点图标不存在!")
return schemas.Response(success=True, data={
@@ -260,19 +264,19 @@ def site_icon(site_id: int,
@router.get("/category/{site_id}", summary="站点分类", response_model=List[schemas.SiteCategory])
def site_category(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def site_category(site_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取站点分类
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
indexer = SitesHelper().get_indexer(site.domain)
indexer = await SitesHelper().async_get_indexer(site.domain)
if not indexer:
raise HTTPException(
status_code=404,
@@ -290,38 +294,38 @@ def site_category(site_id: int,
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int,
keyword: str = None,
cat: str = None,
page: int = 0,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def site_resource(site_id: int,
keyword: Optional[str] = None,
cat: Optional[str] = None,
page: Optional[int] = 0,
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
浏览站点资源
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
torrents = TorrentsChain().browse(domain=site.domain, keyword=keyword, cat=cat, page=page)
torrents = await TorrentsChain().async_browse(domain=site.domain, keyword=keyword, cat=cat, page=page)
if not torrents:
return []
return [torrent.to_dict() for torrent in torrents]
@router.get("/domain/{site_url}", summary="站点详情", response_model=schemas.Site)
def read_site_by_domain(
async def read_site_by_domain(
site_url: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
通过域名获取站点信息
"""
domain = StringUtils.get_url_domain(site_url)
site = Site.get_by_domain(db, domain)
site = await Site.async_get_by_domain(db, domain)
if not site:
raise HTTPException(
status_code=404,
@@ -330,25 +334,36 @@ def read_site_by_domain(
return site
@router.get("/statistic/{site_url}", summary="站点统计信息", response_model=schemas.SiteStatistic)
def read_site_by_domain(
@router.get("/statistic/{site_url}", summary="特定站点统计信息", response_model=schemas.SiteStatistic)
async def read_statistic_by_domain(
site_url: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
通过域名获取站点统计信息
"""
domain = StringUtils.get_url_domain(site_url)
sitestatistic = SiteStatistic.get_by_domain(db, domain)
sitestatistic = await SiteStatistic.async_get_by_domain(db, domain)
if sitestatistic:
return sitestatistic
return schemas.SiteStatistic(domain=domain)
@router.get("/statistic", summary="所有站点统计信息", response_model=List[schemas.SiteStatistic])
async def read_statistics(
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
获取所有站点统计信息
"""
return await SiteStatistic.async_list(db)
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
async def read_rss_sites(db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
"""
获取订阅站点列表
"""
@@ -356,7 +371,7 @@ def read_rss_sites(db: Session = Depends(get_db),
selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
# 所有站点
all_site = Site.list_order_by_pri(db)
all_site = await Site.async_list_order_by_pri(db)
if not selected_sites:
return all_site
@@ -366,7 +381,7 @@ def read_rss_sites(db: Session = Depends(get_db),
@router.get("/auth", summary="查询认证站点", response_model=dict)
def read_auth_sites(_: schemas.TokenPayload = Depends(verify_token)) -> dict:
async def read_auth_sites(_: schemas.TokenPayload = Depends(verify_token)) -> dict:
"""
获取可认证站点列表
"""
@@ -384,22 +399,48 @@ def auth_site(
if not auth_info or not auth_info.site or not auth_info.params:
return schemas.Response(success=False, message="请输入认证站点和认证参数")
status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.model_dump())
# 认证成功后,重新初始化插件
PluginManager().init_config()
Scheduler().init_plugin_jobs()
Command().init_commands()
register_plugin_api()
return schemas.Response(success=status, message=msg)
@router.get("/mapping", summary="获取站点域名到名称的映射", response_model=schemas.Response)
async def site_mapping(_: User = Depends(get_current_active_superuser_async)):
"""
获取站点域名到名称的映射关系
"""
try:
sites = await SiteOper().async_list()
mapping = {}
for site in sites:
mapping[site.domain] = site.name
return schemas.Response(success=True, data=mapping)
except Exception as e:
return schemas.Response(success=False, message=f"获取映射失败:{str(e)}")
@router.get("/supporting", summary="获取支持的站点列表", response_model=dict)
async def support_sites(_: User = Depends(get_current_active_superuser_async)):
"""
获取支持的站点列表
"""
return SitesHelper().get_indexsites()
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
async def read_site(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)
) -> Any:
"""
通过ID获取站点信息
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
@@ -409,18 +450,18 @@ def read_site(
@router.delete("/{site_id}", summary="删除站点", response_model=schemas.Response)
def delete_site(
async def delete_site(
site_id: int,
db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)
) -> Any:
"""
删除站点
"""
Site.delete(db, site_id)
await Site.async_delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
await eventmanager.async_send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
return schemas.Response(success=True)


@@ -1,6 +1,6 @@
from datetime import datetime
import math
from pathlib import Path
from typing import Any, List
from typing import Any, List, Optional
from fastapi import APIRouter, Depends, HTTPException
from starlette.responses import FileResponse, Response
@@ -12,9 +12,10 @@ from app.core.config import settings
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token
from app.db.models import User
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.helper.progress import ProgressHelper
from app.schemas.types import ProgressKey
from app.utils.string import StringUtils
router = APIRouter()
@@ -27,11 +28,23 @@ def qrcode(name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
qrcode_data, errmsg = StorageChain().generate_qrcode(name)
if qrcode_data:
return schemas.Response(success=True, data=qrcode_data, message=errmsg)
return schemas.Response(success=False)
return schemas.Response(success=False, message=errmsg)
@router.get("/auth_url/{name}", summary="获取 OAuth2 授权 URL", response_model=schemas.Response)
def auth_url(name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取 OAuth2 授权 URL
"""
auth_data, errmsg = StorageChain().generate_auth_url(name)
if auth_data:
return schemas.Response(success=True, data=auth_data)
return schemas.Response(success=False, message=errmsg)
@router.get("/check/{name}", summary="二维码登录确认", response_model=schemas.Response)
def check(name: str, ck: str = None, t: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def check(name: str, ck: Optional[str] = None, t: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Confirm QR-code login
"""
@@ -55,9 +68,19 @@ def save(name: str,
return schemas.Response(success=True)
@router.get("/reset/{name}", summary="重置存储配置", response_model=schemas.Response)
def reset(name: str,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Reset storage configuration
"""
StorageChain().reset_config(name)
return schemas.Response(success=True)
@router.post("/list", summary="所有目录和文件", response_model=List[schemas.FileItem])
def list_files(fileitem: schemas.FileItem,
sort: str = 'updated_at',
sort: Optional[str] = 'updated_at',
_: User = Depends(get_current_active_superuser)) -> Any:
"""
List all directories and files under the current directory
@@ -69,9 +92,9 @@ def list_files(fileitem: schemas.FileItem,
file_list = StorageChain().list_files(fileitem)
if file_list:
if sort == "name":
file_list.sort(key=lambda x: x.name or "")
file_list.sort(key=lambda x: StringUtils.natural_sort_key(x.name or ""))
else:
file_list.sort(key=lambda x: x.modify_time or datetime.min, reverse=True)
file_list.sort(key=lambda x: x.modify_time or -math.inf, reverse=True)
return file_list
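The sorting change above replaces a plain string key with `StringUtils.natural_sort_key`, so that numbered files order numerically ("ep2" before "ep10"). A hedged sketch of such a key follows; MoviePilot's actual implementation may differ in detail:

```python
import re

def natural_sort_key(name: str):
    # Split into digit and non-digit runs; numeric runs compare as integers,
    # so "ep2" orders before "ep10" (a plain string sort would reverse them).
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

names = ["ep10.mkv", "ep2.mkv", "ep1.mkv"]
names.sort(key=natural_sort_key)
# names is now in natural episode order
```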
@@ -140,7 +163,7 @@ def image(fileitem: schemas.FileItem,
@router.post("/rename", summary="重命名文件或目录", response_model=schemas.Response)
def rename(fileitem: schemas.FileItem,
new_name: str,
recursive: bool = False,
recursive: Optional[bool] = False,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Rename a file or directory
@@ -151,47 +174,49 @@ def rename(fileitem: schemas.FileItem,
"""
if not new_name:
return schemas.Response(success=False, message="新名称为空")
# Rename files inside the directory
if recursive:
transferchain = TransferChain()
media_exts = settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIOEXT
# Recursively rename files inside the directory (smart-recognition naming)
sub_files: List[schemas.FileItem] = StorageChain().list_files(fileitem)
if sub_files:
# Start progress
progress = ProgressHelper(ProgressKey.BatchRename)
progress.start()
total = len(sub_files)
handled = 0
for sub_file in sub_files:
handled += 1
progress.update(value=handled / total * 100,
text=f"正在处理 {sub_file.name} ...")
if sub_file.type == "dir":
continue
if not sub_file.extension:
continue
if f".{sub_file.extension.lower()}" not in media_exts:
continue
sub_path = Path(f"{fileitem.path}{sub_file.name}")
meta = MetaInfoPath(sub_path)
mediainfo = transferchain.recognize_media(meta)
if not mediainfo:
progress.end()
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到媒体信息")
new_path = transferchain.recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
progress.end()
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到新名称")
ret: schemas.Response = rename(fileitem=sub_file,
new_name=Path(new_path).name,
recursive=False)
if not ret.success:
progress.end()
return schemas.Response(success=False, message=f"{sub_path.name} 重命名失败!")
progress.end()
# Rename the item itself
result = StorageChain().rename_file(fileitem, new_name)
if result:
if recursive:
transferchain = TransferChain()
media_exts = settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIO_TRACK_EXT
# Recursively rename files inside the directory (smart-recognition naming)
sub_files: List[schemas.FileItem] = StorageChain().list_files(fileitem)
if sub_files:
# Start progress
progress = ProgressHelper()
progress.start(ProgressKey.BatchRename)
total = len(sub_files)
handled = 0
for sub_file in sub_files:
handled += 1
progress.update(value=handled / total * 100,
text=f"正在处理 {sub_file.name} ...",
key=ProgressKey.BatchRename)
if sub_file.type == "dir":
continue
if not sub_file.extension:
continue
if f".{sub_file.extension.lower()}" not in media_exts:
continue
sub_path = Path(f"{fileitem.path}{sub_file.name}")
meta = MetaInfoPath(sub_path)
mediainfo = transferchain.recognize_media(meta)
if not mediainfo:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到媒体信息")
new_path = transferchain.recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到新名称")
ret: schemas.Response = rename(fileitem=sub_file,
new_name=Path(new_path).name,
recursive=False)
if not ret.success:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 重命名失败!")
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=True)
return schemas.Response(success=False)
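In both versions of the recursive branch above, batch-rename progress is reported as `handled / total * 100` through `ProgressHelper`, now keyed by `ProgressKey.BatchRename` on each call. A small sketch of that progress arithmetic; `ProgressReporter` is a hypothetical stand-in for `ProgressHelper`:

```python
# ProgressReporter is a hypothetical stand-in for MoviePilot's ProgressHelper
class ProgressReporter:
    def __init__(self):
        self.updates: list[float] = []

    def update(self, value: float, text: str = "") -> None:
        # Record each reported percentage (rounded for readability)
        self.updates.append(round(value, 1))

files = ["S01E01.mkv", "S01E02.mkv", "S01E03.mkv", "cover.jpg"]
progress = ProgressReporter()
total = len(files)
for handled, name in enumerate(files, start=1):
    # Same percentage formula as the rename loop above
    progress.update(value=handled / total * 100, text=f"Processing {name} ...")
```

Note that, as in the original loop, skipped entries (directories, non-media extensions) still advance the counter, so the bar always reaches 100%.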
@@ -208,7 +233,7 @@ def usage(name: str, _: User = Depends(get_current_active_superuser)) -> Any:
@router.get("/transtype/{name}", summary="支持的整理方式获取", response_model=schemas.StorageTransType)
def transtype(name: str, _: User = Depends(get_current_active_superuser)) -> Any:
async def transtype(name: str, _: User = Depends(get_current_active_superuser_async)) -> Any:
"""
Query the supported transfer types
"""


@@ -1,7 +1,8 @@
from typing import List, Any
from typing import List, Any, Annotated, Optional
import cn2an
from fastapi import APIRouter, Request, BackgroundTasks, Depends, HTTPException, Header
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
@@ -11,12 +12,12 @@ from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db import get_async_db, get_db
from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.db.user_oper import get_current_active_user_async
from app.helper.subscribe import SubscribeHelper
from app.scheduler import Scheduler
from app.schemas.types import MediaType, EventType, SystemConfigKey
@@ -34,28 +35,28 @@ def start_subscribe_add(title: str, year: str,
@router.get("/", summary="查询所有订阅", response_model=List[schemas.Subscribe])
def read_subscribes(
db: Session = Depends(get_db),
async def read_subscribes(
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query all subscriptions
"""
return Subscribe.list(db)
return await Subscribe.async_list(db)
@router.get("/list", summary="查询所有订阅API_TOKEN", response_model=List[schemas.Subscribe])
def list_subscribes(_: str = Depends(verify_apitoken)) -> Any:
async def list_subscribes(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
Query all subscriptions, authenticated via API_TOKEN: ?token=xxx
"""
return read_subscribes()
return await read_subscribes()
@router.post("/", summary="新增订阅", response_model=schemas.Response)
def create_subscribe(
async def create_subscribe(
*,
subscribe_in: schemas.Subscribe,
current_user: User = Depends(get_current_active_user),
current_user: User = Depends(get_current_active_user_async),
) -> schemas.Response:
"""
Add a subscription
@@ -75,43 +76,37 @@ def create_subscribe(
title = subscribe_in.name
else:
title = None
sid, message = SubscribeChain().add(mtype=mtype,
title=title,
year=subscribe_in.year,
tmdbid=subscribe_in.tmdbid,
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
mediaid=subscribe_in.mediaid,
username=current_user.name,
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
search_imdbid=subscribe_in.search_imdbid,
custom_words=subscribe_in.custom_words,
media_category=subscribe_in.media_category,
filter_groups=subscribe_in.filter_groups,
exist_ok=True)
# Subscribing user
subscribe_in.username = current_user.name
# Convert to a dict
subscribe_dict = subscribe_in.model_dump()
if subscribe_in.id:
subscribe_dict.pop("id", None)
sid, message = await SubscribeChain().async_add(mtype=mtype,
title=title,
exist_ok=True,
**subscribe_dict)
return schemas.Response(
success=bool(sid), message=message, data={"id": sid}
)
@router.put("/", summary="更新订阅", response_model=schemas.Response)
def update_subscribe(
async def update_subscribe(
*,
subscribe_in: schemas.Subscribe,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
Update subscription info
"""
subscribe = Subscribe.get(db, subscribe_in.id)
subscribe = await Subscribe.async_get(db, subscribe_in.id)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
# Avoid overwriting the lacking-episode count
old_subscribe_dict = subscribe.to_dict()
subscribe_dict = subscribe_in.dict()
subscribe_dict = subscribe_in.model_dump()
if not subscribe_in.lack_episode:
# When there is no lacking-episode count, drop the field to avoid updating it to 0
subscribe_dict.pop("lack_episode")
@@ -124,50 +119,55 @@ def update_subscribe(
# Whether the total episode count was manually changed
if subscribe_in.total_episode != subscribe.total_episode:
subscribe_dict["manual_total_episode"] = 1
subscribe.update(db, subscribe_dict)
# Update the database
await subscribe.async_update(db, subscribe_dict)
# Re-fetch the updated subscription data
updated_subscribe = await Subscribe.async_get(db, subscribe_in.id)
# Send subscription-modified event
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe_in.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
return schemas.Response(success=True)
@router.put("/status/{subid}", summary="更新订阅状态", response_model=schemas.Response)
def update_subscribe_status(
async def update_subscribe_status(
subid: int,
state: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Update subscription state
"""
subscribe = Subscribe.get(db, subid)
subscribe = await Subscribe.async_get(db, subid)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
valid_states = ["R", "P", "S"]
if state not in valid_states:
return schemas.Response(success=False, message="无效的订阅状态")
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
await subscribe.async_update(db, {
"state": state
})
# Re-fetch the updated subscription data
updated_subscribe = await Subscribe.async_get(db, subid)
# Send subscription-modified event
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subid,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
return schemas.Response(success=True)
@router.get("/media/{mediaid}", summary="查询订阅", response_model=schemas.Subscribe)
def subscribe_mediaid(
async def subscribe_mediaid(
mediaid: str,
season: int = None,
title: str = None,
db: Session = Depends(get_db),
season: Optional[int] = None,
title: Optional[str] = None,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query a subscription by TMDBID/Douban ID/Bangumi ID, with tmdb:/douban: prefixes
@@ -177,23 +177,23 @@ def subscribe_mediaid(
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return Subscribe()
result = Subscribe.exists(db, tmdbid=int(tmdbid), season=season)
result = await Subscribe.async_exists(db, tmdbid=int(tmdbid), season=season)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not doubanid:
return Subscribe()
result = Subscribe.get_by_doubanid(db, doubanid)
result = await Subscribe.async_get_by_doubanid(db, doubanid)
if not result and title:
title_check = True
elif mediaid.startswith("bangumi:"):
bangumiid = mediaid[8:]
if not bangumiid or not str(bangumiid).isdigit():
return Subscribe()
result = Subscribe.get_by_bangumiid(db, int(bangumiid))
result = await Subscribe.async_get_by_bangumiid(db, int(bangumiid))
if not result and title:
title_check = True
else:
result = Subscribe.get_by_mediaid(db, mediaid)
result = await Subscribe.async_get_by_mediaid(db, mediaid)
if not result and title:
title_check = True
# Check subscription by title
@@ -201,7 +201,7 @@ def subscribe_mediaid(
meta = MetaInfo(title)
if season:
meta.begin_season = season
result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
result = await Subscribe.async_get_by_title(db, title=meta.name, season=meta.begin_season)
return result if result else Subscribe()
@@ -217,26 +217,30 @@ def refresh_subscribes(
@router.get("/reset/{subid}", summary="重置订阅", response_model=schemas.Response)
def reset_subscribes(
async def reset_subscribes(
subid: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Reset a subscription
"""
subscribe = Subscribe.get(db, subid)
subscribe = await Subscribe.async_get(db, subid)
if subscribe:
# Capture the old data before updating
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
# Update the subscription
await subscribe.async_update(db, {
"note": [],
"lack_episode": subscribe.total_episode,
"state": "R"
})
# Re-fetch the updated subscription data
updated_subscribe = await Subscribe.async_get(db, subid)
# Send subscription-modified event
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subid,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
return schemas.Response(success=True)
return schemas.Response(success=False, message="订阅不存在")
@@ -253,7 +257,7 @@ def check_subscribes(
@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
def search_subscribes(
async def search_subscribes(
background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -272,7 +276,7 @@ def search_subscribes(
@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
def search_subscribe(
async def search_subscribe(
subscribe_id: int,
background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@@ -292,10 +296,10 @@ def search_subscribe(
@router.delete("/media/{mediaid}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe_by_mediaid(
async def delete_subscribe_by_mediaid(
mediaid: str,
season: int = None,
db: Session = Depends(get_db),
season: Optional[int] = None,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
@@ -306,32 +310,35 @@ def delete_subscribe_by_mediaid(
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return schemas.Response(success=False)
subscribes = Subscribe().get_by_tmdbid(db, int(tmdbid), season)
subscribes = await Subscribe.async_get_by_tmdbid(db, int(tmdbid), season)
delete_subscribes.extend(subscribes)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not doubanid:
return schemas.Response(success=False)
subscribe = Subscribe().get_by_doubanid(db, doubanid)
subscribe = await Subscribe.async_get_by_doubanid(db, doubanid)
if subscribe:
delete_subscribes.append(subscribe)
else:
subscribe = Subscribe().get_by_mediaid(db, mediaid)
subscribe = await Subscribe.async_get_by_mediaid(db, mediaid)
if subscribe:
delete_subscribes.append(subscribe)
for subscribe in delete_subscribes:
Subscribe().delete(db, subscribe.id)
# Capture the subscription info before deletion
subscribe_info = subscribe.to_dict()
subscribe_id = subscribe.id
await Subscribe.async_delete(db, subscribe_id)
# Send event
eventmanager.send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe.id,
"subscribe_info": subscribe.to_dict()
await eventmanager.async_send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe_info
})
return schemas.Response(success=True)
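The loop above now snapshots `to_dict()` and the id before deleting each row, so the `SubscribeDeleted` event still carries the record's data after it is gone. A minimal sketch of this capture-before-delete pattern, using a plain dict as a hypothetical stand-in for the ORM:

```python
# Plain-dict stand-in for the subscription table and event bus
db = {7: {"id": 7, "name": "Some Show", "season": 1}}
events: list[dict] = []

def delete_with_event(db: dict, sub_id: int) -> None:
    record = db[sub_id]
    subscribe_info = dict(record)  # snapshot BEFORE deletion
    del db[sub_id]
    # The event payload still has the full record even though the row is gone
    events.append({"subscribe_id": sub_id, "subscribe_info": subscribe_info})

delete_with_event(db, 7)
```

Reading `subscribe.to_dict()` after the delete would be unreliable (the instance may be expired or detached), which is why the snapshot moves ahead of the delete.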
@router.post("/seerr", summary="OverSeerr/JellySeerr通知订阅", response_model=schemas.Response)
async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
authorization: str = Header(None)) -> Any:
authorization: Annotated[str | None, Header()] = None) -> Any:
"""
Jellyseerr/Overseerr webhook subscription notification
"""
@@ -383,42 +390,54 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
@router.get("/history/{mtype}", summary="查询订阅历史", response_model=List[schemas.Subscribe])
def subscribe_history(
async def subscribe_history(
mtype: str,
page: int = 1,
count: int = 30,
db: Session = Depends(get_db),
page: Optional[int] = 1,
count: Optional[int] = 30,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query movie/TV subscription history
"""
return SubscribeHistory.list_by_type(db, mtype=mtype, page=page, count=count)
return await SubscribeHistory.async_list_by_type(db, mtype=mtype, page=page, count=count)
@router.delete("/history/{history_id}", summary="删除订阅历史", response_model=schemas.Response)
def delete_subscribe(
async def delete_subscribe(
history_id: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
Delete subscription history
"""
SubscribeHistory.delete(db, history_id)
await SubscribeHistory.async_delete(db, history_id)
return schemas.Response(success=True)
@router.get("/popular", summary="热门订阅(基于用户共享数据)", response_model=List[schemas.MediaInfo])
def popular_subscribes(
async def popular_subscribes(
stype: str,
page: int = 1,
count: int = 30,
min_sub: int = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
min_sub: Optional[int] = None,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query popular subscriptions
"""
subscribes = SubscribeHelper().get_statistic(stype=stype, page=page, count=count)
subscribes = await SubscribeHelper().async_get_statistic(
stype=stype,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
if subscribes:
ret_medias = []
for sub in subscribes:
@@ -454,14 +473,14 @@ def popular_subscribes(
@router.get("/user/{username}", summary="用户订阅", response_model=List[schemas.Subscribe])
def user_subscribes(
async def user_subscribes(
username: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query a user's subscriptions
"""
return Subscribe.list_by_username(db, username)
return await Subscribe.async_list_by_username(db, username)
@router.get("/files/{subscribe_id}", summary="订阅相关文件信息", response_model=schemas.SubscrbieInfo)
@@ -479,51 +498,51 @@ def subscribe_files(
@router.post("/share", summary="分享订阅", response_model=schemas.Response)
def subscribe_share(
async def subscribe_share(
sub: schemas.SubscribeShare,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Share a subscription
"""
state, errmsg = SubscribeHelper().sub_share(subscribe_id=sub.subscribe_id,
share_title=sub.share_title,
share_comment=sub.share_comment,
share_user=sub.share_user)
state, errmsg = await SubscribeHelper().async_sub_share(subscribe_id=sub.subscribe_id,
share_title=sub.share_title,
share_comment=sub.share_comment,
share_user=sub.share_user)
return schemas.Response(success=state, message=errmsg)
@router.delete("/share/{share_id}", summary="删除分享", response_model=schemas.Response)
def subscribe_share_delete(
async def subscribe_share_delete(
share_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Delete a share
"""
state, errmsg = SubscribeHelper().share_delete(share_id=share_id)
state, errmsg = await SubscribeHelper().async_share_delete(share_id=share_id)
return schemas.Response(success=state, message=errmsg)
@router.post("/fork", summary="复用订阅", response_model=schemas.Response)
def subscribe_fork(
async def subscribe_fork(
sub: schemas.SubscribeShare,
current_user: User = Depends(get_current_active_user)) -> Any:
current_user: User = Depends(get_current_active_user_async)) -> Any:
"""
Fork a subscription
"""
sub_dict = sub.dict()
sub_dict = sub.model_dump()
sub_dict.pop("id")
for key in list(sub_dict.keys()):
if not hasattr(schemas.Subscribe(), key):
sub_dict.pop(key)
result = create_subscribe(subscribe_in=schemas.Subscribe(**sub_dict),
current_user=current_user)
result = await create_subscribe(subscribe_in=schemas.Subscribe(**sub_dict),
current_user=current_user)
if result.success:
SubscribeHelper().sub_fork(share_id=sub.id)
await SubscribeHelper().async_sub_fork(share_id=sub.id)
return result
@router.get("/follow", summary="查询已Follow的订阅分享人", response_model=List[str])
def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query followed subscription sharers
"""
@@ -531,8 +550,8 @@ def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any
@router.post("/follow", summary="Follow订阅分享人", response_model=schemas.Response)
def follow_subscriber(
share_uid: str = None,
async def follow_subscriber(
share_uid: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Follow a subscription sharer
@@ -540,13 +559,13 @@ def follow_subscriber(
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid not in subscribers:
subscribers.append(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
await SystemConfigOper().async_set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.delete("/follow", summary="取消Follow订阅分享人", response_model=schemas.Response)
def unfollow_subscriber(
share_uid: str = None,
async def unfollow_subscriber(
share_uid: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Unfollow a subscription sharer
@@ -554,51 +573,74 @@ def unfollow_subscriber(
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid in subscribers:
subscribers.remove(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
await SystemConfigOper().async_set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.get("/shares", summary="查询分享的订阅", response_model=List[schemas.SubscribeShare])
def popular_subscribes(
name: str = None,
page: int = 1,
count: int = 30,
async def popular_subscribes(
name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query shared subscriptions
"""
return SubscribeHelper().get_shares(name=name, page=page, count=count)
return await SubscribeHelper().async_get_shares(
name=name,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
@router.get("/share/statistics", summary="查询订阅分享统计", response_model=List[schemas.SubscribeShareStatistics])
async def subscribe_share_statistics(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query subscription share statistics
Returns the number of media items shared by each sharer and the total fork count
"""
return await SubscribeHelper().async_get_share_statistics()
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
async def read_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query subscription info by subscription ID
"""
if not subscribe_id:
return Subscribe()
return Subscribe.get(db, subscribe_id)
return await Subscribe.async_get(db, subscribe_id)
@router.delete("/{subscribe_id}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe(
async def delete_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
Delete subscription info
"""
subscribe = Subscribe.get(db, subscribe_id)
subscribe = await Subscribe.async_get(db, subscribe_id)
if subscribe:
subscribe.delete(db, subscribe_id)
# Capture the subscription info before deletion
subscribe_info = subscribe.to_dict()
await Subscribe.async_delete(db, subscribe_id)
# Send event
eventmanager.send_event(EventType.SubscribeDeleted, {
await eventmanager.async_send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe.to_dict()
"subscribe_info": subscribe_info
})
# Record subscription statistics
SubscribeHelper().sub_done_async({


@@ -1,147 +1,99 @@
import asyncio
import io
import json
import tempfile
import re
from collections import deque
from datetime import datetime
from pathlib import Path
from typing import Optional, Union
from typing import Optional, Union, Annotated
import aiofiles
import pillow_avif # noqa - auto-registers AVIF support
from PIL import Image
from fastapi import APIRouter, Depends, HTTPException, Header, Request, Response
from anyio import Path as AsyncPath
from app.helper.sites import SitesHelper # noqa
from fastapi import APIRouter, Body, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
from app import schemas
from app.chain.mediaserver import MediaServerChain
from app.chain.search import SearchChain
from app.chain.system import SystemChain
from app.core.config import global_vars, settings
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async, \
get_current_active_user_async
from app.helper.llm import LLMHelper
from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.system import SystemHelper
from app.helper.image import ImageHelper
from app.log import logger
from app.monitor import Monitor
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
from app.schemas import ConfigChangeEventData
from app.schemas.types import SystemConfigKey, EventType
from app.utils.crypto import HashUtils
from app.utils.http import RequestUtils
from app.utils.http import RequestUtils, AsyncRequestUtils
from app.utils.security import SecurityUtils
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
from version import APP_VERSION
router = APIRouter()
def fetch_image(
async def fetch_image(
url: str,
proxy: bool = False,
use_disk_cache: bool = False,
proxy: Optional[bool] = None,
use_cache: bool = False,
if_none_match: Optional[str] = None,
allowed_domains: Optional[set[str]] = None) -> Response:
cookies: Optional[str | dict] = None,
allowed_domains: Optional[set[str]] = None) -> Optional[Response]:
"""
Handle image caching logic, supporting HTTP caching and disk caching
"""
if not url:
raise HTTPException(status_code=404, detail="URL not provided")
return None
if allowed_domains is None:
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS)
# Validate URL safety
if not SecurityUtils.is_safe_url(url, allowed_domains):
raise HTTPException(status_code=404, detail="Unsafe URL")
logger.warn(f"Blocked unsafe image URL: {url}")
return None
# Monitor system performance going forward; if disk and HTTP caching cannot meet response-time needs under high concurrency, consider reintroducing an in-memory cache
cache_path = None
if use_disk_cache:
# Build the cache path
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# If there is no file extension, append one - a compromise between blocking malicious file types and practical needs
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# Ensure the cache path and file type are valid
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
raise HTTPException(status_code=400, detail="Invalid cache path or file type")
# Disk-cache expiry is not considered for now; it will be handled by a cache-cleanup mechanism later
if cache_path.exists():
try:
content = cache_path.read_bytes()
etag = HashUtils.md5(content)
headers = RequestUtils.generate_cache_headers(etag, max_age=86400 * 7)
if if_none_match == etag:
return Response(status_code=304, headers=headers)
return Response(content=content, media_type="image/jpeg", headers=headers)
except Exception as e:
# If reading the disk cache fails, just log it and fall through to re-requesting the remote image
logger.debug(f"Failed to read cache file {cache_path}: {e}")
# Request the remote image
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if proxy else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer,
accept_type="image/avif,image/webp,image/apng,*/*").get_res(url=url)
if not response:
raise HTTPException(status_code=502, detail="Failed to fetch the image from the remote server")
# Verify the downloaded content is a valid image
try:
Image.open(io.BytesIO(response.content)).verify()
except Exception as e:
logger.debug(f"Invalid image format for URL {url}: {e}")
raise HTTPException(status_code=502, detail="Invalid image format")
content = response.content
response_headers = response.headers
cache_control_header = response_headers.get("Cache-Control", "")
cache_directive, max_age = RequestUtils.parse_cache_control(cache_control_header)
# If disk caching is requested, save to disk
if use_disk_cache and cache_path:
try:
if not cache_path.parent.exists():
cache_path.parent.mkdir(parents=True, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=cache_path.parent, delete=False) as tmp_file:
tmp_file.write(content)
temp_path = Path(tmp_file.name)
temp_path.replace(cache_path)
except Exception as e:
logger.debug(f"Failed to write cache file {cache_path}: {e}")
# Check If-None-Match
etag = HashUtils.md5(content)
if if_none_match == etag:
headers = RequestUtils.generate_cache_headers(etag, cache_directive, max_age)
return Response(status_code=304, headers=headers)
headers = RequestUtils.generate_cache_headers(etag, cache_directive, max_age)
return Response(
content=content,
media_type=response_headers.get("Content-Type") or UrlUtils.get_mime_type(url, "image/jpeg"),
headers=headers
content = await ImageHelper().async_fetch_image(
url=url,
proxy=proxy,
use_cache=use_cache,
cookies=cookies,
)
if content:
# Check If-None-Match
etag = HashUtils.md5(content)
headers = RequestUtils.generate_cache_headers(etag, max_age=86400 * 7)
if if_none_match == etag:
return Response(status_code=304, headers=headers)
# Return the cached image
return Response(
content=content,
media_type=UrlUtils.get_mime_type(url, "image/jpeg"),
headers=headers
)
@router.get("/img/{proxy}", summary="图片代理")
def proxy_img(
async def proxy_img(
imgurl: str,
proxy: bool = False,
if_none_match: Optional[str] = Header(None),
cache: bool = False,
use_cookies: bool = False,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
"""
@@ -151,48 +103,91 @@ def proxy_img(
hosts = [config.config.get("host") for config in MediaServerHelper().get_configs().values() if
config and config.config and config.config.get("host")]
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS) | set(hosts)
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=False,
if_none_match=if_none_match, allowed_domains=allowed_domains)
cookies = (
MediaServerChain().get_image_cookies(server=None, image_url=imgurl)
if use_cookies
else None
)
return await fetch_image(url=imgurl, proxy=proxy, use_cache=cache, cookies=cookies,
if_none_match=if_none_match, allowed_domains=allowed_domains)
@router.get("/cache/image", summary="图片缓存")
def cache_img(
async def cache_img(
url: str,
if_none_match: Optional[str] = Header(None),
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
"""
Cache image files locally with HTTP caching support; use the disk cache when global image caching is enabled
"""
# If global image caching is disabled, skip the disk cache
proxy = "doubanio.com" not in url
return fetch_image(url=url, proxy=proxy, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
return await fetch_image(url=url, use_cache=settings.GLOBAL_IMAGE_CACHE,
if_none_match=if_none_match)
@router.get("/global", summary="查询非敏感系统设置", response_model=schemas.Response)
def get_global_setting():
def get_global_setting(token: str):
"""
查询非敏感系统设置(无需鉴权)
查询非敏感系统设置(默认鉴权)
仅包含登录前UI初始化必需的字段
"""
# FIXME: 新增敏感配置项时要在此处添加排除项
info = settings.dict(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "TMDB_API_KEY", "TVDB_API_KEY", "FANART_API_KEY",
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
if token != "moviepilot":
raise HTTPException(status_code=403, detail="Forbidden")
# 白名单模式仅包含登录前UI初始化必需的字段
info = settings.model_dump(
include={
"TMDB_IMAGE_DOMAIN",
"GLOBAL_IMAGE_CACHE",
"ADVANCED_MODE",
}
)
# 追加用户唯一ID
# 追加版本信息(用于版本检查)
info.update({
"USER_UNIQUE_ID": SystemUtils.generate_user_unique_id()
"FRONTEND_VERSION": SystemChain.get_frontend_version(),
"BACKEND_VERSION": APP_VERSION
})
return schemas.Response(success=True,
data=info)
@router.get("/global/user", summary="查询用户相关系统设置", response_model=schemas.Response)
async def get_user_global_setting(_: User = Depends(get_current_active_user_async)):
"""
查询用户相关系统设置(登录后获取)
包含业务功能相关的配置和用户权限信息
"""
# 业务功能相关的配置字段
info = settings.model_dump(
include={
"RECOGNIZE_SOURCE",
"SEARCH_SOURCE",
"AI_RECOMMEND_ENABLED",
"PASSKEY_ALLOW_REGISTER_WITHOUT_OTP"
}
)
# 智能助手总开关未开启时,智能推荐状态强制返回False
if not settings.AI_AGENT_ENABLE:
info["AI_RECOMMEND_ENABLED"] = False
# 追加用户唯一ID和订阅分享管理权限
share_admin = SubscribeHelper().is_admin_user()
info.update({
"USER_UNIQUE_ID": SubscribeHelper().get_user_uuid(),
"SUBSCRIBE_SHARE_MANAGE": share_admin,
"WORKFLOW_SHARE_MANAGE": share_admin,
})
return schemas.Response(success=True,
data=info)
@router.get("/env", summary="查询系统配置", response_model=schemas.Response)
def get_env_setting(_: User = Depends(get_current_active_superuser)):
async def get_env_setting(_: User = Depends(get_current_active_user_async)):
"""
查询系统环境变量,包括当前版本号(仅管理员)
"""
info = settings.dict(
info = settings.model_dump(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY"}
)
info.update({
@@ -206,26 +201,33 @@ def get_env_setting(_: User = Depends(get_current_active_superuser)):
@router.post("/env", summary="更新系统配置", response_model=schemas.Response)
def set_env_setting(env: dict,
_: User = Depends(get_current_active_superuser)):
async def set_env_setting(env: dict,
_: User = Depends(get_current_active_superuser_async)):
"""
更新系统环境变量(仅管理员)
"""
result = settings.update_settings(env=env)
# 统计成功和失败的结果
success_updates = {k: v for k, v in result.items() if v[0]}
failed_updates = {k: v for k, v in result.items() if not v[0]}
failed_updates = {k: v for k, v in result.items() if v[0] is False}
if failed_updates:
return schemas.Response(
success=False,
message="部分配置项更新失败",
message=f"{', '.join([v[1] for v in failed_updates.values()])}",
data={
"success_updates": success_updates,
"failed_updates": failed_updates
}
)
if success_updates:
# 发送配置变更事件
await eventmanager.async_send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=success_updates.keys(),
change_type="update"
))
return schemas.Response(
success=True,
message="所有配置项更新成功",
@@ -240,16 +242,16 @@ async def get_progress(request: Request, process_type: str, _: schemas.TokenPayl
"""
实时获取处理进度,返回格式为SSE
"""
progress = ProgressHelper()
progress = ProgressHelper(process_type)
async def event_generator():
try:
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
detail = progress.get(process_type)
detail = progress.get()
yield f"data: {json.dumps(detail)}\n\n"
await asyncio.sleep(0.2)
await asyncio.sleep(0.5)
except asyncio.CancelledError:
return
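Each poll above is emitted as one Server-Sent Events frame; the framing itself is just a `data:` line terminated by a blank line. A minimal sketch (`sse_event` is a hypothetical helper name):

```python
import json


def sse_event(payload: dict) -> str:
    # One SSE frame: a "data:" line followed by an empty line,
    # matching the f"data: {json.dumps(detail)}\n\n" pattern above
    return f"data: {json.dumps(payload, ensure_ascii=False)}\n\n"
```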
@@ -257,8 +259,8 @@ async def get_progress(request: Request, process_type: str, _: schemas.TokenPayl
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
_: User = Depends(get_current_active_superuser)):
async def get_setting(key: str,
_: User = Depends(get_current_active_user_async)):
"""
查询系统设置(仅管理员)
"""
@@ -272,23 +274,58 @@ def get_setting(key: str,
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
_: User = Depends(get_current_active_superuser)):
async def set_setting(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser_async),
):
"""
更新系统设置(仅管理员)
"""
if hasattr(settings, key):
success, message = settings.update_setting(key=key, value=value)
if success:
# 发送配置变更事件
await eventmanager.async_send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=value,
change_type="update"
))
elif success is None:
success = True
return schemas.Response(success=success, message=message)
elif key in {item.value for item in SystemConfigKey}:
SystemConfigOper().set(key, value)
if isinstance(value, list):
value = list(filter(None, value))
value = value if value else None
success = await SystemConfigOper().async_set(key, value)
if success:
# 发送配置变更事件
await eventmanager.async_send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=value,
change_type="update"
))
return schemas.Response(success=True)
else:
return schemas.Response(success=False, message=f"配置项 '{key}' 不存在")
@router.get("/llm-models", summary="获取LLM模型列表", response_model=schemas.Response)
async def get_llm_models(provider: str, api_key: str, base_url: Optional[str] = None, _: User = Depends(get_current_active_user_async)):
"""
获取LLM模型列表
"""
try:
models = LLMHelper().get_models(provider, api_key, base_url)
return schemas.Response(success=True, data=models)
except Exception as e:
return schemas.Response(success=False, message=str(e))
@router.get("/message", summary="实时消息")
async def get_message(request: Request, role: str = "system", _: schemas.TokenPayload = Depends(verify_resource_token)):
async def get_message(request: Request, role: Optional[str] = "system",
_: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取系统消息,返回格式为SSE
"""
@@ -309,67 +346,113 @@ async def get_message(request: Request, role: str = "system", _: schemas.TokenPa
@router.get("/logging", summary="实时日志")
async def get_logging(request: Request, length: int = 50, logfile: str = "moviepilot.log",
async def get_logging(request: Request, length: Optional[int] = 50, logfile: Optional[str] = "moviepilot.log",
_: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取系统日志
length = -1 时返回 text/plain,否则返回 SSE 格式
"""
log_path = settings.LOG_PATH / logfile
base_path = AsyncPath(settings.LOG_PATH)
log_path = base_path / logfile
if not SecurityUtils.is_safe_path(settings.LOG_PATH, log_path, allowed_suffixes={".log"}):
if not await SecurityUtils.async_is_safe_path(base_path=base_path, user_path=log_path, allowed_suffixes={".log"}):
raise HTTPException(status_code=404, detail="Not Found")
if not log_path.exists() or not log_path.is_file():
if not await log_path.exists() or not await log_path.is_file():
raise HTTPException(status_code=404, detail="Not Found")
async def log_generator():
try:
# 使用固定大小的双向队列来限制内存使用
lines_queue = deque(maxlen=max(length, 50))
# 使用 aiofiles 异步读取文件
async with aiofiles.open(log_path, mode="r", encoding="utf-8") as f:
# 逐行读取文件,将每一行存入队列
file_content = await f.read()
for line in file_content.splitlines():
# 获取文件大小
file_stat = await log_path.stat()
file_size = file_stat.st_size
# 读取历史日志
async with aiofiles.open(log_path, mode="r", encoding="utf-8", errors="ignore") as f:
# 优化大文件读取策略
if file_size > 100 * 1024:
# 只读取最后100KB的内容
bytes_to_read = min(file_size, 100 * 1024)
position = file_size - bytes_to_read
await f.seek(position)
content = await f.read()
# 找到第一个完整的行
first_newline = content.find('\n')
if first_newline != -1:
content = content[first_newline + 1:]
else:
# 小文件直接读取全部内容
content = await f.read()
# 按行分割并添加到队列,只保留非空行
lines = [line.strip() for line in content.splitlines() if line.strip()]
# 只取最后N行
for line in lines[-max(length, 50):]:
lines_queue.append(line)
for line in lines_queue:
yield f"data: {line}\n\n"
# 输出历史日志
for line in lines_queue:
yield f"data: {line}\n\n"
# 实时监听新日志
async with aiofiles.open(log_path, mode="r", encoding="utf-8", errors="ignore") as f:
# 移动文件指针到文件末尾,继续监听新增内容
await f.seek(0, 2)
# 记录初始文件大小
initial_stat = await log_path.stat()
initial_size = initial_stat.st_size
# 实时监听新日志,使用更短的轮询间隔
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
line = await f.readline()
if not line:
# 检查文件是否有新内容
current_stat = await log_path.stat()
current_size = current_stat.st_size
if current_size > initial_size:
# 文件有新内容,读取新行
line = await f.readline()
if line:
line = line.strip()
if line:
yield f"data: {line}\n\n"
initial_size = current_size
else:
# 没有新内容,短暂等待
await asyncio.sleep(0.5)
continue
yield f"data: {line}\n\n"
except asyncio.CancelledError:
return
except Exception as err:
logger.error(f"日志读取异常: {err}")
yield f"data: 日志读取异常: {err}\n\n"
# 根据length参数返回不同的响应
if length == -1:
# 返回全部日志作为文本响应
if not log_path.exists():
if not await log_path.exists():
return Response(content="日志文件不存在!", media_type="text/plain")
with open(log_path, "r", encoding='utf-8') as file:
text = file.read()
# 倒序输出
text = "\n".join(text.split("\n")[::-1])
return Response(content=text, media_type="text/plain")
try:
# 使用 aiofiles 异步读取文件
async with aiofiles.open(log_path, mode="r", encoding="utf-8", errors="ignore") as file:
text = await file.read()
# 倒序输出
text = "\n".join(text.split("\n")[::-1])
return Response(content=text, media_type="text/plain")
except Exception as e:
return Response(content=f"读取日志文件失败: {e}", media_type="text/plain")
else:
# 返回SSE流响应
return StreamingResponse(log_generator(), media_type="text/event-stream")
@router.get("/versions", summary="查询Github所有Release版本", response_model=schemas.Response)
def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
async def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询Github所有Release版本
"""
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
version_res = await AsyncRequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
f"https://api.github.com/repos/jxxghp/MoviePilot/releases")
if version_res:
ver_json = version_res.json()
@@ -381,7 +464,7 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(title: str,
rulegroup_name: str,
subtitle: str = None,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)):
"""
过滤规则测试,规则类型:1-订阅,2-洗版,3-搜索
@@ -411,30 +494,80 @@ def ruletest(title: str,
@router.get("/nettest", summary="测试网络连通性")
def nettest(url: str,
proxy: bool,
_: schemas.TokenPayload = Depends(verify_token)):
async def nettest(
url: str,
proxy: bool,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
):
"""
测试网络连通性
"""
# 记录开始的毫秒数
start_time = datetime.now()
headers = None
# 当前使用的加速代理
proxy_name = ""
if "github" in url:
# 这是github的连通性测试
headers = settings.GITHUB_HEADERS
if "{GITHUB_PROXY}" in url:
url = url.replace(
"{GITHUB_PROXY}", UrlUtils.standardize_base_url(settings.GITHUB_PROXY or "")
)
if settings.GITHUB_PROXY:
proxy_name = "Github加速代理"
if "{PIP_PROXY}" in url:
url = url.replace(
"{PIP_PROXY}",
UrlUtils.standardize_base_url(
settings.PIP_PROXY or "https://pypi.org/simple/"
),
)
if settings.PIP_PROXY:
proxy_name = "PIP加速代理"
url = url.replace("{TMDBAPIKEY}", settings.TMDB_API_KEY)
result = RequestUtils(proxies=settings.PROXY if proxy else None,
ua=settings.USER_AGENT).get_res(url)
result = await AsyncRequestUtils(
proxies=settings.PROXY if proxy else None,
headers=headers,
timeout=10,
ua=settings.NORMAL_USER_AGENT,
).get_res(url)
# 计时结束的毫秒数
end_time = datetime.now()
time = round((end_time - start_time).total_seconds() * 1000)
# 计算相关秒数
if result and result.status_code == 200:
return schemas.Response(success=True, data={
"time": round((end_time - start_time).microseconds / 1000)
})
elif result:
return schemas.Response(success=False, message=f"错误码:{result.status_code}", data={
"time": round((end_time - start_time).microseconds / 1000)
})
if result is None:
return schemas.Response(
success=False, message=f"{proxy_name}无法连接", data={"time": time}
)
elif result.status_code == 200:
if include and not re.search(r"%s" % include, result.text, re.IGNORECASE):
# 通常是被加速代理跳转到其它页面了
logger.error(f"{url} 的响应内容不匹配包含规则 {include}")
if proxy_name:
message = f"{proxy_name}已失效,请检查配置"
else:
message = f"无效响应,不匹配 {include}"
return schemas.Response(
success=False,
message=message,
data={"time": time},
)
return schemas.Response(success=True, data={"time": time})
else:
return schemas.Response(success=False, message="网络连接失败!")
if proxy_name:
# 加速代理失败
message = f"{proxy_name}已失效,错误码:{result.status_code}"
else:
message = f"错误码:{result.status_code}"
if "github" in url:
# 非加速代理访问github
if result.status_code == 401:
message = "Github Token已失效,请检查配置"
elif result.status_code in {403, 429}:
message = "触发限流,请配置Github Token"
return schemas.Response(success=False, message=message, data={"time": time})
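The branch logic above — non-200 status, or a 200 whose body fails the `include` pattern (typically an accelerator proxy redirecting to some other page) — can be factored into a pure function. A sketch of the same checks; `check_response` is a hypothetical helper name:

```python
import re
from typing import Optional, Tuple


def check_response(status_code: int, text: str,
                   include: Optional[str] = None,
                   proxy_name: str = "") -> Tuple[bool, str]:
    # Mirror the nettest checks: a 200 whose body does not match
    # `include` is treated as a stale accelerator proxy
    if status_code != 200:
        if proxy_name:
            return False, f"{proxy_name}已失效,错误码:{status_code}"
        return False, f"错误码:{status_code}"
    if include and not re.search(include, text, re.IGNORECASE):
        if proxy_name:
            return False, f"{proxy_name}已失效,请检查配置"
        return False, f"无效响应,不匹配 {include}"
    return True, ""
```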
@router.get("/modulelist", summary="查询已加载的模块ID列表", response_model=schemas.Response)
@@ -465,26 +598,15 @@ def restart_system(_: User = Depends(get_current_active_superuser)):
"""
重启系统(仅管理员)
"""
if not SystemUtils.can_restart():
if not SystemHelper.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# 标识停止事件
global_vars.stop_system()
# 执行重启
ret, msg = SystemUtils.restart()
ret, msg = SystemHelper.restart()
return schemas.Response(success=ret, message=msg)
@router.get("/reload", summary="重新加载模块", response_model=schemas.Response)
def reload_module(_: User = Depends(get_current_active_superuser)):
"""
重新加载模块(仅管理员)
"""
ModuleManager().reload()
Scheduler().init()
Monitor().init()
return schemas.Response(success=True)
@router.get("/runscheduler", summary="运行服务", response_model=schemas.Response)
def run_scheduler(jobid: str,
_: User = Depends(get_current_active_superuser)):
@@ -499,7 +621,7 @@ def run_scheduler(jobid: str,
@router.get("/runscheduler2", summary="运行服务API_TOKEN", response_model=schemas.Response)
def run_scheduler2(jobid: str,
_: str = Depends(verify_apitoken)):
_: Annotated[str, Depends(verify_apitoken)]):
"""
执行命令API_TOKEN认证
"""


@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
@@ -11,28 +11,28 @@ router = APIRouter()
@router.get("/seasons/{tmdbid}", summary="TMDB所有季", response_model=List[schemas.TmdbSeason])
def tmdb_seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询themoviedb所有季信息
"""
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
seasons_info = await TmdbChain().async_tmdb_seasons(tmdbid=tmdbid)
if seasons_info:
return seasons_info
return []
@router.get("/similar/{tmdbid}/{type_name}", summary="类似电影/电视剧", response_model=List[schemas.MediaInfo])
def tmdb_similar(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_similar(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询类似电影/电视剧type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
medias = TmdbChain().movie_similar(tmdbid=tmdbid)
medias = await TmdbChain().async_movie_similar(tmdbid=tmdbid)
elif mediatype == MediaType.TV:
medias = TmdbChain().tv_similar(tmdbid=tmdbid)
medias = await TmdbChain().async_tv_similar(tmdbid=tmdbid)
else:
return []
if medias:
@@ -41,17 +41,17 @@ def tmdb_similar(tmdbid: int,
@router.get("/recommend/{tmdbid}/{type_name}", summary="推荐电影/电视剧", response_model=List[schemas.MediaInfo])
def tmdb_recommend(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_recommend(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询推荐电影/电视剧type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
medias = TmdbChain().movie_recommend(tmdbid=tmdbid)
medias = await TmdbChain().async_movie_recommend(tmdbid=tmdbid)
elif mediatype == MediaType.TV:
medias = TmdbChain().tv_recommend(tmdbid=tmdbid)
medias = await TmdbChain().async_tv_recommend(tmdbid=tmdbid)
else:
return []
if medias:
@@ -60,63 +60,63 @@ def tmdb_recommend(tmdbid: int,
@router.get("/collection/{collection_id}", summary="系列合集详情", response_model=List[schemas.MediaInfo])
def tmdb_collection(collection_id: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_collection(collection_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据合集ID查询合集详情
"""
medias = TmdbChain().tmdb_collection(collection_id=collection_id)
medias = await TmdbChain().async_tmdb_collection(collection_id=collection_id)
if medias:
return [media.to_dict() for media in medias][(page - 1) * count:page * count]
return []
@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.MediaPerson])
def tmdb_credits(tmdbid: int,
type_name: str,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_credits(tmdbid: int,
type_name: str,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询演员阵容type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
persons = TmdbChain().movie_credits(tmdbid=tmdbid, page=page)
persons = await TmdbChain().async_movie_credits(tmdbid=tmdbid, page=page)
elif mediatype == MediaType.TV:
persons = TmdbChain().tv_credits(tmdbid=tmdbid, page=page)
persons = await TmdbChain().async_tv_credits(tmdbid=tmdbid, page=page)
else:
return []
return persons or []
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def tmdb_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return TmdbChain().person_detail(person_id=person_id)
return await TmdbChain().async_person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def tmdb_person_credits(person_id: int,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_person_credits(person_id: int,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = TmdbChain().person_credits(person_id=person_id, page=page)
medias = await TmdbChain().async_person_credits(person_id=person_id, page=page)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
def tmdb_season_episodes(tmdbid: int, season: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_season_episodes(tmdbid: int, season: int, episode_group: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询某季的所有集信息
"""
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)
return await TmdbChain().async_tmdb_episodes(tmdbid=tmdbid, season=season, episode_group=episode_group)


@@ -0,0 +1,197 @@
from typing import Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.media import MediaChain
from app.chain.torrents import TorrentsChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.db.models import User
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.utils.crypto import HashUtils
router = APIRouter()
@router.get("/cache", summary="获取种子缓存", response_model=schemas.Response)
async def torrents_cache(_: User = Depends(get_current_active_superuser_async)):
"""
获取当前种子缓存数据
"""
torrents_chain = TorrentsChain()
# 获取spider和rss两种缓存
if settings.SUBSCRIBE_MODE == "rss":
cache_info = await torrents_chain.async_get_torrents("rss")
else:
cache_info = await torrents_chain.async_get_torrents("spider")
# 统计信息
torrent_count = sum(len(torrents) for torrents in cache_info.values())
# 转换为前端需要的格式
torrent_data = []
for domain, contexts in cache_info.items():
for context in contexts:
torrent_hash = HashUtils.md5(f"{context.torrent_info.title}{context.torrent_info.description}")
torrent_data.append({
"hash": torrent_hash,
"domain": domain,
"title": context.torrent_info.title,
"description": context.torrent_info.description,
"size": context.torrent_info.size,
"pubdate": context.torrent_info.pubdate,
"site_name": context.torrent_info.site_name,
"media_name": context.media_info.title if context.media_info else "",
"media_year": context.media_info.year if context.media_info else "",
"media_type": context.media_info.type if context.media_info else "",
"season_episode": context.meta_info.season_episode if context.meta_info else "",
"resource_term": context.meta_info.resource_term if context.meta_info else "",
"enclosure": context.torrent_info.enclosure,
"page_url": context.torrent_info.page_url,
"poster_path": context.media_info.get_poster_image() if context.media_info else "",
"backdrop_path": context.media_info.get_backdrop_image() if context.media_info else ""
})
return schemas.Response(success=True, data={
"count": torrent_count,
"sites": len(cache_info),
"data": torrent_data
})
@router.delete("/cache/{domain}/{torrent_hash}", summary="删除指定种子缓存", response_model=schemas.Response)
async def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_active_superuser_async)):
"""
删除指定的种子缓存
:param domain: 站点域名
:param torrent_hash: 种子hash使用title+description的md5
:param _: 当前用户,必须是超级用户
"""
torrents_chain = TorrentsChain()
try:
# 获取当前缓存
cache_data = await torrents_chain.async_get_torrents()
if domain not in cache_data:
return schemas.Response(success=False, message=f"站点 {domain} 缓存不存在")
# 查找并删除指定种子
original_count = len(cache_data[domain])
cache_data[domain] = [
context for context in cache_data[domain]
if HashUtils.md5(f"{context.torrent_info.title}{context.torrent_info.description}") != torrent_hash
]
if len(cache_data[domain]) == original_count:
return schemas.Response(success=False, message="未找到指定的种子")
# 保存更新后的缓存
await torrents_chain.async_save_cache(cache_data, torrents_chain.cache_file)
return schemas.Response(success=True, message="种子删除成功")
except Exception as e:
return schemas.Response(success=False, message=f"删除失败:{str(e)}")
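Cache entries in the endpoints above are addressed by `md5(title + description)`. The lookup-and-filter step can be sketched with plain dicts; `torrent_key` and `remove_torrent` are hypothetical names, with `hashlib` standing in for `HashUtils.md5`:

```python
import hashlib
from typing import Dict, List


def torrent_key(title: str, description: str) -> str:
    # Same identity rule as the endpoints: md5 over title + description
    return hashlib.md5(f"{title}{description}".encode("utf-8")).hexdigest()


def remove_torrent(cache: Dict[str, List[dict]],
                   domain: str, torrent_hash: str) -> bool:
    # Drop the matching entry for the domain; True if something was removed
    items = cache.get(domain)
    if items is None:
        return False
    kept = [t for t in items
            if torrent_key(t["title"], t["description"]) != torrent_hash]
    if len(kept) == len(items):
        return False
    cache[domain] = kept
    return True
```

Content-derived keys avoid storing a separate ID per cached torrent, at the cost of recomputing the hash on every delete/reidentify request, as both endpoints above do.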
@router.delete("/cache", summary="清理种子缓存", response_model=schemas.Response)
async def clear_cache(_: User = Depends(get_current_active_superuser_async)):
"""
清理所有种子缓存
"""
torrents_chain = TorrentsChain()
try:
await torrents_chain.async_clear_torrents()
return schemas.Response(success=True, message="种子缓存清理完成")
except Exception as e:
return schemas.Response(success=False, message=f"清理失败:{str(e)}")
@router.post("/cache/refresh", summary="刷新种子缓存", response_model=schemas.Response)
def refresh_cache(_: User = Depends(get_current_active_superuser)):
"""
刷新种子缓存
"""
from app.chain.torrents import TorrentsChain
torrents_chain = TorrentsChain()
try:
result = torrents_chain.refresh()
# 统计刷新结果
total_count = sum(len(torrents) for torrents in result.values())
sites_count = len(result)
return schemas.Response(success=True, message=f"缓存刷新完成,共刷新 {sites_count} 个站点,{total_count} 个种子")
except Exception as e:
return schemas.Response(success=False, message=f"刷新失败:{str(e)}")
@router.post("/cache/reidentify/{domain}/{torrent_hash}", summary="重新识别种子", response_model=schemas.Response)
async def reidentify_cache(domain: str, torrent_hash: str,
tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
_: User = Depends(get_current_active_superuser_async)):
"""
重新识别指定的种子
:param domain: 站点域名
:param torrent_hash: 种子hash使用title+description的md5
:param tmdbid: 手动指定的TMDB ID
:param doubanid: 手动指定的豆瓣ID
:param _: 当前用户,必须是超级用户
"""
torrents_chain = TorrentsChain()
media_chain = MediaChain()
try:
# 获取当前缓存
cache_data = await torrents_chain.async_get_torrents()
if domain not in cache_data:
return schemas.Response(success=False, message=f"站点 {domain} 缓存不存在")
# 查找指定种子
target_context = None
for context in cache_data[domain]:
if HashUtils.md5(f"{context.torrent_info.title}{context.torrent_info.description}") == torrent_hash:
target_context = context
break
if not target_context:
return schemas.Response(success=False, message="未找到指定的种子")
# 重新识别
meta = MetaInfo(title=target_context.torrent_info.title, subtitle=target_context.torrent_info.description)
if tmdbid or doubanid:
# 手动指定媒体信息
mediainfo = await media_chain.async_recognize_media(meta=meta, tmdbid=tmdbid, doubanid=doubanid)
else:
# 自动重新识别
mediainfo = await media_chain.async_recognize_by_meta(meta)
if not mediainfo:
# 创建空的媒体信息
mediainfo = MediaInfo()
else:
# 清理多余数据
mediainfo.clear()
# 更新上下文中的媒体信息
target_context.media_info = mediainfo
# 保存更新后的缓存
await torrents_chain.async_save_cache(cache_data, TorrentsChain().cache_file)
return schemas.Response(success=True, message="重新识别完成", data={
"media_name": mediainfo.title if mediainfo else "",
"media_year": mediainfo.year if mediainfo else "",
"media_type": mediainfo.type.value if mediainfo and mediainfo.type else ""
})
except Exception as e:
return schemas.Response(success=False, message=f"重新识别失败:{str(e)}")


@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Any, List
from typing import Any, List, Annotated, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
@@ -8,11 +8,14 @@ from app import schemas
from app.chain.media import MediaChain
from app.chain.storage import StorageChain
from app.chain.transfer import TransferChain
from app.core.config import settings, global_vars
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models import User
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser
from app.helper.directory import DirectoryHelper
from app.schemas import MediaType, FileItem, ManualTransferItem
router = APIRouter()
@@ -35,11 +38,19 @@ def query_name(path: str, filetype: str,
if not new_path:
return schemas.Response(success=False, message="未识别到新名称")
if filetype == "dir":
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
media_path = DirectoryHelper.get_media_root_path(
rename_format=settings.RENAME_FORMAT(mediainfo.type),
rename_path=Path(new_path),
)
if media_path:
new_name = media_path.name
else:
new_name = parents[0].name
# fallback
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
else:
new_name = parents[0].name
else:
new_name = Path(new_path).name
return schemas.Response(success=True, data={
@@ -48,7 +59,7 @@ def query_name(path: str, filetype: str,
@router.get("/queue", summary="查询整理队列", response_model=List[schemas.TransferJob])
def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询整理队列
:param _: Token校验
@@ -57,21 +68,23 @@ def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.delete("/queue", summary="从整理队列中删除任务", response_model=schemas.Response)
def remove_queue(fileitem: schemas.FileItem, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def remove_queue(fileitem: schemas.FileItem, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
从整理队列中删除任务
:param fileitem: 文件项
:param _: Token校验
"""
TransferChain().remove_from_queue(fileitem)
# 取消整理
global_vars.stop_transfer(fileitem.path)
return schemas.Response(success=True)
@router.post("/manual", summary="手动转移", response_model=schemas.Response)
def manual_transfer(transer_item: ManualTransferItem,
background: bool = False,
background: Optional[bool] = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
手动转移,文件或历史记录,支持自定义剧集识别格式
:param transer_item: 手工整理项
@@ -98,7 +111,7 @@ def manual_transfer(transer_item: ManualTransferItem,
if history.dest_fileitem:
# 删除旧的已整理文件
dest_fileitem = FileItem(**history.dest_fileitem)
state = StorageChain().delete_media_file(dest_fileitem, mtype=MediaType(history.type))
state = StorageChain().delete_media_file(dest_fileitem)
if not state:
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")
@@ -146,6 +159,7 @@ def manual_transfer(transer_item: ManualTransferItem,
doubanid=transer_item.doubanid,
mtype=mtype,
season=transer_item.season,
episode_group=transer_item.episode_group,
transfer_type=transer_item.transfer_type,
epformat=epformat,
min_filesize=transer_item.min_filesize,
@@ -165,7 +179,7 @@ def manual_transfer(transer_item: ManualTransferItem,
@router.get("/now", summary="立即执行下载器文件整理", response_model=schemas.Response)
def now(_: str = Depends(verify_apitoken)) -> Any:
def now(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
立即执行下载器文件整理 API_TOKEN认证?token=xxx
"""


@@ -1,15 +1,16 @@
import base64
import re
from typing import Any, List, Union
from typing import Annotated, Any, List, Union
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
from sqlalchemy.orm import Session
from fastapi import APIRouter, Body, Depends, HTTPException, UploadFile, File
from sqlalchemy.ext.asyncio import AsyncSession
from app import schemas
from app.core.security import get_password_hash
from app.db import get_db
from app.db import get_async_db
from app.db.models.user import User
from app.db.user_oper import get_current_active_superuser, get_current_active_user
from app.db.user_oper import get_current_active_superuser_async, \
get_current_active_user_async, get_current_active_user
from app.db.userconfig_oper import UserConfigOper
from app.utils.otp import OtpUtils
@@ -17,50 +18,48 @@ router = APIRouter()
@router.get("/", summary="所有用户", response_model=List[schemas.User])
def list_users(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_superuser),
async def list_users(
db: AsyncSession = Depends(get_async_db),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
查询用户列表
"""
users = current_user.list(db)
return users
return await current_user.async_list(db)
@router.post("/", summary="新增用户", response_model=schemas.Response)
def create_user(
async def create_user(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_in: schemas.UserCreate,
current_user: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
新增用户
"""
user = current_user.get_by_name(db, name=user_in.name)
user = await current_user.async_get_by_name(db, name=user_in.name)
if user:
return schemas.Response(success=False, message="用户已存在")
user_info = user_in.dict()
user_info = user_in.model_dump()
if user_info.get("password"):
user_info["hashed_password"] = get_password_hash(user_info["password"])
user_info.pop("password")
user = User(**user_info)
user.create(db)
return schemas.Response(success=True)
user = await User(**user_info).async_create(db)
return schemas.Response(success=True if user else False)
@router.put("/", summary="更新用户", response_model=schemas.Response)
def update_user(
async def update_user(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_in: schemas.UserUpdate,
_: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
更新用户
"""
user_info = user_in.dict()
user_info = user_in.model_dump()
if user_info.get("password"):
# 正则表达式匹配密码包含字母、数字、特殊字符中的至少两项
pattern = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'
@@ -69,24 +68,24 @@ def update_user(
message="密码需要同时包含字母、数字、特殊字符中的至少两项且长度大于6位")
user_info["hashed_password"] = get_password_hash(user_info["password"])
user_info.pop("password")
user = User.get_by_id(db, user_id=user_info["id"])
user = await current_user.async_get_by_id(db, user_id=user_info["id"])
user_name = user_info.get("name")
if not user_name:
return schemas.Response(success=False, message="用户名不能为空")
# 新用户名去重
users = User.list(db)
users = await current_user.async_list(db)
for u in users:
if u.name == user_name and u.id != user_info["id"]:
return schemas.Response(success=False, message="用户名已被使用")
if not user:
return schemas.Response(success=False, message="用户不存在")
user.update(db, user_info)
await user.async_update(db, user_info)
return schemas.Response(success=True)
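The password check in `update_user` hinges on the negative-lookahead regex above. As a standalone sketch of how that pattern behaves:

```python
import re

# The endpoint's rule: 6-50 characters, and not composed purely of
# letters, purely of digits, or purely of special characters, i.e.
# at least two of the three character classes must be present.
PATTERN = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'

def password_ok(password: str) -> bool:
    return re.match(PATTERN, password) is not None

assert password_ok("abc123")       # letters + digits
assert not password_ok("abcdef")   # letters only
assert not password_ok("123456")   # digits only
assert not password_ok("a1!")      # too short
```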
@router.get("/current", summary="当前登录用户信息", response_model=schemas.User)
def read_current_user(
current_user: User = Depends(get_current_active_user)
async def read_current_user(
current_user: User = Depends(get_current_active_user_async)
) -> Any:
"""
当前登录用户信息
@@ -95,62 +94,23 @@ def read_current_user(
@router.post("/avatar/{user_id}", summary="上传用户头像", response_model=schemas.Response)
def upload_avatar(user_id: int, db: Session = Depends(get_db), file: UploadFile = File(...),
_: User = Depends(get_current_active_user)):
async def upload_avatar(user_id: int, db: AsyncSession = Depends(get_async_db), file: UploadFile = File(...),
_: User = Depends(get_current_active_user_async)):
"""
上传用户头像
"""
# 将文件转换为Base64
file_base64 = base64.b64encode(file.file.read()).decode()
# 更新到用户表
user = User.get(db, user_id)
user = await User.async_get(db, user_id)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.update(db, {
await user.async_update(db, {
"avatar": f"data:image/ico;base64,{file_base64}"
})
return schemas.Response(success=True, message=file.filename)
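The avatar endpoint stores the uploaded file as a Base64 data URI. A minimal, framework-free sketch of that conversion; note that `b64encode` returns bytes, which must be decoded before string interpolation:

```python
import base64

def to_avatar_data_uri(raw: bytes, mime: str = "image/ico") -> str:
    # b64encode returns bytes; decode before embedding, otherwise an
    # f-string renders a literal b'...' prefix into the data URI.
    b64 = base64.b64encode(raw).decode("ascii")
    return f"data:{mime};base64,{b64}"
```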
@router.post('/otp/generate', summary='生成otp验证uri', response_model=schemas.Response)
def otp_generate(
current_user: User = Depends(get_current_active_user)
) -> Any:
secret, uri = OtpUtils.generate_secret_key(current_user.name)
return schemas.Response(success=secret != "", data={'secret': secret, 'uri': uri})
@router.post('/otp/judge', summary='判断otp验证是否通过', response_model=schemas.Response)
def otp_judge(
data: dict,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user)
) -> Any:
uri = data.get("uri")
otp_password = data.get("otpPassword")
if not OtpUtils.is_legal(uri, otp_password):
return schemas.Response(success=False, message="验证码错误")
current_user.update_otp_by_name(db, current_user.name, True, OtpUtils.get_secret(uri))
return schemas.Response(success=True)
@router.post('/otp/disable', summary='关闭当前用户的otp验证', response_model=schemas.Response)
def otp_disable(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user)
) -> Any:
current_user.update_otp_by_name(db, current_user.name, False, "")
return schemas.Response(success=True)
@router.get('/otp/{userid}', summary='判断当前用户是否开启otp验证', response_model=schemas.Response)
def otp_enable(userid: str, db: Session = Depends(get_db)) -> Any:
user: User = User.get_by_name(db, userid)
if not user:
return schemas.Response(success=False)
return schemas.Response(success=user.is_otp)
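The OTP endpoints delegate to `OtpUtils`, whose internals are not shown in this diff. For orientation only, a minimal stdlib TOTP sketch per RFC 6238; this assumes nothing about how `OtpUtils` actually generates or verifies codes:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    # Minimal RFC 6238 TOTP: HMAC-SHA1 over the big-endian time-step
    # counter, then RFC 4226 dynamic truncation. Illustrative only.
    counter = at // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", t=59s
assert totp(b"12345678901234567890", at=59) == "287082"
```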
@router.get("/config/{key}", summary="查询用户配置", response_model=schemas.Response)
def get_config(key: str,
current_user: User = Depends(get_current_active_user)):
@@ -164,8 +124,11 @@ def get_config(key: str,
@router.post("/config/{key}", summary="更新用户配置", response_model=schemas.Response)
def set_config(key: str, value: Union[list, dict, bool, int, str] = None,
current_user: User = Depends(get_current_active_user)):
def set_config(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
current_user: User = Depends(get_current_active_user),
):
"""
更新用户配置
"""
@@ -174,49 +137,49 @@ def set_config(key: str, value: Union[list, dict, bool, int, str] = None,
@router.delete("/id/{user_id}", summary="删除用户", response_model=schemas.Response)
def delete_user_by_id(
async def delete_user_by_id(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_id: int,
current_user: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
通过唯一ID删除用户
"""
user = current_user.get_by_id(db, user_id=user_id)
user = await current_user.async_get_by_id(db, user_id=user_id)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.delete_by_id(db, user_id)
await current_user.async_delete(db, user_id)
return schemas.Response(success=True)
@router.delete("/name/{user_name}", summary="删除用户", response_model=schemas.Response)
def delete_user_by_name(
async def delete_user_by_name(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_name: str,
current_user: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
通过用户名删除用户
"""
user = current_user.get_by_name(db, name=user_name)
user = await current_user.async_get_by_name(db, name=user_name)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.delete_by_name(db, user_name)
await current_user.async_delete(db, user.id)
return schemas.Response(success=True)
@router.get("/{username}", summary="用户详情", response_model=schemas.User)
def read_user_by_name(
async def read_user_by_name(
username: str,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user_async),
db: AsyncSession = Depends(get_async_db),
) -> Any:
"""
查询用户详情
"""
user = current_user.get_by_name(db, name=username)
user = await current_user.async_get_by_name(db, name=username)
if not user:
raise HTTPException(
status_code=404,


@@ -1,4 +1,4 @@
from typing import Any
from typing import Any, Annotated
from fastapi import APIRouter, BackgroundTasks, Request, Depends
@@ -19,7 +19,7 @@ def start_webhook_chain(body: Any, form: Any, args: Any):
@router.post("/", summary="Webhook消息响应", response_model=schemas.Response)
async def webhook_message(background_tasks: BackgroundTasks,
request: Request,
_: str = Depends(verify_apitoken)
_: Annotated[str, Depends(verify_apitoken)]
) -> Any:
"""
Webhook响应配置请求中需要添加参数token=API_TOKEN&source=媒体服务器名
@@ -32,8 +32,8 @@ async def webhook_message(background_tasks: BackgroundTasks,
@router.get("/", summary="Webhook消息响应", response_model=schemas.Response)
def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: str = Depends(verify_apitoken)) -> Any:
async def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
Webhook响应配置请求中需要添加参数token=API_TOKEN&source=媒体服务器名
"""


@@ -1,98 +1,184 @@
import json
from datetime import datetime
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
from app.core.config import global_vars
from app.core.workflow import WorkFlowManager
from app.db import get_db
from app.db.models.workflow import Workflow
from app.db.user_oper import get_current_active_user
from app.chain.workflow import WorkflowChain
from app.core.config import global_vars
from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.workflow import WorkFlowManager
from app.db import get_async_db, get_db
from app.db.models import Workflow
from app.db.systemconfig_oper import SystemConfigOper
from app.db.workflow_oper import WorkflowOper
from app.helper.workflow import WorkflowHelper
from app.scheduler import Scheduler
from app.schemas.types import EventType, EVENT_TYPE_NAMES
router = APIRouter()
@router.get("/", summary="所有工作流", response_model=List[schemas.Workflow])
def list_workflows(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def list_workflows(db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取工作流列表
"""
return Workflow.list(db)
return await WorkflowOper(db).async_list()
@router.post("/", summary="创建工作流", response_model=schemas.Response)
def create_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def create_workflow(workflow: schemas.Workflow,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
创建工作流
"""
if Workflow.get_by_name(db, workflow.name):
if workflow.name and await WorkflowOper(db).async_get_by_name(workflow.name):
return schemas.Response(success=False, message="已存在相同名称的工作流")
if not workflow.add_time:
workflow.add_time = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
if not workflow.state:
workflow.state = "P"
Workflow(**workflow.dict()).create(db)
if not workflow.trigger_type:
workflow.trigger_type = "timer"
workflow_obj = Workflow(**workflow.model_dump())
await workflow_obj.async_create(db)
return schemas.Response(success=True, message="创建工作流成功")
@router.get("/plugin/actions", summary="查询插件动作", response_model=List[dict])
def list_plugin_actions(plugin_id: Optional[str] = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取所有动作
"""
return PluginManager().get_plugin_actions(plugin_id)
@router.get("/actions", summary="所有动作", response_model=List[dict])
def list_actions(_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def list_actions(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取所有动作
"""
return WorkFlowManager().list_actions()
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
def get_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.get("/event_types", summary="获取所有事件类型", response_model=List[dict])
async def get_event_types(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取工作流详情
获取所有事件类型
"""
return Workflow.get(db, workflow_id)
return [{
"title": EVENT_TYPE_NAMES.get(event_type, event_type.name),
"value": event_type.value
} for event_type in EventType]
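`get_event_types` turns the `EventType` enum into title/value pairs for the frontend, falling back to the member name when no display name is registered. The same pattern in isolation, with hypothetical stand-ins for the real `EventType` and `EVENT_TYPE_NAMES`:

```python
from enum import Enum

# Hypothetical stand-ins for the EventType enum and the
# EVENT_TYPE_NAMES display-name map imported in the diff.
class EventType(Enum):
    TransferComplete = "transfer.complete"
    DownloadAdded = "download.added"

EVENT_TYPE_NAMES = {EventType.TransferComplete: "Transfer complete"}

# Same shape as the endpoint's return value: the display title falls
# back to the enum member name when no friendly name is registered.
options = [{
    "title": EVENT_TYPE_NAMES.get(event_type, event_type.name),
    "value": event_type.value,
} for event_type in EventType]
```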
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.post("/share", summary="分享工作流", response_model=schemas.Response)
async def workflow_share(
workflow: schemas.WorkflowShare,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
更新工作流
分享工作流
"""
wf = Workflow.get(db, workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
wf.update(db, workflow.dict())
return schemas.Response(success=True, message="更新成功")
if not workflow.id or not workflow.share_title or not workflow.share_user:
return schemas.Response(success=False, message="请填写工作流ID、分享标题和分享人")
state, errmsg = await WorkflowHelper().async_workflow_share(workflow_id=workflow.id,
share_title=workflow.share_title or "",
share_comment=workflow.share_comment or "",
share_user=workflow.share_user or "")
return schemas.Response(success=state, message=errmsg)
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.delete("/share/{share_id}", summary="删除分享", response_model=schemas.Response)
async def workflow_share_delete(
share_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除工作流
删除分享
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
Scheduler().remove_workflow_job(workflow)
Workflow.delete(db, workflow_id)
return schemas.Response(success=True, message="删除成功")
state, errmsg = await WorkflowHelper().async_share_delete(share_id=share_id)
return schemas.Response(success=state, message=errmsg)
@router.post("/fork", summary="复用工作流", response_model=schemas.Response)
async def workflow_fork(
workflow: schemas.WorkflowShare,
db: AsyncSession = Depends(get_async_db),
_: schemas.User = Depends(verify_token)) -> Any:
"""
复用工作流
"""
if not workflow.name:
return schemas.Response(success=False, message="工作流名称不能为空")
# 解析JSON数据添加错误处理
try:
actions = json.loads(workflow.actions or "[]")
except json.JSONDecodeError:
return schemas.Response(success=False, message="actions字段JSON格式错误")
try:
flows = json.loads(workflow.flows or "[]")
except json.JSONDecodeError:
return schemas.Response(success=False, message="flows字段JSON格式错误")
try:
context = json.loads(workflow.context or "{}")
except json.JSONDecodeError:
return schemas.Response(success=False, message="context字段JSON格式错误")
# 创建工作流
workflow_dict = {
"name": workflow.name,
"description": workflow.description,
"timer": workflow.timer,
"trigger_type": workflow.trigger_type or "timer",
"event_type": workflow.event_type,
"event_conditions": json.loads(workflow.event_conditions or "{}") if workflow.event_conditions else {},
"actions": actions,
"flows": flows,
"context": context,
"state": "P" # 默认暂停状态
}
# 检查名称是否重复
workflow_oper = WorkflowOper(db)
if await workflow_oper.async_get_by_name(workflow_dict["name"]):
return schemas.Response(success=False, message="已存在相同名称的工作流")
# 创建新工作流
new_workflow = await Workflow(**workflow_dict).async_create(db)
# 更新复用次数(share_id为原分享记录的ID,不能被新建的工作流记录覆盖)
if new_workflow:
await WorkflowHelper().async_workflow_fork(share_id=workflow.id)
return schemas.Response(success=True, message="复用成功")
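`workflow_fork` validates each JSON text field in its own try/except so one malformed field produces a targeted error message. That per-field pattern can be factored into a small helper; this is a sketch, not part of the diff:

```python
import json
from typing import Any, Optional, Tuple

def parse_json_field(raw: Optional[str], default: Any) -> Tuple[Any, Optional[str]]:
    # One try/except per field, mirroring how workflow_fork validates
    # actions/flows/context; returns (value, error_message).
    if not raw:
        return default, None
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as e:
        return default, f"invalid JSON: {e}"
```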
@router.get("/shares", summary="查询分享的工作流", response_model=List[schemas.WorkflowShare])
async def workflow_shares(
name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询分享的工作流
"""
return await WorkflowHelper().async_get_shares(name=name, page=page, count=count)
@router.post("/{workflow_id}/run", summary="执行工作流", response_model=schemas.Response)
def run_workflow(workflow_id: int,
from_begin: bool = True,
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
from_begin: Optional[bool] = True,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
执行工作流
"""
@@ -105,15 +191,20 @@ def run_workflow(workflow_id: int,
@router.post("/{workflow_id}/start", summary="启用工作流", response_model=schemas.Response)
def start_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
启用工作流
"""
workflow = Workflow.get(db, workflow_id)
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
Scheduler().update_workflow_job(workflow)
global_vars.workflow_resume(workflow_id)
if not workflow.trigger_type or workflow.trigger_type == "timer":
# 添加定时任务
Scheduler().update_workflow_job(workflow)
else:
# 事件触发:添加到事件触发器
WorkFlowManager().load_workflow_events(workflow_id)
# 更新状态
workflow.update_state(db, workflow_id, "W")
return schemas.Response(success=True)
@@ -121,14 +212,99 @@ def start_workflow(workflow_id: int,
@router.post("/{workflow_id}/pause", summary="停用工作流", response_model=schemas.Response)
def pause_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
停用工作流
"""
workflow = Workflow.get(db, workflow_id)
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
Scheduler().remove_workflow_job(workflow)
# 根据触发类型进行不同处理
if workflow.trigger_type == "timer":
# 定时触发:移除定时任务
Scheduler().remove_workflow_job(workflow)
elif workflow.trigger_type == "event":
# 事件触发:从事件触发器中移除
WorkFlowManager().remove_workflow_event(workflow_id, workflow.event_type)
# 停止工作流
global_vars.stop_workflow(workflow_id)
# 更新状态
workflow.update_state(db, workflow_id, "P")
return schemas.Response(success=True)
@router.post("/{workflow_id}/reset", summary="重置工作流", response_model=schemas.Response)
async def reset_workflow(workflow_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
重置工作流
"""
workflow = await WorkflowOper(db).async_get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 停止工作流
global_vars.stop_workflow(workflow_id)
# 重置工作流
await Workflow.async_reset(db, workflow_id, reset_count=True)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True)
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
async def get_workflow(workflow_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取工作流详情
"""
return await WorkflowOper(db).async_get(workflow_id)
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
更新工作流
"""
if not workflow.id:
return schemas.Response(success=False, message="工作流ID不能为空")
workflow_oper = WorkflowOper(db)
wf = workflow_oper.get(workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
if not wf.trigger_type:
workflow.trigger_type = "timer"
wf.update(db, workflow.model_dump())
# 更新后的工作流对象
updated_workflow = workflow_oper.get(workflow.id)
# 更新定时任务
Scheduler().update_workflow_job(updated_workflow)
# 更新事件注册
WorkFlowManager().update_workflow_event(updated_workflow)
return schemas.Response(success=True, message="更新成功")
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除工作流
"""
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
if not workflow.trigger_type or workflow.trigger_type == "timer":
# 定时触发:删除定时任务
Scheduler().remove_workflow_job(workflow)
else:
# 事件触发:从事件触发器中移除
WorkFlowManager().remove_workflow_event(workflow_id, workflow.event_type)
# 删除工作流
Workflow.delete(db, workflow_id)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True, message="删除成功")


@@ -1,14 +1,16 @@
from typing import Any, List
from typing import Any, List, Annotated
from fastapi import APIRouter, HTTPException, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.subscribe import SubscribeChain
from app.chain.tvdb import TvdbChain
from app.core.metainfo import MetaInfo
from app.core.security import verify_apikey
from app.db import get_db
from app.db import get_db, get_async_db
from app.db.models.subscribe import Subscribe
from app.schemas import RadarrMovie, SonarrSeries
from app.schemas.types import MediaType
@@ -18,7 +20,7 @@ arr_router = APIRouter(tags=['servarr'])
@arr_router.get("/system/status", summary="系统状态")
def arr_system_status(_: str = Depends(verify_apikey)) -> Any:
async def arr_system_status(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr系统状态
"""
@@ -72,7 +74,7 @@ def arr_system_status(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/qualityProfile", summary="质量配置")
def arr_qualityProfile(_: str = Depends(verify_apikey)) -> Any:
async def arr_qualityProfile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr质量配置
"""
@@ -113,7 +115,7 @@ def arr_qualityProfile(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/rootfolder", summary="根目录")
def arr_rootfolder(_: str = Depends(verify_apikey)) -> Any:
async def arr_rootfolder(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr根目录
"""
@@ -129,7 +131,7 @@ def arr_rootfolder(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/tag", summary="标签")
def arr_tag(_: str = Depends(verify_apikey)) -> Any:
async def arr_tag(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr标签
"""
@@ -142,7 +144,7 @@ def arr_tag(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/languageprofile", summary="语言")
def arr_languageprofile(_: str = Depends(verify_apikey)) -> Any:
async def arr_languageprofile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr语言
"""
@@ -168,7 +170,7 @@ def arr_languageprofile(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/movie", summary="所有订阅电影", response_model=List[schemas.RadarrMovie])
def arr_movies(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -> Any:
async def arr_movies(_: Annotated[str, Depends(verify_apikey)], db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Radarr电影
"""
@@ -239,7 +241,7 @@ def arr_movies(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -
"""
# 查询所有电影订阅
result = []
subscribes = Subscribe.list(db)
subscribes = await Subscribe.async_list(db)
for subscribe in subscribes:
if subscribe.type != MediaType.MOVIE.value:
continue
@@ -259,7 +261,7 @@ def arr_movies(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -
@arr_router.get("/movie/lookup", summary="查询电影", response_model=List[schemas.RadarrMovie])
def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_movie_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Radarr电影 term: `tmdb:${id}`
存在和不存在均不能返回错误
@@ -305,11 +307,12 @@ def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(
@arr_router.get("/movie/{mid}", summary="电影订阅详情", response_model=schemas.RadarrMovie)
def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
async def arr_movie(mid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Radarr电影订阅
"""
subscribe = Subscribe.get(db, mid)
subscribe = await Subscribe.async_get(db, mid)
if subscribe:
return RadarrMovie(
id=subscribe.id,
@@ -331,25 +334,25 @@ def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_a
@arr_router.post("/movie", summary="新增电影订阅")
def arr_add_movie(movie: RadarrMovie,
db: Session = Depends(get_db),
_: str = Depends(verify_apikey)
) -> Any:
async def arr_add_movie(_: Annotated[str, Depends(verify_apikey)],
movie: RadarrMovie,
db: AsyncSession = Depends(get_async_db)
) -> Any:
"""
新增Radarr电影订阅
"""
# 检查订阅是否已存在
subscribe = Subscribe.get_by_tmdbid(db, movie.tmdbId)
subscribe = await Subscribe.async_get_by_tmdbid(db, movie.tmdbId)
if subscribe:
return {
"id": subscribe.id
}
# 添加订阅
sid, message = SubscribeChain().add(title=movie.title,
year=movie.year,
mtype=MediaType.MOVIE,
tmdbid=movie.tmdbId,
username="Seerr")
sid, message = await SubscribeChain().async_add(title=movie.title,
year=movie.year,
mtype=MediaType.MOVIE,
tmdbid=movie.tmdbId,
username="Seerr")
if sid:
return {
"id": sid
@@ -362,13 +365,14 @@ def arr_add_movie(movie: RadarrMovie,
@arr_router.delete("/movie/{mid}", summary="删除电影订阅", response_model=schemas.Response)
def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
async def arr_remove_movie(mid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
删除Radarr电影订阅
"""
subscribe = Subscribe.get(db, mid)
subscribe = await Subscribe.async_get(db, mid)
if subscribe:
subscribe.delete(db, mid)
await subscribe.async_delete(db, mid)
return schemas.Response(success=True)
else:
raise HTTPException(
@@ -378,7 +382,7 @@ def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(v
@arr_router.get("/series", summary="所有剧集", response_model=List[schemas.SonarrSeries])
def arr_series(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -> Any:
async def arr_series(_: Annotated[str, Depends(verify_apikey)], db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Sonarr剧集
"""
@@ -486,7 +490,7 @@ def arr_series(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -
"""
# 查询所有电视剧订阅
result = []
subscribes = Subscribe.list(db)
subscribes = await Subscribe.async_list(db)
for subscribe in subscribes:
if subscribe.type != MediaType.TV.value:
continue
@@ -514,100 +518,102 @@ def arr_series(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -
@arr_router.get("/series/lookup", summary="查询剧集")
def arr_series_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_series_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Sonarr剧集 term: `tvdb:${id}` title
"""
# 获取TVDBID
if not term.startswith("tvdb:"):
mediainfo = MediaChain().recognize_media(meta=MetaInfo(term),
mtype=MediaType.TV)
if not mediainfo:
return [SonarrSeries()]
tvdbid = mediainfo.tvdb_id
if not tvdbid:
return [SonarrSeries()]
else:
mediainfo = None
tvdbid = int(term.replace("tvdb:", ""))
# 查询TVDB信息
tvdbinfo = MediaChain().tvdb_info(tvdbid=tvdbid)
if not tvdbinfo:
return [SonarrSeries()]
# 季信息
seas: List[int] = []
sea_num = tvdbinfo.get('season')
if sea_num:
seas = list(range(1, int(sea_num) + 1))
# 根据TVDB查询媒体信息
if not mediainfo:
mediainfo = MediaChain().recognize_media(meta=MetaInfo(tvdbinfo.get('seriesName')),
mtype=MediaType.TV)
# 查询是否存在
exists = MediaChain().media_exists(mediainfo)
if exists:
hasfile = True
# tvdbid 列表
tvdbids: List[int] = []
# 获取TVDBID
if not term.startswith("tvdb:"):
title = term.replace("+", " ")
tvdbids = TvdbChain().get_tvdbid_by_name(title=title)
else:
hasfile = False
tvdbid = int(term.replace("tvdb:", ""))
tvdbids.append(tvdbid)
# 查询订阅信息
seasons: List[dict] = []
subscribes = Subscribe.get_by_tmdbid(db, mediainfo.tmdb_id)
if subscribes:
# 已监控
monitored = True
# 已监控季
sub_seas = [sub.season for sub in subscribes]
for sea in seas:
if sea in sub_seas:
seasons.append({
"seasonNumber": sea,
"monitored": True,
})
else:
sonarr_series_list = []
for tvdbid in tvdbids:
# 查询TVDB信息
tvdbinfo = MediaChain().tvdb_info(tvdbid=tvdbid)
if not tvdbinfo:
continue
# 季信息(只取默认季类型,排除特别季)
sea_num = len([season for season in tvdbinfo.get('seasons') if
season['type']['id'] == tvdbinfo.get('defaultSeasonType') and season['number'] > 0])
if sea_num:
seas = list(range(1, int(sea_num) + 1))
# 根据TVDB查询媒体信息
mediainfo = MediaChain().recognize_media(meta=MetaInfo(tvdbinfo.get('name')),
mtype=MediaType.TV)
if not mediainfo:
continue
# 查询是否存在
exists = MediaChain().media_exists(mediainfo)
if exists:
hasfile = True
else:
hasfile = False
# 查询订阅信息
seasons: List[dict] = []
subscribes = Subscribe.get_by_tmdbid(db, mediainfo.tmdb_id)
if subscribes:
# 已监控
monitored = True
# 已监控季
sub_seas = [sub.season for sub in subscribes]
for sea in seas:
if sea in sub_seas:
seasons.append({
"seasonNumber": sea,
"monitored": True,
})
else:
seasons.append({
"seasonNumber": sea,
"monitored": False,
})
subid = subscribes[-1].id
else:
subid = None
monitored = False
for sea in seas:
seasons.append({
"seasonNumber": sea,
"monitored": False,
})
subid = subscribes[-1].id
else:
subid = None
monitored = False
for sea in seas:
seasons.append({
"seasonNumber": sea,
"monitored": False,
})
sonarr_series = SonarrSeries(
id=subid,
title=mediainfo.title,
seasonCount=len(seasons),
seasons=seasons,
remotePoster=mediainfo.get_poster_image(),
year=mediainfo.year,
tmdbId=mediainfo.tmdb_id,
tvdbId=tvdbid,
imdbId=mediainfo.imdb_id,
profileId=1,
languageProfileId=1,
monitored=monitored,
hasFile=hasfile,
)
sonarr_series_list.append(sonarr_series)
return [SonarrSeries(
id=subid,
title=mediainfo.title,
seasonCount=len(seasons),
seasons=seasons,
remotePoster=mediainfo.get_poster_image(),
year=mediainfo.year,
tmdbId=mediainfo.tmdb_id,
tvdbId=mediainfo.tvdb_id,
imdbId=mediainfo.imdb_id,
profileId=1,
languageProfileId=1,
qualityProfileId=1,
isAvailable=True,
monitored=monitored,
hasFile=hasfile
)]
return sonarr_series_list if sonarr_series_list else [SonarrSeries()]
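The lookup above assembles a `seasons` list marking which TVDB season numbers already have a subscription. That mapping in isolation; the helper name is illustrative:

```python
from typing import Dict, List

def build_seasons(all_seasons: List[int], subscribed: List[int]) -> List[Dict]:
    # Mirrors the seasons/monitored list the Sonarr lookup endpoint
    # builds from TVDB season numbers and existing subscriptions.
    return [{"seasonNumber": sea, "monitored": sea in subscribed}
            for sea in all_seasons]
```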
@arr_router.get("/series/{tid}", summary="剧集详情")
def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
async def arr_serie(tid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Sonarr剧集
"""
subscribe = Subscribe.get(db, tid)
subscribe = await Subscribe.async_get(db, tid)
if subscribe:
return SonarrSeries(
id=subscribe.id,
@@ -637,17 +643,17 @@ def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_a
@arr_router.post("/series", summary="新增剧集订阅")
def arr_add_series(tv: schemas.SonarrSeries,
db: Session = Depends(get_db),
_: str = Depends(verify_apikey)) -> Any:
async def arr_add_series(tv: schemas.SonarrSeries,
_: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
新增Sonarr剧集订阅
"""
# 检查订阅是否存在
left_seasons = []
for season in tv.seasons:
subscribe = Subscribe.get_by_tmdbid(db, tmdbid=tv.tmdbId,
season=season.get("seasonNumber"))
subscribe = await Subscribe.async_get_by_tmdbid(db, tmdbid=tv.tmdbId,
season=season.get("seasonNumber"))
if subscribe:
continue
left_seasons.append(season)
@@ -662,12 +668,12 @@ def arr_add_series(tv: schemas.SonarrSeries,
for season in left_seasons:
if not season.get("monitored"):
continue
sid, message = SubscribeChain().add(title=tv.title,
year=tv.year,
season=season.get("seasonNumber"),
tmdbid=tv.tmdbId,
mtype=MediaType.TV,
username="Seerr")
sid, message = await SubscribeChain().async_add(title=tv.title,
year=tv.year,
season=season.get("seasonNumber"),
tmdbid=tv.tmdbId,
mtype=MediaType.TV,
username="Seerr")
if sid:
return {
@@ -681,21 +687,22 @@ def arr_add_series(tv: schemas.SonarrSeries,
@arr_router.put("/series", summary="更新剧集订阅")
def arr_update_series(tv: schemas.SonarrSeries) -> Any:
async def arr_update_series(tv: schemas.SonarrSeries, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
更新Sonarr剧集订阅
"""
return arr_add_series(tv)
return await arr_add_series(tv, _, db)
@arr_router.delete("/series/{tid}", summary="删除剧集订阅")
def arr_remove_series(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
async def arr_remove_series(tid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
删除Sonarr剧集订阅
"""
subscribe = Subscribe.get(db, tid)
subscribe = await Subscribe.async_get(db, tid)
if subscribe:
subscribe.delete(db, tid)
await subscribe.async_delete(db, tid)
return schemas.Response(success=True)
else:
raise HTTPException(


@@ -2,7 +2,9 @@ import gzip
import json
from typing import Annotated, Callable, Any, Dict, Optional
from fastapi import APIRouter, Depends, HTTPException, Path, Request, Response
import aiofiles
from anyio import Path as AsyncPath
from fastapi import APIRouter, Body, Depends, HTTPException, Path, Request, Response
from fastapi.responses import PlainTextResponse
from fastapi.routing import APIRoute
@@ -19,7 +21,7 @@ class GzipRequest(Request):
body = await super().body()
if "gzip" in self.headers.getlist("Content-Encoding"):
body = gzip.decompress(body)
self._body = body # noqa
self._body = body # noqa
return self._body
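`GzipRequest` transparently decompresses request bodies sent with `Content-Encoding: gzip`. A stdlib round trip showing what the route class undoes:

```python
import gzip

# What a client does (compress) and what GzipRequest.body() undoes
# (decompress) when Content-Encoding: gzip is present.
payload = b'{"encrypted": "..."}'
compressed = gzip.compress(payload)
assert gzip.decompress(compressed) == payload
```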
@@ -50,12 +52,12 @@ cookie_router = APIRouter(route_class=GzipRoute,
@cookie_router.get("/", response_class=PlainTextResponse)
def get_root():
async def get_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@cookie_router.post("/", response_class=PlainTextResponse)
def post_root():
async def post_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@@ -64,31 +66,31 @@ async def update_cookie(req: schemas.CookieData):
"""
上传Cookie数据
"""
file_path = settings.COOKIE_PATH / f"{req.uuid}.json"
file_path = AsyncPath(settings.COOKIE_PATH) / f"{req.uuid}.json"
content = json.dumps({"encrypted": req.encrypted})
with open(file_path, encoding="utf-8", mode="w") as file:
file.write(content)
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
async with aiofiles.open(file_path, encoding="utf-8", mode="w") as file:
await file.write(content)
async with aiofiles.open(file_path, encoding="utf-8", mode="r") as file:
read_content = await file.read()
if read_content == content:
return {"action": "done"}
else:
return {"action": "error"}
def load_encrypt_data(uuid: str) -> Dict[str, Any]:
async def load_encrypt_data(uuid: str) -> Dict[str, Any]:
"""
加载本地加密原始数据
"""
file_path = settings.COOKIE_PATH / f"{uuid}.json"
file_path = AsyncPath(settings.COOKIE_PATH) / f"{uuid}.json"
# 检查文件是否存在
if not file_path.exists():
raise HTTPException(status_code=404, detail="Item not found")
# 读取文件
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
async with aiofiles.open(file_path, encoding="utf-8", mode="r") as file:
read_content = await file.read()
data = json.loads(read_content.encode("utf-8"))
return data
@@ -120,15 +122,18 @@ async def get_cookie(
"""
GET 下载加密数据
"""
return load_encrypt_data(uuid)
return await load_encrypt_data(uuid)
@cookie_router.post("/get/{uuid}")
async def post_cookie(
uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")],
request: schemas.CookiePassword):
request: Optional[schemas.CookiePassword] = Body(None)):
"""
POST 下载加密数据
"""
data = load_encrypt_data(uuid)
return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])
data = await load_encrypt_data(uuid)
if request is not None:
return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])
else:
return data
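The rewritten cookiecloud endpoints above switch to `aiofiles` for non-blocking file I/O and make the POST body optional via `Body(None)`. The core write-then-verify round-trip in `update_cookie` can be sketched with the standard library alone (using `asyncio.to_thread` as a stand-in for `aiofiles`, which may not be available everywhere; `save_and_verify` is an illustrative helper, not part of MoviePilot):

```python
import asyncio
import json
import tempfile
from pathlib import Path

async def save_and_verify(path: Path, payload: dict) -> str:
    # Mirrors update_cookie: serialize, write, read back, and only
    # report "done" when the round-trip matches byte-for-byte.
    content = json.dumps(payload)
    await asyncio.to_thread(path.write_text, content, "utf-8")
    read_back = await asyncio.to_thread(path.read_text, "utf-8")
    return "done" if read_back == content else "error"

with tempfile.TemporaryDirectory() as tmp:
    result = asyncio.run(save_and_verify(Path(tmp) / "uuid.json",
                                         {"encrypted": "abc"}))
    # result is "done" when the file round-trips intact
```

Returning `"error"` on a mismatch corresponds to the `{"action": "error"}` branch in the endpoint.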

File diff suppressed because it is too large

app/chain/ai_recommend.py (new file, 318 lines)

@@ -0,0 +1,318 @@
import re
from typing import List, Optional, Dict, Any
import asyncio
import hashlib
import json
from app.chain import ChainBase
from app.core.config import settings
from app.log import logger
from app.utils.common import log_execution_time
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class AIRecommendChain(ChainBase, metaclass=Singleton):
"""
AI推荐处理链单例运行
用于基于搜索结果的AI智能推荐
"""
# 缓存文件名
__ai_indices_cache_file = "__ai_recommend_indices__"
# AI推荐状态
_ai_recommend_running = False
_ai_recommend_task: Optional[asyncio.Task] = None
_current_request_hash: Optional[str] = None # 当前请求的哈希值
_ai_recommend_result: Optional[List[int]] = None # AI推荐索引缓存索引列表
_ai_recommend_error: Optional[str] = None # AI推荐错误信息
@staticmethod
def _calculate_request_hash(
filtered_indices: Optional[List[int]], search_results_count: int
) -> str:
"""
计算请求的哈希值,用于判断请求是否变化
"""
request_data = {
"filtered_indices": filtered_indices or [],
"search_results_count": search_results_count,
}
return hashlib.md5(
json.dumps(request_data, sort_keys=True).encode()
).hexdigest()
@property
def is_enabled(self) -> bool:
"""
检查AI推荐功能是否已启用。
"""
return settings.AI_AGENT_ENABLE and settings.AI_RECOMMEND_ENABLED
def _build_status(self) -> Dict[str, Any]:
"""
构建AI推荐状态字典
:return: 状态字典
"""
if not self.is_enabled:
return {"status": "disabled"}
if self._ai_recommend_running:
return {"status": "running"}
# 尝试从数据库加载缓存
if self._ai_recommend_result is None:
cached_indices = self.load_cache(self.__ai_indices_cache_file)
if cached_indices is not None:
self._ai_recommend_result = cached_indices
# 只要有结果始终返回completed状态和数据
if self._ai_recommend_result is not None:
return {"status": "completed", "results": self._ai_recommend_result}
if self._ai_recommend_error is not None:
return {"status": "error", "error": self._ai_recommend_error}
return {"status": "idle"}
def get_current_status_only(self) -> Dict[str, Any]:
"""
获取当前状态不校验hash用于check_only模式
"""
return self._build_status()
def get_status(
self, filtered_indices: Optional[List[int]], search_results_count: int
) -> Dict[str, Any]:
"""
获取AI推荐状态并检查请求是否变化用于首次请求或force模式
如果请求变化筛选条件变化返回idle状态
"""
# 计算当前请求的hash
request_hash = self._calculate_request_hash(
filtered_indices, search_results_count
)
# 检查请求是否变化
is_same_request = request_hash == self._current_request_hash
# 如果请求变化了筛选条件改变返回idle状态
if not is_same_request:
return {"status": "idle"} if self.is_enabled else {"status": "disabled"}
# 请求未变化,返回当前实际状态
return self._build_status()
@log_execution_time(logger=logger)
async def async_ai_recommend(self, items: List[str], preference: str = None) -> str:
"""
AI推荐
:param items: 候选资源列表(JSON字符串格式)
:param preference: 用户偏好(可选)
:return: AI返回的推荐结果
"""
# 设置运行状态
self._ai_recommend_running = True
try:
# 导入LLMHelper
from app.helper.llm import LLMHelper
# 获取LLM实例
llm = LLMHelper.get_llm()
# 构建提示词
user_preference = (
preference
or settings.AI_RECOMMEND_USER_PREFERENCE
or "Prefer high-quality resources with more seeders"
)
# 添加指令
instruction = """
Task: Select the best matching items from the list based on user preferences.
Each item contains:
- index: Item number
- title: Full torrent title
- size: File size
- seeders: Number of seeders
Output Format: Return ONLY a JSON array of "index" numbers (e.g., [0, 3, 1]). Do NOT include any explanations or other text.
"""
message = (
f"User Preference: {user_preference}\n{instruction}\nCandidate Resources:\n"
+ "\n".join(items)
)
# 调用LLM
response = await llm.ainvoke(message)
return response.content
except ValueError as e:
logger.error(f"AI推荐配置错误: {e}")
raise
except Exception as e:
raise
finally:
# 清除运行状态
self._ai_recommend_running = False
self._ai_recommend_task = None
def is_ai_recommend_running(self) -> bool:
"""
检查AI推荐是否正在运行
"""
return self._ai_recommend_running
def cancel_ai_recommend(self):
"""
取消正在运行的AI推荐任务
"""
if self._ai_recommend_task and not self._ai_recommend_task.done():
self._ai_recommend_task.cancel()
self._ai_recommend_running = False
self._ai_recommend_task = None
self._current_request_hash = None
self._ai_recommend_result = None
self._ai_recommend_error = None
self.remove_cache(self.__ai_indices_cache_file)
def start_recommend_task(
self,
filtered_indices: Optional[List[int]],
search_results_count: int,
results: List[Any],
) -> None:
"""
启动AI推荐任务
:param filtered_indices: 筛选后的索引列表
:param search_results_count: 搜索结果总数
:param results: 搜索结果列表
"""
# 防护检查确保AI推荐功能已启用
if not self.is_enabled:
logger.warning("AI推荐功能未启用跳过任务执行")
return
# 计算新请求的哈希值
new_request_hash = self._calculate_request_hash(
filtered_indices, search_results_count
)
# 如果请求变化了,取消旧任务
if new_request_hash != self._current_request_hash:
self.cancel_ai_recommend()
# 更新请求哈希值
self._current_request_hash = new_request_hash
# 重置状态
self._ai_recommend_result = None
self._ai_recommend_error = None
# 启动新任务
async def run_recommend():
# 获取当前任务对象用于在finally中比对
current_task = asyncio.current_task()
try:
self._ai_recommend_running = True
# 准备数据
items = []
valid_indices = []
max_items = settings.AI_RECOMMEND_MAX_ITEMS or 50
# 如果提供了筛选索引,先筛选结果;否则使用所有结果
if filtered_indices is not None and len(filtered_indices) > 0:
results_to_process = [
results[i]
for i in filtered_indices
if 0 <= i < len(results)
]
else:
results_to_process = results
for i, torrent in enumerate(results_to_process):
if len(items) >= max_items:
break
if not torrent.torrent_info:
continue
valid_indices.append(i)
item_info = {
"index": i,
"title": torrent.torrent_info.title or "未知",
"size": (
StringUtils.format_size(torrent.torrent_info.size)
if torrent.torrent_info.size
else "0 B"
),
"seeders": torrent.torrent_info.seeders or 0,
}
items.append(json.dumps(item_info, ensure_ascii=False))
if not items:
self._ai_recommend_error = "没有可用于AI推荐的资源"
return
# 调用AI推荐
ai_response = await self.async_ai_recommend(items)
# 解析AI返回的索引
try:
# 使用正则提取JSON数组非贪婪模式避免匹配多个数组
json_match = re.search(r'\[.*?\]', ai_response, re.DOTALL)
if not json_match:
raise ValueError(ai_response)
ai_indices = json.loads(json_match.group())
if not isinstance(ai_indices, list):
raise ValueError(f"AI返回格式错误: {ai_response}")
# 映射回原始索引
if filtered_indices:
original_indices = [
filtered_indices[valid_indices[i]]
for i in ai_indices
if i < len(valid_indices)
and 0 <= filtered_indices[valid_indices[i]] < len(results)
]
else:
original_indices = [
valid_indices[i]
for i in ai_indices
if i < len(valid_indices)
and 0 <= valid_indices[i] < len(results)
]
# 只返回索引列表,不返回完整数据
self._ai_recommend_result = original_indices
# 保存到数据库
self.save_cache(original_indices, self.__ai_indices_cache_file)
logger.info(f"AI推荐完成: {len(original_indices)}")
except Exception as e:
logger.error(
f"解析AI返回结果失败: {e}, 原始响应: {ai_response}"
)
self._ai_recommend_error = str(e)
except asyncio.CancelledError:
logger.info("AI推荐任务被取消")
except Exception as e:
logger.error(f"AI推荐任务失败: {e}")
self._ai_recommend_error = str(e)
finally:
# 只有当 self._ai_recommend_task 仍然是当前任务时,才清理状态
# 如果任务被取消并启动了新任务self._ai_recommend_task 已经指向新任务,不应重置
if self._ai_recommend_task == current_task:
self._ai_recommend_running = False
self._ai_recommend_task = None
# 创建并启动任务
self._ai_recommend_task = asyncio.create_task(run_recommend())
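The index-parsing step inside `run_recommend` (the non-greedy `\[.*?\]` search) can be isolated as a small helper. This is a sketch of the same strategy rather than the project's API; note that the non-greedy quantifier deliberately stops at the first `]`, which keeps trailing prose or a second array out of the match but would truncate a nested array:

```python
import json
import re

def parse_ai_indices(ai_response: str) -> list:
    # Grab the first JSON array in the LLM reply, non-greedily,
    # so surrounding explanation text is ignored.
    match = re.search(r'\[.*?\]', ai_response, re.DOTALL)
    if not match:
        raise ValueError(ai_response)
    indices = json.loads(match.group())
    if not isinstance(indices, list):
        raise ValueError(f"unexpected AI payload: {ai_response}")
    return indices

print(parse_ai_indices("Recommended: [0, 3, 1] based on seeders"))  # → [0, 3, 1]
```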


@@ -3,12 +3,11 @@ from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.core.context import MediaInfo
from app.utils.singleton import Singleton
class BangumiChain(ChainBase, metaclass=Singleton):
class BangumiChain(ChainBase):
"""
Bangumi处理链,单例运行
Bangumi处理链
"""
def calendar(self) -> Optional[List[MediaInfo]]:
@@ -58,3 +57,51 @@ class BangumiChain(ChainBase, metaclass=Singleton):
:param person_id: 人物ID
"""
return self.run_module("bangumi_person_credits", person_id=person_id)
async def async_calendar(self) -> Optional[List[MediaInfo]]:
"""
获取Bangumi每日放送异步版本
"""
return await self.async_run_module("async_bangumi_calendar")
async def async_discover(self, **kwargs) -> Optional[List[MediaInfo]]:
"""
发现Bangumi番剧异步版本
"""
return await self.async_run_module("async_bangumi_discover", **kwargs)
async def async_bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息异步版本
:param bangumiid: BangumiID
:return: Bangumi信息
"""
return await self.async_run_module("async_bangumi_info", bangumiid=bangumiid)
async def async_bangumi_credits(self, bangumiid: int) -> List[schemas.MediaPerson]:
"""
根据BangumiID查询电影演职员表异步版本
:param bangumiid: BangumiID
"""
return await self.async_run_module("async_bangumi_credits", bangumiid=bangumiid)
async def async_bangumi_recommend(self, bangumiid: int) -> Optional[List[MediaInfo]]:
"""
根据BangumiID查询推荐电影异步版本
:param bangumiid: BangumiID
"""
return await self.async_run_module("async_bangumi_recommend", bangumiid=bangumiid)
async def async_person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据人物ID查询Bangumi人物详情异步版本
:param person_id: 人物ID
"""
return await self.async_run_module("async_bangumi_person_detail", person_id=person_id)
async def async_person_credits(self, person_id: int) -> Optional[List[MediaInfo]]:
"""
根据人物ID查询人物参演作品异步版本
:param person_id: 人物ID
"""
return await self.async_run_module("async_bangumi_person_credits", person_id=person_id)
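The BangumiChain diff above adds an `async_`-prefixed twin for each sync method, with each twin delegating to `async_run_module` under a matching `async_`-prefixed hook name. A toy sketch of that delegation pattern (ToyChain is hypothetical; the real ChainBase dispatches these hook names to loaded modules):

```python
import asyncio
from typing import Any, Optional

class ToyChain:
    # Stand-in for ChainBase: public methods forward to a named module
    # hook, and async variants use an "async_" prefix on the hook name,
    # matching the convention the diff adopts for BangumiChain.
    def run_module(self, method: str, **kwargs) -> Any:
        return f"ran {method} with {kwargs}"

    async def async_run_module(self, method: str, **kwargs) -> Any:
        return f"ran {method} with {kwargs}"

    def calendar(self) -> Optional[str]:
        return self.run_module("bangumi_calendar")

    async def async_calendar(self) -> Optional[str]:
        return await self.async_run_module("async_bangumi_calendar")

sync_result = ToyChain().calendar()
async_result = asyncio.run(ToyChain().async_calendar())
```

Keeping the sync and async method bodies this thin means the chain layer stays a pure dispatch table, and the actual blocking-vs-awaitable behavior lives in the modules.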


@@ -2,20 +2,19 @@ from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.utils.singleton import Singleton
class DashboardChain(ChainBase, metaclass=Singleton):
class DashboardChain(ChainBase):
"""
各类仪表板统计处理链
"""
def media_statistic(self, server: str = None) -> Optional[List[schemas.Statistic]]:
def media_statistic(self, server: Optional[str] = None) -> Optional[List[schemas.Statistic]]:
"""
媒体数量统计
"""
return self.run_module("media_statistic", server=server)
def downloader_info(self, downloader: str = None) -> Optional[List[schemas.DownloaderInfo]]:
def downloader_info(self, downloader: Optional[str] = None) -> Optional[List[schemas.DownloaderInfo]]:
"""
下载器信息
"""


@@ -4,12 +4,11 @@ from app import schemas
from app.chain import ChainBase
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
class DoubanChain(ChainBase, metaclass=Singleton):
class DoubanChain(ChainBase):
"""
豆瓣处理链,单例运行
豆瓣处理链
"""
def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
@@ -19,7 +18,7 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("douban_person_detail", person_id=person_id)
def person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
def person_credits(self, person_id: int, page: Optional[int] = 1) -> List[MediaInfo]:
"""
根据人物ID查询人物参演作品
:param person_id: 人物ID
@@ -27,7 +26,7 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("douban_person_credits", person_id=person_id, page=page)
def movie_top250(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def movie_top250(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取豆瓣电影TOP250
:param page: 页码
@@ -35,26 +34,26 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("movie_top250", page=page, count=count)
def movie_showing(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def movie_showing(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取正在上映的电影
"""
return self.run_module("movie_showing", page=page, count=count)
def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_weekly_chinese(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周中国剧集榜
"""
return self.run_module("tv_weekly_chinese", page=page, count=count)
def tv_weekly_global(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_weekly_global(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周全球剧集榜
"""
return self.run_module("tv_weekly_global", page=page, count=count)
def douban_discover(self, mtype: MediaType, sort: str, tags: str,
page: int = 0, count: int = 30) -> Optional[List[MediaInfo]]:
page: Optional[int] = 0, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
发现豆瓣电影、剧集
:param mtype: 媒体类型
@@ -67,19 +66,19 @@ class DoubanChain(ChainBase, metaclass=Singleton):
return self.run_module("douban_discover", mtype=mtype, sort=sort, tags=tags,
page=page, count=count)
def tv_animation(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_animation(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取动画剧集
"""
return self.run_module("tv_animation", page=page, count=count)
def movie_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def movie_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门电影
"""
return self.run_module("movie_hot", page=page, count=count)
def tv_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门剧集
"""
@@ -112,3 +111,111 @@ class DoubanChain(ChainBase, metaclass=Singleton):
:param doubanid: 豆瓣ID
"""
return self.run_module("douban_tv_recommend", doubanid=doubanid)
async def async_person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据人物ID查询豆瓣人物详情异步版本
:param person_id: 人物ID
"""
return await self.async_run_module("async_douban_person_detail", person_id=person_id)
async def async_person_credits(self, person_id: int, page: Optional[int] = 1) -> List[MediaInfo]:
"""
根据人物ID查询人物参演作品异步版本
:param person_id: 人物ID
:param page: 页码
"""
return await self.async_run_module("async_douban_person_credits", person_id=person_id, page=page)
async def async_movie_top250(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取豆瓣电影TOP250异步版本
:param page: 页码
:param count: 每页数量
"""
return await self.async_run_module("async_movie_top250", page=page, count=count)
async def async_movie_showing(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取正在上映的电影(异步版本)
"""
return await self.async_run_module("async_movie_showing", page=page, count=count)
async def async_tv_weekly_chinese(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周中国剧集榜(异步版本)
"""
return await self.async_run_module("async_tv_weekly_chinese", page=page, count=count)
async def async_tv_weekly_global(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周全球剧集榜(异步版本)
"""
return await self.async_run_module("async_tv_weekly_global", page=page, count=count)
async def async_douban_discover(self, mtype: MediaType, sort: str, tags: str,
page: Optional[int] = 0, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
发现豆瓣电影、剧集(异步版本)
:param mtype: 媒体类型
:param sort: 排序方式
:param tags: 标签
:param page: 页码
:param count: 数量
:return: 媒体信息列表
"""
return await self.async_run_module("async_douban_discover", mtype=mtype, sort=sort, tags=tags,
page=page, count=count)
async def async_tv_animation(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取动画剧集(异步版本)
"""
return await self.async_run_module("async_tv_animation", page=page, count=count)
async def async_movie_hot(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门电影(异步版本)
"""
return await self.async_run_module("async_movie_hot", page=page, count=count)
async def async_tv_hot(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门剧集(异步版本)
"""
return await self.async_run_module("async_tv_hot", page=page, count=count)
async def async_movie_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电影演职人员异步版本
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_movie_credits", doubanid=doubanid)
async def async_tv_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电视剧演职人员异步版本
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_tv_credits", doubanid=doubanid)
async def async_movie_recommend(self, doubanid: str) -> List[MediaInfo]:
"""
根据豆瓣ID查询推荐电影异步版本
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_movie_recommend", doubanid=doubanid)
async def async_tv_recommend(self, doubanid: str) -> List[MediaInfo]:
"""
根据豆瓣ID查询推荐电视剧异步版本
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_tv_recommend", doubanid=doubanid)
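Further down, the DownloadChain docstrings note that `save_path` now accepts a `<storage>:<path>` form such as `rclone:/MP` or `smb:/server/share/Movies`. A hypothetical parser for that convention might look like the following (`split_save_path` is illustrative only, not a MoviePilot function; a plain absolute path falls back to the default storage):

```python
def split_save_path(save_path: str, default_storage: str = "local"):
    # Split "<storage>:<path>" (e.g. "rclone:/MP") into its parts;
    # a path with no storage prefix keeps the default storage.
    storage, sep, path = save_path.partition(":")
    if sep and storage and not save_path.startswith("/"):
        return storage, path
    return default_storage, save_path

print(split_save_path("rclone:/MP"))         # → ('rclone', '/MP')
print(split_save_path("/downloads/movies"))  # → ('local', '/downloads/movies')
```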


@@ -8,6 +8,7 @@ from typing import List, Optional, Tuple, Set, Dict, Union
from app import schemas
from app.chain import ChainBase
from app.core.cache import FileCache
from app.core.config import settings, global_vars
from app.core.context import MediaInfo, TorrentInfo, Context
from app.core.event import eventmanager, Event
@@ -16,11 +17,12 @@ from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.mediaserver_oper import MediaServerOper
from app.helper.directory import DirectoryHelper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification, ResourceSelectionEventData, ResourceDownloadEventData
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType, ChainEventType
from app.schemas import ExistMediaInfo, FileURI, NotExistMediaInfo, DownloadingTorrent, Notification, ResourceSelectionEventData, \
ResourceDownloadEventData
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType, ContentType, \
ChainEventType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -30,82 +32,17 @@ class DownloadChain(ChainBase):
下载处理链
"""
def __init__(self):
super().__init__()
self.torrent = TorrentHelper()
self.downloadhis = DownloadHistoryOper()
self.mediaserver = MediaServerOper()
self.directoryhelper = DirectoryHelper()
self.messagehelper = MessageHelper()
def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
channel: MessageChannel = None, username: str = None,
download_episodes: str = None):
"""
发送添加下载的消息,根据消息场景开关决定发给谁
:param meta: 元数据
:param mediainfo: 媒体信息
:param torrent: 种子信息
:param channel: 通知渠道
:param username: 通知显示的下载用户信息
:param download_episodes: 下载的集数
"""
# 拼装消息内容
msg_text = ""
if username:
msg_text = f"用户:{username}"
if torrent.site_name:
msg_text = f"{msg_text}\n站点:{torrent.site_name}"
if meta.resource_term:
msg_text = f"{msg_text}\n质量:{meta.resource_term}"
if torrent.size:
if str(torrent.size).replace(".", "").isdigit():
size = StringUtils.str_filesize(torrent.size)
else:
size = torrent.size
msg_text = f"{msg_text}\n大小:{size}"
if torrent.title:
msg_text = f"{msg_text}\n种子:{torrent.title}"
if torrent.pubdate:
msg_text = f"{msg_text}\n发布时间:{torrent.pubdate}"
if torrent.freedate:
msg_text = f"{msg_text}\n免费时间:{StringUtils.diff_time_str(torrent.freedate)}"
if torrent.seeders:
msg_text = f"{msg_text}\n做种数:{torrent.seeders}"
if torrent.uploadvolumefactor and torrent.downloadvolumefactor:
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.hit_and_run:
msg_text = f"{msg_text}\nHit&Run"
if torrent.labels:
msg_text = f"{msg_text}\n标签:{' '.join(torrent.labels)}"
if torrent.description:
html_re = re.compile(r'<[^>]+>', re.S)
description = html_re.sub('', torrent.description)
torrent.description = re.sub(r'<[^>]+>', '', description)
msg_text = f"{msg_text}\n描述:{torrent.description}"
# 下载成功按规则发送消息
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Download,
title=f"{mediainfo.title_year} "
f"{'%s %s' % (meta.season, download_episodes) if download_episodes else meta.season_episode} 开始下载",
text=msg_text,
image=mediainfo.get_message_image(),
link=settings.MP_DOMAIN('/#/downloading'),
username=username))
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
source: str = None,
source: Optional[str] = None,
userid: Union[str, int] = None
) -> Tuple[Optional[Union[Path, str]], str, list]:
) -> Tuple[Optional[Union[str, bytes]], str, list]:
"""
下载种子文件,如果是磁力链,会返回磁力链接本身
:return: 种子路径,种子目录名,种子文件清单
:return: 种子内容,种子目录名,种子文件清单
"""
def __get_redict_url(url: str, ua: str = None, cookie: str = None) -> Optional[str]:
def __get_redict_url(url: str, ua: Optional[str] = None, cookie: Optional[str] = None) -> Optional[str]:
"""
获取下载链接, url格式[base64]url
"""
@@ -124,6 +61,8 @@ class DownloadChain(ChainBase):
# 是否使用cookie
if not req_params.get('cookie'):
cookie = None
# 代理
proxy = req_params.get('proxy')
# 请求头
if req_params.get('header'):
headers = req_params.get('header')
@@ -134,14 +73,16 @@ class DownloadChain(ChainBase):
res = RequestUtils(
ua=ua,
cookies=cookie,
headers=headers
headers=headers,
proxies=settings.PROXY if proxy else None
).get_res(url, params=req_params.get('params'))
else:
# POST请求
res = RequestUtils(
ua=ua,
cookies=cookie,
headers=headers
headers=headers,
proxies=settings.PROXY if proxy else None
).post_res(url, params=req_params.get('params'))
if not res:
return None
@@ -177,7 +118,7 @@ class DownloadChain(ChainBase):
logger.error(f"{torrent.title} 无法获取下载地址:{torrent.enclosure}")
return None, "", []
# 下载种子文件
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
_, content, download_folder, files, error_msg = TorrentHelper().download_torrent(
url=torrent_url,
cookie=site_cookie,
ua=torrent.site_ua or settings.USER_AGENT,
@@ -187,7 +128,7 @@ class DownloadChain(ChainBase):
# 磁力链
return content, "", []
if not torrent_file:
if not content:
logger.error(f"下载种子文件失败:{torrent.title} - {torrent_url}")
self.post_message(Notification(
channel=channel,
@@ -199,30 +140,38 @@ class DownloadChain(ChainBase):
return None, "", []
# 返回 种子文件路径,种子目录名,种子文件清单
return torrent_file, download_folder, files
return content, download_folder, files
def download_single(self, context: Context, torrent_file: Path = None,
def download_single(self, context: Context,
torrent_file: Path = None,
torrent_content: Optional[Union[str, bytes]] = None,
episodes: Set[int] = None,
channel: MessageChannel = None,
source: str = None,
downloader: str = None,
save_path: str = None,
source: Optional[str] = None,
downloader: Optional[str] = None,
save_path: Optional[str] = None,
userid: Union[str, int] = None,
username: str = None,
media_category: str = None) -> Optional[str]:
username: Optional[str] = None,
label: Optional[str] = None) -> Optional[str]:
"""
下载及发送通知
:param context: 资源上下文
:param torrent_file: 种子文件路径
:param torrent_content: 种子内容(磁力链或种子文件内容)
:param episodes: 需要下载的集数
:param channel: 通知渠道
:param source: 来源消息通知、Subscribe、Manual等
:param downloader: 下载器
:param save_path: 保存路径
:param save_path: 保存路径, 支持<storage>:<path>, 如rclone:/MP, smb:/server/share/Movies等
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param label: 自定义标签
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# 发送资源下载事件,允许外部拦截下载
event_data = ResourceDownloadEventData(
context=context,
@@ -234,7 +183,7 @@ class DownloadChain(ChainBase):
"save_path": save_path,
"userid": userid,
"username": username,
"media_category": media_category
"media_category": _media.category
}
)
# 触发资源下载事件
@@ -247,42 +196,50 @@ class DownloadChain(ChainBase):
f"Resource download canceled by event: {event_data.source},"
f"Reason: {event_data.reason}")
return None
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# 如果事件修改了下载路径,使用新路径
if event_data.options and event_data.options.get("save_path"):
save_path = event_data.options.get("save_path")
# 补充完整的media数据
if not _media.genre_ids:
new_media = self.recognize_media(mtype=_media.type, tmdbid=_media.tmdb_id,
doubanid=_media.douban_id, bangumiid=_media.bangumi_id)
doubanid=_media.douban_id, bangumiid=_media.bangumi_id,
episode_group=_media.episode_group)
if new_media:
_media = new_media
# 实际下载的集数
download_episodes = StringUtils.format_ep(list(episodes)) if episodes else None
_folder_name = ""
if not torrent_file:
if not torrent_file and not torrent_content:
# 下载种子文件,得到的可能是文件也可能是磁力链
content, _folder_name, _file_list = self.download_torrent(_torrent,
channel=channel,
source=source,
userid=userid)
if not content:
return None
else:
content = torrent_file
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
torrent_content, _folder_name, _file_list = self.download_torrent(_torrent,
channel=channel,
source=source,
userid=userid)
elif torrent_file:
if torrent_file.exists():
torrent_content = torrent_file.read_bytes()
else:
# 缓存处理器
cache_backend = FileCache()
# 读取缓存的种子文件
torrent_content = cache_backend.get(torrent_file.as_posix(), region="torrents")
if not torrent_content:
return None
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = TorrentHelper().get_fileinfo_from_torrent_content(torrent_content)
storage = 'local'
# 下载目录
if save_path:
# 下载目录使用自定义的
download_dir = Path(save_path)
else:
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_dir(_media, storage="local", include_unsorted=True)
dir_info = DirectoryHelper().get_dir(_media, include_unsorted=True)
storage = dir_info.storage if dir_info else storage
# 拼装子目录
if dir_info:
# 一级目录
@@ -303,13 +260,16 @@ class DownloadChain(ChainBase):
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
fileURI = FileURI(storage=storage, path=download_dir.as_posix())
download_dir = Path(fileURI.uri)
# 添加下载
result: Optional[tuple] = self.download(content=content,
result: Optional[tuple] = self.download(content=torrent_content,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir,
category=_media.category,
label=label,
downloader=downloader or _site_downloader)
if result:
_downloader, _hash, _layout, error_msg = result
@@ -331,8 +291,9 @@ class DownloadChain(ChainBase):
_save_path = download_dir if _layout == "NoSubfolder" or not _folder_name else download_path
# 登记下载记录
self.downloadhis.add(
path=str(download_path),
downloadhis = DownloadHistoryOper()
downloadhis.add(
path=download_path.as_posix(),
type=_media.type.value,
title=_media.title,
year=_media.year,
@@ -352,7 +313,8 @@ class DownloadChain(ChainBase):
username=username,
channel=channel.value if channel else None,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
media_category=media_category,
media_category=_media.category,
episode_group=_media.episode_group,
note={"source": source}
)
@@ -365,26 +327,42 @@ class DownloadChain(ChainBase):
if not file_meta.begin_episode \
or file_meta.begin_episode not in episodes:
continue
# 只处理视频格式
# 只处理视频、字幕格式
media_exts = settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIOEXT
if not Path(file).suffix \
or Path(file).suffix.lower() not in settings.RMT_MEDIAEXT:
or Path(file).suffix.lower() not in media_exts:
continue
files_to_add.append({
"download_hash": _hash,
"downloader": _downloader,
"fullpath": str(_save_path / file),
"savepath": str(_save_path),
"fullpath": (_save_path / file).as_posix(),
"savepath": _save_path.as_posix(),
"filepath": file,
"torrentname": _meta.org_string,
})
if files_to_add:
self.downloadhis.add_files(files_to_add)
downloadhis.add_files(files_to_add)
# 下载成功发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent,
username=username, download_episodes=download_episodes)
self.post_message(
Notification(
channel=channel,
source=source if channel else None,
mtype=NotificationType.Download,
ctype=ContentType.DownloadAdded,
image=_media.get_message_image(),
link=settings.MP_DOMAIN('/#/downloading'),
userid=userid,
username=username
),
meta=_meta,
mediainfo=_media,
torrentinfo=_torrent,
download_episodes=download_episodes,
username=username,
)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
self.download_added(context=context, download_dir=download_dir, torrent_content=torrent_content)
# 广播事件
self.eventmanager.send_event(EventType.DownloadAdded, {
"hash": _hash,
@@ -415,24 +393,22 @@ class DownloadChain(ChainBase):
def batch_download(self,
contexts: List[Context],
no_exists: Dict[Union[int, str], Dict[int, NotExistMediaInfo]] = None,
save_path: str = None,
save_path: Optional[str] = None,
channel: MessageChannel = None,
source: str = None,
userid: str = None,
username: str = None,
media_category: str = None,
downloader: str = None
source: Optional[str] = None,
userid: Optional[str] = None,
username: Optional[str] = None,
downloader: Optional[str] = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
根据缺失数据,自动种子列表中组合择优下载
:param contexts: 资源上下文列表
:param no_exists: 缺失的剧集信息
:param save_path: 保存路径
:param save_path: 保存路径, 支持<storage>:<path>, 如rclone:/MP, smb:/server/share/Movies等
:param channel: 通知渠道
:param source: 来源(消息通知、订阅、手工下载等)
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param downloader: 下载器
:return: 已经下载的资源列表、剩余未下载到的剧集 no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}
"""
@@ -521,7 +497,7 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
source=source, userid=userid, username=username,
media_category=media_category, downloader=downloader):
downloader=downloader):
# 下载成功
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
@@ -581,7 +557,7 @@ class DownloadChain(ChainBase):
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法确定种子文件集数")
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
torrent_episodes = TorrentHelper().get_torrent_episodes(torrent_files)
logger.info(f"{meta.org_string} 解析种子文件集数为 {torrent_episodes}")
if not torrent_episodes:
continue
@@ -600,14 +576,13 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
torrent_content=content,
save_path=save_path,
channel=channel,
source=source,
userid=userid,
username=username,
media_category=media_category,
downloader=downloader,
downloader=downloader
)
else:
# 下载
@@ -615,7 +590,6 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category,
downloader=downloader)
if download_id:
@@ -687,7 +661,6 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category,
downloader=downloader)
if download_id:
# 下载成功
@@ -758,7 +731,7 @@ class DownloadChain(ChainBase):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法解析种子文件集数")
continue
# 种子全部集
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
torrent_episodes = TorrentHelper().get_torrent_episodes(torrent_files)
logger.info(f"{torrent.site_name} - {meta.org_string} 解析种子文件集数:{torrent_episodes}")
# 选中的集
selected_episodes = set(torrent_episodes).intersection(set(need_episodes))
@@ -770,14 +743,13 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
torrent_content=content,
episodes=selected_episodes,
save_path=save_path,
channel=channel,
source=source,
userid=userid,
username=username,
media_category=media_category,
downloader=downloader
)
if not download_id:
@@ -848,11 +820,12 @@ class DownloadChain(ChainBase):
if not totals:
totals = {}
mediaserver = MediaServerOper()
if mediainfo.type == MediaType.MOVIE:
# 电影
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id)
itemid = mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id)
exists_movies: Optional[ExistMediaInfo] = self.media_exists(mediainfo=mediainfo, itemid=itemid)
if exists_movies:
logger.info(f"媒体库中已存在电影:{mediainfo.title_year}")
@@ -863,7 +836,8 @@ class DownloadChain(ChainBase):
# 补充媒体信息
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
doubanid=mediainfo.douban_id,
episode_group=mediainfo.episode_group)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return False, {}
@@ -871,10 +845,10 @@ class DownloadChain(ChainBase):
logger.error(f"媒体信息中没有季集信息:{mediainfo.title_year}")
return False, {}
# 电视剧
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season)
itemid = mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season)
# 媒体库已存在的剧集
exists_tvs: Optional[ExistMediaInfo] = self.media_exists(mediainfo=mediainfo, itemid=itemid)
if not exists_tvs:
@@ -930,7 +904,7 @@ class DownloadChain(ChainBase):
# 全部存在
return True, no_exists
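The `no_exists` mapping returned here follows the shape documented on `batch_download`: `no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}`. A sketch with a hypothetical `MissingSeason` standing in for `NotExistMediaInfo`:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class MissingSeason:
    # Hypothetical stand-in for NotExistMediaInfo: episodes still missing.
    season: int
    episodes: List[int] = field(default_factory=list)

def mark_missing(no_exists: Dict[Union[int, str], Dict[int, MissingSeason]],
                 tmdbid: Union[int, str], season: int, episodes: List[int]) -> None:
    # Record missing episodes under media id, then season number.
    no_exists.setdefault(tmdbid, {})[season] = MissingSeason(season, episodes)
```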
def remote_downloading(self, channel: MessageChannel, userid: Union[str, int] = None, source: str = None):
def remote_downloading(self, channel: MessageChannel, userid: Union[str, int] = None, source: Optional[str] = None):
"""
查询正在下载的任务,并发送消息
"""
@@ -964,7 +938,7 @@ class DownloadChain(ChainBase):
link=settings.MP_DOMAIN('#/downloading')
))
def downloading(self, name: str = None) -> List[DownloadingTorrent]:
def downloading(self, name: Optional[str] = None) -> List[DownloadingTorrent]:
"""
查询正在下载的任务
"""
@@ -973,7 +947,7 @@ class DownloadChain(ChainBase):
return []
ret_torrents = []
for torrent in torrents:
history = self.downloadhis.get_by_hash(torrent.hash)
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
# 媒体信息
torrent.media = {
@@ -990,21 +964,21 @@ class DownloadChain(ChainBase):
ret_torrents.append(torrent)
return ret_torrents
def set_downloading(self, hash_str, oper: str) -> bool:
def set_downloading(self, hash_str, oper: str, name: Optional[str] = None) -> bool:
"""
控制下载任务 start/stop
"""
if oper == "start":
return self.start_torrents(hashs=[hash_str])
return self.start_torrents(hashs=[hash_str], downloader=name)
elif oper == "stop":
return self.stop_torrents(hashs=[hash_str])
return self.stop_torrents(hashs=[hash_str], downloader=name)
return False
def remove_downloading(self, hash_str: str) -> bool:
def remove_downloading(self, hash_str: str, name: Optional[str] = None) -> bool:
"""
删除下载任务
"""
return self.remove_torrents(hashs=[hash_str])
return self.remove_torrents(hashs=[hash_str], downloader=name)
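`set_downloading` and `remove_downloading` now accept a downloader `name` and forward it, so the operation can target one named client instead of the module's default. A hypothetical dispatcher illustrating the pattern (the actual routing presumably happens inside the chain's module layer):

```python
from typing import Callable, Dict, List, Optional

def dispatch(clients: Dict[str, Callable[[List[str]], bool]],
             hashs: List[str], downloader: Optional[str] = None) -> bool:
    # With a name, target that client only; with None, fan out to all.
    targets = [clients[downloader]] if downloader else list(clients.values())
    return all(op(hashs) for op in targets)
```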
@eventmanager.register(EventType.DownloadFileDeleted)
def download_file_deleted(self, event: Event):
@@ -1024,7 +998,7 @@ class DownloadChain(ChainBase):
# 发出下载任务删除事件,如需处理辅种,可监听该事件
self.eventmanager.send_event(EventType.DownloadDeleted, {
"hash": hash_str,
"torrents": [torrent.dict() for torrent in torrents]
"torrents": [torrent.model_dump() for torrent in torrents]
})
else:
logger.info(f"没有在下载器中查询到 {hash_str} 对应的下载任务")
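`torrent.dict()` becomes `torrent.model_dump()`: Pydantic v2 renamed `BaseModel.dict()` to `model_dump()`, keeping the old name only as a deprecated alias. A version-agnostic shim, useful while both majors are in play (an assumption; the diff suggests the project now targets v2):

```python
def dump_model(obj) -> dict:
    # Pydantic v2 exposes model_dump(); v1 only has dict().
    if hasattr(obj, "model_dump"):
        return obj.model_dump()
    return obj.dict()
```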


@@ -1,8 +1,7 @@
import threading
from typing import List, Union, Optional, Generator
from typing import List, Union, Optional, Generator, Any
from app.chain import ChainBase
from app.core.cache import cached
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
@@ -17,18 +16,15 @@ class MediaServerChain(ChainBase):
媒体服务器处理链
"""
def __init__(self):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[MediaServerLibrary]:
def librarys(self, server: str, username: Optional[str] = None,
hidden: bool = False) -> List[MediaServerLibrary]:
"""
获取媒体服务器所有媒体库
"""
return self.run_module("mediaserver_librarys", server=server, username=username, hidden=hidden)
def items(self, server: str, library_id: Union[str, int], start_index: int = 0, limit: Optional[int] = -1) \
-> Optional[Generator]:
def items(self, server: str, library_id: Union[str, int],
start_index: Optional[int] = 0, limit: Optional[int] = -1) -> Generator[Any, None, None]:
"""
获取媒体服务器项目列表,支持分页和不分页逻辑,默认不分页获取所有数据
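`items` now advertises `Generator[Any, None, None]`: it yields `Any`, accepts nothing through `send()`, and returns nothing — equivalent to `Iterator[Any]` for a plain generator. A hypothetical pager mirroring the documented contract, where `limit = -1` disables paging:

```python
from typing import Any, Generator

def paged_items(total: int, start_index: int = 0,
                limit: int = -1) -> Generator[Any, None, None]:
    # limit < 0 means "no paging": yield everything from start_index on.
    end = total if limit < 0 else min(total, start_index + limit)
    for i in range(start_index, end):
        yield i
```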
@@ -81,28 +77,30 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_tv_episodes", server=server, item_id=item_id)
def playing(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
def playing(self, server: str, count: Optional[int] = 20,
username: Optional[str] = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
return self.run_module("mediaserver_playing", count=count, server=server, username=username)
def latest(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
def latest(self, server: str, count: Optional[int] = 20,
username: Optional[str] = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
@cached(maxsize=1, ttl=3600)
def get_latest_wallpapers(self, server: str = None, count: int = 10,
remote: bool = True, username: str = None) -> List[str]:
def get_latest_wallpapers(self, server: Optional[str] = None, count: Optional[int] = 10,
remote: bool = True, username: Optional[str] = None) -> List[str]:
"""
获取最新入库条目海报作为壁纸,缓存1小时
"""
return self.run_module("mediaserver_latest_images", server=server, count=count,
remote=remote, username=username)
def get_latest_wallpaper(self, server: str = None, remote: bool = True, username: str = None) -> Optional[str]:
def get_latest_wallpaper(self, server: Optional[str] = None,
remote: bool = True, username: Optional[str] = None) -> Optional[str]:
"""
获取最新入库条目海报作为壁纸,缓存1小时
"""
@@ -115,6 +113,16 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_play_url", server=server, item_id=item_id)
def get_image_cookies(
self, server: Optional[str], image_url: str
) -> Optional[str | dict]:
"""
获取图片的Cookies
"""
return self.run_module(
"mediaserver_image_cookies", server=server, image_url=image_url
)
def sync(self):
"""
同步媒体库所有数据到本地数据库
@@ -127,7 +135,8 @@ class MediaServerChain(ChainBase):
# 汇总统计
total_count = 0
# 清空登记簿
self.dboper.empty()
dboper = MediaServerOper()
dboper.empty()
# 遍历媒体服务器
for mediaserver in mediaservers:
if not mediaserver:
@@ -168,10 +177,10 @@ class MediaServerChain(ChainBase):
for episode in espisodes_info:
seasoninfo[episode.season] = episode.episodes
# 插入数据
item_dict = item.dict()
item_dict = item.model_dump()
item_dict["seasoninfo"] = seasoninfo
item_dict["item_type"] = item_type
self.dboper.add(**item_dict)
dboper.add(**item_dict)
logger.info(f"{server_name} 媒体库 {library.name} 同步完成,共同步数量:{library_count}")
# 总数累加
total_count += library_count


@@ -1,48 +1,37 @@
import io
import tempfile
from pathlib import Path
from typing import List
from typing import List, Optional
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from app.chain import ChainBase
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.cache import cache_backend, cached
from app.core.cache import cached
from app.core.config import settings, global_vars
from app.helper.image import ImageHelper
from app.log import logger
from app.schemas import MediaType
from app.utils.common import log_execution_time
from app.utils.http import RequestUtils
from app.utils.security import SecurityUtils
from app.utils.singleton import Singleton
# 推荐相关的专用缓存
recommend_ttl = 24 * 3600
recommend_cache_region = "recommend"
class RecommendChain(ChainBase, metaclass=Singleton):
"""
推荐处理链,单例运行
"""
def __init__(self):
super().__init__()
self.tmdbchain = TmdbChain()
self.doubanchain = DoubanChain()
self.bangumichain = BangumiChain()
self.cache_max_pages = 5
# 推荐缓存时间
recommend_ttl = 24 * 3600
# 推荐缓存页数
cache_max_pages = 5
# 推荐缓存区域
recommend_cache_region = "recommend"
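Moving `recommend_ttl` and friends out of `__init__` is not just style: decorator arguments such as `@cached(ttl=recommend_ttl, region=recommend_cache_region)` are evaluated when the class body executes, long before any instance (and its `self.*` attributes) exists. A minimal TTL-memoizer sketch to show the evaluation order — the real `@cached` is the project's own (`app.core.cache`):

```python
import time
from functools import wraps

recommend_ttl = 24 * 3600  # assumed value, mirroring the class constant

def cached(ttl: int):
    """Minimal TTL memoizer (sketch; not the project's implementation)."""
    def deco(fn):
        store = {}

        @wraps(fn)
        def wrapper(*args):
            hit = store.get(args)
            if hit is not None and time.monotonic() - hit[0] < ttl:
                return hit[1]  # fresh enough: serve the cached value
            value = fn(*args)
            store[args] = (time.monotonic(), value)
            return value

        return wrapper
    return deco

calls = []

@cached(ttl=recommend_ttl)  # the name must already exist at decoration time
def discover(page: int) -> list:
    calls.append(page)
    return [page]
```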
def refresh_recommend(self):
"""
刷新推荐
"""
logger.debug("Starting to refresh Recommend data.")
cache_backend.clear(region=recommend_cache_region)
logger.debug("Recommend Cache has been cleared.")
# 推荐来源方法
recommend_methods = [
@@ -110,97 +99,23 @@ class RecommendChain(ChainBase, metaclass=Singleton):
请求并保存图片
:param url: 图片路径
"""
if not settings.GLOBAL_IMAGE_CACHE or not url:
return
# 生成缓存路径
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
logger.debug(f"Invalid cache path or file type for URL: {url}, sanitized path: {sanitized_path}")
return
# 本地存在缓存图片,则直接跳过
if cache_path.exists():
logger.debug(f"Cache hit: Image already exists at {cache_path}")
return
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if not referer else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
if not response:
logger.debug(f"Empty response for URL: {url}")
return
# 验证下载的内容是否为有效图片
try:
Image.open(io.BytesIO(response.content)).verify()
except Exception as e:
logger.debug(f"Invalid image format for URL {url}: {e}")
return
if not cache_path:
return
try:
if not cache_path.parent.exists():
cache_path.parent.mkdir(parents=True, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=cache_path.parent, delete=False) as tmp_file:
tmp_file.write(response.content)
temp_path = Path(tmp_file.name)
temp_path.replace(cache_path)
logger.debug(f"Successfully cached image at {cache_path} for URL: {url}")
except Exception as e:
logger.debug(f"Failed to write cache file {cache_path} for URL {url}: {e}")
ImageHelper().fetch_image(url=url)
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_movies(self, sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1) -> List[dict]:
def tmdb_movies(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
TMDB热门电影
"""
movies = self.tmdbchain.tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_tvs(self, sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "zh|en|ja|ko",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1) -> List[dict]:
"""
TMDB热门电视剧
"""
tvs = self.tmdbchain.tmdb_discover(mtype=MediaType.TV,
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
@@ -210,105 +125,288 @@ class RecommendChain(ChainBase, metaclass=Singleton):
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_tvs(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "zh|en|ja|ko",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
TMDB热门电视剧
"""
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_trending(self, page: int = 1) -> List[dict]:
def tmdb_trending(self, page: Optional[int] = 1) -> List[dict]:
"""
TMDB流行趋势
"""
infos = self.tmdbchain.tmdb_trending(page=page)
infos = TmdbChain().tmdb_trending(page=page)
return [info.to_dict() for info in infos] if infos else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def bangumi_calendar(self, page: int = 1, count: int = 30) -> List[dict]:
def bangumi_calendar(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
Bangumi每日放送
"""
medias = self.bangumichain.calendar()
medias = BangumiChain().calendar()
return [media.to_dict() for media in medias[(page - 1) * count: page * count]] if medias else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_showing(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_movie_showing(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣正在热映
"""
movies = self.doubanchain.movie_showing(page=page, count=count)
movies = DoubanChain().movie_showing(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movies(self, sort: str = "R", tags: str = "", page: int = 1, count: int = 30) -> List[dict]:
def douban_movies(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣最新电影
"""
movies = self.doubanchain.douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tvs(self, sort: str = "R", tags: str = "", page: int = 1, count: int = 30) -> List[dict]:
def douban_tvs(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣最新电视剧
"""
tvs = self.doubanchain.douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_top250(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_movie_top250(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣电影TOP250
"""
movies = self.doubanchain.movie_top250(page=page, count=count)
movies = DoubanChain().movie_top250(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_weekly_chinese(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_tv_weekly_chinese(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣国产剧集榜
"""
tvs = self.doubanchain.tv_weekly_chinese(page=page, count=count)
tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_weekly_global(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_tv_weekly_global(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣全球剧集榜
"""
tvs = self.doubanchain.tv_weekly_global(page=page, count=count)
tvs = DoubanChain().tv_weekly_global(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_animation(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_tv_animation(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣热门动漫
"""
tvs = self.doubanchain.tv_animation(page=page, count=count)
tvs = DoubanChain().tv_animation(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_hot(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_movie_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣热门电影
"""
movies = self.doubanchain.movie_hot(page=page, count=count)
movies = DoubanChain().movie_hot(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_hot(self, page: int = 1, count: int = 30) -> List[dict]:
def douban_tv_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣热门电视剧
"""
tvs = self.doubanchain.tv_hot(page=page, count=count)
tvs = DoubanChain().tv_hot(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_tmdb_movies(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
异步TMDB热门电影
"""
movies = await TmdbChain().async_run_module("async_tmdb_discover", mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_tmdb_tvs(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "zh|en|ja|ko",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
异步TMDB热门电视剧
"""
tvs = await TmdbChain().async_run_module("async_tmdb_discover", mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_tmdb_trending(self, page: Optional[int] = 1) -> List[dict]:
"""
异步TMDB流行趋势
"""
infos = await TmdbChain().async_run_module("async_tmdb_trending", page=page)
return [info.to_dict() for info in infos] if infos else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_bangumi_calendar(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步Bangumi每日放送
"""
medias = await BangumiChain().async_run_module("async_bangumi_calendar")
return [media.to_dict() for media in medias[(page - 1) * count: page * count]] if medias else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movie_showing(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣正在热映
"""
movies = await DoubanChain().async_run_module("async_movie_showing", page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movies(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣最新电影
"""
movies = await DoubanChain().async_run_module("async_douban_discover", mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tvs(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣最新电视剧
"""
tvs = await DoubanChain().async_run_module("async_douban_discover", mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movie_top250(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣电影TOP250
"""
movies = await DoubanChain().async_run_module("async_movie_top250", page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_weekly_chinese(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣国产剧集榜
"""
tvs = await DoubanChain().async_run_module("async_tv_weekly_chinese", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_weekly_global(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣全球剧集榜
"""
tvs = await DoubanChain().async_run_module("async_tv_weekly_global", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_animation(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣热门动漫
"""
tvs = await DoubanChain().async_run_module("async_tv_animation", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movie_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣热门电影
"""
movies = await DoubanChain().async_run_module("async_movie_hot", page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣热门电视剧
"""
tvs = await DoubanChain().async_run_module("async_tv_hot", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
