Compare commits

...

663 Commits

Author SHA1 Message Date
jxxghp
c83589cac6 rollback telegram 2025-11-18 21:56:55 +08:00
jxxghp
d64492bda5 fix QueryEpisodeScheduleTool 2025-11-18 21:46:11 +08:00
jxxghp
33d6c75924 fix telegram 2025-11-18 21:43:55 +08:00
jxxghp
89f01bad42 fix telegram 2025-11-18 21:32:09 +08:00
jxxghp
767496f81b fix telegram 2025-11-18 21:31:44 +08:00
jxxghp
147a477365 fix site 2025-11-18 21:13:45 +08:00
jxxghp
13171f636f fix 2025-11-18 20:34:53 +08:00
jxxghp
fea3f0d3e0 fix telegram markdown 2025-11-18 19:53:27 +08:00
jxxghp
a3a254c2ea fix telegram markdown 2025-11-18 19:09:36 +08:00
jxxghp
bd9d5f7fc0 Merge pull request #5135 from jxxghp/cursor/handle-telegram-hyphen-escape-error-9f51 2025-11-18 17:49:15 +08:00
Cursor Agent
726738ee9e Refactor: Protect only markdown delimiters, not content
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-11-18 09:43:42 +00:00
jxxghp
725244bb2f fix escape_markdown 2025-11-18 17:30:00 +08:00
jxxghp
d2ac2b8990 feat: add QueryEpisodeScheduleTool to MoviePilotToolFactory
- Included QueryEpisodeScheduleTool in the tool definitions of MoviePilotToolFactory to enhance episode scheduling capabilities.
- Updated factory.py to reflect the addition of this new tool, improving overall functionality.
2025-11-18 17:20:23 +08:00
jxxghp
116569223c fix 2025-11-18 16:44:39 +08:00
jxxghp
05442a019f feat: add UpdateSubscribeTool and additional query tools to MoviePilotToolFactory
- Included UpdateSubscribeTool and QuerySiteUserdataTool in the tool definitions of MoviePilotToolFactory to enhance subscription management and user data querying capabilities.
- Updated the factory.py file to reflect the addition of these new tools, improving overall functionality.
2025-11-18 16:41:15 +08:00
jxxghp
db67080bf8 feat: add ScrapeMetadataTool to MoviePilotToolFactory
- Included ScrapeMetadataTool in the tool definitions of MoviePilotToolFactory to enhance metadata scraping capabilities.
- Updated the factory.py file to reflect the addition of the new tool.
2025-11-18 16:26:18 +08:00
jxxghp
21fabf7436 feat: enhance GetRecommendationsTool and update query tools for improved functionality
- Expanded the GetRecommendationsTool to support additional recommendation sources, including TMDB popular movies and TV shows, as well as various Douban categories.
- Updated the limit for results in QuerySubscribesTool, SearchMediaTool, and QueryTransferHistoryTool from 20 to 50 or 30, respectively, to provide more comprehensive results.
- Removed unnecessary description fields from media objects in QueryPopularSubscribesTool, QuerySubscribeHistoryTool, and QuerySubscribeSharesTool for cleaner output.
2025-11-18 16:21:13 +08:00
jxxghp
a8c6516b31 refactor: remove deprecated tools and add RecognizeMediaTool
- Removed the old tool definitions from __init__.py to streamline the module.
- Added RecognizeMediaTool to factory.py for enhanced media recognition capabilities.
- Updated tool exports to reflect the changes in available tools.
2025-11-18 16:07:11 +08:00
jxxghp
f5ca48a56e fix: update recommendation fetching logic in GetRecommendationsTool
- Adjusted the recommendation fetching methods to accept page and count parameters for better control over result limits.
- Implemented checks to ensure the format of recommendation results is valid, enhancing robustness.
- Added truncation for long overview descriptions to maintain consistency in output.
2025-11-18 13:08:09 +08:00
jxxghp
65ceff9824 v2.8.3
- Added multiple agent tools to the MoviePilot assistant, greatly improving agent capabilities
- Agent conversations now support contextual memory; sending the `/clear_session` command or being inactive for more than 15 minutes clears the memory
2025-11-18 12:52:58 +08:00
jxxghp
ed73cfdcc7 fix: update message formatting in MoviePilotTool for clarity
- Changed the tool message format to use "=>" instead of wrapping with separators, enhancing readability and user experience during tool execution.
2025-11-18 12:43:27 +08:00
jxxghp
9cb79a7827 feat: enhance MoviePilotTool with customizable tool messages
- Added `get_tool_message` method to `MoviePilotTool` and its subclasses for generating user-friendly execution messages based on parameters.
- Improved message formatting for various tools, including `AddDownloadTool`, `AddSubscribeTool`, `DeleteDownloadTool`, and others, to provide clearer feedback during operations.
- This enhancement allows for more personalized and informative messages, improving user experience during tool execution.
2025-11-18 12:42:24 +08:00
jxxghp
984f29005a 更新 message.py 2025-11-18 12:17:12 +08:00
jxxghp
805c3719af feat: add new query tools for enhanced subscription management
- Introduced QuerySubscribeSharesTool, QueryPopularSubscribesTool, and QuerySubscribeHistoryTool to improve subscription querying capabilities.
- Updated __all__ exports in init.py and factory.py to include the new tools.
- Enhanced QuerySubscribesTool to support media type filtering with localized descriptions.
2025-11-18 12:05:06 +08:00
jxxghp
ea646149c0 feat: add new tools for download management and enhance query capabilities
- Introduced DeleteDownloadTool, QueryDirectoriesTool, ListDirectoryTool, QueryTransferHistoryTool, and TransferFileTool to the toolset for improved download management.
- Updated __all__ exports in init.py and factory.py to include the new tools.
- Enhanced QueryDownloadsTool to support querying downloads by hash and title, providing more flexible search options and detailed results.
2025-11-18 11:57:01 +08:00
jxxghp
eae1f8ee4d feat: add UpdateSiteCookieTool and enhance site operations
- Introduced UpdateSiteCookieTool to the toolset for managing site cookies.
- Updated __all__ exports in init.py and factory.py to include UpdateSiteCookieTool.
- Added async_get method in SiteOper for asynchronous retrieval of individual site records, improving database interaction.
2025-11-18 11:34:37 +08:00
jxxghp
8d1de245a6 feat: add TestSiteTool and enhance AddSubscribeTool with advanced filtering options
- Introduced TestSiteTool to the toolset for site testing functionalities.
- Updated __all__ exports in init.py and factory.py to include TestSiteTool.
- Enhanced AddSubscribeTool to support additional parameters for episode management and media quality filtering, improving subscription customization.
2025-11-18 08:51:32 +08:00
jxxghp
b8ef5d1efc feat: add session management to MessageChain
- Implemented session ID creation and reuse based on user activity, with a timeout of 15 minutes.
- Added remote command to clear user sessions, enhancing user session management capabilities.
- Updated Command class to include a new command for clearing sessions.
2025-11-18 08:41:05 +08:00
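The 15-minute session reuse described in this commit can be sketched roughly as follows. This is an illustrative, simplified sketch; the function and variable names are hypothetical, not MessageChain's actual implementation.

```python
import time
import uuid

SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before memory is cleared

_sessions = {}  # userid -> (session_id, last_active_timestamp)

def get_session_id(userid: str) -> str:
    """Reuse the user's session if they were active within the timeout,
    otherwise start a new one."""
    now = time.time()
    session_id, last_active = _sessions.get(userid, (None, 0.0))
    if not session_id or now - last_active > SESSION_TIMEOUT:
        session_id = uuid.uuid4().hex
    _sessions[userid] = (session_id, now)
    return session_id

def clear_session(userid: str) -> None:
    """What a /clear_session style command handler would do."""
    _sessions.pop(userid, None)
```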
jxxghp
e1098b34e8 feat: add new tools for subscription management and scheduling
- Introduced DeleteSubscribeTool for managing subscriptions.
- Added QuerySchedulersTool and RunSchedulerTool for enhanced scheduling capabilities.
- Updated __all__ exports in init.py and factory.py to include new tools.
2025-11-18 08:34:00 +08:00
jxxghp
8296f8d2da Enhance tool message formatting in MoviePilotTool: wrap execution messages with separators and icons for better distinction from regular agent messages. 2025-11-17 17:14:05 +08:00
jxxghp
867c83383d Merge pull request #5122 from Seed680/v2 2025-11-17 15:58:53 +08:00
noone
1354119d6d fix(telegram): improve error logging for message sending
- Refined the error logs in send_msg to indicate exactly where sending failed
- Added a title-escaping comment and adjusted the log description in send_medias_msg
- Added title-escaping logic and error log notes in send_torrents_msg
2025-11-17 15:22:25 +08:00
noone
53af7f81bb fix:对telegram发送标题进行转义 2025-11-17 15:15:28 +08:00
jxxghp
48b1ac28de v2.8.2
- Added the `MoviePilot助手` (MoviePilot Assistant) agent, which completes tasks through natural-language conversation (Beta: enable the switch in Settings, configure the LLM parameters, and chat via `/ai`); agent capabilities can be extended through plugins
- Adapted to the new version of 飞牛影视 (Trim Media)
- Other bug fixes and minor improvements

Note: base components were upgraded; some plugins may have compatibility issues and need to be adapted.
2025-11-17 14:00:39 +08:00
jxxghp
6e329b17a9 Enhance Telegram message formatting: add detailed guidelines for MarkdownV2 usage, including support for strikethrough, headings, and lists. Implement smart escaping for Markdown to preserve formatting while avoiding API errors. 2025-11-17 13:49:56 +08:00
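For context on the escaping problem these Telegram commits deal with: Telegram's MarkdownV2 parse mode rejects messages that contain unescaped reserved characters. A minimal sketch of blanket escaping is shown below (a hypothetical helper; the project's actual escape_markdown is smarter and preserves formatting delimiters rather than escaping everything):

```python
import re

# Characters Telegram's MarkdownV2 mode treats as reserved when used literally.
_MDV2_RESERVED = r"_*[]()~`>#+-=|{}.!"

def escape_markdown_v2(text: str) -> str:
    """Backslash-escape every MarkdownV2 reserved character in plain text."""
    return re.sub(f"([{re.escape(_MDV2_RESERVED)}])", r"\\\1", text)

# escape_markdown_v2("v2.8.3 (beta) - fix!")  ->  v2\.8\.3 \(beta\) \- fix\!
```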
jxxghp
6a492198a8 fix post_message 2025-11-17 13:33:01 +08:00
jxxghp
8bf9b6e7cb feat:Agent插件工具发现 2025-11-17 13:00:23 +08:00
jxxghp
42e23ef564 Refactor agent workflows: streamline subscription and download processes, enhance query status workflow, and improve tool usage guidelines for better user interaction. 2025-11-17 12:49:03 +08:00
jxxghp
c6806ee648 fix agent tools 2025-11-17 12:34:20 +08:00
jxxghp
076fae696c fix 2025-11-17 11:57:46 +08:00
jxxghp
ed294d3ea4 Revert "fix schemas"
This reverts commit a5e7483870.
2025-11-17 11:48:18 +08:00
jxxghp
043be409d0 Enhance agent workflows and tools: unify subscription and download processes, add site querying functionality, and improve error handling in download operations. 2025-11-17 11:39:08 +08:00
jxxghp
a5e7483870 fix schemas 2025-11-17 10:58:24 +08:00
jxxghp
365335be46 rollback 2025-11-17 10:51:16 +08:00
jxxghp
62543dd171 fix:优化Agent消息发送格式 2025-11-17 10:43:16 +08:00
jxxghp
e2eef8ff21 fix agent message title 2025-11-17 10:18:05 +08:00
jxxghp
3acf937d56 fix add_download tool 2025-11-17 10:16:54 +08:00
jxxghp
d572e523ba 优化Agent上下文大小 2025-11-17 09:57:12 +08:00
jxxghp
82113abe88 fix agent sendmsg 2025-11-17 09:42:27 +08:00
jxxghp
b7d121c58f fix agent tools 2025-11-17 09:28:18 +08:00
jxxghp
6d5a85b144 fix search tools 2025-11-17 09:14:36 +08:00
jxxghp
78121917c6 Merge pull request #5112 from wikrin/fix_tests 2025-11-12 20:39:38 +08:00
jxxghp
a0913f0e32 Merge pull request #5109 from jiongjiongJOJO/dev 2025-11-12 20:39:10 +08:00
jxxghp
e96e284715 Merge pull request #5107 from wikrin/fix 2025-11-12 20:38:40 +08:00
Attente
c572a1b607 fix(tests): 修正 restype, 测试用例不使用识别词 2025-11-12 14:13:05 +08:00
囧囧JOJO
1845311f98 fix: fix the build error caused by version incompatibility when compiling the Docker image
See the third reply at:
https://stackoverflow.com/questions/76717537/valueerror-requirement-object-has-no-field-use-pep517-when-installing-pytho
2025-11-11 17:46:34 +08:00
Attente
4f806db8b7 fix: fix changing the default downloader not taking effect
- Migrated the settings module to `SettingsConfigDict` to support Pydantic v2 configuration
- Added a `release_dates` field to `MediaInfo` to store release dates for multiple regions
- Changed the token-passing logic in the `MetaVideo` class to fix a serialization error when searching site resources
2025-11-11 10:44:45 +08:00
jxxghp
22858cc1e9 Merge pull request #5100 from Seed680/v2 2025-11-06 18:43:41 +08:00
noone
a0329a3eb0 feat(rss): support a custom User-Agent when fetching RSS; some sites currently fail to return RSS content when no UA is configured
- Added a ua parameter to the RSS method to specify the User-Agent
- Updated the RequestUtils call to pass the custom User-Agent
- Changed the RSS parsing logic in the torrents chain to support the site-configured ua field
- Set a default timeout of 30 seconds to improve stability
2025-11-06 16:32:01 +08:00
jxxghp
b3e92088ee Merge pull request #5097 from wumode/refector-check_method 2025-11-05 23:15:24 +08:00
jxxghp
46db1c20f1 Merge pull request #5096 from cddjr/fix_trimemedia_cookies 2025-11-05 23:14:18 +08:00
wumode
9d182e53b2 fix: type hints 2025-11-05 15:41:31 +08:00
景大侠
1205fc7fdb 避免不必要的图片cookies查询 2025-11-05 15:22:02 +08:00
wumode
ff2826a448 feat(utils): Refactor check_method to use ast
- Parse the function source with the AST, which is more robust than the string-based approach and correctly handles functions with multi-line def statements
- Added unit tests for check_method
2025-11-05 14:16:37 +08:00
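The AST-based approach mentioned in this commit can be illustrated roughly like this. It is a simplified sketch of the general technique (detecting whether a function body is effectively empty), not MoviePilot's actual check_method:

```python
import ast
import inspect
import textwrap

def has_real_body(func) -> bool:
    """Return True if the function body contains more than a docstring,
    `pass`, or `...`, judged from the parsed AST rather than raw source text."""
    source = textwrap.dedent(inspect.getsource(func))
    func_def = next(
        node for node in ast.walk(ast.parse(source))
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    )
    for stmt in func_def.body:
        if isinstance(stmt, ast.Pass):
            continue
        if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Constant):
            continue  # bare docstring or Ellipsis
        return True
    return False
```

Because the check works on parsed nodes, a `def` statement wrapped over several lines parses the same as a single-line one, which is the robustness gain the commit describes.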
大虾
ee750115ec Update
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-04 13:29:45 +08:00
景大侠
0e13d22c97 fix 适配新版飞牛影视 2025-11-04 13:25:18 +08:00
jxxghp
8e7d040ac4 Merge pull request #5091 from wikrin/cached 2025-11-03 09:51:37 +08:00
Attente
6755202958 feat(cache): unify cache control via fresh and async_fresh
- Fixed the update prompt still showing after a plugin update because of caching
- Uniformly use fresh/async_fresh to control cache behavior
- Adjusted the TMDb module's caching strategy and improved cache clearing for async requests
- Removed redundant cache method wrappers to reduce call depth
- Simplified the cache method structure in PluginHelper and removed the force parameter
2025-11-03 07:41:42 +08:00
jxxghp
8b7374a687 Merge pull request #5090 from wikrin/fix 2025-11-02 07:35:00 +08:00
Attente
c17cca2365 fix(update_setting): 修复设置保存错误的问题
- adapt to Pydantic V2
2025-11-01 23:51:59 +08:00
jxxghp
8016a9539a fix agent 2025-11-01 19:08:05 +08:00
jxxghp
e885fb15a0 Merge pull request #5089 from wikrin/fix 2025-11-01 18:27:35 +08:00
Attente
c7f098771b feat: adapt to Pydantic V2 2025-11-01 17:56:37 +08:00
Attente
fcd0908032 fix(transfer): 修复指定part不生效的问题 2025-11-01 17:56:23 +08:00
jxxghp
7ff1285084 fix agent tools 2025-11-01 12:07:17 +08:00
jxxghp
b45b603b97 fix agent tools 2025-11-01 12:01:48 +08:00
jxxghp
247208b8a9 fix agent 2025-11-01 11:41:22 +08:00
jxxghp
182c46037b fix agent 2025-11-01 10:40:45 +08:00
jxxghp
438d3210bc fix agent 2025-11-01 10:39:08 +08:00
jxxghp
d523c7c916 fix pydantic 2025-11-01 09:51:23 +08:00
jxxghp
09a19e94d5 fix config 2025-11-01 09:23:52 +08:00
jxxghp
3971c145df refactor: streamline data serialization in tool implementations
- Replaced model_dump and to_dict methods with direct calls to dict for improved consistency and performance in JSON serialization across multiple tools.
- Updated ConversationMemoryManager, GetRecommendationsTool, QueryDownloadsTool, and QueryMediaLibraryTool to enhance data handling.
2025-10-31 11:36:50 +08:00
jxxghp
055117d83d refactor: enhance tool message handling and improve error logging
- Updated _send_tool_message to accept a title parameter for better message context.
- Modified various tool implementations to utilize the new title parameter for clearer messaging.
- Improved error logging across multiple tools to include exception details for better debugging.
2025-10-31 09:16:53 +08:00
jxxghp
c6baf43986 Merge pull request #5085 from wumode/fix-event-handler-params 2025-10-29 07:50:47 +08:00
wumode
4ff16af3a7 fix: __invoke_plugin_method_async 中 __handle_event_error 参数传递错误 2025-10-28 20:09:44 +08:00
jxxghp
17a1bd352b Merge pull request #5071 from wikrin/optimize-file-size 2025-10-23 22:54:43 +08:00
Attente
7421ca09cc fix(transfer): fix the total size of completed tasks not being counted correctly in some cases
- get_directory_size now traverses recursively with os.scandir for better performance
- When a task file item's storage type is local and its size is empty, the directory size is obtained via SystemUtils to ensure completed tasks are counted accurately.

fix(cache): change the default value of the fresh and async_fresh parameters to True

refactor(filemanager): remove the post-transfer total size calculation

- Removed the redundant calculation of the organized directory's total size in TransHandler, improving performance and simplifying the flow.

perf(system): use scandir to speed up file scanning

- Refactored the file scanning methods in SystemUtils (list_files, exists_file, list_sub_files),
- replacing the glob implementation with os.scandir and precompiling regular expressions to speed up directory traversal and file matching.
2025-10-23 19:21:24 +08:00
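The os.scandir traversal this commit describes is faster mainly because scandir yields directory entries with cached type and stat information, avoiding a separate stat call per file on most platforms. A rough sketch of the idea (simplified; the actual SystemUtils implementation may differ):

```python
import os

def get_directory_size(path: str) -> int:
    """Recursively sum file sizes under `path` using os.scandir,
    which is much faster than a glob-based walk."""
    total = 0
    try:
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_file(follow_symlinks=False):
                    total += entry.stat(follow_symlinks=False).st_size
                elif entry.is_dir(follow_symlinks=False):
                    total += get_directory_size(entry.path)
    except (PermissionError, FileNotFoundError):
        pass  # skip unreadable or vanished entries
    return total
```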
jxxghp
9797e696e5 Merge pull request #5073 from WAY29/v2 2025-10-23 13:05:10 +08:00
jxxghp
c36d6d8b2d Merge pull request #5072 from wumode/fix_retry 2025-10-23 06:52:29 +08:00
wumode
3873786b99 fix: retry 2025-10-23 00:58:34 +08:00
WAY29
76fdba7f09 feat(endpoints): /download/add allow tmdbid/doubanid/bangumiid 2025-10-22 22:02:33 +08:00
jxxghp
72799e9638 Merge pull request #5068 from little6neko/v2 2025-10-22 06:18:28 +08:00
小六妞儿
2e77d03fe9 目录监控添加异常处理避免程序意外退出 2025-10-22 00:21:31 +08:00
jxxghp
0c58eae5e7 Merge pull request #5060 from wikrin/cached 2025-10-19 22:37:33 +08:00
Attente
b609567c38 feat(cache): introduce fresh and async_fresh to control cache behavior
- Added `fresh` and `async_fresh` to temporarily disable caching in synchronous and asynchronous functions.
- Implemented a context-aware cache-refresh mechanism via the `_fresh` contextvars variable
- Changed the `cached` decorator to skip cache reads when `is_fresh()` is True.

- Fixed a path-handling issue in the download module, using `Path.as_posix()` to ensure cross-platform compatibility.
2025-10-19 22:31:50 +08:00
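The contextvars-based fresh mechanism described above could look roughly like this. This is a heavily simplified sketch: the shape of `fresh` here (a context manager) is an assumption, and the real app.core.cache decorator also handles TTL, cache regions, an async variant, and a Redis backend, as other commits in this log indicate.

```python
import contextvars
from contextlib import contextmanager
from functools import wraps

# Context-local flag read by the cached decorator.
_fresh = contextvars.ContextVar("_fresh", default=False)

def is_fresh() -> bool:
    return _fresh.get()

@contextmanager
def fresh():
    """Temporarily bypass cache reads inside this context."""
    token = _fresh.set(True)
    try:
        yield
    finally:
        _fresh.reset(token)

def cached(func):
    store = {}

    @wraps(func)
    def wrapper(*args):
        if not is_fresh() and args in store:
            return store[args]
        result = func(*args)
        store[args] = result  # the fresh result still refreshes the cache
        return result

    return wrapper
```

Wrapping a call in `with fresh():` then recomputes the value instead of reading the cached one, matching the behaviour the commit describes.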
jxxghp
7ecfa44fa0 Merge pull request #5057 from xiaoQQya/v2 2025-10-19 06:49:23 +08:00
xiaoQQya
a685b1dc3b fix(douban): 修复 imdbid 匹配豆瓣信息成功错误返回 None 的问题 2025-10-18 22:34:55 +08:00
jxxghp
63ce49a17c Merge pull request #5056 from xiaoQQya/v2 2025-10-18 22:16:03 +08:00
xiaoQQya
820fbe4076 fix(douban): 修复使用 imdbid 未匹配到豆瓣信息时回退到使用名称匹配豆瓣信息失败的问题 2025-10-18 22:07:54 +08:00
jxxghp
efa05b7775 Update media tool descriptions for clarity and detail in JSON configuration 2025-10-18 22:00:24 +08:00
jxxghp
003781e903 add MoviePilot AI agent implementation and workflow manager 2025-10-18 21:55:31 +08:00
jxxghp
ee71bafc96 fix 2025-10-18 21:32:46 +08:00
jxxghp
bdd5f1231e add ai agent 2025-10-18 21:26:51 +08:00
jxxghp
6fee532c96 add ai agent 2025-10-18 21:26:36 +08:00
jxxghp
78aaad7b59 Merge pull request #5028 from ThedoRap/v2 2025-10-07 23:00:21 +08:00
Reaper
b128b0ede2 修复知行 极速之星 框架解析 做种信息 2025-10-02 20:43:06 +08:00
Reaper
737d2f3bc6 优化知行 极速之星 框架解析 2025-10-02 20:03:28 +08:00
jxxghp
179be53a65 Merge pull request #5025 from ThedoRap/v2 2025-10-02 06:50:37 +08:00
Reaper
1867f5e7c2 增加知行 极速之星 框架解析 2025-10-02 04:27:35 +08:00
jxxghp
6662d24565 Merge pull request #5019 from xiaoQQya/develop 2025-10-01 11:53:24 +08:00
jxxghp
5880566a99 Merge pull request #5018 from Aqr-K/fix-plugin 2025-10-01 11:52:43 +08:00
xiaoQQya
5d05b32711 fix: 修复 README 本地运行提示 No module named 'app' 的问题 2025-09-30 23:45:25 +08:00
Aqr-K
fa2b720e92 Refactor(plugins): Use pathlib.relative_to for robust plugin path resolution 2025-09-30 20:03:08 +08:00
jxxghp
d381238f83 Merge pull request #5017 from Aqr-K/fix-plugin 2025-09-30 19:55:46 +08:00
Aqr-K
751d627ead Merge branch 'fix-plugin' of https://github.com/aqr-k/MoviePilot into fix-plugin 2025-09-30 19:48:46 +08:00
Aqr-K
3e66a8de9b Rollback cache 2025-09-30 19:35:50 +08:00
Aqr-K
266052b12b Update app/core/plugin.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-09-30 19:26:33 +08:00
Aqr-K
803f4328f4 fix(plugins): Improve hot-reload robustness for multi-inheritance plugins 2025-09-30 18:34:26 +08:00
Aqr-K
8e95568e11 refactor(plugins): Improve hot-reloading with watchfiles 2025-09-30 18:01:02 +08:00
jxxghp
ab09ee4819 Merge pull request #4998 from Seed680/v2 2025-09-24 15:02:26 +08:00
noone
41f94a172f fix:对telegram发送标题进行转义 2025-09-24 14:29:42 +08:00
noone
566e597994 fix:撤销不必要转义 2025-09-24 14:26:09 +08:00
noone
765fb9c05f fix:更新Telegram解析模式为MarkdownV2;Telegram发送的内容按 Telegram V2 规则转义特殊字符 2025-09-24 14:14:11 +08:00
jxxghp
b6720a19f7 更新 plugin.py 2025-09-22 17:56:12 +08:00
jxxghp
3b130651c4 Merge pull request #4987 from Aqr-K/refactor/plugin-monitor 2025-09-22 11:41:13 +08:00
jxxghp
3f6c35dabe Merge pull request #4986 from Aqr-K/fix-plugin-reload 2025-09-22 07:08:33 +08:00
Aqr-K
db2a952bca refactor(plugin): Enhance hot reload with debounce and subdirectory support 2025-09-22 02:48:49 +08:00
Aqr-K
0ea9770bc3 Create debounce.py 2025-09-22 02:38:15 +08:00
Aqr-K
0b20956c90 fix 2025-09-21 22:42:18 +08:00
jxxghp
9f73b47d54 Merge pull request #4977 from jxxghp/cursor/fix-moviepilot-issue-4975-ff74 2025-09-19 18:15:08 +08:00
Cursor Agent
ce9c99af71 Refactor: Use copy instead of move for file operations
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-09-19 09:54:44 +00:00
jxxghp
784024fb5d 更新 version.py 2025-09-19 08:50:33 +08:00
jxxghp
1145b32299 fix plugin install 2025-09-18 22:32:04 +08:00
jxxghp
ab71df0011 Merge pull request #4971 from cddjr/fix_glitch 2025-09-18 21:00:00 +08:00
jxxghp
fb137252a9 fix plugin id lower case 2025-09-18 18:00:15 +08:00
jxxghp
f57a680306 插件安装支持传递 repo_url 参数 2025-09-18 17:42:12 +08:00
景大侠
8bb3eaa320 fix 获取上次搜索结果时产生的NoneType异常
glitchtip#14
2025-09-18 17:23:20 +08:00
景大侠
9489730a44 fix u115刷新access_token失败会产生NoneType异常
glitchtip#49549
2025-09-18 17:23:20 +08:00
景大侠
d4795bb897 fix u115重试请求时报错unexpected keyword argument
glitchtip#136696
2025-09-18 17:23:19 +08:00
景大侠
63775872c7 fix TMDB因连接失败产生的NoneType错误
glitchtip#11
2025-09-18 17:05:09 +08:00
jxxghp
beff508a1f Merge pull request #4970 from cddjr/fix_trimemedia 2025-09-18 15:55:46 +08:00
景大侠
deaae8a2c6 fix 2025-09-18 15:39:10 +08:00
景大侠
46a27bd50c fix: 飞牛影视 2025-09-18 15:27:02 +08:00
jxxghp
24f2993433 Merge pull request #4958 from cddjr/fix_browse_mteam 2025-09-17 07:04:59 +08:00
景大侠
c80bfbfac5 fix: 浏览馒头报错NoneType 2025-09-17 01:59:28 +08:00
jxxghp
06abfc45c7 更新 version.py 2025-09-16 20:30:38 +08:00
jxxghp
440a773081 fix 2025-09-16 17:56:44 +08:00
jxxghp
0797bcb38b fix 2025-09-16 13:10:31 +08:00
jxxghp
d463b5bf0d Merge pull request #4955 from jxxghp/cursor/add-sort-type-to-subscription-queries-af67 2025-09-16 11:41:08 +08:00
Cursor Agent
0733c8edcc Add sort_type parameter to subscribe endpoints
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-09-16 03:29:28 +00:00
jxxghp
86c7c05cb1 feat: 在获取订阅分享数据的接口中添加可选参数 2025-09-16 07:38:56 +08:00
jxxghp
18ff7ce753 feat: 在订阅统计中添加可选参数 2025-09-16 07:37:14 +08:00
jxxghp
8f2ed1004d Merge pull request #4952 from cddjr/fix_file_perm 2025-09-16 07:00:45 +08:00
景大侠
14961323c3 fix umask 2025-09-15 22:01:00 +08:00
景大侠
f8c682b183 fix: 修复刮削的文件权限只有0600的问题 2025-09-15 21:49:37 +08:00
jxxghp
dd92708f60 Merge pull request #4947 from pluto0x0/fix/4941-mttorent-imdb-search 2025-09-15 14:23:17 +08:00
Zifan Ying
4d9eeccefa fix: provide the full link when mtorrent searches by IMDb
fix: mtorrent requires the full link when searching by IMDb (for example https://www.imdb.com/title/tt3058674)
Prepend the link prefix when the keyword is an IMDb entry
See https://wiki.m-team.cc/zh-tw/imdbtosearch
 
issue: https://github.com/jxxghp/MoviePilot/issues/4941
2025-09-15 00:31:45 -05:00
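The fix amounts to normalizing the search keyword before querying M-Team, roughly as below (illustrative only; the helper name is hypothetical):

```python
def normalize_imdb_keyword(keyword: str) -> str:
    """M-Team's IMDb search expects a full URL rather than a bare tt-id."""
    if keyword.startswith("tt") and keyword[2:].isdigit():
        return f"https://www.imdb.com/title/{keyword}"
    return keyword
```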
jxxghp
cd7b251031 Merge pull request #4946 from developer-wlj/wlj0914 2025-09-14 17:30:11 +08:00
developer-wlj
db614180b9 Revert "refactor: optimize the creation and upload logic of temporary files"
This reverts commit 77c0f8f39e.
2025-09-14 17:14:52 +08:00
jxxghp
b6e527e5f4 Merge pull request #4945 from developer-wlj/wlj0914 2025-09-14 16:54:37 +08:00
developer-wlj
77c0f8f39e refactor: optimize the creation and upload logic of temporary files
- Use a with statement to automatically manage temporary file creation and closing, improving readability and safety
- Simplified the code structure, reducing nested try statements for greater clarity
2025-09-14 16:46:27 +08:00
jxxghp
58816d73c8 Merge pull request #4944 from developer-wlj/wlj0914 2025-09-14 16:42:37 +08:00
developer-wlj
3b194d282e fix: fix scraping failures on Windows caused by the temporary file being locked
- Changed the temporary file creation and deletion logic in two functions
- Use manual deletion instead of automatic deletion to ensure temporary files are cleaned up properly
- Added exception handling to log cases where deleting the temporary file fails
2025-09-14 16:28:24 +08:00
jxxghp
397f66433d v2.8.0 2025-09-13 15:58:00 +08:00
jxxghp
04a4ed1d0e fix delete_media_file 2025-09-13 14:10:15 +08:00
jxxghp
625850d4e7 fix 2025-09-13 13:35:51 +08:00
jxxghp
6c572baca5 rollback 2025-09-13 13:32:48 +08:00
jxxghp
ee0406a13f Handle smb protocol key error during disconnect (#4938)
* Refactor: Improve SMB connection handling and add signal handling

Co-authored-by: jxxghp <jxxghp@qq.com>

* Remove test_smb_fix.py

Co-authored-by: jxxghp <jxxghp@qq.com>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@qq.com>
2025-09-13 11:25:29 +08:00
jxxghp
608a049ba3 fix smb delete 2025-09-13 11:05:21 +08:00
jxxghp
4d9b5198e2 增强SMB存储的删除功能 2025-09-13 10:56:45 +08:00
jxxghp
24b6c970aa feat:emby用户名 2025-09-13 10:34:41 +08:00
jxxghp
239c47f469 fix #4917 2025-09-13 10:13:33 +08:00
jxxghp
f0fc64c517 fix #4917 2025-09-13 10:12:40 +08:00
jxxghp
8481fd38ce fix #4933 2025-09-13 09:54:28 +08:00
jxxghp
5f425129d5 fix #4934 2025-09-13 09:46:04 +08:00
jxxghp
92955b1315 fix:在fork进程中执行文件整理 2025-09-13 08:56:05 +08:00
jxxghp
a3872d5bb5 fix:在fork进程中执行文件整理 2025-09-13 08:50:20 +08:00
jxxghp
a123ff2c04 feat:在fork进程中执行文件整理 2025-09-13 08:32:31 +08:00
jxxghp
188de34306 mini chunk size 2025-09-12 21:45:26 +08:00
jxxghp
3d43750e9b fix async event 2025-09-10 17:33:12 +08:00
jxxghp
fea228c68d add SUPERUSER_PASSWORD 2025-09-10 15:42:17 +08:00
jxxghp
a71a28e563 更新 config.py 2025-09-10 07:00:10 +08:00
jxxghp
3b5d4982b5 add wizard flag 2025-09-09 13:50:11 +08:00
jxxghp
b201e9ab8c Revert "feat:在子进程中操作文件"
This reverts commit 4f304a70b7.
2025-09-08 17:23:25 +08:00
jxxghp
d30b9282fd fix alipan u115 error log 2025-09-08 17:13:01 +08:00
jxxghp
4f304a70b7 feat:在子进程中操作文件 2025-09-08 16:59:29 +08:00
jxxghp
59a54d4f04 fix plugin cache 2025-09-08 13:27:32 +08:00
jxxghp
1e94d794ed fix log 2025-09-08 12:12:00 +08:00
jxxghp
5bd210406b Merge pull request #4918 from cddjr/fix_4853 2025-09-08 11:36:41 +08:00
景大侠
e00514d36d fix: 将RSS中的发布日期转为本地时区 2025-09-08 11:28:08 +08:00
jxxghp
f013bf1931 fix 2025-09-08 10:59:28 +08:00
jxxghp
107cbbad1d fix 2025-09-08 10:54:45 +08:00
jxxghp
481f1f9d30 add full gc scheduler 2025-09-08 10:49:09 +08:00
jxxghp
704364061c fix redis test 2025-09-08 09:59:11 +08:00
jxxghp
c1bd2d6cf1 fix:优化下载 2025-09-08 09:50:08 +08:00
jxxghp
a018e1228c Merge pull request #4904 from DDS-Derek/fix_gosu 2025-09-05 21:40:41 +08:00
DDSRem
d962d9c7f6 feat(docker): add START_NOGOSU mode
fix https://github.com/jxxghp/MoviePilot/issues/4889
2025-09-05 21:30:59 +08:00
jxxghp
4ea28cbca5 fix #4902 2025-09-05 21:09:05 +08:00
jxxghp
1b48b8b4cc Merge pull request #4902 from DDS-Derek/dev 2025-09-05 20:06:42 +08:00
jxxghp
73df197e33 Merge pull request #4903 from imtms/v2 2025-09-05 20:05:28 +08:00
TMs
bdc66e55ca fix(LocalStorage): 添加源文件与目标文件相同的检查,防止文件被删除。 2025-09-05 20:02:37 +08:00
DDSRem
926343ee86 fix(u115): code logic vulnerabilities 2025-09-05 19:37:41 +08:00
DDSRem
8e6021c5e7 fix(u115): code logic vulnerabilities 2025-09-05 19:23:23 +08:00
jxxghp
ac2b6c76ce 更新 version.py 2025-09-05 12:04:26 +08:00
jxxghp
9e966d0a7f Merge pull request #4898 from wumode/fix_alist 2025-09-04 21:16:58 +08:00
wumode
6c10defaa1 fix(Alist): add type hints 2025-09-04 21:08:25 +08:00
wumode
b6a76f6f7c fix(Alist): 添加__len__() 2025-09-04 20:47:13 +08:00
jxxghp
84e5b77a5c rollback orjson 2025-09-04 11:53:39 +08:00
jxxghp
89b0ea0bf1 remove monitoring 2025-09-04 11:23:22 +08:00
jxxghp
48aeb98bf1 add orjson 2025-09-04 08:52:36 +08:00
jxxghp
8a5d864812 更新 config.py 2025-09-04 08:28:42 +08:00
jxxghp
ae79e645a6 Merge pull request #4893 from Aqr-K/feat-plugin-wheels 2025-09-03 14:30:01 +08:00
Aqr-K
0947deb372 fix plugin.py 2025-09-03 14:27:24 +08:00
jxxghp
69c92911a2 更新 category.yaml 2025-09-03 14:26:40 +08:00
jxxghp
b16bb37b75 Merge pull request #4892 from Aqr-K/feat-plugin-wheels 2025-09-03 14:21:08 +08:00
Aqr-K
9c9ec8adf2 feat(plugin): Implement robust dependency installation with embedded wheels
- Support dependency installation via wheels embedded in the plugin
2025-09-03 14:13:32 +08:00
jxxghp
eb0e67fc42 fix logging 2025-09-03 12:42:13 +08:00
jxxghp
9cc50bddab Merge pull request #4764 from 2Dou/v2 2025-09-03 12:01:37 +08:00
jxxghp
d3ba0fa487 更新 category.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-09-03 11:58:07 +08:00
jxxghp
39f6505a80 fix:优化参数、使用orjson 2025-09-03 09:51:24 +08:00
jxxghp
36a6802439 fix:#4876 2025-09-02 12:45:44 +08:00
jxxghp
d7e2633a92 fix:移除更新阻断 2025-09-02 12:16:45 +08:00
jxxghp
88049e741e add SUBSCRIBE_SEARCH_INTERVAL 2025-09-02 11:41:52 +08:00
jxxghp
ff7fb14087 fix cache_clear 2025-09-02 08:35:48 +08:00
jxxghp
816c64bd48 Merge pull request #4883 from cikezhu/v2 2025-09-01 18:32:21 +08:00
cikezhu
d2756e6f2d schedule() # 这会返回一个协程对象,但我们没有等待它 2025-09-01 17:39:46 +08:00
jxxghp
147e12acbb Merge pull request #4879 from sebastian0619/v2 2025-08-31 19:04:38 +08:00
jxxghp
4098018ee9 更新 entrypoint.sh
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-31 19:04:24 +08:00
Sebastian
133e7578b9 Update NGINX SSL port configuration 2025-08-31 17:17:26 +08:00
jxxghp
74a2bdbf09 Merge pull request #4872 from Aqr-K/feat/v2.7.8/string/natural_sort 2025-08-30 09:45:23 +08:00
Aqr-K
f22bc68af4 Update string.py 2025-08-30 08:59:35 +08:00
Aqr-K
26cc6da650 fix(storage): Adjust to use natural_stort_key 2025-08-30 08:48:38 +08:00
Aqr-K
d21f1f1b87 feat(string): add natural_sort_key function 2025-08-30 08:44:41 +08:00
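natural_sort_key presumably implements the standard natural-ordering trick, splitting strings into numeric and non-numeric chunks, along these lines (a sketch, not the exact implementation):

```python
import re

def natural_sort_key(text: str):
    """Split a string into digit and non-digit chunks so that
    'S2E10' sorts after 'S2E9' instead of before it."""
    return [int(chunk) if chunk.isdigit() else chunk.lower()
            for chunk in re.split(r"(\d+)", text)]

# sorted(["file10", "file2", "file1"], key=natural_sort_key)
# -> ['file1', 'file2', 'file10']
```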
jxxghp
7cdaafffe1 Merge pull request #4867 from aotuwuxi/hotfix/250829 2025-08-29 13:46:48 +08:00
jxxghp
0265dca197 Merge pull request #4866 from lostwindsenril/patch-1 2025-08-29 13:45:49 +08:00
wuxi
9d68366043 fix: 修复工作流调用插件无法获取到对象属性问题 2025-08-29 13:19:50 +08:00
lostwindsenril
c8c671d915 Update app/modules/indexer/spider/mtorrent.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-29 13:07:31 +08:00
lostwindsenril
142daa9d15 使馒头(m-team)支持剩余促销期检测
Add freedate to torrent if discountEndTime exists
2025-08-29 13:04:17 +08:00
jxxghp
2552219991 更新 version.py 2025-08-28 11:11:32 +08:00
jxxghp
a038b698d7 fix haidan 2025-08-28 09:36:19 +08:00
jxxghp
a3b222574e add thetvdb cache 2025-08-28 08:05:10 +08:00
jxxghp
e0cd467293 rollback fix #4856 2025-08-28 07:51:05 +08:00
jxxghp
9c056030d2 fix:捕促115&alipan请求异常 2025-08-27 20:21:06 +08:00
jxxghp
19efa9d4cc fix #4795 2025-08-27 16:15:45 +08:00
jxxghp
90633a6495 fix #4851 2025-08-27 15:57:43 +08:00
jxxghp
edc432fbd8 fix #4846 2025-08-27 12:45:23 +08:00
jxxghp
1b7bdbf516 fix #4834 2025-08-27 08:28:16 +08:00
jxxghp
8c1be70c85 更新 version.py 2025-08-26 12:20:16 +08:00
jxxghp
b8e0c0db9e feat:精细化事件错误 2025-08-26 08:41:47 +08:00
jxxghp
7b7fb6cc82 Merge pull request #4836 from jxxghp/cursor/alter-siteuser-data-userid-to-character-type-9f4d 2025-08-25 22:05:19 +08:00
Cursor Agent
62512ba215 Remove SQLite-specific migration code for userid field
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 14:00:33 +00:00
Cursor Agent
e1beb64c01 Simplify userid conversion to integer in Synology Chat module
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:58:15 +00:00
Cursor Agent
c81f26ddad Remove downgrade methods for PostgreSQL and SQLite userid migration
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:56:21 +00:00
Cursor Agent
340114c2a1 Remove migration README after completing SiteUserData userid type migration
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:54:58 +00:00
Cursor Agent
cd7767b331 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:54:48 +00:00
Cursor Agent
25289dad8a Migrate SiteUserData userid field from Integer to String type
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-25 13:50:58 +00:00
jxxghp
47c6917129 remove _check_restart_policy 2025-08-25 21:30:53 +08:00
jxxghp
6379cda148 fix 异步定时服务 2025-08-25 21:19:07 +08:00
jxxghp
91a124ab8f fix 异步定时服务 2025-08-25 20:44:38 +08:00
jxxghp
2357a7135e fix run_async 2025-08-25 17:46:06 +08:00
jxxghp
da0b3b3de9 fix:日历缓存 2025-08-25 16:46:10 +08:00
jxxghp
6664fb1716 feat:增加插件和日历的自动缓存 2025-08-25 16:37:02 +08:00
jxxghp
1206f24fa9 修复缓存迭代时的并发问题 2025-08-25 13:11:44 +08:00
jxxghp
ffb5823e84 fix #4829 优化模块导入逻辑,增加对 Async 类的特殊处理 2025-08-25 08:14:43 +08:00
jxxghp
d45a7fb262 更新 version.py 2025-08-24 19:59:31 +08:00
jxxghp
918d192c0f OpenList自动延迟重试获取文件项 2025-08-24 19:47:00 +08:00
jxxghp
f7cd6eac50 feat:整理手动中止功能 2025-08-24 19:17:41 +08:00
jxxghp
88f4428ff0 fix bug 2025-08-24 17:07:45 +08:00
jxxghp
069ea22ba2 fix bug 2025-08-24 16:55:37 +08:00
jxxghp
8fac8c5307 fix progress step 2025-08-24 16:33:44 +08:00
jxxghp
2285befebb fix cache set 2025-08-24 16:10:48 +08:00
jxxghp
1cd0648e4e fix cache set 2025-08-24 15:36:56 +08:00
jxxghp
0b7ba285c6 fix:优雅停止超时处理 2025-08-24 13:07:52 +08:00
jxxghp
30446c4526 fix cache is_redis 2025-08-24 12:27:14 +08:00
jxxghp
9b843c9ed2 fix:整理记录登记 2025-08-24 12:19:12 +08:00
jxxghp
2ce1c3bef8 feat:整理进度登记 2025-08-24 12:04:05 +08:00
jxxghp
e463094dc7 feat:整理进度 2025-08-24 09:21:55 +08:00
jxxghp
71a9fe10f4 refactor ProgressHelper 2025-08-24 09:02:55 +08:00
jxxghp
ba146e13ef fix 优化cache模块声明 2025-08-24 08:36:37 +08:00
jxxghp
c060d7e3e0 更新 postgresql-setup.md 2025-08-23 22:26:34 +08:00
jxxghp
ba96678822 v2.7.5 2025-08-23 20:46:36 +08:00
jxxghp
4f6354f383 Merge pull request #4820 from DDS-Derek/dev 2025-08-23 18:46:52 +08:00
DDSRem
2766e80346 fix(database): use logger as log output
Co-Authored-By: Aqr-K <95741669+Aqr-K@users.noreply.github.com>
2025-08-23 18:36:11 +08:00
jxxghp
7cc3777a60 fix async cache 2025-08-23 18:34:47 +08:00
DDSRem
cb1dd9f17d fix(database): upgrade error in pg database
Co-Authored-By: Aqr-K <95741669+Aqr-K@users.noreply.github.com>
2025-08-23 18:12:13 +08:00
jxxghp
31f342fe4f fix torrent 2025-08-23 18:10:33 +08:00
jxxghp
e90359eb08 fix douban 2025-08-23 15:56:30 +08:00
jxxghp
58b0768a30 fix redis key 2025-08-23 15:53:03 +08:00
jxxghp
3b04506893 fix redis key 2025-08-23 15:40:38 +08:00
jxxghp
354165aa0a fix cache 2025-08-23 14:21:50 +08:00
jxxghp
343109836f fix cache 2025-08-23 14:06:44 +08:00
jxxghp
fcadac2adb Merge pull request #4817 from jxxghp/cursor/add-dict-operations-to-cachebackend-3877 2025-08-23 12:42:04 +08:00
Cursor Agent
5e7dcdfe97 Modify cache region key generation to use consistent prefix format
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:13:25 +00:00
Cursor Agent
2ec9a57391 Remove implementation and migration documentation files
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:07:04 +00:00
Cursor Agent
973c545723 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:06:16 +00:00
Cursor Agent
fd62eecfef Simplify TTLCache, remove dict-like methods, enhance Cache interface
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 04:01:17 +00:00
Cursor Agent
b5ca7058c2 Add helper methods for cache backend in sync and async versions
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 03:58:04 +00:00
Cursor Agent
57a48f099f Add dict-like operations to CacheBackend with sync and async support
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 03:50:52 +00:00
jxxghp
4699f511bf Handle magnet links in torrent parsing and downloader modules (#4815)
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-23 10:51:32 +08:00
jxxghp
cd8f7e72e0 同步错误修复 2025-08-22 17:33:24 +08:00
jxxghp
78803fa284 fix search_imdbid type 2025-08-22 16:37:30 +08:00
jxxghp
2e8d75df16 fix monitor cache 2025-08-22 15:30:49 +08:00
jxxghp
7e3bbfd960 Merge pull request #4807 from carolcoral/v2 2025-08-22 15:23:04 +08:00
jxxghp
1734d53b3c Replace file-based snapshot caching with FileCache implementation (#4809)
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-22 13:59:30 +08:00
jxxghp
f37540f4e5 fix get_rss timeout 2025-08-22 11:44:16 +08:00
jxxghp
addb9d836a remove cache singleton 2025-08-22 11:33:53 +08:00
Carol
4184d8c7ac Add notes on database migration issues
add: notes on migrating from SQLite to PostgreSQL
2025-08-22 10:55:26 +08:00
jxxghp
724c15a68c add 插件内存统计API 2025-08-22 09:46:11 +08:00
jxxghp
499bdf9b48 fix cache clear 2025-08-22 07:22:23 +08:00
jxxghp
41cd1ccda1 Merge pull request #4803 from Sowevo/v2
兼容负数的LIMIT
2025-08-22 07:20:21 +08:00
jxxghp
b9521cb3a9 Fix typo: change "未就续" to "未就绪" in module status messages (#4804)
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-22 07:05:16 +08:00
jxxghp
1f40663b90 Merge pull request #4802 from Aqr-K/remove-docker 2025-08-22 06:45:45 +08:00
sowevo
5261ed7c4c 兼容两种库对负数的处理 2025-08-22 03:32:26 +08:00
sowevo
aa8768b18a 兼容两种库对负数的处理 2025-08-22 03:00:50 +08:00
Aqr-K
aad07433f4 fix(docker): Remove musl-dev and related code 2025-08-22 01:20:50 +08:00
jxxghp
4a7630079b Merge pull request #4800 from DDS-Derek/dev 2025-08-21 22:18:16 +08:00
DDSRem
44a6ee1994 fix(docker): 作業ディレクトリが間違っています 2025-08-21 22:17:18 +08:00
jxxghp
56bd6e69ed Merge pull request #4799 from DDS-Derek/dev 2025-08-21 22:11:58 +08:00
DDSRem
d1e04588d0 feat(docker): refactor docker build process 2025-08-21 22:09:49 +08:00
jxxghp
21cdaef6d5 Merge pull request #4798 from DDS-Derek/dev 2025-08-21 21:57:49 +08:00
DDSRem
a1723d18fb fix(docker): 不要な権限設定を削除する 2025-08-21 21:54:33 +08:00
jxxghp
9e065138e9 fix cache default 2025-08-21 21:49:00 +08:00
jxxghp
1c73c92bfd fix cache Singleton 2025-08-21 21:45:34 +08:00
jxxghp
bcd560d74e Merge pull request #4797 from DDS-Derek/dev 2025-08-21 21:28:40 +08:00
DDSRem
02339562ed fix(docker): レイヤー数を減らす 2025-08-21 21:28:18 +08:00
DDSRem
e5804378c2 fix(docker): fuck ai bugs 2025-08-21 21:24:09 +08:00
jxxghp
da1c8a162d fix cache maxsize 2025-08-21 20:10:27 +08:00
jxxghp
d457a23a1f fix build 2025-08-21 19:24:04 +08:00
jxxghp
b6154e58b8 rollback dockerfile 2025-08-21 18:44:47 +08:00
jxxghp
5f18776c61 更新 douban_cache.py 2025-08-21 17:52:55 +08:00
jxxghp
68b0b9ec7a 更新 tmdb_cache.py 2025-08-21 17:52:19 +08:00
jxxghp
0f5036972e v2.7.4 2025-08-21 17:03:17 +08:00
jxxghp
0b199b8421 fix TTLCache 2025-08-21 16:54:49 +08:00
jxxghp
a59730f6eb 优化cache模块的默认值 2025-08-21 16:29:49 +08:00
jxxghp
c6c84fe65b rename 2025-08-21 16:02:50 +08:00
jxxghp
03c757bba6 fix TTLCache 2025-08-21 13:17:59 +08:00
jxxghp
bfeb8d238a fix build 2025-08-21 12:45:05 +08:00
jxxghp
daf0c08c4b remove 重复的 aiofiles 2025-08-21 12:33:51 +08:00
jxxghp
d12c1b9ac4 remove musl-dev 2025-08-21 12:32:53 +08:00
jxxghp
bc242f4fd4 fix yield 2025-08-21 12:04:15 +08:00
jxxghp
a240c1bca9 优化 Dockerfile 2025-08-21 09:47:23 +08:00
jxxghp
219aa6c574 Merge pull request #4790 from wikrin/delete_media_file 2025-08-21 09:35:07 +08:00
Attente
abca1b481a refactor(storage): improve empty-directory deletion logic
- Added protection for source directories and media library directories
- Recursively check upward and delete empty directories
2025-08-21 09:16:15 +08:00
jxxghp
db72fd2ef5 fix 2025-08-21 09:07:28 +08:00
jxxghp
31cca58943 fix cache 2025-08-21 08:26:32 +08:00
jxxghp
c06a4b759c fix redis 2025-08-21 08:14:21 +08:00
jxxghp
f05a23a490 更新 redis.py 2025-08-21 07:59:34 +08:00
jxxghp
1e0f2ffde0 更新 config.py 2025-08-21 07:48:16 +08:00
jxxghp
06df42ee3d 更新 Dockerfile 2025-08-21 07:21:58 +08:00
jxxghp
65ee1638f7 add VENV_PATH 2025-08-21 00:28:32 +08:00
jxxghp
87eefe7673 Merge pull request #4788 from jxxghp/cursor/install-playwright-dependencies-in-dockerfile-b7d6
Install playwright dependencies in dockerfile
2025-08-21 00:16:48 +08:00
Cursor Agent
5c124d3988 fix: use full path for playwright command in Dockerfile
- Fix 'playwright: not found' error during Docker build
- Use /bin/playwright instead of playwright to ensure
  the command is executed from the virtual environment
- This resolves the issue where playwright install-deps chromium
  was failing because playwright wasn't in the system PATH
2025-08-20 16:16:02 +00:00
jxxghp
8c69ce624f Merge pull request #4787 from jxxghp/cursor/optimize-docker-build-and-pip-environment-e8ad
Optimize docker build and pip environment
2025-08-21 00:08:50 +08:00
Cursor Agent
bb73acdde5 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 16:06:39 +00:00
Cursor Agent
993bc3775b Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 16:04:44 +00:00
jxxghp
3d2ff28bcd fix download 2025-08-20 23:38:51 +08:00
jxxghp
9b78deb802 fix torrent 2025-08-20 23:07:29 +08:00
jxxghp
dadc525d0b feat:种子下载使用缓存 2025-08-20 22:03:18 +08:00
DDSRem
22b2140c94 fix requirement 2025-08-20 21:18:33 +08:00
jxxghp
f07496a4a0 fix cache 2025-08-20 21:11:10 +08:00
jxxghp
1b2938cbc8 Merge pull request #4785 from jxxghp/cursor/fix-postgresql-textual-sql-expression-error-e023 2025-08-20 20:13:56 +08:00
Cursor Agent
d4d2f58830 Checkpoint before follow-up message
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 12:10:52 +00:00
jxxghp
b3113e13ec refactor:新增文件缓存组合 2025-08-20 19:04:07 +08:00
jxxghp
055c8e26f0 refactor:重构缓存系统 2025-08-20 17:35:32 +08:00
jxxghp
2a7a7239d7 新增全局图片缓存配置和临时文件清理天数设置 2025-08-20 13:52:38 +08:00
jxxghp
2fa40dac3f 优化监控和消息服务的资源管理 2025-08-20 13:35:24 +08:00
jxxghp
6b4fbd7dc2 新增 PostgreSQL 和 Redis 数据库模块,包含模块初始化、连接测试等功能 2025-08-20 13:35:12 +08:00
jxxghp
5b0bb19717 统一使用 app.core.cache 中的 TTLCache 2025-08-20 12:43:30 +08:00
jxxghp
843dfc430a fix log 2025-08-20 09:36:46 +08:00
jxxghp
69cb07c527 优化缓存机制,支持Redis和本地缓存的切换 2025-08-20 09:16:30 +08:00
jxxghp
89e8a64734 重构Redis缓存机制 2025-08-20 08:51:03 +08:00
jxxghp
5eb2dec32d 新增 RedisHelper 类 2025-08-20 08:50:45 +08:00
jxxghp
db0ea7d6c4 Fix database sequence errors (#4777)
* Fix database upgrade script to handle existing identity columns

Co-authored-by: jxxghp <jxxghp@live.cn>

* Improve identity column conversion with error handling and cleanup

Co-authored-by: jxxghp <jxxghp@live.cn>

* Fix database upgrade script to handle existing identity columns

Co-authored-by: jxxghp <jxxghp@live.cn>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: jxxghp <jxxghp@live.cn>
2025-08-20 00:29:35 +08:00
jxxghp
1eb85003de 更新 version.py 2025-08-19 17:58:27 +08:00
jxxghp
cca170f84a 更新 emby.py 2025-08-19 15:30:22 +08:00
jxxghp
c8c016caa8 更新 __init__.py 2025-08-19 14:27:02 +08:00
jxxghp
45d5874026 更新 __init__.py 2025-08-19 14:20:46 +08:00
jxxghp
69b1ce60ff fix db config 2025-08-19 14:15:33 +08:00
jxxghp
3ff3e4b106 fix db config 2025-08-19 14:05:24 +08:00
jxxghp
dc50a68b01 修复数据库表名引用 2025-08-19 12:54:47 +08:00
jxxghp
968cfd8654 fix db 2025-08-19 12:41:07 +08:00
jxxghp
cf28d93be6 fix db 2025-08-19 12:35:52 +08:00
jxxghp
be08d6ebb5 fix db 2025-08-19 12:02:53 +08:00
jxxghp
4bc24f3b00 fix db 2025-08-19 11:53:59 +08:00
jxxghp
15833f94cf fix db 2025-08-19 11:40:34 +08:00
jxxghp
aeb297efcf 优化站点激活状态的判断逻辑,简化数据库查询条件 2025-08-19 11:23:09 +08:00
jxxghp
d48c6b98e8 rollback local postgresql 2025-08-19 08:30:07 +08:00
jxxghp
b79ccfafed 优化 entrypoint.sh 中 PostgreSQL 命令的执行方式 2025-08-19 07:15:02 +08:00
jxxghp
c87ba59552 更新 entrypoint.sh 2025-08-18 22:42:55 +08:00
jxxghp
91fd71c858 fix entrypoint.sh 2025-08-18 22:26:01 +08:00
jxxghp
6f64e67538 fix dockerfile 2025-08-18 21:42:44 +08:00
jxxghp
bd7a0b072f fix entrypoint.sh 2025-08-18 21:22:29 +08:00
jxxghp
01ca001c97 fix entrypoint.sh 2025-08-18 21:10:24 +08:00
jxxghp
324ad2a87c 优化 PostgreSQL 数据目录初始化和启动逻辑 2025-08-18 20:55:33 +08:00
jxxghp
d9ad2630f0 fix postgresql 2025-08-18 19:14:47 +08:00
jxxghp
83958a4a48 fix postgresql 2025-08-18 19:12:20 +08:00
jxxghp
f6a6efdc42 fix app.env 2025-08-18 15:17:26 +08:00
jxxghp
1bbe7657b9 fix dockerfile 2025-08-18 11:42:53 +08:00
jxxghp
38189753b5 在构建工作流中添加新的 Docker 镜像配置 2025-08-18 11:31:00 +08:00
jxxghp
5b0e658617 重构配置文件项目顺序 2025-08-18 11:29:04 +08:00
jxxghp
b6cf54d57f 添加对 PostgreSQL 的支持 2025-08-18 11:19:17 +08:00
jxxghp
e8058c8813 添加 PostgreSQL 数据库支持 2025-08-18 11:19:06 +08:00
jxxghp
784868048d 更新 scheduler.py 2025-08-18 07:04:39 +08:00
jxxghp
2bf9779f2f v2.7.2 2025-08-17 11:44:59 +08:00
jxxghp
d98ceea381 fix #4768 2025-08-17 11:44:09 +08:00
jxxghp
1ab2da74b9 use apipathlib 2025-08-17 09:00:02 +08:00
jxxghp
086b1f1403 更新 message.py 2025-08-16 17:27:45 +08:00
2Dou
3723cf8ac2 二级分类配置增加排除功能 2025-08-15 09:54:56 +08:00
jxxghp
19608fa98e Merge pull request #4756 from Sowevo/v2 2025-08-13 17:40:31 +08:00
sowevo
b0d17deda1 从 TMDB 相对链接中解析数值 ID。 2025-08-13 17:11:56 +08:00
sowevo
4c979c458e 从 TMDB 相对链接中解析数值 ID。 2025-08-13 16:54:06 +08:00
jxxghp
c5e93169ad 更新 subscribe_oper.py 2025-08-13 10:10:42 +08:00
jxxghp
1e2ca294de Merge pull request #4747 from Pollo3470/fix-flaresolverr-proxy 2025-08-12 16:59:31 +08:00
Pollo
7165c4a275 fix: 代理需要认证时,flaresolverr使用session 2025-08-12 16:33:51 +08:00
Pollo
cbe81ba33c fix: 修复调用flaresolverr时未将代理认证信息传入的问题 2025-08-12 16:12:22 +08:00
jxxghp
fdbfae953d fix #4741 FlareSolverr使用站点设置的超时时间,未设置时默认60秒
close #4742
close https://github.com/jxxghp/MoviePilot-Frontend/pull/378
2025-08-12 08:04:29 +08:00
jxxghp
c7ba274877 更新 browser.py 2025-08-11 23:35:05 +08:00
jxxghp
8b15a16ca1 更新 browser.py 2025-08-11 22:20:22 +08:00
jxxghp
9f2c8d3811 v2.7.1 2025-08-11 21:51:34 +08:00
jxxghp
7343dfbed8 fix hddolby 2025-08-11 21:41:56 +08:00
jxxghp
90f74d8d2b feat:支持FlareSolverr 2025-08-11 21:14:46 +08:00
jxxghp
7e3e0e1178 fix #4725 2025-08-11 18:29:29 +08:00
jxxghp
d890e38a10 fix #4724 2025-08-11 17:46:46 +08:00
jxxghp
e505b5c85f fix #4733 2025-08-11 16:41:29 +08:00
jxxghp
6230f55116 fix #4734 2025-08-11 16:34:36 +08:00
jxxghp
c8d0c14ebc 更新 plex.py 2025-08-11 13:57:03 +08:00
jxxghp
6ac8455c74 fix 2025-08-11 13:30:15 +08:00
jxxghp
143b21631f Merge pull request #4737 from baozaodetudou/nginx 2025-08-11 13:27:23 +08:00
doumao
d760facad8 nginx cache js bug 2025-08-11 13:13:29 +08:00
jxxghp
3a1a4c5cfe 更新 download.py 2025-08-10 22:15:30 +08:00
jxxghp
c3045e2cd4 更新 mtorrent.py 2025-08-10 22:10:11 +08:00
jxxghp
1efb9af7ab 更新 nginx.common.conf 2025-08-10 21:32:53 +08:00
jxxghp
e03471159a 更新 version.py 2025-08-10 18:45:40 +08:00
jxxghp
a92e493742 fix README 2025-08-10 14:01:26 +08:00
jxxghp
225d413ed1 fix README 2025-08-10 13:52:35 +08:00
jxxghp
184e4ba7d5 fix 插件Release安装逻辑 2025-08-10 13:26:22 +08:00
jxxghp
917cae27b1 更新插件release安装逻辑 2025-08-10 13:06:03 +08:00
jxxghp
60e0463051 fix 2025-08-10 12:53:42 +08:00
jxxghp
c15022c7d5 fix:插件通过release安装 2025-08-10 12:45:38 +08:00
jxxghp
2a84e3a606 feat: 插件异步安装 2025-08-10 10:10:30 +08:00
jxxghp
fddbbd5714 feat:插件通过release安装 2025-08-10 10:00:13 +08:00
jxxghp
51b8f7c713 fix #4721 2025-08-10 09:11:44 +08:00
jxxghp
e97c246741 try fix #4716 2025-08-10 09:04:20 +08:00
jxxghp
9a81f55ac0 fix #4510 2025-08-10 08:51:52 +08:00
jxxghp
a38b702acc fix alist 2025-08-10 08:46:29 +08:00
jxxghp
e4e0605e92 更新 metavideo.py 2025-08-08 10:19:21 +08:00
jxxghp
8875a8f12c 更新 nginx.common.conf 2025-08-07 11:42:52 +08:00
jxxghp
4dd1deefa5 Merge pull request #4709 from wikrin/v2 2025-08-07 06:54:24 +08:00
Attente
1f6dc93ea3 fix(transfer): fix directory monitoring unexpectedly deleting incomplete torrents
- Return False directly if the torrent has not finished downloading
2025-08-06 23:13:01 +08:00
jxxghp
426e920fff fix log 2025-08-06 16:54:24 +08:00
jxxghp
1f6bbce326 fix:优化重试识别次数限制 2025-08-06 16:48:37 +08:00
jxxghp
41f89a35fa 切换v2 release为最新 2025-08-06 16:36:29 +08:00
jxxghp
099d7874d7 - 修复日志滚动问题 2025-08-06 16:32:54 +08:00
jxxghp
e2367103a1 - 修复日志滚动问题 2025-08-06 16:29:51 +08:00
jxxghp
37f8ba7d72 fix #4705 2025-08-06 16:24:47 +08:00
jxxghp
c20bd84edd fix plex error 2025-08-06 12:21:02 +08:00
jxxghp
b4ee0d2487 Merge remote-tracking branch 'origin/v2' into v2 2025-08-05 20:14:06 +08:00
jxxghp
420fa7645f mask key 2025-08-05 20:14:00 +08:00
jxxghp
5bb1e72760 Update README.md 2025-08-05 19:37:51 +08:00
jxxghp
e2a007b62a Update README.md 2025-08-05 19:37:33 +08:00
jxxghp
210813367f fix #4694 2025-08-04 20:50:06 +08:00
jxxghp
770a50764e 更新 transferhistory.py 2025-08-04 19:39:51 +08:00
jxxghp
e339a22aa4 更新 version.py 2025-08-04 19:04:32 +08:00
jxxghp
913afed378 fix #4700 2025-08-04 12:19:24 +08:00
jxxghp
db3efb4452 fix SiteStatistic 2025-08-04 08:34:31 +08:00
jxxghp
840351acb7 fix Subscribe api 2025-08-04 07:05:23 +08:00
jxxghp
da76a7f299 Merge pull request #4693 from wumode/fix_4691 2025-08-03 15:40:19 +08:00
wumode
cbd999f88d Update app/modules/qbittorrent/qbittorrent.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-03 14:12:57 +08:00
wumode
2fa8a266c5 fix:#4691 2025-08-03 13:56:58 +08:00
jxxghp
08aa749a53 更新 subscribe.py 2025-08-02 20:06:26 +08:00
jxxghp
2379f04d2a Merge pull request #4689 from wikrin/v2 2025-08-02 19:48:59 +08:00
Attente
0e73598d1c refactor(transfer): improve torrent deletion logic in move mode
- Refactored and simplified the torrent deletion code
- Added the _is_blocked_by_exclude_words method to check whether a file is blocked
- Added the _can_delete_torrent method to determine whether a torrent can be deleted
2025-08-02 19:42:34 +08:00
jxxghp
964e6eb0e8 Merge pull request #4688 from Pollo3470/v2 2025-08-02 16:34:27 +08:00
Pollo
0430e6c6d4 fix: 修复使用socks代理时请求失败的问题 2025-08-02 16:20:52 +08:00
jxxghp
db88358eca 更新 webhook.py 2025-08-02 15:57:08 +08:00
jxxghp
723e9b0018 更新 version.py 2025-08-02 15:05:39 +08:00
jxxghp
f3db27a8da fix SiteStatistic note 2025-08-02 14:23:16 +08:00
jxxghp
0fb7a73fc9 fix RetryException 2025-08-02 11:32:42 +08:00
jxxghp
418e6bd085 fix cache_clear 2025-08-02 10:29:11 +08:00
jxxghp
5a5c4ace6b fix 实时日志性能 2025-08-02 10:24:46 +08:00
jxxghp
c2c8214075 refactor: 添加订阅协程处理 2025-08-02 09:14:38 +08:00
jxxghp
e5d2ade6e6 fix 协程环境中调用插件同步函数处理 2025-08-02 08:41:44 +08:00
jxxghp
e32b6e07b4 fix async apis 2025-08-01 20:27:22 +08:00
jxxghp
cc69d3b8d1 更新 __init__.py 2025-08-01 18:05:06 +08:00
jxxghp
1dd3af44b5 add FastApi实时性能监控 2025-08-01 17:47:55 +08:00
jxxghp
8ab233baef fix bug 2025-08-01 16:39:40 +08:00
jxxghp
104138b9a7 fix:减少无效搜索 2025-08-01 15:18:05 +08:00
jxxghp
0c8fd5121a fix async apis 2025-08-01 14:19:34 +08:00
jxxghp
61f26d331b add MAX_SEARCH_NAME_LIMIT default 2 2025-08-01 12:33:54 +08:00
jxxghp
97817cd808 fix tmdb async 2025-08-01 12:05:08 +08:00
jxxghp
45bcc63c06 fix rate_limit async 2025-08-01 11:48:37 +08:00
jxxghp
00779d0f10 fix search async 2025-08-01 11:38:23 +08:00
jxxghp
d657bf8ed8 feat:协程搜索 part3 2025-08-01 08:40:25 +08:00
jxxghp
4fcdd05e6a fix indexer async 2025-08-01 08:28:19 +08:00
jxxghp
e6916946a9 fix log && run_in_threadpool 2025-08-01 07:10:02 +08:00
jxxghp
acd7013dc6 fix site 2025-07-31 21:43:55 +08:00
jxxghp
039d876e3f feat:协程搜索 part2 2025-07-31 21:39:36 +08:00
jxxghp
3fc2c7d6cc feat:协程搜索 part2 2025-07-31 21:26:55 +08:00
jxxghp
109164b673 feat:协程搜索 part1 2025-07-31 20:51:39 +08:00
jxxghp
673a03e656 feat:查询本地是否存在 使用协程 2025-07-31 20:19:28 +08:00
jxxghp
1e976e6d96 fix db 2025-07-31 19:52:07 +08:00
jxxghp
8efba30adb fix db 2025-07-31 19:51:48 +08:00
jxxghp
713d44eac3 feat:实现非阻塞文件日志处理 2025-07-31 19:34:50 +08:00
jxxghp
aea44c1d97 feat:键式事件协程处理 2025-07-31 17:27:15 +08:00
jxxghp
1e61e60d73 feat:插件查询协程处理 2025-07-31 16:58:54 +08:00
jxxghp
a0e4b4a56e feat:媒体查询协程处理 2025-07-31 15:24:50 +08:00
jxxghp
983f8fcb03 fix httpx 2025-07-31 13:51:43 +08:00
jxxghp
6afdde7dc1 discover更新为异步实现 2025-07-31 13:36:43 +08:00
jxxghp
6873de7243 fix async 2025-07-31 13:32:47 +08:00
jxxghp
ee4d6d0db3 fix cache 2025-07-31 09:55:47 +08:00
jxxghp
dee1212a76 feat:推荐使用异步API 2025-07-31 09:50:49 +08:00
jxxghp
ceda69aedd add async apis 2025-07-31 09:15:38 +08:00
jxxghp
75ea7d7601 add async api 2025-07-31 09:10:45 +08:00
jxxghp
8b75d2312c add async run_module 2025-07-31 08:56:32 +08:00
jxxghp
ca51880798 fix themoviedb api 2025-07-31 08:40:24 +08:00
jxxghp
8b708e8939 fix themoviedb api 2025-07-31 08:34:47 +08:00
jxxghp
b6ff9f7196 fix douban api 2025-07-31 08:18:00 +08:00
jxxghp
67229fd032 fix 2025-07-31 08:11:27 +08:00
jxxghp
d382eab355 fix subscribe helper 2025-07-31 07:26:58 +08:00
jxxghp
d8f10e9ac4 fix workflow helper 2025-07-31 07:17:05 +08:00
jxxghp
749aaeb003 fix async 2025-07-31 07:07:14 +08:00
jxxghp
c5a3bbcecf 更新 subscribe.py 2025-07-31 00:11:40 +08:00
jxxghp
27ac41531b 更新 subscribe.py 2025-07-30 23:46:21 +08:00
jxxghp
423c9af786 为TheMovieDb模块添加异步支持(part 1) 2025-07-30 22:28:12 +08:00
jxxghp
232759829e 为Bangumi和Douban模块添加异步API支持 2025-07-30 22:18:11 +08:00
jxxghp
71f7bc7b1b fix 2025-07-30 21:06:55 +08:00
jxxghp
ae4f03e272 fix logging api 2025-07-30 21:01:28 +08:00
jxxghp
acb5a7e50b fix 2025-07-30 19:59:25 +08:00
jxxghp
c8749b3c9c add aiopath 2025-07-30 19:49:59 +08:00
jxxghp
49647e3bb5 fix asyncio sleep 2025-07-30 18:53:23 +08:00
jxxghp
48d353aa90 fix async oper 2025-07-30 18:48:50 +08:00
jxxghp
edec18cacb fix 2025-07-30 18:37:16 +08:00
jxxghp
cd8661abc1 重构工作流相关API,支持异步操作并引入异步数据库管理 2025-07-30 18:21:13 +08:00
jxxghp
5f6310f5d6 fix httpx proxy 2025-07-30 17:34:09 +08:00
jxxghp
42d955b175 重构订阅和用户相关API,支持异步操作 2025-07-30 15:23:25 +08:00
jxxghp
21541bc468 更新历史记录相关API,支持异步操作 2025-07-30 14:27:38 +08:00
jxxghp
f14f4e1e9b 添加异步数据库支持,更新相关模型和会话管理 2025-07-30 13:18:45 +08:00
jxxghp
6d1de8a2e4 add db异步转换器 2025-07-30 08:59:11 +08:00
jxxghp
0053d31f84 add db异步转换器 2025-07-30 08:54:04 +08:00
jxxghp
f077a9684b 添加异步请求工具类;优化fetch_image和proxy_img函数为异步实现提升性能 2025-07-30 08:30:24 +08:00
jxxghp
2428d58e93 使用aiofiles实现异步文件操作,提升性能;调整uvicorn工作进程数量。 2025-07-30 07:56:56 +08:00
jxxghp
5340e3a0a7 fix 2025-07-28 16:55:22 +08:00
jxxghp
70dd8f0f1d 更新 version.py 2025-07-28 15:15:56 +08:00
jxxghp
8fa76504c3 fix 2025-07-28 08:13:39 +08:00
jxxghp
0899cb4e1d fix 2025-07-28 08:11:39 +08:00
jxxghp
ee7a2a70a6 Merge pull request #4666 from wumode/refactor_polling_observer 2025-07-27 16:15:33 +08:00
wumode
d57d1ac15e fix: bug 2025-07-27 14:58:11 +08:00
wumode
68c29d89c9 refactor: polling_observer 2025-07-27 12:45:57 +08:00
jxxghp
721648ffdf fix #4653 2025-07-26 23:04:40 +08:00
jxxghp
8437f39bf6 fix #4655 2025-07-26 22:59:37 +08:00
jxxghp
48b15c60e7 Merge pull request #4658 from jnwan/v2 2025-07-25 14:06:22 +08:00
jnwan
e350122125 Add flag to ignore check folder modtime for rclone snapshot 2025-07-24 21:34:17 -07:00
jxxghp
0cce97f373 remove gc 2025-07-25 11:47:41 +08:00
jxxghp
d8cacc0811 fix:没有订阅不跑订阅刷新任务 2025-07-24 11:08:47 +08:00
jxxghp
7abaf70bb8 fix workflow 2025-07-24 09:54:46 +08:00
jxxghp
232fe4d15e fix dead lock 2025-07-23 17:03:50 +08:00
jxxghp
d6d12c0335 feat: 添加事件类型中文名称翻译字典 2025-07-23 15:35:04 +08:00
jxxghp
8e4f12804b Merge pull request #4648 from hyuan280/v2 2025-07-23 15:09:05 +08:00
jxxghp
c21ba5c521 Merge pull request #4649 from roukaixin/v2 2025-07-23 15:07:44 +08:00
jxxghp
dfa3d47261 更新 plugin.py 2025-07-23 06:50:01 +08:00
jxxghp
924f59afff fix bug 2025-07-22 21:02:02 +08:00
roukaixin
673b282d6c Merge branch 'jxxghp:v2' into v2 2025-07-22 20:48:29 +08:00
roukaixin
1c761f89e5 fix: 修复TZ环境变量不生效 2025-07-22 20:46:57 +08:00
jxxghp
f61cd969b9 fix 2025-07-22 20:46:42 +08:00
jxxghp
e39a130306 feat:工作流支持事件触发 2025-07-22 20:23:53 +08:00
黄渊
13b6ea985e fix: 浏览资源时分类可能不生效,使用split后再对比分类id 2025-07-22 19:02:25 +08:00
jxxghp
2f1e55fa1e 增加搜索次数统计和强制休眠机制以优化搜索性能 2025-07-21 12:25:52 +08:00
jxxghp
776f629771 fix User-Agent 2025-07-20 15:50:45 +08:00
jxxghp
d9e9edb2c4 Update version.py 2025-07-20 13:32:54 +08:00
jxxghp
753c074e59 fix #4625 2025-07-20 12:45:53 +08:00
jxxghp
d92c82775a fix #4637 2025-07-20 12:28:12 +08:00
jxxghp
215cc09c1f fix 2025-07-20 11:50:44 +08:00
jxxghp
7f302c13c7 fix #4632 2025-07-20 09:14:47 +08:00
jxxghp
de6a094d10 fix display 2025-07-20 08:49:21 +08:00
jxxghp
a94e1a8314 Merge pull request #4631 from ChanningHe/fix-telegram-msg 2025-07-18 21:22:17 +08:00
ChanningHe
f5efdd665b fix: 清理Telegram消息中的@bot部分以确保一致性处理 2025-07-18 21:59:04 +09:00
jxxghp
43e25e8717 fix share cache 2025-07-18 17:36:28 +08:00
ChanningHe
a8026fefc1 fix: 在Telegram chat中只有被at时检测 2025-07-18 17:55:43 +09:00
ChanningHe
fdb36957c9 fix: Telegram 机器人消息无法推送到群组,只能推送到userid 2025-07-18 17:40:06 +09:00
jxxghp
ea433ff807 add site api 2025-07-18 08:04:05 +08:00
jxxghp
8902fb50d6 更新 context.py 2025-07-16 22:22:45 +08:00
jxxghp
b6aa013eb3 v2.6.6 2025-07-16 20:25:43 +08:00
jxxghp
034b43bf70 fix context 2025-07-16 19:59:06 +08:00
jxxghp
59e9032286 add subscribe share statistic api 2025-07-16 08:47:54 +08:00
jxxghp
52a98efd0a add subscribe share statistic api 2025-07-16 08:31:28 +08:00
jxxghp
90cc91aa7f Merge pull request #4614 from Aqr-K/feature-ua 2025-07-15 06:47:34 +08:00
Aqr-K
1973a26e83 fix: 去除冗余代码,简化写法 2025-07-14 22:19:48 +08:00
Aqr-K
6519ad25ca fix is_aarch 2025-07-14 22:17:04 +08:00
Aqr-K
cacfde8166 fix 2025-07-14 22:14:52 +08:00
Aqr-K
df85873726 feat(ua): add cup_arch , USER_AGENT value add cup_arch 2025-07-14 22:04:09 +08:00
jxxghp
dfea294cc9 fix ua 2025-07-14 13:42:49 +08:00
jxxghp
d35b855404 fix ua 2025-07-14 13:30:18 +08:00
jxxghp
7a1cbf70e3 feat:特定默认UA 2025-07-14 12:35:08 +08:00
jxxghp
f260990b86 更新 version.py 2025-07-13 15:14:10 +08:00
jxxghp
6affbe9b55 fix #4558 2025-07-13 15:04:41 +08:00
jxxghp
dbe3a10697 fix 2025-07-13 14:53:39 +08:00
jxxghp
3c25306a5d fix #4590 2025-07-13 14:43:48 +08:00
jxxghp
17f4d49731 fix #4594 2025-07-13 14:24:41 +08:00
jxxghp
e213b5cc64 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into v2 2025-07-13 14:14:26 +08:00
jxxghp
65e5dad44b 优化移动模式下的种子和残留目录删除逻辑 2025-07-13 14:14:24 +08:00
jxxghp
62ad38ea5d Merge pull request #4605 from wikrin/torrent_optimize 2025-07-13 13:25:35 +08:00
Attente
f98f4c1f77 refactor(helper): improve the TorrentHelper class
- Added a check for whether the torrent file already exists in the temporary directory
- Changed the parameter types of the match_torrent method
- Improved torrent file download and handling logic
2025-07-13 13:16:36 +08:00
jxxghp
e9f02b58b7 Merge pull request #4604 from cddjr/fix_4602 2025-07-13 06:51:36 +08:00
景大侠
05495e481d fix #4602 2025-07-13 01:10:07 +08:00
jxxghp
5bb2167b78 Merge pull request #4603 from cddjr/fix_nettest 2025-07-12 18:34:54 +08:00
景大侠
b4e0ed66cf 完善网络连通性测试的错误描述 2025-07-12 18:15:19 +08:00
jxxghp
70a0563435 add server_type return 2025-07-12 14:52:18 +08:00
jxxghp
955912b832 fix plex 2025-07-12 14:44:45 +08:00
jxxghp
b65ee75b3d Merge pull request #4601 from cddjr/minimal_deps 2025-07-11 21:46:13 +08:00
景大侠
f642493a38 fix 2025-07-11 21:25:10 +08:00
jxxghp
7f1bfb1e07 Merge pull request #4599 from jtcymc/v2 2025-07-11 21:12:16 +08:00
景大侠
8931e2e016 fix 仅安装用户需要使用的插件依赖 2025-07-11 21:04:33 +08:00
shaw
0465fa77c2 fix(filemanager): check whether the target media library directory is set
- During file organization, check whether the target media library directory is set
- If the target media library directory is not set, return an error message and abort the organization process
- Improved the error handling logic, increasing system stability and reliability
2025-07-11 20:02:12 +08:00
jxxghp
575d503cb9 Merge pull request #4598 from cddjr/fix_4586 2025-07-11 18:12:57 +08:00
景大侠
a4fdbdb9ad fix 极空间、Unraid误报网络文件系统 2025-07-11 18:03:19 +08:00
jxxghp
b9cb781a4e rollback size 2025-07-11 08:34:02 +08:00
jxxghp
a3adf867b7 fix 2025-07-10 22:48:08 +08:00
jxxghp
d52cbd2f74 feat:资源下载事件保存路径 2025-07-10 22:16:19 +08:00
jxxghp
8d0003db94 更新 version.py 2025-07-10 11:57:54 +08:00
jxxghp
b775e89e77 fix #4581 2025-07-10 10:44:04 +08:00
jxxghp
0e14b097ba fix #4581 2025-07-10 10:39:22 +08:00
jxxghp
51848b8d8d fix #4581 2025-07-10 10:20:00 +08:00
jxxghp
72658c3e60 Merge pull request #4582 from cddjr/fix_rename_related 2025-07-09 20:42:54 +08:00
jxxghp
036cb6f3b0 remove memory helper 2025-07-09 19:11:37 +08:00
jxxghp
1a86d96bfa Merge pull request #4579 from jxxghp/cursor/bc-f8a13fbf-5ca0-4b0b-ae8d-59c208732d44-b74e 2025-07-09 17:43:46 +08:00
Cursor Agent
f67db38a25 Fix memory analysis performance and timeout issues across platforms
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 09:43:34 +00:00
Cursor Agent
028d18826a Refactor memory analysis with ThreadPoolExecutor for cross-platform timeout
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 09:38:06 +00:00
Cursor Agent
29a605f265 Optimize memory analysis with timeout, sampling, and performance improvements
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:57:22 +00:00
jxxghp
4b6959470d Merge pull request #4577 from jxxghp/cursor/analyze-memory-usage-discrepancies-6709 2025-07-09 16:08:00 +08:00
Cursor Agent
600767d2bf Remove memory analysis guide and test script
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:07:30 +00:00
Cursor Agent
3efbd47ffd Add comprehensive memory analysis tool with guide and test script
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:04:10 +00:00
Cursor Agent
d17e85217b Enhance memory analysis with detailed tracking, leak detection, and system insights
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 07:47:23 +00:00
jxxghp
e608089805 add Note Action 2025-07-09 12:22:22 +08:00
jxxghp
b852acec28 fix workflow 2025-07-09 09:34:53 +08:00
jxxghp
2a3ea8315d fix workflow 2025-07-09 00:19:47 +08:00
jxxghp
9271ee833c Merge pull request #4566 from jxxghp/cursor/helper-91dc
新增工作流分享相关接口和helper
2025-07-09 00:12:56 +08:00
Cursor Agent
570d4ad1a3 Fix workflow API by passing database session to WorkflowOper methods
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:44:55 +00:00
Cursor Agent
dccdf3231a Checkpoint before follow-up message 2025-07-08 15:42:31 +00:00
Cursor Agent
b8ee777fd2 Refactor workflow sharing with independent config and improved data access
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:33:43 +00:00
Cursor Agent
a2fd3a8d90 Implement workflow sharing feature with new API endpoints and helper
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:26:16 +00:00
Cursor Agent
bbffb1420b Add workflow sharing, forking, and related API endpoints
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:18:01 +00:00
景大侠
8ea0a32879 fix 优化重命名后的媒体文件根路径获取 2025-07-08 22:37:32 +08:00
景大侠
8c27b8c33e fix 文件管理的自动重命名缺少集信息 2025-07-08 22:37:09 +08:00
景大侠
5c61b22c2f fix 未启用重命名时,整理文件的转移路径不正确 2025-07-08 21:49:31 +08:00
jxxghp
9da9d765a0 fix:静态类引用 2025-07-08 21:40:04 +08:00
jxxghp
f64363728e fix:静态类引用 2025-07-08 21:38:34 +08:00
jxxghp
378777dc7c feat:弱引用单例 2025-07-08 21:29:01 +08:00
jxxghp
6156b9a481 Merge pull request #4561 from jxxghp/cursor/move-media-files-to-season-directory-6ee0 2025-07-08 18:00:50 +08:00
Cursor Agent
8c516c5691 Fix: Ensure parent item exists before saving NFO file
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 09:51:43 +00:00
Cursor Agent
bf9a149898 Fix TV show metadata scraping to use correct parent directory
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 09:31:35 +00:00
jxxghp
277cde8db2 更新 version.py 2025-07-08 12:17:57 +08:00
jxxghp
e06bdaf53e fix:资源包升级失败时一直重启的问题 2025-07-08 12:06:30 +08:00
jxxghp
da367bd138 fix spider 2025-07-08 11:25:36 +08:00
jxxghp
d336bcbf1f fix etree 2025-07-08 11:00:38 +08:00
jxxghp
a8aedba6ff fix https://github.com/jxxghp/MoviePilot/issues/4552 2025-07-08 09:34:24 +08:00
jxxghp
9ede86c6a3 Merge pull request #4555 from cddjr/fix_local_exists 2025-07-07 23:30:51 +08:00
景大侠
1468f2b082 fix 本地媒体文件检查时首选含影视标题的目录
避免了以年份、分辨率等作为重命名第一层目录时的误判问题
2025-07-07 23:24:04 +08:00
jxxghp
e04ae70f89 Merge pull request #4553 from cddjr/fix_trim_task 2025-07-07 22:15:12 +08:00
景大侠
7f7d2c9ba8 fix 飞牛刷新媒体库报错Task duplicate 2025-07-07 21:46:17 +08:00
jxxghp
d73deef8dc Merge pull request #4549 from cddjr/fix_tr 2025-07-07 17:28:28 +08:00
景大侠
f93a1540af fix TR模块报错找不到_protocol属性
v2.5.9引入的bug
2025-07-07 17:05:28 +08:00
jxxghp
c8bd9cb716 Merge pull request #4548 from cddjr/set_lock_timeout 2025-07-07 12:04:46 +08:00
景大侠
2ed13c7e5b fix 订阅匹配锁增加超时,避免罕见的长时间卡任务问题 2025-07-07 11:51:58 +08:00
320 changed files with 28926 additions and 7311 deletions


@@ -1,3 +1,84 @@
# Ignore git
# Git
.github
.git
.git
.gitignore
# Documentation
docs/
README.md
LICENSE
# Development files
.pylintrc
*.pyc
__pycache__/
*.pyo
*.pyd
.Python
*.so
.pytest_cache/
.coverage
htmlcov/
.tox/
.nox/
.hypothesis/
.mypy_cache/
.dmypy.json
dmypy.json
# Virtual environments
venv/
env/
ENV/
env.bak/
venv.bak/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Logs
*.log
logs/
# Temporary files
*.tmp
*.temp
tmp/
temp/
# Database
*.db
*.sqlite
*.sqlite3
# Test files
tests/
test_*
*_test.py
# Build artifacts
build/
dist/
*.egg-info/
# Docker
Dockerfile*
docker-compose*
.dockerignore
# Other
app.ico
frozen.spec

.github/workflows/beta.yml vendored Normal file

@@ -0,0 +1,60 @@
name: MoviePilot Builder Beta

on:
  workflow_dispatch:

jobs:
  Docker-build:
    runs-on: ubuntu-latest
    name: Build Docker Image
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Release version
        id: release_version
        run: |
          app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
          echo "app_version=$app_version" >> $GITHUB_ENV

      - name: Docker Meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
            ghcr.io/${{ github.repository }}
          tags: |
            type=raw,value=beta

      - name: Set Up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set Up Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Login GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build Image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: docker/Dockerfile
          platforms: |
            linux/amd64
            linux/arm64/v8
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha, scope=${{ github.workflow }}-docker
          cache-to: type=gha, scope=${{ github.workflow }}-docker


@@ -27,6 +27,7 @@ jobs:
with:
images: |
${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
${{ secrets.DOCKER_USERNAME }}/moviepilot
ghcr.io/${{ github.repository }}
tags: |
type=raw,value=${{ env.app_version }}
@@ -92,6 +93,6 @@ jobs:
body: ${{ env.RELEASE_BODY }}
draft: false
prerelease: false
make_latest: false
make_latest: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -18,17 +18,19 @@
## 主要特性
- 前后端分离基于FastApi + Vue3,前端项目地址:[MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend),API:http://localhost:3001/docs
- 前后端分离基于FastApi + Vue3
- 聚焦核心需求,简化功能和设置,部分设置项可直接使用默认值。
- 重新设计了用户界面,更加美观易用。
## 安装使用
访问官方Wiki:https://wiki.movie-pilot.org
官方Wiki:https://wiki.movie-pilot.org
## 参与开发
需要 `Python 3.12`、`Node JS v20.12.1`
API文档:https://api.movie-pilot.org
本地运行需要 `Python 3.12`、`Node JS v20.12.1`
- 克隆主项目 [MoviePilot](https://github.com/jxxghp/MoviePilot)
```shell
@@ -38,10 +40,11 @@ git clone https://github.com/jxxghp/MoviePilot
```shell
git clone https://github.com/jxxghp/MoviePilot-Resources
```
- 安装后端依赖,设置`app`为源代码根目录,运行 `main.py` 启动后端服务,默认监听端口:`3001`,API文档地址:`http://localhost:3001/docs`
- 安装后端依赖,运行 `main.py` 启动后端服务,默认监听端口:`3001`,API文档地址:`http://localhost:3001/docs`
```shell
cd MoviePilot
pip install -r requirements.txt
python3 main.py
python3 -m app.main
```
- 克隆前端项目 [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
```shell
@@ -54,6 +57,20 @@ yarn dev
```
- 参考 [插件开发指引](https://wiki.movie-pilot.org/zh/plugindev) 在 `app/plugins` 目录下开发插件代码
## 相关项目
- [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
- [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources)
- [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins)
- [MoviePilot-Server](https://github.com/jxxghp/MoviePilot-Server)
- [MoviePilot-Wiki](https://github.com/jxxghp/MoviePilot-Wiki)
## 免责申明
- 本软件仅供学习交流使用,任何人不得将本软件用于商业用途,任何人不得将本软件用于违法犯罪活动,软件对用户行为不知情,一切责任由使用者承担。
- 本软件代码开源,基于开源代码进行修改,人为去除相关限制导致软件被分发、传播并造成责任事件的,需由代码修改发布者承担全部责任,不建议对用户认证机制进行规避或修改并公开发布。
- 本项目不接受捐赠,没有在任何地方发布捐赠信息页面,软件本身不收费也不提供任何收费相关服务,请仔细辨别避免误导。
## 贡献者
<a href="https://github.com/jxxghp/MoviePilot/graphs/contributors">

app/agent/__init__.py Normal file

@@ -0,0 +1,355 @@
"""MoviePilot AI智能体实现"""
import asyncio
from typing import Dict, List, Any
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.callbacks import get_openai_callback
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage, ToolCall
from langchain_core.runnables.history import RunnableWithMessageHistory
from app.agent.callback import StreamingCallbackHandler
from app.agent.memory import ConversationMemoryManager
from app.agent.prompt import PromptManager
from app.agent.tools.factory import MoviePilotToolFactory
from app.chain import ChainBase
from app.core.config import settings
from app.helper.message import MessageHelper
from app.log import logger
from app.schemas import Notification
class AgentChain(ChainBase):
pass
class MoviePilotAgent:
"""MoviePilot AI智能体"""
def __init__(self, session_id: str, user_id: str = None,
channel: str = None, source: str = None, username: str = None):
self.session_id = session_id
self.user_id = user_id
self.channel = channel # 消息渠道
self.source = source # 消息来源
self.username = username # 用户名
# 消息助手
self.message_helper = MessageHelper()
# 记忆管理器
self.memory_manager = ConversationMemoryManager()
# 提示词管理器
self.prompt_manager = PromptManager()
# 回调处理器
self.callback_handler = StreamingCallbackHandler(
session_id=session_id
)
# LLM模型
self.llm = self._initialize_llm()
# 工具
self.tools = self._initialize_tools()
# 会话存储
self.session_store = self._initialize_session_store()
# 提示词模板
self.prompt = self._initialize_prompt()
# Agent执行器
self.agent_executor = self._create_agent_executor()
def _initialize_llm(self):
"""初始化LLM模型"""
provider = settings.LLM_PROVIDER.lower()
api_key = settings.LLM_API_KEY
if not api_key:
raise ValueError("未配置 LLM_API_KEY")
if provider == "google":
from langchain_google_genai import ChatGoogleGenerativeAI
return ChatGoogleGenerativeAI(
model=settings.LLM_MODEL,
google_api_key=api_key,
max_retries=3,
temperature=settings.LLM_TEMPERATURE,
streaming=True,
callbacks=[self.callback_handler]
)
elif provider == "deepseek":
from langchain_deepseek import ChatDeepSeek
return ChatDeepSeek(
model=settings.LLM_MODEL,
api_key=api_key,
max_retries=3,
temperature=settings.LLM_TEMPERATURE,
streaming=True,
callbacks=[self.callback_handler],
stream_usage=True
)
else:
from langchain_openai import ChatOpenAI
return ChatOpenAI(
model=settings.LLM_MODEL,
api_key=api_key,
max_retries=3,
base_url=settings.LLM_BASE_URL,
temperature=settings.LLM_TEMPERATURE,
streaming=True,
callbacks=[self.callback_handler],
stream_usage=True
)
def _initialize_tools(self) -> List:
"""初始化工具列表"""
return MoviePilotToolFactory.create_tools(
session_id=self.session_id,
user_id=self.user_id,
channel=self.channel,
source=self.source,
username=self.username,
callback_handler=self.callback_handler
)
@staticmethod
def _initialize_session_store() -> Dict[str, InMemoryChatMessageHistory]:
"""初始化内存存储"""
return {}
def get_session_history(self, session_id: str) -> InMemoryChatMessageHistory:
"""获取会话历史"""
if session_id not in self.session_store:
chat_history = InMemoryChatMessageHistory()
messages: List[dict] = self.memory_manager.get_recent_messages_for_agent(
session_id=session_id,
user_id=self.user_id
)
if messages:
for msg in messages:
if msg.get("role") == "user":
chat_history.add_user_message(HumanMessage(content=msg.get("content", "")))
elif msg.get("role") == "agent":
chat_history.add_ai_message(AIMessage(content=msg.get("content", "")))
elif msg.get("role") == "tool_call":
metadata = msg.get("metadata", {})
chat_history.add_ai_message(AIMessage(
content=msg.get("content", ""),
tool_calls=[ToolCall(
id=metadata.get("call_id"),
name=metadata.get("tool_name"),
args=metadata.get("parameters"),
)]
))
elif msg.get("role") == "tool_result":
chat_history.add_ai_message(AIMessage(content=msg.get("content", "")))
elif msg.get("role") == "system":
chat_history.add_ai_message(AIMessage(content=msg.get("content", "")))
self.session_store[session_id] = chat_history
return self.session_store[session_id]
@staticmethod
def _initialize_prompt() -> ChatPromptTemplate:
"""初始化提示词模板"""
try:
prompt_template = ChatPromptTemplate.from_messages([
("system", "{system_prompt}"),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
])
logger.info("LangChain提示词模板初始化成功")
return prompt_template
except Exception as e:
logger.error(f"初始化提示词失败: {e}")
raise e
def _create_agent_executor(self) -> RunnableWithMessageHistory:
"""创建Agent执行器"""
try:
agent = create_openai_tools_agent(
llm=self.llm,
tools=self.tools,
prompt=self.prompt
)
executor = AgentExecutor(
agent=agent,
tools=self.tools,
verbose=settings.LLM_VERBOSE,
max_iterations=settings.LLM_MAX_ITERATIONS,
return_intermediate_steps=True,
handle_parsing_errors=True,
early_stopping_method="force"
)
return RunnableWithMessageHistory(
executor,
self.get_session_history,
input_messages_key="input",
history_messages_key="chat_history"
)
except Exception as e:
logger.error(f"创建Agent执行器失败: {e}")
raise e
async def process_message(self, message: str) -> str:
"""处理用户消息"""
try:
# 添加用户消息到记忆
await self.memory_manager.add_memory(
self.session_id,
user_id=self.user_id,
role="user",
content=message
)
# 构建输入上下文
input_context = {
"system_prompt": self.prompt_manager.get_agent_prompt(channel=self.channel),
"input": message
}
# 执行Agent
logger.info(f"Agent执行推理: session_id={self.session_id}, input={message}")
await self._execute_agent(input_context)
# 获取Agent回复
agent_message = await self.callback_handler.get_message()
# 发送Agent回复给用户通过原渠道
await self.send_agent_message(agent_message)
# 添加Agent回复到记忆
await self.memory_manager.add_memory(
session_id=self.session_id,
user_id=self.user_id,
role="agent",
content=agent_message
)
return agent_message
except Exception as e:
error_message = f"处理消息时发生错误: {str(e)}"
logger.error(error_message)
# 发送错误消息给用户(通过原渠道)
await self.send_agent_message(error_message)
return error_message
async def _execute_agent(self, input_context: Dict[str, Any]) -> Dict[str, Any]:
"""执行LangChain Agent"""
try:
with get_openai_callback() as cb:
result = await self.agent_executor.ainvoke(
input_context,
config={"configurable": {"session_id": self.session_id}},
callbacks=[self.callback_handler]
)
logger.info(f"LLM调用消耗: \n{cb}")
if cb.total_tokens > 0:
result["token_usage"] = {
"prompt_tokens": cb.prompt_tokens,
"completion_tokens": cb.completion_tokens,
"total_tokens": cb.total_tokens
}
return result
except asyncio.CancelledError:
logger.info(f"Agent执行被取消: session_id={self.session_id}")
return {
"output": "任务已取消",
"intermediate_steps": [],
"token_usage": {}
}
except Exception as e:
logger.error(f"Agent执行失败: {e}")
return {
"output": f"执行过程中发生错误: {str(e)}",
"intermediate_steps": [],
"token_usage": {}
}
async def send_agent_message(self, message: str, title: str = "MoviePilot助手"):
"""通过原渠道发送消息给用户"""
await AgentChain().async_post_message(
Notification(
channel=self.channel,
source=self.source,
userid=self.user_id,
username=self.username,
title=title,
text=message
)
)
async def cleanup(self):
"""清理智能体资源"""
if self.session_id in self.session_store:
del self.session_store[self.session_id]
logger.info(f"MoviePilot智能体已清理: session_id={self.session_id}")
class AgentManager:
"""AI智能体管理器"""
def __init__(self):
self.active_agents: Dict[str, MoviePilotAgent] = {}
self.memory_manager = ConversationMemoryManager()
async def initialize(self):
"""初始化管理器"""
await self.memory_manager.initialize()
async def close(self):
"""关闭管理器"""
await self.memory_manager.close()
# 清理所有活跃的智能体
for agent in self.active_agents.values():
await agent.cleanup()
self.active_agents.clear()
async def process_message(self, session_id: str, user_id: str, message: str,
channel: str = None, source: str = None, username: str = None) -> str:
"""处理用户消息"""
# 获取或创建Agent实例
if session_id not in self.active_agents:
logger.info(f"创建新的AI智能体实例session_id: {session_id}, user_id: {user_id}")
agent = MoviePilotAgent(
session_id=session_id,
user_id=user_id,
channel=channel,
source=source,
username=username
)
agent.memory_manager = self.memory_manager
self.active_agents[session_id] = agent
else:
agent = self.active_agents[session_id]
agent.user_id = user_id # 确保user_id是最新的
# 更新渠道信息
if channel:
agent.channel = channel
if source:
agent.source = source
if username:
agent.username = username
# 处理消息
return await agent.process_message(message)
async def clear_session(self, session_id: str, user_id: str):
"""清空会话"""
if session_id in self.active_agents:
agent = self.active_agents[session_id]
await agent.cleanup()
del self.active_agents[session_id]
await self.memory_manager.clear_memory(session_id, user_id)
logger.info(f"会话 {session_id} 的记忆已清空")
# 全局智能体管理器实例
agent_manager = AgentManager()

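A minimal usage sketch of the new agent entry point, based only on the `AgentManager` API shown in this file. The session/user ids, the example message, and the `asyncio.run` wrapper are illustrative, and a configured LLM (`LLM_API_KEY` etc.) is assumed:

```python
import asyncio

from app.agent import agent_manager


async def demo():
    # Prepare the shared ConversationMemoryManager before handling messages
    await agent_manager.initialize()
    try:
        # Creates (or reuses) a MoviePilotAgent for the session and runs one turn;
        # the reply is also pushed back over the original channel when one is set
        reply = await agent_manager.process_message(
            session_id="demo-session",   # illustrative id
            user_id="1",                 # illustrative id
            message="查询我的订阅列表",
        )
        print(reply)
        # Equivalent of the /clear_session command: drop the agent and its memory
        await agent_manager.clear_session("demo-session", "1")
    finally:
        await agent_manager.close()


if __name__ == "__main__":
    asyncio.run(demo())
```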

@@ -0,0 +1,33 @@
import threading
from langchain_core.callbacks import AsyncCallbackHandler
from app.log import logger
class StreamingCallbackHandler(AsyncCallbackHandler):
"""流式输出回调处理器"""
def __init__(self, session_id: str):
self._lock = threading.Lock()
self.session_id = session_id
self.current_message = ""
async def get_message(self):
"""获取当前消息内容,获取后清空"""
with self._lock:
if not self.current_message:
return ""
msg = self.current_message
logger.info(f"Agent消息: {msg}")
self.current_message = ""
return msg
async def on_llm_new_token(self, token: str, **kwargs):
"""处理新的token"""
if not token:
return
with self._lock:
# 缓存当前消息
self.current_message += token

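A small sketch of how the handler buffers streamed tokens and is drained by `get_message()`; the token values are made up:

```python
import asyncio

from app.agent.callback import StreamingCallbackHandler


async def demo():
    handler = StreamingCallbackHandler(session_id="demo")
    # Feed a few tokens, as the LLM streaming callback would
    for token in ["你好", ",", "正在为你查询"]:
        await handler.on_llm_new_token(token)
    # First read returns the buffered text and clears the buffer
    print(await handler.get_message())        # 你好,正在为你查询
    # Subsequent reads return an empty string until new tokens arrive
    print(repr(await handler.get_message()))  # ''


asyncio.run(demo())
```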

@@ -0,0 +1,280 @@
"""对话记忆管理器"""
import asyncio
import json
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any
from app.core.config import settings
from app.helper.redis import AsyncRedisHelper
from app.log import logger
from app.schemas.agent import ConversationMemory
class ConversationMemoryManager:
"""对话记忆管理器"""
def __init__(self):
# 内存中的会话记忆缓存
self.memory_cache: Dict[str, ConversationMemory] = {}
# 使用现有的Redis助手
self.redis_helper = AsyncRedisHelper()
# 内存缓存清理任务Redis通过TTL自动过期
self.cleanup_task: Optional[asyncio.Task] = None
async def initialize(self):
"""初始化记忆管理器"""
try:
# 启动内存缓存清理任务Redis通过TTL自动过期
self.cleanup_task = asyncio.create_task(self._cleanup_expired_memories())
logger.info("对话记忆管理器初始化完成")
except Exception as e:
logger.warning(f"Redis连接失败将使用内存存储: {e}")
async def close(self):
"""关闭记忆管理器"""
if self.cleanup_task:
self.cleanup_task.cancel()
try:
await self.cleanup_task
except asyncio.CancelledError:
pass
await self.redis_helper.close()
logger.info("对话记忆管理器已关闭")
async def get_memory(self, session_id: str, user_id: str) -> ConversationMemory:
"""获取会话记忆"""
# 首先检查缓存
cache_key = f"{user_id}:{session_id}" if user_id else session_id
if cache_key in self.memory_cache:
return self.memory_cache[cache_key]
# 尝试从Redis加载
if settings.CACHE_BACKEND_TYPE == "redis":
try:
redis_key = f"agent_memory:{user_id}:{session_id}" if user_id else f"agent_memory:{session_id}"
memory_data = await self.redis_helper.get(redis_key, region="AI_AGENT")
if memory_data:
memory_dict = json.loads(memory_data) if isinstance(memory_data, str) else memory_data
memory = ConversationMemory(**memory_dict)
self.memory_cache[cache_key] = memory
return memory
except Exception as e:
logger.warning(f"从Redis加载记忆失败: {e}")
# 创建新的记忆
memory = ConversationMemory(session_id=session_id, user_id=user_id)
self.memory_cache[cache_key] = memory
await self._save_memory(memory)
return memory
async def set_title(self, session_id: str, user_id: str, title: str):
"""设置会话标题"""
memory = await self.get_memory(session_id=session_id, user_id=user_id)
memory.title = title
memory.updated_at = datetime.now()
await self._save_memory(memory)
async def get_title(self, session_id: str, user_id: str) -> Optional[str]:
"""获取会话标题"""
memory = await self.get_memory(session_id=session_id, user_id=user_id)
return memory.title
async def list_sessions(self, user_id: str, limit: int = 100) -> List[Dict[str, Any]]:
"""列出历史会话摘要(按更新时间倒序)
- 当启用Redis时遍历 `agent_memory:*` 键并读取摘要
- 当未启用Redis时基于内存缓存返回
"""
sessions: List[ConversationMemory] = []
# 从Redis遍历
if settings.CACHE_BACKEND_TYPE == "redis":
try:
# 使用Redis助手的items方法遍历所有键
async for key, value in self.redis_helper.items(region="AI_AGENT"):
if key.startswith("agent_memory:"):
try:
# 解析键名获取user_id和session_id
key_parts = key.split(":")
if len(key_parts) >= 3:
key_user_id = key_parts[2] if len(key_parts) > 3 else None
if not user_id or key_user_id == user_id:
data = value if isinstance(value, dict) else json.loads(value)
memory = ConversationMemory(**data)
sessions.append(memory)
except Exception as err:
logger.warning(f"解析Redis记忆数据失败: {err}")
continue
except Exception as e:
logger.warning(f"遍历Redis会话失败: {e}")
# 合并内存缓存(确保包含近期的会话)
for cache_key, memory in self.memory_cache.items():
# 如果指定了user_id只返回该用户的会话
if not user_id or memory.user_id == user_id:
sessions.append(memory)
# 去重(以 session_id 为键取最近updated)
uniq: Dict[str, ConversationMemory] = {}
for mem in sessions:
existed = uniq.get(mem.session_id)
if (not existed) or (mem.updated_at > existed.updated_at):
uniq[mem.session_id] = mem
# 排序并裁剪
sorted_list = sorted(uniq.values(), key=lambda m: m.updated_at, reverse=True)[:limit]
return [
{
"session_id": m.session_id,
"title": m.title or "新会话",
"message_count": len(m.messages),
"created_at": m.created_at.isoformat(),
"updated_at": m.updated_at.isoformat(),
}
for m in sorted_list
]
async def add_memory(
self,
session_id: str,
user_id: str,
role: str,
content: str,
metadata: Optional[Dict[str, Any]] = None
):
"""添加消息到记忆"""
memory = await self.get_memory(session_id=session_id, user_id=user_id)
message = {
"role": role,
"content": content,
"timestamp": datetime.now().isoformat(),
"metadata": metadata or {}
}
memory.messages.append(message)
memory.updated_at = datetime.now()
# 限制消息数量,避免记忆过大
max_messages = settings.LLM_MAX_MEMORY_MESSAGES
if len(memory.messages) > max_messages:
# 保留最近的消息,但保留第一条系统消息
system_messages = [msg for msg in memory.messages if msg["role"] == "system"]
recent_messages = memory.messages[-(max_messages - len(system_messages)):]
memory.messages = system_messages + recent_messages
await self._save_memory(memory)
logger.debug(f"消息已添加到记忆: session_id={session_id}, user_id={user_id}, role={role}")
def get_recent_messages_for_agent(
self,
session_id: str,
user_id: str
) -> List[Dict[str, Any]]:
"""为Agent获取最近的消息仅内存缓存
如果消息Token数量超过模型最大上下文长度的阀值会自动进行摘要裁剪
"""
cache_key = f"{user_id}:{session_id}" if user_id else session_id
memory = self.memory_cache.get(cache_key)
if not memory:
return []
# 获取所有消息
messages = memory.messages
return messages
async def get_recent_messages(
self,
session_id: str,
user_id: str,
limit: int = 10,
role_filter: Optional[list] = None
) -> List[Dict[str, Any]]:
"""获取最近的消息"""
memory = await self.get_memory(session_id=session_id, user_id=user_id)
messages = memory.messages
if role_filter:
messages = [msg for msg in messages if msg["role"] in role_filter]
return messages[-limit:] if messages else []
async def get_context(self, session_id: str, user_id: str) -> Dict[str, Any]:
"""获取会话上下文"""
memory = await self.get_memory(session_id=session_id, user_id=user_id)
return memory.context
async def clear_memory(self, session_id: str, user_id: str):
"""清空会话记忆"""
cache_key = f"{user_id}:{session_id}" if user_id else session_id
if cache_key in self.memory_cache:
del self.memory_cache[cache_key]
if settings.CACHE_BACKEND_TYPE == "redis":
redis_key = f"agent_memory:{user_id}:{session_id}" if user_id else f"agent_memory:{session_id}"
await self.redis_helper.delete(redis_key, region="AI_AGENT")
logger.info(f"会话记忆已清空: session_id={session_id}, user_id={user_id}")
async def _save_memory(self, memory: ConversationMemory):
"""保存记忆到存储
Redis中的记忆会自动通过TTL机制过期无需手动清理
"""
# 更新内存缓存
cache_key = f"{memory.user_id}:{memory.session_id}" if memory.user_id else memory.session_id
self.memory_cache[cache_key] = memory
# 保存到Redis设置TTL自动过期
if settings.CACHE_BACKEND_TYPE == "redis":
try:
memory_dict = memory.model_dump()
redis_key = f"agent_memory:{memory.user_id}:{memory.session_id}" if memory.user_id else f"agent_memory:{memory.session_id}"
ttl = int(timedelta(days=settings.LLM_REDIS_MEMORY_RETENTION_DAYS).total_seconds())
await self.redis_helper.set(
redis_key,
memory_dict,
ttl=ttl,
region="AI_AGENT"
)
except Exception as e:
logger.warning(f"保存记忆到Redis失败: {e}")
async def _cleanup_expired_memories(self):
"""清理内存中过期记忆的后台任务
注意Redis中的记忆通过TTL机制自动过期这里只清理内存缓存
"""
while True:
try:
# 每小时清理一次
await asyncio.sleep(3600)
current_time = datetime.now()
expired_sessions = []
# 只检查内存缓存中的过期记忆
# Redis中的记忆会通过TTL自动过期无需手动处理
for cache_key, memory in self.memory_cache.items():
if (current_time - memory.updated_at).days > settings.LLM_MEMORY_RETENTION_DAYS:
expired_sessions.append(cache_key)
# 只清理内存缓存不删除Redis中的键Redis会自动过期
for cache_key in expired_sessions:
if cache_key in self.memory_cache:
del self.memory_cache[cache_key]
if expired_sessions:
logger.info(f"清理了{len(expired_sessions)}个过期内存会话记忆")
except asyncio.CancelledError:
break
except Exception as e:
logger.error(f"清理记忆时发生错误: {e}")

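A usage sketch for the memory manager; with `CACHE_BACKEND_TYPE` set to anything other than `redis` it runs entirely against the in-process cache. The session/user ids and message contents below are illustrative:

```python
import asyncio

from app.agent.memory import ConversationMemoryManager


async def demo():
    manager = ConversationMemoryManager()
    await manager.initialize()
    try:
        # Record one user/agent exchange for the session
        await manager.add_memory("s1", user_id="u1", role="user", content="查询订阅")
        await manager.add_memory("s1", user_id="u1", role="agent", content="共有 3 个订阅")
        # Read back recent messages (optionally filtered by role)
        recent = await manager.get_recent_messages("s1", user_id="u1", limit=10)
        print([m["role"] for m in recent])   # ['user', 'agent']
        # List session summaries for the user, newest first
        print(await manager.list_sessions(user_id="u1"))
        # Wipe the session, as the /clear_session flow does
        await manager.clear_memory("s1", user_id="u1")
    finally:
        await manager.close()


asyncio.run(demo())
```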

@@ -0,0 +1,70 @@
You are MoviePilot's AI assistant, specialized in helping users manage media resources including subscriptions, searching, downloading, and organization.
## Your Identity and Capabilities
You are an AI agent for the MoviePilot media management system with the following core capabilities:
### Media Management Capabilities
- **Search Media Resources**: Search for movies, TV shows, anime, and other media content based on user requirements
- **Add Subscriptions**: Create subscription rules for media content that users are interested in
- **Manage Downloads**: Search and add torrent resources to downloaders
- **Query Status**: Check subscription status, download progress, and media library status
### Intelligent Interaction Capabilities
- **Natural Language Understanding**: Understand user requests in natural language (Chinese/English)
- **Context Memory**: Remember conversation history and user preferences
- **Smart Recommendations**: Recommend related media content based on user preferences
- **Task Execution**: Automatically execute complex media management tasks
## Working Principles
1. **Always respond in Chinese**: All responses must be in Chinese
2. **Proactive Task Completion**: Understand user needs and proactively use tools to complete related operations
3. **Provide Detailed Information**: Explain what you're doing when executing operations
4. **Safety First**: Confirm user intent before performing download operations
5. **Continuous Learning**: Remember user preferences and habits to provide personalized service
## Common Operation Workflows
### Add Subscription Workflow
1. Understand the media content the user wants to subscribe to
2. Search for related media information
3. Create subscription rules
4. Confirm successful subscription
### Search and Download Workflow
1. Understand user requirements (movie names, TV show names, etc.)
2. Search for related media information
3. Search for related torrent resources by media info
4. Filter suitable resources
5. Add to downloader
### Query Status Workflow
1. Understand what information the user wants to know
2. Query related data
3. Organize and present results
## Tool Usage Guidelines
### Tool Usage Principles
- Use tools proactively to complete user requests
- Always explain what you're doing when using tools
- Provide detailed results and explanations
- Handle errors gracefully and suggest alternatives
- Confirm user intent before performing download operations
### Response Format
- Always respond in Chinese
- Use clear and friendly language
- Provide structured information when appropriate
- Include relevant details about media content (title, year, type, etc.)
- Explain the results of tool operations clearly
## Important Notes
- Always confirm user intent before performing download operations
- If search results are not ideal, proactively adjust search strategies
- Maintain a friendly and professional tone
- Seek solutions proactively when encountering problems
- Remember user preferences and provide personalized recommendations
- Handle errors gracefully and provide helpful suggestions


@@ -0,0 +1,118 @@
"""提示词管理器"""
from pathlib import Path
from typing import Dict
from app.log import logger
class PromptManager:
"""提示词管理器"""
def __init__(self, prompts_dir: str = None):
if prompts_dir is None:
self.prompts_dir = Path(__file__).parent
else:
self.prompts_dir = Path(prompts_dir)
self.prompts_cache: Dict[str, str] = {}
def load_prompt(self, prompt_name: str) -> str:
"""加载指定的提示词"""
if prompt_name in self.prompts_cache:
return self.prompts_cache[prompt_name]
prompt_file = self.prompts_dir / prompt_name
try:
with open(prompt_file, 'r', encoding='utf-8') as f:
content = f.read().strip()
# 缓存提示词
self.prompts_cache[prompt_name] = content
logger.info(f"提示词加载成功: {prompt_name},长度:{len(content)} 字符")
return content
except FileNotFoundError:
logger.error(f"提示词文件不存在: {prompt_file}")
raise
except Exception as e:
logger.error(f"加载提示词失败: {prompt_name}, 错误: {e}")
raise
def get_agent_prompt(self, channel: str = None) -> str:
"""
获取智能体提示词
:param channel: 消息渠道Telegram、微信、Slack等
:return: 提示词内容
"""
base_prompt = self.load_prompt("Agent Prompt.txt")
# 根据渠道添加特定的格式说明
if channel:
channel_format_info = self._get_channel_format_info(channel)
if channel_format_info:
base_prompt += f"\n\n## Current Message Channel Format Requirements\n\n{channel_format_info}"
return base_prompt
@staticmethod
def _get_channel_format_info(channel: str) -> str:
"""
获取渠道特定的格式说明
:param channel: 消息渠道
:return: 格式说明文本
"""
channel_lower = channel.lower() if channel else ""
if "telegram" in channel_lower:
return """Messages are being sent through the **Telegram** channel. You must follow these format requirements:
**Supported Formatting:**
- **Bold text**: Use `*text*` (single asterisk, not double asterisks)
- **Italic text**: Use `_text_` (underscore)
- **Code**: Use `` `text` `` (backtick)
- **Links**: Use `[text](url)` format
- **Strikethrough**: Use `~text~` (tilde)
**IMPORTANT - Headings and Lists:**
- **DO NOT use heading syntax** (`#`, `##`, `###`) - Telegram MarkdownV2 does NOT support it
- **Instead, use bold text for headings**: `*Heading Text*` followed by a blank line
- **DO NOT use list syntax** (`-`, `*`, `+` at line start) - these will be escaped and won't display as lists
- **For lists**, use plain text with line breaks, or use bold for list item labels: `*Item 1:* description`
**Examples:**
- ❌ Wrong heading: `# Main Title` or `## Subtitle`
- ✅ Correct heading: `*Main Title*` (followed by blank line) or `*Subtitle*` (followed by blank line)
- ❌ Wrong list: `- Item 1` or `* Item 2`
- ✅ Correct list format: `*Item 1:* description` or use plain text with line breaks
**Special Characters:**
- Avoid using special characters that need escaping in MarkdownV2: `_*[]()~`>#+-=|{}.!` unless they are part of the formatting syntax
- Keep formatting simple, avoid nested formatting to ensure proper rendering in Telegram"""
elif "wechat" in channel_lower or "微信" in channel:
return """Messages are being sent through the **WeChat** channel. Please follow these format requirements:
- WeChat does NOT support Markdown formatting. Use plain text format only.
- Do NOT use any Markdown syntax (such as `**bold**`, `*italic*`, `` `code` `` etc.)
- Use plain text descriptions. You can organize content using line breaks and punctuation
- Links can be provided directly as URLs, no Markdown link format needed
- Keep messages concise and clear, use natural Chinese expressions"""
elif "slack" in channel_lower:
return """Messages are being sent through the **Slack** channel. Please follow these format requirements:
- Slack supports Markdown formatting
- Use `*text*` for bold
- Use `_text_` for italic
- Use `` `text` `` for code
- Link format: `<url|text>` or `[text](url)`"""
# 其他渠道使用标准Markdown
return None
def clear_cache(self):
"""清空缓存"""
self.prompts_cache.clear()
logger.info("提示词缓存已清空")

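A short sketch of how the prompt is assembled per channel; it assumes the `Agent Prompt.txt` shown above sits next to `prompt.py`:

```python
from app.agent.prompt import PromptManager

pm = PromptManager()

# Base prompt only (no channel-specific section appended)
base = pm.get_agent_prompt()

# Telegram: the MarkdownV2 formatting rules above are appended to the base prompt
telegram_prompt = pm.get_agent_prompt(channel="telegram")
print(len(base), len(telegram_prompt))  # the Telegram variant is longer
```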

app/agent/tools/base.py Normal file

@@ -0,0 +1,94 @@
"""MoviePilot工具基类"""
from abc import ABCMeta, abstractmethod
from typing import Callable, Any, Optional
from langchain.tools import BaseTool
from pydantic import PrivateAttr
from app.agent import StreamingCallbackHandler
from app.chain import ChainBase
from app.schemas import Notification
class ToolChain(ChainBase):
pass
class MoviePilotTool(BaseTool, metaclass=ABCMeta):
"""MoviePilot专用工具基类"""
_session_id: str = PrivateAttr()
_user_id: str = PrivateAttr()
_channel: str = PrivateAttr(default=None)
_source: str = PrivateAttr(default=None)
_username: str = PrivateAttr(default=None)
_callback_handler: StreamingCallbackHandler = PrivateAttr(default=None)
def __init__(self, session_id: str, user_id: str, **kwargs):
super().__init__(**kwargs)
self._session_id = session_id
self._user_id = user_id
def _run(self, *args: Any, **kwargs: Any) -> Any:
pass
async def _arun(self, **kwargs) -> str:
"""异步运行工具"""
# 发送运行工具前的消息
agent_message = await self._callback_handler.get_message()
if agent_message:
await self.send_tool_message(agent_message, title="MoviePilot助手")
# 发送执行工具说明
# 优先使用工具自定义的提示消息,如果没有则使用 explanation
tool_message = self.get_tool_message(**kwargs)
if not tool_message:
explanation = kwargs.get("explanation")
if explanation:
tool_message = explanation
if tool_message:
formatted_message = f"⚙️ => {tool_message}"
await self.send_tool_message(formatted_message)
return await self.run(**kwargs)
def get_tool_message(self, **kwargs) -> Optional[str]:
"""
获取工具执行时的友好提示消息
子类可以重写此方法,根据实际参数生成个性化的提示消息。
如果返回 None 或空字符串,将回退使用 explanation 参数。
Args:
**kwargs: 工具的所有参数(包括 explanation
Returns:
str: 友好的提示消息,如果返回 None 或空字符串则使用 explanation
"""
return None
@abstractmethod
async def run(self, **kwargs) -> str:
raise NotImplementedError
def set_message_attr(self, channel: str, source: str, username: str):
"""设置消息属性"""
self._channel = channel
self._source = source
self._username = username
def set_callback_handler(self, callback_handler: StreamingCallbackHandler):
"""设置回调处理器"""
self._callback_handler = callback_handler
async def send_tool_message(self, message: str, title: str = ""):
"""发送工具消息"""
await ToolChain().async_post_message(
Notification(
channel=self._channel,
source=self._source,
userid=self._user_id,
username=self._username,
title=title,
text=message
)
)

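For plugin authors, a minimal, hypothetical tool that satisfies the contract above (the factory only accepts classes that inherit from `MoviePilotTool`); the tool name, input model, and behaviour here are invented purely for illustration:

```python
"""Hypothetical example tool, not part of the MoviePilot codebase"""
from typing import Optional, Type

from pydantic import BaseModel, Field

from app.agent.tools.base import MoviePilotTool


class EchoInput(BaseModel):
    explanation: str = Field(..., description="Why this tool is being used")
    text: str = Field(..., description="Text to echo back to the user")


class EchoTool(MoviePilotTool):
    name: str = "echo"
    description: str = "Echo the given text back to the user (illustrative only)."
    args_schema: Type[BaseModel] = EchoInput

    def get_tool_message(self, **kwargs) -> Optional[str]:
        # Friendly progress message shown before the tool runs
        return f"正在回显: {kwargs.get('text', '')}"

    async def run(self, text: str, **kwargs) -> str:
        return text
```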
app/agent/tools/factory.py Normal file

@@ -0,0 +1,126 @@
"""MoviePilot工具工厂"""
from typing import List, Callable
from app.agent.tools.impl.add_download import AddDownloadTool
from app.agent.tools.impl.add_subscribe import AddSubscribeTool
from app.agent.tools.impl.update_subscribe import UpdateSubscribeTool
from app.agent.tools.impl.get_recommendations import GetRecommendationsTool
from app.agent.tools.impl.query_downloaders import QueryDownloadersTool
from app.agent.tools.impl.query_downloads import QueryDownloadsTool
from app.agent.tools.impl.query_media_library import QueryMediaLibraryTool
from app.agent.tools.impl.query_sites import QuerySitesTool
from app.agent.tools.impl.update_site import UpdateSiteTool
from app.agent.tools.impl.query_site_userdata import QuerySiteUserdataTool
from app.agent.tools.impl.test_site import TestSiteTool
from app.agent.tools.impl.query_subscribes import QuerySubscribesTool
from app.agent.tools.impl.query_subscribe_shares import QuerySubscribeSharesTool
from app.agent.tools.impl.query_popular_subscribes import QueryPopularSubscribesTool
from app.agent.tools.impl.query_subscribe_history import QuerySubscribeHistoryTool
from app.agent.tools.impl.delete_subscribe import DeleteSubscribeTool
from app.agent.tools.impl.search_media import SearchMediaTool
from app.agent.tools.impl.recognize_media import RecognizeMediaTool
from app.agent.tools.impl.scrape_metadata import ScrapeMetadataTool
from app.agent.tools.impl.query_episode_schedule import QueryEpisodeScheduleTool
from app.agent.tools.impl.search_torrents import SearchTorrentsTool
from app.agent.tools.impl.send_message import SendMessageTool
from app.agent.tools.impl.query_schedulers import QuerySchedulersTool
from app.agent.tools.impl.run_scheduler import RunSchedulerTool
from app.agent.tools.impl.query_workflows import QueryWorkflowsTool
from app.agent.tools.impl.run_workflow import RunWorkflowTool
from app.agent.tools.impl.update_site_cookie import UpdateSiteCookieTool
from app.agent.tools.impl.delete_download import DeleteDownloadTool
from app.agent.tools.impl.query_directories import QueryDirectoriesTool
from app.agent.tools.impl.list_directory import ListDirectoryTool
from app.agent.tools.impl.query_transfer_history import QueryTransferHistoryTool
from app.agent.tools.impl.transfer_file import TransferFileTool
from app.core.plugin import PluginManager
from app.log import logger
from .base import MoviePilotTool
class MoviePilotToolFactory:
"""MoviePilot工具工厂"""
@staticmethod
def create_tools(session_id: str, user_id: str,
channel: str = None, source: str = None, username: str = None,
callback_handler: Callable = None) -> List[MoviePilotTool]:
"""创建MoviePilot工具列表"""
tools = []
tool_definitions = [
SearchMediaTool,
RecognizeMediaTool,
ScrapeMetadataTool,
QueryEpisodeScheduleTool,
AddSubscribeTool,
UpdateSubscribeTool,
SearchTorrentsTool,
AddDownloadTool,
QuerySubscribesTool,
QuerySubscribeSharesTool,
QueryPopularSubscribesTool,
QuerySubscribeHistoryTool,
DeleteSubscribeTool,
QueryDownloadsTool,
DeleteDownloadTool,
QueryDownloadersTool,
QuerySitesTool,
UpdateSiteTool,
QuerySiteUserdataTool,
TestSiteTool,
UpdateSiteCookieTool,
GetRecommendationsTool,
QueryMediaLibraryTool,
QueryDirectoriesTool,
ListDirectoryTool,
QueryTransferHistoryTool,
TransferFileTool,
SendMessageTool,
QuerySchedulersTool,
RunSchedulerTool,
QueryWorkflowsTool,
RunWorkflowTool
]
# 创建内置工具
for ToolClass in tool_definitions:
tool = ToolClass(
session_id=session_id,
user_id=user_id
)
tool.set_message_attr(channel=channel, source=source, username=username)
tool.set_callback_handler(callback_handler=callback_handler)
tools.append(tool)
# 加载插件提供的工具
plugin_tools_count = 0
plugin_tools_info = PluginManager().get_plugin_agent_tools()
for plugin_info in plugin_tools_info:
plugin_id = plugin_info.get("plugin_id")
plugin_name = plugin_info.get("plugin_name")
tool_classes = plugin_info.get("tools", [])
for ToolClass in tool_classes:
try:
# 验证工具类是否继承自 MoviePilotTool
if not issubclass(ToolClass, MoviePilotTool):
logger.warning(f"插件 {plugin_name}({plugin_id}) 提供的工具类 {ToolClass.__name__} 未继承自 MoviePilotTool已跳过")
continue
# 创建工具实例
tool = ToolClass(
session_id=session_id,
user_id=user_id
)
tool.set_message_attr(channel=channel, source=source, username=username)
tool.set_callback_handler(callback_handler=callback_handler)
tools.append(tool)
plugin_tools_count += 1
logger.debug(f"成功加载插件 {plugin_name}({plugin_id}) 的工具: {ToolClass.__name__}")
except Exception as e:
logger.error(f"加载插件 {plugin_name}({plugin_id}) 的工具 {ToolClass.__name__} 失败: {str(e)}")
builtin_tools_count = len(tool_definitions)
if plugin_tools_count > 0:
logger.info(f"成功创建 {len(tools)} 个MoviePilot工具内置工具: {builtin_tools_count} 个,插件工具: {plugin_tools_count} 个)")
else:
logger.info(f"成功创建 {len(tools)} 个MoviePilot工具")
return tools

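A sketch of building the tool set for a session inside a running MoviePilot instance (plugin tools are discovered via `PluginManager`); the ids below are illustrative, and `callback_handler` would normally be the session's `StreamingCallbackHandler`:

```python
from app.agent.tools.factory import MoviePilotToolFactory

# Built-in tools plus any plugin-provided MoviePilotTool subclasses
tools = MoviePilotToolFactory.create_tools(
    session_id="demo-session",  # illustrative id
    user_id="1",                # illustrative id
    callback_handler=None,      # normally the session's StreamingCallbackHandler
)
print(len(tools), [t.name for t in tools[:5]])
```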

@@ -0,0 +1,106 @@
"""添加下载工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool, ToolChain
from app.chain.download import DownloadChain
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.db.site_oper import SiteOper
from app.log import logger
from app.schemas import TorrentInfo
class AddDownloadInput(BaseModel):
"""添加下载工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_name: str = Field(..., description="Name of the torrent site/source (e.g., 'The Pirate Bay')")
torrent_title: str = Field(...,
description="The display name/title of the torrent (e.g., 'The.Matrix.1999.1080p.BluRay.x264')")
torrent_url: str = Field(..., description="Direct URL to the torrent file (.torrent) or magnet link")
torrent_description: Optional[str] = Field(None,
description="Brief description of the torrent content (optional)")
downloader: Optional[str] = Field(None,
description="Name of the downloader to use (optional, uses default if not specified)")
save_path: Optional[str] = Field(None,
description="Directory path where the downloaded files should be saved (optional, uses default path if not specified)")
labels: Optional[str] = Field(None,
description="Comma-separated list of labels/tags to assign to the download (optional, e.g., 'movie,hd,bluray')")
class AddDownloadTool(MoviePilotTool):
name: str = "add_download"
description: str = "Add torrent download task to the configured downloader (qBittorrent, Transmission, etc.). Downloads the torrent file and starts the download process with specified settings."
args_schema: Type[BaseModel] = AddDownloadInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据下载参数生成友好的提示消息"""
torrent_title = kwargs.get("torrent_title", "")
site_name = kwargs.get("site_name", "")
downloader = kwargs.get("downloader")
message = f"正在添加下载任务: {torrent_title}"
if site_name:
message += f" (来源: {site_name})"
if downloader:
message += f" [下载器: {downloader}]"
return message
async def run(self, site_name: str, torrent_title: str, torrent_url: str, torrent_description: Optional[str] = None,
downloader: Optional[str] = None, save_path: Optional[str] = None,
labels: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: site_name={site_name}, torrent_title={torrent_title}, torrent_url={torrent_url}, downloader={downloader}, save_path={save_path}, labels={labels}")
try:
if not torrent_title or not torrent_url:
return "错误:必须提供种子标题和下载链接"
# 使用DownloadChain添加下载
download_chain = DownloadChain()
# 根据站点名称查询站点cookie
if not site_name:
return "错误:必须提供站点名称,请从搜索资源结果信息中获取"
siteinfo = await SiteOper().async_get_by_name(site_name)
if not siteinfo:
return f"错误:未找到站点信息:{site_name}"
# 创建下载上下文
torrent_info = TorrentInfo(
title=torrent_title,
description=torrent_description,
enclosure=torrent_url,
site_name=site_name,
site_ua=siteinfo.ua,
site_cookie=siteinfo.cookie,
site_proxy=siteinfo.proxy,
site_order=siteinfo.pri,
site_downloader=siteinfo.downloader
)
meta_info = MetaInfo(title=torrent_title, subtitle=torrent_description)
media_info = await ToolChain().async_recognize_media(meta=meta_info)
if not media_info:
return "错误:无法识别媒体信息,无法添加下载任务"
context = Context(
torrent_info=torrent_info,
meta_info=meta_info,
media_info=media_info
)
did = download_chain.download_single(
context=context,
downloader=downloader,
save_path=save_path,
label=labels
)
if did:
return f"成功添加下载任务:{torrent_title}"
else:
return "添加下载任务失败"
except Exception as e:
logger.error(f"添加下载任务失败: {e}", exc_info=True)
return f"添加下载任务时发生错误: {str(e)}"


@@ -0,0 +1,121 @@
"""添加订阅工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.subscribe import SubscribeChain
from app.log import logger
from app.schemas.types import MediaType
class AddSubscribeInput(BaseModel):
"""添加订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: str = Field(..., description="The title of the media to subscribe to (e.g., 'The Matrix', 'Breaking Bad')")
year: str = Field(..., description="Release year of the media (required for accurate identification)")
media_type: str = Field(...,
description="Type of media content: '电影' for films, '电视剧' for television series or anime series")
season: Optional[int] = Field(None,
description="Season number for TV shows (optional, if not specified will subscribe to all seasons)")
tmdb_id: Optional[str] = Field(None,
description="TMDB database ID for precise media identification (optional but recommended for accuracy)")
start_episode: Optional[int] = Field(None,
description="Starting episode number for TV shows (optional, defaults to 1 if not specified)")
total_episode: Optional[int] = Field(None,
description="Total number of episodes for TV shows (optional, will be auto-detected from TMDB if not specified)")
quality: Optional[str] = Field(None,
description="Quality filter as regular expression (optional, e.g., 'BluRay|WEB-DL|HDTV')")
resolution: Optional[str] = Field(None,
description="Resolution filter as regular expression (optional, e.g., '1080p|720p|2160p')")
effect: Optional[str] = Field(None,
description="Effect filter as regular expression (optional, e.g., 'HDR|DV|SDR')")
class AddSubscribeTool(MoviePilotTool):
name: str = "add_subscribe"
description: str = "Add media subscription to create automated download rules for movies and TV shows. The system will automatically search and download new episodes or releases based on the subscription criteria. Supports advanced filtering options like quality, resolution, and effect filters using regular expressions."
args_schema: Type[BaseModel] = AddSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据订阅参数生成友好的提示消息"""
title = kwargs.get("title", "")
year = kwargs.get("year", "")
media_type = kwargs.get("media_type", "")
season = kwargs.get("season")
message = f"正在添加订阅: {title}"
if year:
message += f" ({year})"
if media_type:
message += f" [{media_type}]"
if season:
message += f"{season}"
return message
async def run(self, title: str, year: str, media_type: str,
season: Optional[int] = None, tmdb_id: Optional[str] = None,
start_episode: Optional[int] = None, total_episode: Optional[int] = None,
quality: Optional[str] = None, resolution: Optional[str] = None,
effect: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: title={title}, year={year}, media_type={media_type}, "
f"season={season}, tmdb_id={tmdb_id}, start_episode={start_episode}, "
f"total_episode={total_episode}, quality={quality}, resolution={resolution}, effect={effect}")
try:
subscribe_chain = SubscribeChain()
# 转换 tmdb_id 为整数
tmdbid_int = None
if tmdb_id:
try:
tmdbid_int = int(tmdb_id)
except (ValueError, TypeError):
logger.warning(f"无效的 tmdb_id: {tmdb_id},将忽略")
# 构建额外的订阅参数
subscribe_kwargs = {}
if start_episode is not None:
subscribe_kwargs['start_episode'] = start_episode
if total_episode is not None:
subscribe_kwargs['total_episode'] = total_episode
if quality:
subscribe_kwargs['quality'] = quality
if resolution:
subscribe_kwargs['resolution'] = resolution
if effect:
subscribe_kwargs['effect'] = effect
sid, message = await subscribe_chain.async_add(
mtype=MediaType(media_type),
title=title,
year=year,
tmdbid=tmdbid_int,
season=season,
username=self._user_id,
**subscribe_kwargs
)
if sid:
result_msg = f"成功添加订阅:{title} ({year})"
if subscribe_kwargs:
params = []
if start_episode is not None:
params.append(f"开始集数: {start_episode}")
if total_episode is not None:
params.append(f"总集数: {total_episode}")
if quality:
params.append(f"质量过滤: {quality}")
if resolution:
params.append(f"分辨率过滤: {resolution}")
if effect:
params.append(f"特效过滤: {effect}")
if params:
result_msg += f"\n配置参数: {', '.join(params)}"
return result_msg
else:
return f"添加订阅失败:{message}"
except Exception as e:
logger.error(f"添加订阅失败: {e}", exc_info=True)
return f"添加订阅时发生错误: {str(e)}"


@@ -0,0 +1,76 @@
"""删除下载任务工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.download import DownloadChain
from app.log import logger
class DeleteDownloadInput(BaseModel):
"""删除下载任务工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
task_identifier: str = Field(..., description="Task identifier: can be task hash (unique identifier) or task title/name")
downloader: Optional[str] = Field(None, description="Name of specific downloader (optional, if not provided will search all downloaders)")
delete_files: Optional[bool] = Field(False, description="Whether to delete downloaded files along with the task (default: False, only removes the task from downloader)")
class DeleteDownloadTool(MoviePilotTool):
name: str = "delete_download"
description: str = "Delete a download task from the downloader. Can delete by task hash (unique identifier) or task title/name. Optionally specify the downloader name and whether to delete downloaded files."
args_schema: Type[BaseModel] = DeleteDownloadInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据删除参数生成友好的提示消息"""
task_identifier = kwargs.get("task_identifier", "")
downloader = kwargs.get("downloader")
delete_files = kwargs.get("delete_files", False)
message = f"正在删除下载任务: {task_identifier}"
if downloader:
message += f" [下载器: {downloader}]"
if delete_files:
message += " (包含文件)"
return message
async def run(self, task_identifier: str, downloader: Optional[str] = None,
delete_files: Optional[bool] = False, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: task_identifier={task_identifier}, downloader={downloader}, delete_files={delete_files}")
try:
download_chain = DownloadChain()
# 如果task_identifier看起来像hash通常是40个字符的十六进制字符串
task_hash = None
if len(task_identifier) == 40 and all(c in '0123456789abcdefABCDEF' for c in task_identifier):
# 直接使用hash
task_hash = task_identifier
else:
# 通过标题查找任务
downloads = download_chain.downloading(name=downloader)
for dl in downloads:
# 检查标题或名称是否匹配
if (task_identifier.lower() in (dl.title or "").lower()) or \
(task_identifier.lower() in (dl.name or "").lower()):
task_hash = dl.hash
break
if not task_hash:
return f"未找到匹配的下载任务:{task_identifier},请使用 query_downloads 工具查询可用的下载任务"
# 删除下载任务
# remove_torrents 支持 delete_file 参数,可以控制是否删除文件
result = download_chain.remove_torrents(hashs=[task_hash], downloader=downloader, delete_file=delete_files)
if result:
files_info = "(包含文件)" if delete_files else "(不包含文件)"
return f"成功删除下载任务:{task_identifier} {files_info}"
else:
return f"删除下载任务失败:{task_identifier},请检查任务是否存在或下载器是否可用"
except Exception as e:
logger.error(f"删除下载任务失败: {e}", exc_info=True)
return f"删除下载任务时发生错误: {str(e)}"


@@ -0,0 +1,63 @@
"""删除订阅工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.db.subscribe_oper import SubscribeOper
from app.helper.subscribe import SubscribeHelper
from app.log import logger
from app.schemas.types import EventType
class DeleteSubscribeInput(BaseModel):
"""删除订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
subscribe_id: int = Field(..., description="The ID of the subscription to delete (can be obtained from query_subscribes tool)")
class DeleteSubscribeTool(MoviePilotTool):
name: str = "delete_subscribe"
description: str = "Delete a media subscription by its ID. This will remove the subscription and stop automatic downloads for that media."
args_schema: Type[BaseModel] = DeleteSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据删除参数生成友好的提示消息"""
subscribe_id = kwargs.get("subscribe_id")
return f"正在删除订阅 (ID: {subscribe_id})"
async def run(self, subscribe_id: int, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: subscribe_id={subscribe_id}")
try:
subscribe_oper = SubscribeOper()
# 获取订阅信息
subscribe = await subscribe_oper.async_get(subscribe_id)
if not subscribe:
return f"订阅 ID {subscribe_id} 不存在"
# 在删除之前获取订阅信息(用于事件)
subscribe_info = subscribe.to_dict()
# 删除订阅
subscribe_oper.delete(subscribe_id)
# 发送事件
await eventmanager.async_send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe_info
})
# 统计订阅
SubscribeHelper().sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
return f"成功删除订阅:{subscribe.name} ({subscribe.year})"
except Exception as e:
logger.error(f"删除订阅失败: {e}", exc_info=True)
return f"删除订阅时发生错误: {str(e)}"


@@ -0,0 +1,170 @@
"""获取推荐工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.recommend import RecommendChain
from app.log import logger
class GetRecommendationsInput(BaseModel):
"""获取推荐工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
source: Optional[str] = Field("tmdb_trending",
description="Recommendation source: "
"'tmdb_trending' for TMDB trending content, "
"'tmdb_movies' for TMDB popular movies, "
"'tmdb_tvs' for TMDB popular TV shows, "
"'douban_hot' for Douban popular content, "
"'douban_movie_hot' for Douban hot movies, "
"'douban_tv_hot' for Douban hot TV shows, "
"'douban_movie_showing' for Douban movies currently showing, "
"'douban_movies' for Douban latest movies, "
"'douban_tvs' for Douban latest TV shows, "
"'douban_movie_top250' for Douban movie TOP250, "
"'douban_tv_weekly_chinese' for Douban Chinese TV weekly chart, "
"'douban_tv_weekly_global' for Douban global TV weekly chart, "
"'douban_tv_animation' for Douban popular animation, "
"'bangumi_calendar' for Bangumi anime calendar")
media_type: Optional[str] = Field("all",
description="Type of media content: '电影' for films, '电视剧' for television series or anime series, 'all' for all types")
limit: Optional[int] = Field(20,
description="Maximum number of recommendations to return (default: 20, maximum: 100)")
class GetRecommendationsTool(MoviePilotTool):
name: str = "get_recommendations"
description: str = "Get trending and popular media recommendations from various sources. Returns curated lists of popular movies, TV shows, and anime based on different criteria like trending, ratings, or calendar schedules."
args_schema: Type[BaseModel] = GetRecommendationsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据推荐参数生成友好的提示消息"""
source = kwargs.get("source", "tmdb_trending")
media_type = kwargs.get("media_type", "all")
limit = kwargs.get("limit", 20)
source_map = {
"tmdb_trending": "TMDB流行趋势",
"tmdb_movies": "TMDB热门电影",
"tmdb_tvs": "TMDB热门电视剧",
"douban_hot": "豆瓣热门",
"douban_movie_hot": "豆瓣热门电影",
"douban_tv_hot": "豆瓣热门电视剧",
"douban_movie_showing": "豆瓣正在热映",
"douban_movies": "豆瓣最新电影",
"douban_tvs": "豆瓣最新电视剧",
"douban_movie_top250": "豆瓣电影TOP250",
"douban_tv_weekly_chinese": "豆瓣国产剧集榜",
"douban_tv_weekly_global": "豆瓣全球剧集榜",
"douban_tv_animation": "豆瓣热门动漫",
"bangumi_calendar": "番组计划"
}
source_desc = source_map.get(source, source)
message = f"正在获取推荐: {source_desc}"
if media_type != "all":
message += f" [{media_type}]"
message += f" (限制: {limit}条)"
return message
async def run(self, source: Optional[str] = "tmdb_trending",
media_type: Optional[str] = "all", limit: Optional[int] = 20, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: source={source}, media_type={media_type}, limit={limit}")
try:
recommend_chain = RecommendChain()
results = []
if source == "tmdb_trending":
# async_tmdb_trending 只接受 page 参数,返回固定数量的结果
# 如果需要限制数量,需要在返回后截取
results = await recommend_chain.async_tmdb_trending(page=1)
if limit and limit > 0:
results = results[:limit]
elif source == "tmdb_movies":
# async_tmdb_movies 接受 page 参数,返回固定数量的结果
results = await recommend_chain.async_tmdb_movies(page=1)
if limit and limit > 0:
results = results[:limit]
elif source == "tmdb_tvs":
# async_tmdb_tvs 接受 page 参数,返回固定数量的结果
results = await recommend_chain.async_tmdb_tvs(page=1)
if limit and limit > 0:
results = results[:limit]
elif source == "douban_hot":
if media_type == "movie":
results = await recommend_chain.async_douban_movie_hot(page=1, count=limit)
elif media_type == "tv":
results = await recommend_chain.async_douban_tv_hot(page=1, count=limit)
else: # all
results.extend(await recommend_chain.async_douban_movie_hot(page=1, count=limit))
results.extend(await recommend_chain.async_douban_tv_hot(page=1, count=limit))
elif source == "douban_movie_hot":
results = await recommend_chain.async_douban_movie_hot(page=1, count=limit)
elif source == "douban_tv_hot":
results = await recommend_chain.async_douban_tv_hot(page=1, count=limit)
elif source == "douban_movie_showing":
results = await recommend_chain.async_douban_movie_showing(page=1, count=limit)
elif source == "douban_movies":
results = await recommend_chain.async_douban_movies(page=1, count=limit)
elif source == "douban_tvs":
results = await recommend_chain.async_douban_tvs(page=1, count=limit)
elif source == "douban_movie_top250":
results = await recommend_chain.async_douban_movie_top250(page=1, count=limit)
elif source == "douban_tv_weekly_chinese":
results = await recommend_chain.async_douban_tv_weekly_chinese(page=1, count=limit)
elif source == "douban_tv_weekly_global":
results = await recommend_chain.async_douban_tv_weekly_global(page=1, count=limit)
elif source == "douban_tv_animation":
results = await recommend_chain.async_douban_tv_animation(page=1, count=limit)
elif source == "bangumi_calendar":
results = await recommend_chain.async_bangumi_calendar(page=1, count=limit)
else:
# 不支持的推荐来源
supported_sources = [
"tmdb_trending", "tmdb_movies", "tmdb_tvs",
"douban_hot", "douban_movie_hot", "douban_tv_hot",
"douban_movie_showing", "douban_movies", "douban_tvs",
"douban_movie_top250", "douban_tv_weekly_chinese",
"douban_tv_weekly_global", "douban_tv_animation",
"bangumi_calendar"
]
return f"不支持的推荐来源: {source}。支持的来源包括: {', '.join(supported_sources)}"
if results:
# 限制最多20条结果
total_count = len(results)
limited_results = results[:20]
# 精简字段,只保留关键信息
simplified_results = []
for r in limited_results:
# r 应该是字典格式to_dict的结果但为了安全起见进行检查
if not isinstance(r, dict):
logger.warning(f"推荐结果格式异常,跳过: {type(r)}")
continue
simplified = {
"title": r.get("title"),
"en_title": r.get("en_title"),
"year": r.get("year"),
"type": r.get("type"),
"season": r.get("season"),
"tmdb_id": r.get("tmdb_id"),
"imdb_id": r.get("imdb_id"),
"douban_id": r.get("douban_id"),
"vote_average": r.get("vote_average"),
"poster_path": r.get("poster_path"),
"detail_link": r.get("detail_link")
}
simplified_results.append(simplified)
result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 20:
return f"注意:推荐结果共找到 {total_count} 条,为节省上下文空间,仅显示前 20 条结果。\n\n{result_json}"
return result_json
return "未找到推荐内容。"
except Exception as e:
logger.error(f"获取推荐失败: {e}", exc_info=True)
return f"获取推荐时发生错误: {str(e)}"

View File

@@ -0,0 +1,130 @@
"""查询文件系统目录内容工具"""
import json
from datetime import datetime
from pathlib import Path
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.storage import StorageChain
from app.log import logger
from app.schemas.file import FileItem
from app.utils.string import StringUtils
class ListDirectoryInput(BaseModel):
"""查询文件系统目录内容工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
path: str = Field(..., description="Directory path to list contents (e.g., '/home/user/downloads' or 'C:/Downloads')")
storage: Optional[str] = Field("local", description="Storage type (default: 'local' for local file system, can be 'smb', 'alist', etc.)")
sort_by: Optional[str] = Field("name", description="Sort order: 'name' for alphabetical sorting, 'time' for modification time sorting (default: 'name')")
class ListDirectoryTool(MoviePilotTool):
name: str = "list_directory"
description: str = "List contents of a file system directory. Shows files and subdirectories with their names, types, sizes, and modification times. Returns up to 20 items and the total count if there are more items."
args_schema: Type[BaseModel] = ListDirectoryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据目录参数生成友好的提示消息"""
path = kwargs.get("path", "")
storage = kwargs.get("storage", "local")
message = f"正在查询目录: {path}"
if storage != "local":
message += f" [存储: {storage}]"
return message
async def run(self, path: str, storage: Optional[str] = "local",
sort_by: Optional[str] = "name", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: path={path}, storage={storage}, sort_by={sort_by}")
try:
# 规范化路径
if not path:
return "错误:路径不能为空"
# 确保路径格式正确
if storage == "local":
# 本地路径处理
if not path.startswith("/") and not (len(path) > 1 and path[1] == ":"):
# 相对路径,尝试转换为绝对路径
path = str(Path(path).resolve())
else:
# 远程存储路径,确保以/开头
if not path.startswith("/"):
path = "/" + path
# 创建FileItem
fileitem = FileItem(
storage=storage or "local",
path=path,
type="dir"
)
# 查询目录内容
storage_chain = StorageChain()
file_list = storage_chain.list_files(fileitem, recursion=False)
if file_list is None:
return f"无法访问目录:{path},请检查路径是否正确或存储是否可用"
if not file_list:
return f"目录 {path} 为空"
# 排序
if sort_by == "time":
file_list.sort(key=lambda x: x.modify_time or 0, reverse=True)
else:
# 默认按名称排序(目录优先,然后按名称)
file_list.sort(key=lambda x: (
0 if x.type == "dir" else 1,
StringUtils.natural_sort_key(x.name or "")
))
# 限制返回数量
total_count = len(file_list)
limited_list = file_list[:20]
# 转换为字典格式
simplified_items = []
for item in limited_list:
# 格式化文件大小
size_str = None
if item.size:
size_str = StringUtils.str_filesize(item.size)
# 格式化修改时间
modify_time_str = None
if item.modify_time:
try:
modify_time_str = datetime.fromtimestamp(item.modify_time).strftime("%Y-%m-%d %H:%M:%S")
except (ValueError, OSError):
modify_time_str = str(item.modify_time)
simplified = {
"name": item.name,
"type": item.type,
"path": item.path,
"size": size_str,
"modify_time": modify_time_str
}
# 如果是文件,添加扩展名
if item.type == "file" and item.extension:
simplified["extension"] = item.extension
simplified_items.append(simplified)
result_json = json.dumps(simplified_items, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 20:
return f"注意:目录中共有 {total_count} 个项目,为节省上下文空间,仅显示前 20 个项目。\n\n{result_json}"
else:
return result_json
except Exception as e:
logger.error(f"查询目录内容失败: {e}", exc_info=True)
return f"查询目录内容时发生错误: {str(e)}"

View File

@@ -0,0 +1,134 @@
"""查询系统目录设置工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.helper.directory import DirectoryHelper
from app.log import logger
class QueryDirectoriesInput(BaseModel):
"""查询系统目录设置工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
directory_type: Optional[str] = Field("all",
description="Filter directories by type: 'download' for download directories, 'library' for media library directories, 'all' for all directories")
storage_type: Optional[str] = Field("all",
description="Filter directories by storage type: 'local' for local storage, 'remote' for remote storage, 'all' for all storage types")
name: Optional[str] = Field(None,
description="Filter directories by name (partial match, optional)")
class QueryDirectoriesTool(MoviePilotTool):
name: str = "query_directories"
description: str = "Query system directory configuration and list all configured directories. Shows download directories, media library directories, storage settings, transfer modes, and other directory-related configurations."
args_schema: Type[BaseModel] = QueryDirectoriesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
directory_type = kwargs.get("directory_type", "all")
storage_type = kwargs.get("storage_type", "all")
name = kwargs.get("name")
parts = ["正在查询目录配置"]
if directory_type != "all":
type_map = {"download": "下载目录", "library": "媒体库目录"}
parts.append(f"类型: {type_map.get(directory_type, directory_type)}")
if storage_type != "all":
storage_map = {"local": "本地存储", "remote": "远程存储"}
parts.append(f"存储: {storage_map.get(storage_type, storage_type)}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, directory_type: Optional[str] = "all",
storage_type: Optional[str] = "all",
name: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: directory_type={directory_type}, storage_type={storage_type}, name={name}")
try:
directory_helper = DirectoryHelper()
# 根据目录类型获取目录列表
if directory_type == "download":
dirs = directory_helper.get_download_dirs()
elif directory_type == "library":
dirs = directory_helper.get_library_dirs()
else:
dirs = directory_helper.get_dirs()
# 按存储类型过滤
filtered_dirs = []
for d in dirs:
# 按存储类型过滤
if storage_type == "local":
# 对于下载目录,检查 storage;对于媒体库目录,检查 library_storage
if directory_type == "download" and d.storage != "local":
continue
elif directory_type == "library" and d.library_storage != "local":
continue
elif directory_type == "all":
# 检查是否有本地存储配置
if d.download_path and d.storage != "local":
continue
if d.library_path and d.library_storage != "local":
continue
elif storage_type == "remote":
# 对于下载目录,检查 storage;对于媒体库目录,检查 library_storage
if directory_type == "download" and d.storage == "local":
continue
elif directory_type == "library" and d.library_storage == "local":
continue
elif directory_type == "all":
# 检查是否有远程存储配置
if d.download_path and d.storage == "local":
continue
if d.library_path and d.library_storage == "local":
continue
# 按名称过滤(部分匹配)
if name and d.name and name.lower() not in d.name.lower():
continue
filtered_dirs.append(d)
if filtered_dirs:
# 转换为字典格式,只保留关键信息
simplified_dirs = []
for d in filtered_dirs:
simplified = {
"name": d.name,
"priority": d.priority,
"storage": d.storage,
"download_path": d.download_path,
"library_path": d.library_path,
"library_storage": d.library_storage,
"media_type": d.media_type,
"media_category": d.media_category,
"monitor_type": d.monitor_type,
"monitor_mode": d.monitor_mode,
"transfer_type": d.transfer_type,
"overwrite_mode": d.overwrite_mode,
"renaming": d.renaming,
"scraping": d.scraping,
"notify": d.notify,
"download_type_folder": d.download_type_folder,
"download_category_folder": d.download_category_folder,
"library_type_folder": d.library_type_folder,
"library_category_folder": d.library_category_folder
}
simplified_dirs.append(simplified)
result_json = json.dumps(simplified_dirs, ensure_ascii=False, indent=2)
return result_json
return "未找到相关目录配置"
except Exception as e:
logger.error(f"查询系统目录设置失败: {e}", exc_info=True)
return f"查询系统目录设置时发生错误: {str(e)}"

View File

@@ -0,0 +1,38 @@
"""查询下载器工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
class QueryDownloadersInput(BaseModel):
"""查询下载器工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
class QueryDownloadersTool(MoviePilotTool):
name: str = "query_downloaders"
description: str = "Query downloader configuration and list all available downloaders. Shows downloader status, connection details, and configuration settings."
args_schema: Type[BaseModel] = QueryDownloadersInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""生成友好的提示消息"""
return "正在查询下载器配置"
async def run(self, **kwargs) -> str:
logger.info(f"执行工具: {self.name}")
try:
system_config_oper = SystemConfigOper()
downloaders_config = system_config_oper.get(SystemConfigKey.Downloaders)
if downloaders_config:
return json.dumps(downloaders_config, ensure_ascii=False, indent=2)
return "未配置下载器。"
except Exception as e:
logger.error(f"查询下载器失败: {e}")
return f"查询下载器时发生错误: {str(e)}"

View File

@@ -0,0 +1,197 @@
"""查询下载工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.download import DownloadChain
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.log import logger
class QueryDownloadsInput(BaseModel):
"""查询下载工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
downloader: Optional[str] = Field(None,
description="Name of specific downloader to query (optional, if not provided queries all configured downloaders)")
status: Optional[str] = Field("all",
description="Filter downloads by status: 'downloading' for active downloads, 'completed' for finished downloads, 'paused' for paused downloads, 'all' for all downloads")
hash: Optional[str] = Field(None, description="Query specific download task by hash (optional, if provided will search for this specific task regardless of status)")
title: Optional[str] = Field(None, description="Query download tasks by title/name (optional, supports partial match, searches all tasks if provided)")
class QueryDownloadsTool(MoviePilotTool):
name: str = "query_downloads"
description: str = "Query download status and list download tasks. Can query all active downloads, or search for specific tasks by hash or title. Shows download progress, completion status, and task details from configured downloaders."
args_schema: Type[BaseModel] = QueryDownloadsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
downloader = kwargs.get("downloader")
status = kwargs.get("status", "all")
hash_value = kwargs.get("hash")
title = kwargs.get("title")
parts = ["正在查询下载任务"]
if downloader:
parts.append(f"下载器: {downloader}")
if status != "all":
status_map = {"downloading": "下载中", "completed": "已完成", "paused": "已暂停"}
parts.append(f"状态: {status_map.get(status, status)}")
if hash_value:
parts.append(f"Hash: {hash_value[:8]}...")
elif title:
parts.append(f"标题: {title}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, downloader: Optional[str] = None,
status: Optional[str] = "all",
hash: Optional[str] = None,
title: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: downloader={downloader}, status={status}, hash={hash}, title={title}")
try:
download_chain = DownloadChain()
# 如果提供了 hash,直接查询该 hash 的任务(不限制状态)
if hash:
torrents = download_chain.list_torrents(downloader=downloader, hashs=[hash])
if not torrents:
return f"未找到hash为 {hash} 的下载任务(该任务可能已完成、已删除或不存在)"
# 转换为DownloadingTorrent格式
downloads = []
for torrent in torrents:
# 获取下载历史信息
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
torrent.media = {
"tmdbid": history.tmdbid,
"type": history.type,
"title": history.title,
"season": history.seasons,
"episode": history.episodes,
"image": history.image,
}
torrent.userid = history.userid
torrent.username = history.username
downloads.append(torrent)
filtered_downloads = downloads
elif title:
# 如果提供了 title,查询所有任务并搜索匹配的标题
# 查询所有状态的任务
all_torrents = download_chain.list_torrents(downloader=downloader) or []
filtered_downloads = []
for torrent in all_torrents:
# 检查标题或名称是否匹配
if (title.lower() in (torrent.title or "").lower()) or \
(title.lower() in (torrent.name or "").lower()):
# 获取下载历史信息
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
torrent.media = {
"tmdbid": history.tmdbid,
"type": history.type,
"title": history.title,
"season": history.seasons,
"episode": history.episodes,
"image": history.image,
}
torrent.userid = history.userid
torrent.username = history.username
filtered_downloads.append(torrent)
if not filtered_downloads:
return f"未找到标题包含 '{title}' 的下载任务"
else:
# 根据status决定查询方式
if status == "downloading":
# 如果 status 为下载中,使用 downloading 方法
downloads = download_chain.downloading(name=downloader)
filtered_downloads = []
for dl in downloads:
if downloader and dl.downloader != downloader:
continue
filtered_downloads.append(dl)
else:
# 其他状态(completed、paused、all)使用 list_torrents 查询所有任务
# 查询所有状态的任务
all_torrents = download_chain.list_torrents(downloader=downloader) or []
filtered_downloads = []
for torrent in all_torrents:
if downloader and torrent.downloader != downloader:
continue
# 根据status过滤
if status == "completed":
# 已完成的任务(state 为 seeding 或 completed)
if torrent.state not in ["seeding", "completed"]:
continue
elif status == "paused":
# 已暂停的任务
if torrent.state != "paused":
continue
# status == "all" 时不过滤
# 获取下载历史信息
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
torrent.media = {
"tmdbid": history.tmdbid,
"type": history.type,
"title": history.title,
"season": history.seasons,
"episode": history.episodes,
"image": history.image,
}
torrent.userid = history.userid
torrent.username = history.username
filtered_downloads.append(torrent)
if filtered_downloads:
# 限制最多20条结果
total_count = len(filtered_downloads)
limited_downloads = filtered_downloads[:20]
# 精简字段,只保留关键信息
simplified_downloads = []
for d in limited_downloads:
simplified = {
"downloader": d.downloader,
"hash": d.hash,
"title": d.title,
"name": d.name,
"year": d.year,
"season_episode": d.season_episode,
"size": d.size,
"progress": d.progress,
"state": d.state,
"upspeed": d.upspeed,
"dlspeed": d.dlspeed,
"left_time": d.left_time
}
# 精简 media 字段
if d.media:
simplified["media"] = {
"tmdbid": d.media.get("tmdbid"),
"type": d.media.get("type"),
"title": d.media.get("title"),
"season": d.media.get("season"),
"episode": d.media.get("episode")
}
simplified_downloads.append(simplified)
result_json = json.dumps(simplified_downloads, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 20:
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 20 条结果。\n\n{result_json}"
# 如果查询的是特定 hash 或 title,添加明确的状态信息
if hash:
return f"找到hash为 {hash} 的下载任务:\n\n{result_json}"
elif title:
return f"找到 {total_count} 个标题包含 '{title}' 的下载任务:\n\n{result_json}"
return result_json
return "未找到相关下载任务"
except Exception as e:
logger.error(f"查询下载失败: {e}", exc_info=True)
return f"查询下载时发生错误: {str(e)}"

View File

@@ -0,0 +1,116 @@
"""查询剧集上映时间工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.chain.tmdb import TmdbChain
from app.log import logger
from app.schemas import MediaType
class QueryEpisodeScheduleInput(BaseModel):
"""查询剧集上映时间工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
tmdb_id: int = Field(..., description="TMDB ID of the TV series")
season: int = Field(..., description="Season number to query")
episode_group: Optional[str] = Field(None, description="Episode group ID (optional)")
class QueryEpisodeScheduleTool(MoviePilotTool):
name: str = "query_episode_schedule"
description: str = "Query TV series episode air dates and schedule. Returns detailed information for each episode including air date, episode number, title, overview, and other metadata. Filters out episodes without air dates."
args_schema: Type[BaseModel] = QueryEpisodeScheduleInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
tmdb_id = kwargs.get("tmdb_id")
season = kwargs.get("season")
episode_group = kwargs.get("episode_group")
message = f"正在查询剧集上映时间: TMDB ID {tmdb_id}{season}"
if episode_group:
message += f" (剧集组: {episode_group})"
return message
async def run(self, tmdb_id: int, season: int, episode_group: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: tmdb_id={tmdb_id}, season={season}, episode_group={episode_group}")
try:
# 获取媒体信息(用于获取标题和海报)
media_chain = MediaChain()
mediainfo = await media_chain.async_recognize_media(tmdbid=tmdb_id, mtype=MediaType.TV)
if not mediainfo:
return f"未找到 TMDB ID {tmdb_id} 的媒体信息"
# 获取集列表
tmdb_chain = TmdbChain()
episodes = await tmdb_chain.async_tmdb_episodes(
tmdbid=tmdb_id,
season=season,
episode_group=episode_group
)
if not episodes:
return json.dumps({
"success": False,
"message": f"未找到 TMDB ID {tmdb_id}{season}季的集信息"
}, ensure_ascii=False)
# 过滤掉没有上映日期的集,并构建每集的详细信息
episode_list = []
for episode in episodes:
air_date = episode.air_date
# 过滤掉没有上映日期的数据
if not air_date:
continue
episode_info = {
"episode_number": episode.episode_number,
"name": episode.name,
"air_date": air_date,
"runtime": episode.runtime,
"vote_average": episode.vote_average,
"still_path": episode.still_path,
"episode_type": episode.episode_type,
"season_number": episode.season_number
}
episode_list.append(episode_info)
if not episode_list:
return json.dumps({
"success": False,
"message": f"未找到 TMDB ID {tmdb_id}{season}季的播出时间信息(所有集都没有播出日期)"
}, ensure_ascii=False)
# 按播出日期排序
episode_list.sort(key=lambda x: (x["air_date"] or "", x["episode_number"] or 0))
result = {
"success": True,
"tmdb_id": tmdb_id,
"season": season,
"episode_group": episode_group,
"series_title": mediainfo.title if mediainfo else None,
"series_poster": mediainfo.poster_path if mediainfo else None,
"total_episodes": len(episodes),
"episodes_with_air_date": len(episode_list),
"episodes": episode_list
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"查询剧集上映时间失败: {str(e)}"
logger.error(f"查询剧集上映时间失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"tmdb_id": tmdb_id,
"season": season
}, ensure_ascii=False)
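
Because the tool returns a JSON document whose episodes list is sorted by air_date, a caller can pick out the next upcoming episode with a few lines. A sketch, using a placeholder TMDB ID:

import asyncio
import json
from datetime import date

raw = asyncio.run(QueryEpisodeScheduleTool().run(tmdb_id=1396, season=1))
try:
    data = json.loads(raw)
except json.JSONDecodeError:
    data = {"success": False, "message": raw}  # the "未找到 ... 的媒体信息" path returns plain text
if data.get("success"):
    today = date.today().isoformat()
    upcoming = [e for e in data["episodes"] if e["air_date"] >= today]
    if upcoming:
        nxt = upcoming[0]
        print(f"下一集: 第{nxt['episode_number']}集 {nxt['name']} ({nxt['air_date']})")
else:
    print(data.get("message"))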

View File

@@ -0,0 +1,58 @@
"""查询媒体库工具"""
import json
from typing import Optional, List, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.mediaserver_oper import MediaServerOper
from app.log import logger
from app.schemas import MediaServerItem
class QueryMediaLibraryInput(BaseModel):
"""查询媒体库工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
media_type: Optional[str] = Field("all",
description="Type of media content: '电影' for films, '电视剧' for television series or anime series, 'all' for all types")
title: Optional[str] = Field(None,
description="Specific media title to check if it exists in the media library (optional, if provided checks for that specific media)")
year: Optional[str] = Field(None,
description="Release year of the media (optional, helps narrow down search results)")
class QueryMediaLibraryTool(MoviePilotTool):
name: str = "query_media_library"
description: str = "Check if a specific media resource already exists in the media library (Plex, Emby, Jellyfin). Use this tool to verify whether a movie or TV series has been successfully processed and added to the media server before performing operations like downloading or subscribing."
args_schema: Type[BaseModel] = QueryMediaLibraryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
media_type = kwargs.get("media_type", "all")
title = kwargs.get("title")
year = kwargs.get("year")
parts = ["正在查询媒体库"]
if title:
parts.append(f"标题: {title}")
if year:
parts.append(f"年份: {year}")
if media_type != "all":
parts.append(f"类型: {media_type}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, media_type: Optional[str] = "all",
title: Optional[str] = None, year: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: media_type={media_type}, title={title}")
try:
media_server_oper = MediaServerOper()
filtered_medias: List[MediaServerItem] = await media_server_oper.async_exists(title=title, year=year, mtype=media_type)
if filtered_medias:
return json.dumps([m.to_dict() for m in filtered_medias], ensure_ascii=False)
return "媒体库中未找到相关媒体"
except Exception as e:
logger.error(f"查询媒体库失败: {e}", exc_info=True)
return f"查询媒体库时发生错误: {str(e)}"

View File

@@ -0,0 +1,152 @@
"""查询热门订阅工具"""
import json
from typing import Optional, Type
import cn2an
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.context import MediaInfo
from app.helper.subscribe import SubscribeHelper
from app.log import logger
from app.schemas.types import MediaType
class QueryPopularSubscribesInput(BaseModel):
"""查询热门订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
stype: str = Field(..., description="Media type: '电影' for films, '电视剧' for television series")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1)")
count: Optional[int] = Field(30, description="Number of items per page (default: 30)")
min_sub: Optional[int] = Field(None, description="Minimum number of subscribers filter (optional, e.g., 5)")
genre_id: Optional[int] = Field(None, description="Filter by genre ID (optional)")
min_rating: Optional[float] = Field(None, description="Minimum rating filter (optional, e.g., 7.5)")
max_rating: Optional[float] = Field(None, description="Maximum rating filter (optional, e.g., 10.0)")
sort_type: Optional[str] = Field(None, description="Sort type (optional, e.g., 'count', 'rating')")
class QueryPopularSubscribesTool(MoviePilotTool):
name: str = "query_popular_subscribes"
description: str = "Query popular subscriptions based on user shared data. Shows media with the most subscribers, supports filtering by genre, rating, minimum subscribers, and pagination."
args_schema: Type[BaseModel] = QueryPopularSubscribesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
stype = kwargs.get("stype", "")
page = kwargs.get("page", 1)
min_sub = kwargs.get("min_sub")
min_rating = kwargs.get("min_rating")
max_rating = kwargs.get("max_rating")
parts = [f"正在查询热门订阅 [{stype}]"]
if min_sub:
parts.append(f"最少订阅: {min_sub}")
if min_rating:
parts.append(f"最低评分: {min_rating}")
if max_rating:
parts.append(f"最高评分: {max_rating}")
if page > 1:
parts.append(f"{page}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, stype: str,
page: Optional[int] = 1,
count: Optional[int] = 30,
min_sub: Optional[int] = None,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: stype={stype}, page={page}, count={count}, min_sub={min_sub}, "
f"genre_id={genre_id}, min_rating={min_rating}, max_rating={max_rating}, sort_type={sort_type}")
try:
if page is None or page < 1:
page = 1
if count is None or count < 1:
count = 30
subscribe_helper = SubscribeHelper()
subscribes = await subscribe_helper.async_get_statistic(
stype=stype,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
if not subscribes:
return "未找到热门订阅数据(可能订阅统计功能未启用)"
# 转换为MediaInfo格式并过滤
ret_medias = []
for sub in subscribes:
# 订阅人数
subscriber_count = sub.get("count", 0)
# 如果设置了最小订阅人数,进行过滤
if min_sub and subscriber_count < min_sub:
continue
media = MediaInfo()
media.type = MediaType(sub.get("type"))
media.tmdb_id = sub.get("tmdbid")
# 处理标题
title = sub.get("name")
season = sub.get("season")
if season and int(season) > 1 and media.tmdb_id:
# 阿拉伯数字转中文数字
season_str = cn2an.an2cn(season, "low")
title = f"{title} 第{season_str}季"
media.title = title
media.year = sub.get("year")
media.douban_id = sub.get("doubanid")
media.bangumi_id = sub.get("bangumiid")
media.tvdb_id = sub.get("tvdbid")
media.imdb_id = sub.get("imdbid")
media.season = sub.get("season")
media.vote_average = sub.get("vote")
media.poster_path = sub.get("poster")
media.backdrop_path = sub.get("backdrop")
media.popularity = subscriber_count
ret_medias.append(media)
if not ret_medias:
return "未找到符合条件的热门订阅"
# 转换为字典格式,只保留关键信息
simplified_medias = []
for media in ret_medias:
media_dict = media.to_dict()
simplified = {
"type": media_dict.get("type"),
"title": media_dict.get("title"),
"year": media_dict.get("year"),
"tmdb_id": media_dict.get("tmdb_id"),
"douban_id": media_dict.get("douban_id"),
"bangumi_id": media_dict.get("bangumi_id"),
"tvdb_id": media_dict.get("tvdb_id"),
"imdb_id": media_dict.get("imdb_id"),
"season": media_dict.get("season"),
"vote_average": media_dict.get("vote_average"),
"poster_path": media_dict.get("poster_path"),
"backdrop_path": media_dict.get("backdrop_path"),
"popularity": media_dict.get("popularity"), # 订阅人数
"subscriber_count": media_dict.get("popularity") # 明确标注为订阅人数
}
simplified_medias.append(simplified)
result_json = json.dumps(simplified_medias, ensure_ascii=False, indent=2)
pagination_info = f"{page} 页,每页 {count} 条,共 {len(simplified_medias)} 条结果"
return f"{pagination_info}\n\n{result_json}"
except Exception as e:
logger.error(f"查询热门订阅失败: {e}", exc_info=True)
return f"查询热门订阅时发生错误: {str(e)}"

View File

@@ -0,0 +1,55 @@
"""查询定时服务工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
from app.scheduler import Scheduler
class QuerySchedulersInput(BaseModel):
"""查询定时服务工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
class QuerySchedulersTool(MoviePilotTool):
name: str = "query_schedulers"
description: str = "Query scheduled tasks and list all available scheduler jobs. Shows job status, next run time, and provider information."
args_schema: Type[BaseModel] = QuerySchedulersInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""生成友好的提示消息"""
return "正在查询定时服务"
async def run(self, **kwargs) -> str:
logger.info(f"执行工具: {self.name}")
try:
scheduler = Scheduler()
schedulers = scheduler.list()
if schedulers:
# 转换为字典列表以便JSON序列化
schedulers_list = []
for s in schedulers:
schedulers_list.append({
"id": s.id,
"name": s.name,
"provider": s.provider,
"status": s.status,
"next_run": s.next_run
})
result_json = json.dumps(schedulers_list, ensure_ascii=False, indent=2)
# 限制最多30条结果
total_count = len(schedulers_list)
if total_count > 30:
limited_schedulers = schedulers_list[:30]
limited_json = json.dumps(limited_schedulers, ensure_ascii=False, indent=2)
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{limited_json}"
return result_json
return "未找到定时服务"
except Exception as e:
logger.error(f"查询定时服务失败: {e}", exc_info=True)
return f"查询定时服务时发生错误: {str(e)}"

View File

@@ -0,0 +1,136 @@
"""查询站点用户数据工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.models.site import Site
from app.db.models.siteuserdata import SiteUserData
from app.log import logger
class QuerySiteUserdataInput(BaseModel):
"""查询站点用户数据工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_id: int = Field(..., description="The ID of the site to query user data for")
workdate: Optional[str] = Field(None, description="Work date to query (optional, format: 'YYYY-MM-DD', if not specified returns latest data)")
class QuerySiteUserdataTool(MoviePilotTool):
name: str = "query_site_userdata"
description: str = "Query user data for a specific site including username, user level, upload/download statistics, seeding information, bonus points, and other account details. Supports querying data for a specific date or latest data."
args_schema: Type[BaseModel] = QuerySiteUserdataInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
site_id = kwargs.get("site_id")
workdate = kwargs.get("workdate")
message = f"正在查询站点 #{site_id} 的用户数据"
if workdate:
message += f" (日期: {workdate})"
else:
message += " (最新数据)"
return message
async def run(self, site_id: int, workdate: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_id={site_id}, workdate={workdate}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 获取站点
site = await Site.async_get(db, site_id)
if not site:
return json.dumps({
"success": False,
"message": f"站点不存在: {site_id}"
}, ensure_ascii=False)
# 获取站点用户数据
user_data_list = await SiteUserData.async_get_by_domain(
db,
domain=site.domain,
workdate=workdate
)
if not user_data_list:
return json.dumps({
"success": False,
"message": f"站点 {site.name} ({site.domain}) 暂无用户数据",
"site_id": site_id,
"site_name": site.name,
"site_domain": site.domain,
"workdate": workdate
}, ensure_ascii=False)
# 格式化用户数据
result = {
"success": True,
"site_id": site_id,
"site_name": site.name,
"site_domain": site.domain,
"workdate": workdate,
"data_count": len(user_data_list),
"user_data": []
}
for user_data in user_data_list:
# 格式化上传/下载量(转换为可读格式)
upload_gb = user_data.upload / (1024 ** 3) if user_data.upload else 0
download_gb = user_data.download / (1024 ** 3) if user_data.download else 0
seeding_size_gb = user_data.seeding_size / (1024 ** 3) if user_data.seeding_size else 0
leeching_size_gb = user_data.leeching_size / (1024 ** 3) if user_data.leeching_size else 0
user_data_dict = {
"domain": user_data.domain,
"name": user_data.name,
"username": user_data.username,
"userid": user_data.userid,
"user_level": user_data.user_level,
"join_at": user_data.join_at,
"bonus": user_data.bonus,
"upload": user_data.upload,
"upload_gb": round(upload_gb, 2),
"download": user_data.download,
"download_gb": round(download_gb, 2),
"ratio": round(user_data.ratio, 2) if user_data.ratio else 0,
"seeding": int(user_data.seeding) if user_data.seeding else 0,
"leeching": int(user_data.leeching) if user_data.leeching else 0,
"seeding_size": user_data.seeding_size,
"seeding_size_gb": round(seeding_size_gb, 2),
"leeching_size": user_data.leeching_size,
"leeching_size_gb": round(leeching_size_gb, 2),
"seeding_info": user_data.seeding_info if user_data.seeding_info else [],
"message_unread": user_data.message_unread,
"message_unread_contents": user_data.message_unread_contents if user_data.message_unread_contents else [],
"err_msg": user_data.err_msg,
"updated_day": user_data.updated_day,
"updated_time": user_data.updated_time
}
result["user_data"].append(user_data_dict)
# 如果有多条数据,只返回最新的(按更新时间排序)
if len(result["user_data"]) > 1:
result["user_data"].sort(
key=lambda x: (x.get("updated_day", ""), x.get("updated_time", "")),
reverse=True
)
result["message"] = f"找到 {len(result['user_data'])} 条数据,显示最新的一条"
result["user_data"] = [result["user_data"][0]]
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"查询站点用户数据失败: {str(e)}"
logger.error(f"查询站点用户数据失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"site_id": site_id
}, ensure_ascii=False)
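
The *_gb fields above are plain binary-prefix conversions (1 GiB = 1024**3 bytes), rounded to two decimals. A quick standalone check of that arithmetic:

upload_bytes = 1_099_511_627_776              # 1 TiB of reported upload
print(round(upload_bytes / (1024 ** 3), 2))   # 1024.0 -> the value shown as upload_gb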

View File

@@ -0,0 +1,82 @@
"""查询站点工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.site_oper import SiteOper
from app.log import logger
class QuerySitesInput(BaseModel):
"""查询站点工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
status: Optional[str] = Field("all",
description="Filter sites by status: 'active' for enabled sites, 'inactive' for disabled sites, 'all' for all sites")
name: Optional[str] = Field(None,
description="Filter sites by name (partial match, optional)")
class QuerySitesTool(MoviePilotTool):
name: str = "query_sites"
description: str = "Query site status and list all configured sites. Shows site name, domain, status, priority, and basic configuration."
args_schema: Type[BaseModel] = QuerySitesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
status = kwargs.get("status", "all")
name = kwargs.get("name")
parts = ["正在查询站点"]
if status != "all":
status_map = {"active": "已启用", "inactive": "已禁用"}
parts.append(f"状态: {status_map.get(status, status)}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, status: Optional[str] = "all", name: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: status={status}, name={name}")
try:
site_oper = SiteOper()
# 获取所有站点(按优先级排序)
sites = await site_oper.async_list()
filtered_sites = []
for site in sites:
# 按状态过滤
if status == "active" and not site.is_active:
continue
if status == "inactive" and site.is_active:
continue
# 按名称过滤(部分匹配)
if name and name.lower() not in (site.name or "").lower():
continue
filtered_sites.append(site)
if filtered_sites:
# 精简字段,只保留关键信息
simplified_sites = []
for s in filtered_sites:
simplified = {
"id": s.id,
"name": s.name,
"domain": s.domain,
"url": s.url,
"pri": s.pri,
"is_active": s.is_active,
"downloader": s.downloader,
"proxy": s.proxy,
"timeout": s.timeout
}
simplified_sites.append(simplified)
result_json = json.dumps(simplified_sites, ensure_ascii=False, indent=2)
return result_json
return "未找到相关站点"
except Exception as e:
logger.error(f"查询站点失败: {e}", exc_info=True)
return f"查询站点时发生错误: {str(e)}"

View File

@@ -0,0 +1,113 @@
"""查询订阅历史工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.models.subscribehistory import SubscribeHistory
from app.log import logger
class QuerySubscribeHistoryInput(BaseModel):
"""查询订阅历史工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
media_type: Optional[str] = Field("all", description="Filter by media type: '电影' for films, '电视剧' for television series, 'all' for all types (default: 'all')")
name: Optional[str] = Field(None, description="Filter by media name (partial match, optional)")
class QuerySubscribeHistoryTool(MoviePilotTool):
name: str = "query_subscribe_history"
description: str = "Query subscription history records. Shows completed subscriptions with their details including name, type, rating, completion date, and other subscription information. Supports filtering by media type and name. Returns up to 30 records."
args_schema: Type[BaseModel] = QuerySubscribeHistoryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
media_type = kwargs.get("media_type", "all")
name = kwargs.get("name")
parts = ["正在查询订阅历史"]
if media_type != "all":
parts.append(f"类型: {media_type}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, media_type: Optional[str] = "all",
name: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: media_type={media_type}, name={name}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 根据类型查询
if media_type == "all":
# 查询所有类型,需要分别查询电影和电视剧
movie_history = await SubscribeHistory.async_list_by_type(db, mtype="电影", page=1, count=100)
tv_history = await SubscribeHistory.async_list_by_type(db, mtype="电视剧", page=1, count=100)
all_history = list(movie_history) + list(tv_history)
# 按日期排序
all_history.sort(key=lambda x: x.date or "", reverse=True)
else:
# 查询指定类型
all_history = await SubscribeHistory.async_list_by_type(db, mtype=media_type, page=1, count=100)
# 按名称过滤
filtered_history = []
if name:
name_lower = name.lower()
for record in all_history:
if record.name and name_lower in record.name.lower():
filtered_history.append(record)
else:
filtered_history = all_history
if not filtered_history:
return "未找到相关订阅历史记录"
# 限制最多30条
total_count = len(filtered_history)
limited_history = filtered_history[:30]
# 转换为字典格式,只保留关键信息
simplified_records = []
for record in limited_history:
simplified = {
"id": record.id,
"name": record.name,
"year": record.year,
"type": record.type,
"season": record.season,
"tmdbid": record.tmdbid,
"doubanid": record.doubanid,
"bangumiid": record.bangumiid,
"poster": record.poster,
"vote": record.vote,
"total_episode": record.total_episode,
"date": record.date,
"username": record.username
}
# 添加过滤规则信息(如果有)
if record.filter:
simplified["filter"] = record.filter
if record.quality:
simplified["quality"] = record.quality
if record.resolution:
simplified["resolution"] = record.resolution
simplified_records.append(simplified)
result_json = json.dumps(simplified_records, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 30:
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
return result_json
except Exception as e:
logger.error(f"查询订阅历史失败: {e}", exc_info=True)
return f"查询订阅历史时发生错误: {str(e)}"

View File

@@ -0,0 +1,113 @@
"""查询订阅分享工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.helper.subscribe import SubscribeHelper
from app.log import logger
class QuerySubscribeSharesInput(BaseModel):
"""查询订阅分享工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
name: Optional[str] = Field(None, description="Filter shares by media name (partial match, optional)")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1)")
count: Optional[int] = Field(30, description="Number of items per page (default: 30)")
genre_id: Optional[int] = Field(None, description="Filter by genre ID (optional)")
min_rating: Optional[float] = Field(None, description="Minimum rating filter (optional, e.g., 7.5)")
max_rating: Optional[float] = Field(None, description="Maximum rating filter (optional, e.g., 10.0)")
sort_type: Optional[str] = Field(None, description="Sort type (optional, e.g., 'count', 'rating')")
class QuerySubscribeSharesTool(MoviePilotTool):
name: str = "query_subscribe_shares"
description: str = "Query shared subscriptions from other users. Shows popular subscriptions shared by the community with filtering and pagination support."
args_schema: Type[BaseModel] = QuerySubscribeSharesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
name = kwargs.get("name")
page = kwargs.get("page", 1)
min_rating = kwargs.get("min_rating")
max_rating = kwargs.get("max_rating")
parts = ["正在查询订阅分享"]
if name:
parts.append(f"名称: {name}")
if min_rating:
parts.append(f"最低评分: {min_rating}")
if max_rating:
parts.append(f"最高评分: {max_rating}")
if page > 1:
parts.append(f"{page}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: name={name}, page={page}, count={count}, genre_id={genre_id}, "
f"min_rating={min_rating}, max_rating={max_rating}, sort_type={sort_type}")
try:
if page is None or page < 1:
page = 1
if count is None or count < 1:
count = 30
subscribe_helper = SubscribeHelper()
shares = await subscribe_helper.async_get_shares(
name=name,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
if not shares:
return "未找到订阅分享数据(可能订阅分享功能未启用)"
# 简化字段,只保留关键信息
simplified_shares = []
for share in shares:
simplified = {
"id": share.get("id"),
"name": share.get("name"),
"year": share.get("year"),
"type": share.get("type"),
"season": share.get("season"),
"tmdbid": share.get("tmdbid"),
"doubanid": share.get("doubanid"),
"bangumiid": share.get("bangumiid"),
"poster": share.get("poster"),
"vote": share.get("vote"),
"share_title": share.get("share_title"),
"share_comment": share.get("share_comment"),
"share_user": share.get("share_user"),
"fork_count": share.get("fork_count", 0)
}
# 截断过长的描述
if simplified.get("description") and len(simplified["description"]) > 200:
simplified["description"] = simplified["description"][:200] + "..."
simplified_shares.append(simplified)
result_json = json.dumps(simplified_shares, ensure_ascii=False, indent=2)
pagination_info = f"{page} 页,每页 {count} 条,共 {len(simplified_shares)} 条结果"
return f"{pagination_info}\n\n{result_json}"
except Exception as e:
logger.error(f"查询订阅分享失败: {e}", exc_info=True)
return f"查询订阅分享时发生错误: {str(e)}"

View File

@@ -0,0 +1,90 @@
"""查询订阅工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db.subscribe_oper import SubscribeOper
from app.log import logger
class QuerySubscribesInput(BaseModel):
"""查询订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
status: Optional[str] = Field("all",
description="Filter subscriptions by status: 'R' for enabled subscriptions, 'P' for disabled ones, 'all' for all subscriptions")
media_type: Optional[str] = Field("all",
description="Filter by media type: '电影' for films, '电视剧' for television series, 'all' for all types")
class QuerySubscribesTool(MoviePilotTool):
name: str = "query_subscribes"
description: str = "Query subscription status and list all user subscriptions. Shows active subscriptions, their download status, and configuration details."
args_schema: Type[BaseModel] = QuerySubscribesInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
status = kwargs.get("status", "all")
media_type = kwargs.get("media_type", "all")
parts = ["正在查询订阅"]
# 根据状态过滤条件生成提示
if status != "all":
status_map = {"R": "已启用", "P": "已禁用"}
parts.append(f"状态: {status_map.get(status, status)}")
# 根据媒体类型过滤条件生成提示
if media_type != "all":
parts.append(f"类型: {media_type}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, status: Optional[str] = "all", media_type: Optional[str] = "all", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: status={status}, media_type={media_type}")
try:
subscribe_oper = SubscribeOper()
subscribes = await subscribe_oper.async_list()
filtered_subscribes = []
for sub in subscribes:
if status != "all" and sub.state != status:
continue
if media_type != "all" and sub.type != media_type:
continue
filtered_subscribes.append(sub)
if filtered_subscribes:
# 限制最多50条结果
total_count = len(filtered_subscribes)
limited_subscribes = filtered_subscribes[:50]
# 精简字段,只保留关键信息
simplified_subscribes = []
for s in limited_subscribes:
simplified = {
"id": s.id,
"name": s.name,
"year": s.year,
"type": s.type,
"season": s.season,
"tmdbid": s.tmdbid,
"doubanid": s.doubanid,
"bangumiid": s.bangumiid,
"poster": s.poster,
"vote": s.vote,
"state": s.state,
"total_episode": s.total_episode,
"lack_episode": s.lack_episode,
"last_update": s.last_update,
"username": s.username
}
simplified_subscribes.append(simplified)
result_json = json.dumps(simplified_subscribes, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 50:
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 50 条结果。\n\n{result_json}"
return result_json
return "未找到相关订阅"
except Exception as e:
logger.error(f"查询订阅失败: {e}", exc_info=True)
return f"查询订阅时发生错误: {str(e)}"

View File

@@ -0,0 +1,133 @@
"""查询整理历史记录工具"""
import json
from typing import Optional, Type
import jieba
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.models.transferhistory import TransferHistory
from app.log import logger
class QueryTransferHistoryInput(BaseModel):
"""查询整理历史记录工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: Optional[str] = Field(None, description="Search by title (optional, supports partial match)")
status: Optional[str] = Field("all",
description="Filter by status: 'success' for successful transfers, 'failed' for failed transfers, 'all' for all records (default: 'all')")
page: Optional[int] = Field(1, description="Page number for pagination (default: 1, each page contains 30 records)")
class QueryTransferHistoryTool(MoviePilotTool):
name: str = "query_transfer_history"
description: str = "Query file transfer history records. Shows transfer status, source and destination paths, media information, and transfer details. Supports filtering by title and status."
args_schema: Type[BaseModel] = QueryTransferHistoryInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
title = kwargs.get("title")
status = kwargs.get("status", "all")
page = kwargs.get("page", 1)
parts = ["正在查询整理历史"]
if title:
parts.append(f"标题: {title}")
if status != "all":
status_map = {"success": "成功", "failed": "失败"}
parts.append(f"状态: {status_map.get(status, status)}")
if page > 1:
parts.append(f"{page}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, title: Optional[str] = None,
status: Optional[str] = "all",
page: Optional[int] = 1, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: title={title}, status={status}, page={page}")
try:
# 处理状态参数
status_bool = None
if status == "success":
status_bool = True
elif status == "failed":
status_bool = False
# 处理页码参数
if page is None or page < 1:
page = 1
# 每页记录数
count = 50
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 处理标题搜索
if title:
# 使用 jieba 分词处理标题
words = jieba.cut(title, HMM=False)
title_search = "%".join(words)
# 查询记录
result = await TransferHistory.async_list_by_title(
db, title=title_search, page=page, count=count, status=status_bool
)
total = await TransferHistory.async_count_by_title(
db, title=title_search, status=status_bool
)
else:
# 查询所有记录
result = await TransferHistory.async_list_by_page(
db, page=page, count=count, status=status_bool
)
total = await TransferHistory.async_count(db, status=status_bool)
if not result:
return "未找到相关整理历史记录"
# 转换为字典格式,只保留关键信息
simplified_records = []
for record in result:
simplified = {
"id": record.id,
"title": record.title,
"year": record.year,
"type": record.type,
"category": record.category,
"seasons": record.seasons,
"episodes": record.episodes,
"src": record.src,
"dest": record.dest,
"mode": record.mode,
"status": "成功" if record.status else "失败",
"date": record.date,
"downloader": record.downloader,
"download_hash": record.download_hash
}
# 如果失败,添加错误信息
if not record.status and record.errmsg:
simplified["errmsg"] = record.errmsg
# 添加媒体ID信息(如果有)
if record.tmdbid:
simplified["tmdbid"] = record.tmdbid
if record.imdbid:
simplified["imdbid"] = record.imdbid
if record.doubanid:
simplified["doubanid"] = record.doubanid
simplified_records.append(simplified)
result_json = json.dumps(simplified_records, ensure_ascii=False, indent=2)
# 计算总页数
total_pages = (total + count - 1) // count if total > 0 else 1
# 构建分页信息
pagination_info = f"{page}/{total_pages} 页,共 {total} 条记录(每页 {count} 条)"
return f"{pagination_info}\n\n{result_json}"
except Exception as e:
logger.error(f"查询整理历史记录失败: {e}", exc_info=True)
return f"查询整理历史记录时发生错误: {str(e)}"

View File

@@ -0,0 +1,128 @@
"""查询工作流工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.db import AsyncSessionFactory
from app.db.workflow_oper import WorkflowOper
from app.log import logger
class QueryWorkflowsInput(BaseModel):
"""查询工作流工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
state: Optional[str] = Field("all", description="Filter workflows by state: 'W' for waiting, 'R' for running, 'P' for paused, 'S' for success, 'F' for failed, 'all' for all workflows (default: 'all')")
name: Optional[str] = Field(None, description="Filter workflows by name (partial match, optional)")
trigger_type: Optional[str] = Field("all", description="Filter workflows by trigger type: 'timer' for scheduled, 'event' for event-triggered, 'manual' for manual, 'all' for all types (default: 'all')")
class QueryWorkflowsTool(MoviePilotTool):
name: str = "query_workflows"
description: str = "Query workflow list and status. Shows workflow name, description, trigger type, state, execution count, and other workflow details. Supports filtering by state, name, and trigger type."
args_schema: Type[BaseModel] = QueryWorkflowsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据查询参数生成友好的提示消息"""
state = kwargs.get("state", "all")
name = kwargs.get("name")
trigger_type = kwargs.get("trigger_type", "all")
parts = ["正在查询工作流"]
if state != "all":
state_map = {"W": "等待", "R": "运行中", "P": "暂停", "S": "成功", "F": "失败"}
parts.append(f"状态: {state_map.get(state, state)}")
if trigger_type != "all":
trigger_map = {"timer": "定时触发", "event": "事件触发", "manual": "手动触发"}
parts.append(f"触发类型: {trigger_map.get(trigger_type, trigger_type)}")
if name:
parts.append(f"名称: {name}")
return " | ".join(parts) if len(parts) > 1 else parts[0]
async def run(self, state: Optional[str] = "all",
name: Optional[str] = None,
trigger_type: Optional[str] = "all", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: state={state}, name={name}, trigger_type={trigger_type}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
workflow_oper = WorkflowOper(db)
workflows = await workflow_oper.async_list()
# 过滤工作流
filtered_workflows = []
for wf in workflows:
# 按状态过滤
if state != "all" and wf.state != state:
continue
# 按触发类型过滤
if trigger_type != "all":
if trigger_type == "timer" and wf.trigger_type not in ["timer", None]:
continue
elif trigger_type == "event" and wf.trigger_type != "event":
continue
elif trigger_type == "manual" and wf.trigger_type != "manual":
continue
# 按名称过滤(部分匹配)
if name and wf.name and name.lower() not in wf.name.lower():
continue
filtered_workflows.append(wf)
if not filtered_workflows:
return "未找到相关工作流"
# 转换为字典格式,只保留关键信息
simplified_workflows = []
for wf in filtered_workflows:
# 状态说明
state_map = {
"W": "等待",
"R": "运行中",
"P": "暂停",
"S": "成功",
"F": "失败"
}
state_desc = state_map.get(wf.state, wf.state)
# 触发类型说明
trigger_type_map = {
"timer": "定时触发",
"event": "事件触发",
"manual": "手动触发"
}
trigger_type_desc = trigger_type_map.get(wf.trigger_type, wf.trigger_type or "定时触发")
simplified = {
"id": wf.id,
"name": wf.name,
"description": wf.description,
"trigger_type": trigger_type_desc,
"state": state_desc,
"run_count": wf.run_count,
"timer": wf.timer,
"event_type": wf.event_type,
"add_time": wf.add_time,
"last_time": wf.last_time,
"current_action": wf.current_action
}
# 如果有结果,添加结果信息
if wf.result:
simplified["result"] = wf.result
simplified_workflows.append(simplified)
result_json = json.dumps(simplified_workflows, ensure_ascii=False, indent=2)
return result_json
except Exception as e:
logger.error(f"查询工作流失败: {e}", exc_info=True)
return f"查询工作流时发生错误: {str(e)}"

View File

@@ -0,0 +1,162 @@
"""识别媒体信息工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.log import logger
class RecognizeMediaInput(BaseModel):
"""识别媒体信息工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: Optional[str] = Field(None, description="The title of the torrent/media to recognize (required for torrent recognition)")
subtitle: Optional[str] = Field(None, description="The subtitle or description of the torrent (optional, helps improve recognition accuracy)")
path: Optional[str] = Field(None, description="The file path to recognize (required for file recognition, mutually exclusive with title)")
class RecognizeMediaTool(MoviePilotTool):
name: str = "recognize_media"
description: str = "Recognize media information from torrent titles or file paths. Supports two modes: 1) Recognize from torrent title and optional subtitle, 2) Recognize from file path. Returns detailed media information including title, year, type, TMDB ID, overview, and other metadata."
args_schema: Type[BaseModel] = RecognizeMediaInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据识别参数生成友好的提示消息"""
title = kwargs.get("title")
subtitle = kwargs.get("subtitle")
path = kwargs.get("path")
if path:
message = f"正在识别文件媒体信息: {path}"
elif title:
message = f"正在识别种子媒体信息: {title}"
if subtitle:
message += f" ({subtitle})"
else:
message = "正在识别媒体信息"
return message
async def run(self, title: Optional[str] = None, subtitle: Optional[str] = None,
path: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: title={title}, subtitle={subtitle}, path={path}")
try:
media_chain = MediaChain()
context = None
# 根据提供的参数选择识别方式
if path:
# 文件路径识别
context = await media_chain.async_recognize_by_path(path)
if context:
return self._format_context_result(context, "文件")
else:
return json.dumps({
"success": False,
"message": f"无法识别文件媒体信息: {path}",
"path": path
}, ensure_ascii=False)
elif title:
# 种子标题识别
metainfo = MetaInfo(title, subtitle)
mediainfo = await media_chain.async_recognize_by_meta(metainfo)
if mediainfo:
context = Context(meta_info=metainfo, media_info=mediainfo)
return self._format_context_result(context, "种子")
else:
return json.dumps({
"success": False,
"message": f"无法识别种子媒体信息: {title}",
"title": title,
"subtitle": subtitle
}, ensure_ascii=False)
else:
return json.dumps({
"success": False,
"message": "必须提供 title标题或 path文件路径参数之一"
}, ensure_ascii=False)
except Exception as e:
error_message = f"识别媒体信息失败: {str(e)}"
logger.error(f"识别媒体信息失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message
}, ensure_ascii=False)
def _format_context_result(self, context: Context, source_type: str) -> str:
"""格式化识别结果为JSON字符串"""
if not context:
return json.dumps({
"success": False,
"message": "识别结果为空"
}, ensure_ascii=False)
context_dict = context.to_dict()
media_info = context_dict.get("media_info")
meta_info = context_dict.get("meta_info")
# 构建简化的结果
result = {
"success": True,
"source_type": source_type,
"media_info": None,
"meta_info": None
}
# 处理媒体信息
if media_info:
result["media_info"] = {
"title": media_info.get("title"),
"en_title": media_info.get("en_title"),
"year": media_info.get("year"),
"type": media_info.get("type"),
"season": media_info.get("season"),
"tmdb_id": media_info.get("tmdb_id"),
"imdb_id": media_info.get("imdb_id"),
"douban_id": media_info.get("douban_id"),
"bangumi_id": media_info.get("bangumi_id"),
"overview": media_info.get("overview"),
"vote_average": media_info.get("vote_average"),
"poster_path": media_info.get("poster_path"),
"backdrop_path": media_info.get("backdrop_path"),
"detail_link": media_info.get("detail_link"),
"title_year": media_info.get("title_year"),
"source": media_info.get("source")
}
# 处理元数据信息
if meta_info:
result["meta_info"] = {
"name": meta_info.get("name"),
"title": meta_info.get("title"),
"year": meta_info.get("year"),
"type": meta_info.get("type"),
"begin_season": meta_info.get("begin_season"),
"end_season": meta_info.get("end_season"),
"begin_episode": meta_info.get("begin_episode"),
"end_episode": meta_info.get("end_episode"),
"total_episode": meta_info.get("total_episode"),
"part": meta_info.get("part"),
"season_episode": meta_info.get("season_episode"),
"episode_list": meta_info.get("episode_list"),
"tmdbid": meta_info.get("tmdbid"),
"doubanid": meta_info.get("doubanid")
}
return json.dumps(result, ensure_ascii=False, indent=2)
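
A sketch of the two recognition entry points described above; the torrent title and file path are placeholders:

import asyncio

tool = RecognizeMediaTool()
# mode 1: recognize from a torrent title (an optional subtitle can sharpen the match)
print(asyncio.run(tool.run(title="The.Matrix.1999.1080p.BluRay.x264-GROUP")))
# mode 2: recognize from a file path
print(asyncio.run(tool.run(path="/media/movies/The Matrix (1999)/The Matrix (1999).mkv")))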

View File

@@ -0,0 +1,53 @@
"""运行定时服务工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
from app.scheduler import Scheduler
class RunSchedulerInput(BaseModel):
"""运行定时服务工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
job_id: str = Field(..., description="The ID of the scheduled job to run (can be obtained from query_schedulers tool)")
class RunSchedulerTool(MoviePilotTool):
name: str = "run_scheduler"
description: str = "Manually trigger a scheduled task to run immediately. This will execute the specified scheduler job by its ID."
args_schema: Type[BaseModel] = RunSchedulerInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据运行参数生成友好的提示消息"""
job_id = kwargs.get("job_id", "")
return f"正在运行定时服务 (ID: {job_id})"
async def run(self, job_id: str, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: job_id={job_id}")
try:
scheduler = Scheduler()
# 检查定时服务是否存在
schedulers = scheduler.list()
job_exists = False
job_name = None
for s in schedulers:
if s.id == job_id:
job_exists = True
job_name = s.name
break
if not job_exists:
return f"定时服务 ID {job_id} 不存在,请使用 query_schedulers 工具查询可用的定时服务"
# 运行定时服务
scheduler.start(job_id)
return f"成功触发定时服务:{job_name} (ID: {job_id})"
except Exception as e:
logger.error(f"运行定时服务失败: {e}", exc_info=True)
return f"运行定时服务时发生错误: {str(e)}"

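For orientation, a hedged sketch of driving the tool's coroutine directly from an async context; the import path, the bare constructor call, and the "cookiecloud" job ID are illustrative assumptions, not part of this change.

import asyncio

from app.agent.tools.run_scheduler import RunSchedulerTool  # hypothetical module path

async def demo():
    tool = RunSchedulerTool()  # assumes no extra constructor arguments are required
    print(tool.get_tool_message(job_id="cookiecloud"))  # made-up job ID
    print(await tool.run(job_id="cookiecloud"))

asyncio.run(demo())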
View File

@@ -0,0 +1,72 @@
"""执行工作流工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.workflow import WorkflowChain
from app.db import AsyncSessionFactory
from app.db.workflow_oper import WorkflowOper
from app.log import logger
class RunWorkflowInput(BaseModel):
"""执行工作流工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
workflow_identifier: str = Field(..., description="Workflow identifier: can be workflow ID (integer as string) or workflow name")
from_begin: Optional[bool] = Field(True, description="Whether to run workflow from the beginning (default: True, if False will continue from last executed action)")
class RunWorkflowTool(MoviePilotTool):
name: str = "run_workflow"
description: str = "Execute a specific workflow manually. Can run workflow by ID or name. Supports running from the beginning or continuing from the last executed action."
args_schema: Type[BaseModel] = RunWorkflowInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据工作流参数生成友好的提示消息"""
workflow_identifier = kwargs.get("workflow_identifier", "")
from_begin = kwargs.get("from_begin", True)
message = f"正在执行工作流: {workflow_identifier}"
if not from_begin:
message += " (从上次位置继续)"
else:
message += " (从头开始)"
return message
async def run(self, workflow_identifier: str,
from_begin: Optional[bool] = True, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: workflow_identifier={workflow_identifier}, from_begin={from_begin}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
workflow_oper = WorkflowOper(db)
# 尝试解析为工作流ID
workflow = None
if workflow_identifier.isdigit():
# 如果是数字,尝试作为工作流ID查询
workflow = await workflow_oper.async_get(int(workflow_identifier))
# 如果不是ID或ID查询失败,尝试按名称查询
if not workflow:
workflow = await workflow_oper.async_get_by_name(workflow_identifier)
if not workflow:
return f"未找到工作流:{workflow_identifier},请使用 query_workflows 工具查询可用的工作流"
# 执行工作流
workflow_chain = WorkflowChain()
state, errmsg = workflow_chain.process(workflow.id, from_begin=from_begin)
if not state:
return f"执行工作流失败:{workflow.name} (ID: {workflow.id})\n错误原因:{errmsg}"
else:
return f"工作流执行成功:{workflow.name} (ID: {workflow.id})"
except Exception as e:
logger.error(f"执行工作流失败: {e}", exc_info=True)
return f"执行工作流时发生错误: {str(e)}"

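For clarity, the identifier handling above in miniature: a purely numeric string is first tried as a workflow ID, with a fallback to a name lookup (the id/name pair below is made up, and plain dictionaries stand in for WorkflowOper).

workflows = {3: "每日签到"}  # made-up id -> name mapping

def resolve(identifier: str):
    workflow = None
    if identifier.isdigit():                 # numeric strings are tried as an ID first
        workflow = workflows.get(int(identifier))
    if not workflow:                         # otherwise (or on a miss) fall back to the name
        for _wid, name in workflows.items():
            if name == identifier:
                workflow = name
    return workflow

print(resolve("3"))         # 每日签到 (matched by ID)
print(resolve("每日签到"))  # 每日签到 (matched by name)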
View File

@@ -0,0 +1,118 @@
"""刮削媒体元数据工具"""
import asyncio
import json
from pathlib import Path
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.core.metainfo import MetaInfoPath
from app.log import logger
from app.schemas import FileItem
class ScrapeMetadataInput(BaseModel):
"""刮削媒体元数据工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
path: str = Field(..., description="Path to the file or directory to scrape metadata for (e.g., '/path/to/file.mkv' or '/path/to/directory')")
storage: Optional[str] = Field("local", description="Storage type: 'local' for local storage, 'smb', 'alist', etc. for remote storage (default: 'local')")
overwrite: Optional[bool] = Field(False, description="Whether to overwrite existing metadata files (default: False)")
class ScrapeMetadataTool(MoviePilotTool):
name: str = "scrape_metadata"
description: str = "Scrape media metadata (NFO files, posters, backgrounds, etc.) for a file or directory. Automatically recognizes media information from the file path and generates metadata files. Supports both local and remote storage."
args_schema: Type[BaseModel] = ScrapeMetadataInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据刮削参数生成友好的提示消息"""
path = kwargs.get("path", "")
storage = kwargs.get("storage", "local")
overwrite = kwargs.get("overwrite", False)
message = f"正在刮削媒体元数据: {path}"
if storage != "local":
message += f" [存储: {storage}]"
if overwrite:
message += " [覆盖模式]"
return message
async def run(self, path: str, storage: Optional[str] = "local",
overwrite: Optional[bool] = False, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: path={path}, storage={storage}, overwrite={overwrite}")
try:
# 验证路径
if not path:
return json.dumps({
"success": False,
"message": "刮削路径不能为空"
}, ensure_ascii=False)
# 创建 FileItem
fileitem = FileItem(
storage=storage,
path=path,
type="file" if Path(path).suffix else "dir"
)
# 检查本地存储路径是否存在
if storage == "local":
scrape_path = Path(path)
if not scrape_path.exists():
return json.dumps({
"success": False,
"message": f"刮削路径不存在: {path}"
}, ensure_ascii=False)
# 识别媒体信息
media_chain = MediaChain()
scrape_path = Path(path)
meta = MetaInfoPath(scrape_path)
mediainfo = await media_chain.async_recognize_by_meta(meta)
if not mediainfo:
return json.dumps({
"success": False,
"message": f"刮削失败,无法识别媒体信息: {path}",
"path": path
}, ensure_ascii=False)
# 在线程池中执行同步的刮削操作
loop = asyncio.get_event_loop()
await loop.run_in_executor(
None,
lambda: media_chain.scrape_metadata(
fileitem=fileitem,
meta=meta,
mediainfo=mediainfo,
overwrite=overwrite
)
)
return json.dumps({
"success": True,
"message": f"{path} 刮削完成",
"path": path,
"media_info": {
"title": mediainfo.title,
"year": mediainfo.year,
"type": mediainfo.type.value if mediainfo.type else None,
"tmdb_id": mediainfo.tmdb_id,
"season": mediainfo.season
}
}, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"刮削媒体元数据失败: {str(e)}"
logger.error(f"刮削媒体元数据失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"path": path
}, ensure_ascii=False)

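The run_in_executor call above is the usual way to keep a synchronous call (here MediaChain.scrape_metadata) from blocking the event loop; a minimal sketch of the same pattern with a stand-in blocking function.

import asyncio

def blocking_scrape(path: str) -> str:
    # stand-in for the synchronous MediaChain.scrape_metadata call
    return f"scraped {path}"

async def demo():
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(None, lambda: blocking_scrape("/media/movie.mkv"))
    print(result)  # scraped /media/movie.mkv

asyncio.run(demo())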
View File

@@ -0,0 +1,113 @@
"""搜索媒体工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.media import MediaChain
from app.log import logger
from app.schemas.types import MediaType
class SearchMediaInput(BaseModel):
"""搜索媒体工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: str = Field(..., description="The title of the media to search for (e.g., 'The Matrix', 'Breaking Bad')")
year: Optional[str] = Field(None, description="Release year of the media (optional, helps narrow down results)")
media_type: Optional[str] = Field(None,
description="Type of media content: '电影' for films, '电视剧' for television series or anime series")
season: Optional[int] = Field(None,
description="Season number for TV shows and anime (optional, only applicable for series)")
class SearchMediaTool(MoviePilotTool):
name: str = "search_media"
description: str = "Search for media resources including movies, TV shows, anime, etc. Supports searching by title, year, type, and other criteria. Returns detailed media information from TMDB database."
args_schema: Type[BaseModel] = SearchMediaInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
title = kwargs.get("title", "")
year = kwargs.get("year")
media_type = kwargs.get("media_type")
season = kwargs.get("season")
message = f"正在搜索媒体: {title}"
if year:
message += f" ({year})"
if media_type:
message += f" [{media_type}]"
if season:
message += f"{season}"
return message
async def run(self, title: str, year: Optional[str] = None,
media_type: Optional[str] = None, season: Optional[int] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: title={title}, year={year}, media_type={media_type}, season={season}")
try:
media_chain = MediaChain()
# 构建搜索标题
search_title = title
if year:
search_title = f"{title} {year}"
if media_type:
search_title = f"{search_title} {media_type}"
if season:
search_title = f"{search_title} S{season:02d}"
# 使用 MediaChain.search 方法
meta, results = await media_chain.async_search(title=search_title)
# 过滤结果
if results:
filtered_results = []
for result in results:
if year and result.year != year:
continue
if media_type:
if result.type != MediaType(media_type):
continue
if season and result.season != season:
continue
filtered_results.append(result)
if filtered_results:
# 限制最多30条结果
total_count = len(filtered_results)
limited_results = filtered_results[:30]
# 精简字段,只保留关键信息
simplified_results = []
for r in limited_results:
simplified = {
"title": r.title,
"en_title": r.en_title,
"year": r.year,
"type": r.type.value if r.type else None,
"season": r.season,
"tmdb_id": r.tmdb_id,
"imdb_id": r.imdb_id,
"douban_id": r.douban_id,
"overview": r.overview[:200] + "..." if r.overview and len(r.overview) > 200 else r.overview,
"vote_average": r.vote_average,
"poster_path": r.poster_path,
"detail_link": r.detail_link
}
simplified_results.append(simplified)
result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 30:
return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
return result_json
else:
return f"未找到符合条件的媒体资源: {title}"
else:
return f"未找到相关媒体资源: {title}"
except Exception as e:
error_message = f"搜索媒体失败: {str(e)}"
logger.error(f"搜索媒体失败: {e}", exc_info=True)
return error_message

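To make the query construction above concrete, a small sketch (sample values only) of how the search string is assembled before MediaChain.async_search is called.

title, year, media_type, season = "Breaking Bad", "2008", "电视剧", 1

search_title = title
if year:
    search_title = f"{title} {year}"
if media_type:
    search_title = f"{search_title} {media_type}"
if season:
    search_title = f"{search_title} S{season:02d}"

print(search_title)  # Breaking Bad 2008 电视剧 S01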
View File

@@ -0,0 +1,142 @@
"""搜索种子工具"""
import json
import re
from typing import List, Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.search import SearchChain
from app.log import logger
from app.schemas.types import MediaType
class SearchTorrentsInput(BaseModel):
"""搜索种子工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
title: str = Field(...,
description="The title of the media resource to search for (e.g., 'The Matrix 1999', 'Breaking Bad S01E01')")
year: Optional[str] = Field(None,
description="Release year of the media (optional, helps narrow down search results)")
media_type: Optional[str] = Field(None,
description="Type of media content: '电影' for films, '电视剧' for television series or anime series")
season: Optional[int] = Field(None, description="Season number for TV shows (optional, only applicable for series)")
sites: Optional[List[int]] = Field(None,
description="Array of specific site IDs to search on (optional, if not provided searches all configured sites)")
filter_pattern: Optional[str] = Field(None,
description="Regular expression pattern to filter torrent titles by resolution, quality, or other keywords (e.g., '4K|2160p|UHD' for 4K content, '1080p|BluRay' for 1080p BluRay)")
class SearchTorrentsTool(MoviePilotTool):
name: str = "search_torrents"
description: str = "Search for torrent files across configured indexer sites based on media information. Returns available torrent downloads with details like file size, quality, and download links."
args_schema: Type[BaseModel] = SearchTorrentsInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据搜索参数生成友好的提示消息"""
title = kwargs.get("title", "")
year = kwargs.get("year")
media_type = kwargs.get("media_type")
season = kwargs.get("season")
filter_pattern = kwargs.get("filter_pattern")
message = f"正在搜索种子: {title}"
if year:
message += f" ({year})"
if media_type:
message += f" [{media_type}]"
if season:
message += f"{season}"
if filter_pattern:
message += f" 过滤: {filter_pattern}"
return message
async def run(self, title: str, year: Optional[str] = None,
media_type: Optional[str] = None, season: Optional[int] = None,
sites: Optional[List[int]] = None, filter_pattern: Optional[str] = None, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: title={title}, year={year}, media_type={media_type}, season={season}, sites={sites}, filter_pattern={filter_pattern}")
try:
search_chain = SearchChain()
torrents = await search_chain.async_search_by_title(title=title, sites=sites)
filtered_torrents = []
# 编译正则表达式(如果提供)
regex_pattern = None
if filter_pattern:
try:
regex_pattern = re.compile(filter_pattern, re.IGNORECASE)
except re.error as e:
logger.warning(f"正则表达式编译失败: {filter_pattern}, 错误: {e}")
return f"正则表达式格式错误: {str(e)}"
for torrent in torrents:
# torrent 是 Context 对象,需要通过 meta_info 和 media_info 访问属性
if year and torrent.meta_info and torrent.meta_info.year != year:
continue
if media_type and torrent.media_info:
if torrent.media_info.type != MediaType(media_type):
continue
if season and torrent.meta_info and torrent.meta_info.begin_season != season:
continue
# 使用正则表达式过滤标题(分辨率、质量等关键字)
if regex_pattern and torrent.torrent_info and torrent.torrent_info.title:
if not regex_pattern.search(torrent.torrent_info.title):
continue
filtered_torrents.append(torrent)
if filtered_torrents:
# 限制最多50条结果
total_count = len(filtered_torrents)
limited_torrents = filtered_torrents[:50]
# 精简字段,只保留关键信息
simplified_torrents = []
for t in limited_torrents:
simplified = {}
# 精简 torrent_info
if t.torrent_info:
simplified["torrent_info"] = {
"title": t.torrent_info.title,
"size": t.torrent_info.size,
"seeders": t.torrent_info.seeders,
"peers": t.torrent_info.peers,
"site_name": t.torrent_info.site_name,
"enclosure": t.torrent_info.enclosure,
"page_url": t.torrent_info.page_url,
"volume_factor": t.torrent_info.volume_factor,
"pubdate": t.torrent_info.pubdate
}
# 精简 media_info
if t.media_info:
simplified["media_info"] = {
"title": t.media_info.title,
"en_title": t.media_info.en_title,
"year": t.media_info.year,
"type": t.media_info.type.value if t.media_info.type else None,
"season": t.media_info.season,
"tmdb_id": t.media_info.tmdb_id
}
# 精简 meta_info
if t.meta_info:
simplified["meta_info"] = {
"name": t.meta_info.name,
"cn_name": t.meta_info.cn_name,
"en_name": t.meta_info.en_name,
"year": t.meta_info.year,
"type": t.meta_info.type.value if t.meta_info.type else None,
"begin_season": t.meta_info.begin_season
}
simplified_torrents.append(simplified)
result_json = json.dumps(simplified_torrents, ensure_ascii=False, indent=2)
# 如果结果被裁剪,添加提示信息
if total_count > 50:
return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 50 条结果。\n\n{result_json}"
return result_json
else:
return f"未找到相关种子资源: {title}"
except Exception as e:
error_message = f"搜索种子时发生错误: {str(e)}"
logger.error(f"搜索种子失败: {e}", exc_info=True)
return error_message

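The filter_pattern handling above is plain re with re.IGNORECASE; a standalone sketch with made-up torrent titles showing how a resolution filter behaves.

import re

regex_pattern = re.compile("4K|2160p|UHD", re.IGNORECASE)

titles = [
    "The.Matrix.1999.2160p.UHD.BluRay.x265",
    "The.Matrix.1999.1080p.BluRay.x264",
]
kept = [t for t in titles if regex_pattern.search(t)]
print(kept)  # only the 2160p/UHD release passes the filter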
View File

@@ -0,0 +1,45 @@
"""发送消息工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.log import logger
class SendMessageInput(BaseModel):
"""发送消息工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
message: str = Field(..., description="The message content to send to the user (should be clear and informative)")
message_type: Optional[str] = Field("info",
description="Type of message: 'info' for general information, 'success' for successful operations, 'warning' for warnings, 'error' for error messages")
class SendMessageTool(MoviePilotTool):
name: str = "send_message"
description: str = "Send notification message to the user through configured notification channels (Telegram, Slack, WeChat, etc.). Used to inform users about operation results, errors, or important updates."
args_schema: Type[BaseModel] = SendMessageInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据消息参数生成友好的提示消息"""
message = kwargs.get("message", "")
message_type = kwargs.get("message_type", "info")
type_map = {"info": "信息", "success": "成功", "warning": "警告", "error": "错误"}
type_desc = type_map.get(message_type, message_type)
# 截断过长的消息
if len(message) > 50:
message = message[:50] + "..."
return f"正在发送{type_desc}消息: {message}"
async def run(self, message: str, message_type: Optional[str] = "info", **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: message={message}, message_type={message_type}")
try:
await self.send_tool_message(message, title=message_type)
return "消息已发送"
except Exception as e:
logger.error(f"发送消息失败: {e}")
return f"发送消息时发生错误: {str(e)}"

View File

@@ -0,0 +1,72 @@
"""测试站点连通性工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.site import SiteChain
from app.db.site_oper import SiteOper
from app.log import logger
from app.utils.string import StringUtils
class TestSiteInput(BaseModel):
"""测试站点连通性工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_identifier: str = Field(..., description="Site identifier: can be site ID (integer as string), site name, or site domain/URL")
class TestSiteTool(MoviePilotTool):
name: str = "test_site"
description: str = "Test site connectivity and availability. This will check if a site is accessible and can be logged in. Accepts site ID, site name, or site domain/URL as identifier."
args_schema: Type[BaseModel] = TestSiteInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据测试参数生成友好的提示消息"""
site_identifier = kwargs.get("site_identifier", "")
return f"正在测试站点连通性: {site_identifier}"
async def run(self, site_identifier: str, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_identifier={site_identifier}")
try:
site_oper = SiteOper()
site_chain = SiteChain()
# 尝试解析为站点ID
site = None
if site_identifier.isdigit():
# 如果是数字,尝试作为站点ID查询
site = await site_oper.async_get(int(site_identifier))
# 如果不是ID或ID查询失败,尝试按名称或域名查询
if not site:
# 尝试按名称查询
sites = await site_oper.async_list()
for s in sites:
if (site_identifier.lower() in (s.name or "").lower()) or \
(site_identifier.lower() in (s.domain or "").lower()):
site = s
break
# 如果还是没找到,尝试从URL提取域名
if not site:
domain = StringUtils.get_url_domain(site_identifier)
if domain:
site = await site_oper.async_get_by_domain(domain)
if not site:
return f"未找到站点:{site_identifier},请使用 query_sites 工具查询可用的站点"
# 测试站点连通性
status, message = site_chain.test(site.domain)
if status:
return f"站点连通性测试成功:{site.name} ({site.domain})\n{message}"
else:
return f"站点连通性测试失败:{site.name} ({site.domain})\n{message}"
except Exception as e:
logger.error(f"测试站点连通性失败: {e}", exc_info=True)
return f"测试站点连通性时发生错误: {str(e)}"

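test_site and update_site_cookie resolve the site identifier in the same three steps (numeric ID, then a fuzzy name/domain match, then a domain extracted from a URL); a self-contained sketch of that order against an in-memory list, with urllib.parse standing in for StringUtils.get_url_domain.

from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Site:
    id: int
    name: str
    domain: str

sites = [Site(1, "Example PT", "pt.example.org")]  # made-up site

def find_site(identifier: str):
    # 1. numeric strings are tried as a site ID
    if identifier.isdigit():
        for s in sites:
            if s.id == int(identifier):
                return s
    # 2. fuzzy match against site name or domain
    for s in sites:
        if identifier.lower() in s.name.lower() or identifier.lower() in s.domain.lower():
            return s
    # 3. extract a domain from a full URL and match it
    domain = urlparse(identifier).netloc
    for s in sites:
        if domain and s.domain == domain:
            return s
    return None

print(find_site("https://pt.example.org/login").name)  # Example PT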
View File

@@ -0,0 +1,134 @@
"""整理文件或目录工具"""
from pathlib import Path
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.transfer import TransferChain
from app.log import logger
from app.schemas import FileItem, MediaType
class TransferFileInput(BaseModel):
"""整理文件或目录工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
file_path: str = Field(..., description="Path to the file or directory to transfer (e.g., '/path/to/file.mkv' or '/path/to/directory')")
storage: Optional[str] = Field("local", description="Storage type of the source file (default: 'local', can be 'smb', 'alist', etc.)")
target_path: Optional[str] = Field(None, description="Target path for the transferred file/directory (optional, uses default library path if not specified)")
target_storage: Optional[str] = Field(None, description="Target storage type (optional, uses default storage if not specified)")
media_type: Optional[str] = Field(None, description="Media type: '电影' for films, '电视剧' for television series (optional, will be auto-detected if not specified)")
tmdbid: Optional[int] = Field(None, description="TMDB ID for precise media identification (optional but recommended for accuracy)")
doubanid: Optional[str] = Field(None, description="Douban ID for media identification (optional)")
season: Optional[int] = Field(None, description="Season number for TV shows (optional)")
transfer_type: Optional[str] = Field(None, description="Transfer mode: 'move' to move files, 'copy' to copy files, 'link' for hard link, 'softlink' for symbolic link (optional, uses default mode if not specified)")
background: Optional[bool] = Field(False, description="Whether to run transfer in background (default: False, runs synchronously)")
class TransferFileTool(MoviePilotTool):
name: str = "transfer_file"
description: str = "Transfer/organize a file or directory to the media library. Automatically recognizes media information and organizes files according to configured rules. Supports custom target paths, media identification, and transfer modes."
args_schema: Type[BaseModel] = TransferFileInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据整理参数生成友好的提示消息"""
file_path = kwargs.get("file_path", "")
media_type = kwargs.get("media_type")
transfer_type = kwargs.get("transfer_type")
background = kwargs.get("background", False)
message = f"正在整理文件: {file_path}"
if media_type:
message += f" [{media_type}]"
if transfer_type:
transfer_map = {"move": "移动", "copy": "复制", "link": "硬链接", "softlink": "软链接"}
message += f" 模式: {transfer_map.get(transfer_type, transfer_type)}"
if background:
message += " [后台运行]"
return message
async def run(self, file_path: str, storage: Optional[str] = "local",
target_path: Optional[str] = None,
target_storage: Optional[str] = None,
media_type: Optional[str] = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
season: Optional[int] = None,
transfer_type: Optional[str] = None,
background: Optional[bool] = False, **kwargs) -> str:
logger.info(
f"执行工具: {self.name}, 参数: file_path={file_path}, storage={storage}, target_path={target_path}, "
f"target_storage={target_storage}, media_type={media_type}, tmdbid={tmdbid}, doubanid={doubanid}, "
f"season={season}, transfer_type={transfer_type}, background={background}")
try:
if not file_path:
return "错误:必须提供文件或目录路径"
# 规范化路径
if storage == "local":
# 本地路径处理
if not file_path.startswith("/") and not (len(file_path) > 1 and file_path[1] == ":"):
# 相对路径,尝试转换为绝对路径
file_path = str(Path(file_path).resolve())
else:
# 远程存储路径,确保以/开头
if not file_path.startswith("/"):
file_path = "/" + file_path
# 创建FileItem
fileitem = FileItem(
storage=storage or "local",
path=file_path,
type="dir" if file_path.endswith("/") else "file"
)
# 处理目标路径
target_path_obj = None
if target_path:
target_path_obj = Path(target_path)
# 处理媒体类型
mtype = None
if media_type:
try:
mtype = MediaType(media_type)
except ValueError:
return f"错误:无效的媒体类型 '{media_type}',支持的类型:'movie', 'tv'"
# 调用整理方法
transfer_chain = TransferChain()
state, errormsg = transfer_chain.manual_transfer(
fileitem=fileitem,
target_storage=target_storage,
target_path=target_path_obj,
tmdbid=tmdbid,
doubanid=doubanid,
mtype=mtype,
season=season,
transfer_type=transfer_type,
background=background
)
if not state:
# 处理错误信息
if isinstance(errormsg, list):
error_text = f"整理完成,{len(errormsg)} 个文件转移失败"
if errormsg:
error_text += f"\n" + "\n".join(str(e) for e in errormsg[:5]) # 只显示前5个错误
if len(errormsg) > 5:
error_text += f"\n... 还有 {len(errormsg) - 5} 个错误"
else:
error_text = str(errormsg)
return f"整理失败:{error_text}"
else:
if background:
return f"整理任务已提交到后台运行:{file_path}"
else:
return f"整理成功:{file_path}"
except Exception as e:
logger.error(f"整理文件失败: {e}", exc_info=True)
return f"整理文件时发生错误: {str(e)}"

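A quick sketch of the path normalization branch above, exercised with a relative local path and a remote path missing its leading slash (the paths are examples only).

from pathlib import Path

def normalize(file_path: str, storage: str = "local") -> str:
    if storage == "local":
        # relative local paths (not absolute, not a Windows drive path) are resolved
        if not file_path.startswith("/") and not (len(file_path) > 1 and file_path[1] == ":"):
            file_path = str(Path(file_path).resolve())
    else:
        # remote storage paths must start with "/"
        if not file_path.startswith("/"):
            file_path = "/" + file_path
    return file_path

print(normalize("downloads/movie.mkv"))      # absolute path under the current working directory
print(normalize("share/movie.mkv", "smb"))   # /share/movie.mkv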
View File

@@ -0,0 +1,203 @@
"""更新站点工具"""
import json
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.db import AsyncSessionFactory
from app.db.models.site import Site
from app.log import logger
from app.schemas.types import EventType
from app.utils.string import StringUtils
class UpdateSiteInput(BaseModel):
"""更新站点工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_id: int = Field(..., description="The ID of the site to update")
name: Optional[str] = Field(None, description="Site name (optional)")
url: Optional[str] = Field(None, description="Site URL (optional, will be automatically formatted)")
pri: Optional[int] = Field(None, description="Site priority (optional, higher number = higher priority)")
rss: Optional[str] = Field(None, description="RSS feed URL (optional)")
cookie: Optional[str] = Field(None, description="Site cookie (optional)")
ua: Optional[str] = Field(None, description="User-Agent string (optional)")
apikey: Optional[str] = Field(None, description="API key (optional)")
token: Optional[str] = Field(None, description="API token (optional)")
proxy: Optional[int] = Field(None, description="Whether to use proxy: 0 for no, 1 for yes (optional)")
filter: Optional[str] = Field(None, description="Filter rule as regular expression (optional)")
note: Optional[str] = Field(None, description="Site notes/remarks (optional)")
timeout: Optional[int] = Field(None, description="Request timeout in seconds (optional, default: 15)")
limit_interval: Optional[int] = Field(None, description="Rate limit interval in seconds (optional)")
limit_count: Optional[int] = Field(None, description="Rate limit count per interval (optional)")
limit_seconds: Optional[int] = Field(None, description="Rate limit seconds between requests (optional)")
is_active: Optional[bool] = Field(None, description="Whether site is active: True for enabled, False for disabled (optional)")
downloader: Optional[str] = Field(None, description="Downloader name for this site (optional)")
class UpdateSiteTool(MoviePilotTool):
name: str = "update_site"
description: str = "Update site configuration including URL, priority, authentication credentials (cookie, UA, API key), proxy settings, rate limits, and other site properties. Supports updating multiple site attributes at once."
args_schema: Type[BaseModel] = UpdateSiteInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据更新参数生成友好的提示消息"""
site_id = kwargs.get("site_id")
fields_updated = []
if kwargs.get("name"):
fields_updated.append("名称")
if kwargs.get("url"):
fields_updated.append("URL")
if kwargs.get("pri") is not None:
fields_updated.append("优先级")
if kwargs.get("cookie"):
fields_updated.append("Cookie")
if kwargs.get("ua"):
fields_updated.append("User-Agent")
if kwargs.get("proxy") is not None:
fields_updated.append("代理设置")
if kwargs.get("is_active") is not None:
fields_updated.append("启用状态")
if kwargs.get("downloader"):
fields_updated.append("下载器")
if fields_updated:
return f"正在更新站点 #{site_id}: {', '.join(fields_updated)}"
return f"正在更新站点 #{site_id}"
async def run(self, site_id: int,
name: Optional[str] = None,
url: Optional[str] = None,
pri: Optional[int] = None,
rss: Optional[str] = None,
cookie: Optional[str] = None,
ua: Optional[str] = None,
apikey: Optional[str] = None,
token: Optional[str] = None,
proxy: Optional[int] = None,
filter: Optional[str] = None,
note: Optional[str] = None,
timeout: Optional[int] = None,
limit_interval: Optional[int] = None,
limit_count: Optional[int] = None,
limit_seconds: Optional[int] = None,
is_active: Optional[bool] = None,
downloader: Optional[str] = None,
**kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_id={site_id}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 获取站点
site = await Site.async_get(db, site_id)
if not site:
return json.dumps({
"success": False,
"message": f"站点不存在: {site_id}"
}, ensure_ascii=False)
# 构建更新字典
site_dict = {}
# 基本信息
if name is not None:
site_dict["name"] = name
# URL处理,需要校正格式
if url is not None:
_scheme, _netloc = StringUtils.get_url_netloc(url)
site_dict["url"] = f"{_scheme}://{_netloc}/"
if pri is not None:
site_dict["pri"] = pri
if rss is not None:
site_dict["rss"] = rss
# 认证信息
if cookie is not None:
site_dict["cookie"] = cookie
if ua is not None:
site_dict["ua"] = ua
if apikey is not None:
site_dict["apikey"] = apikey
if token is not None:
site_dict["token"] = token
# 配置选项
if proxy is not None:
site_dict["proxy"] = proxy
if filter is not None:
site_dict["filter"] = filter
if note is not None:
site_dict["note"] = note
if timeout is not None:
site_dict["timeout"] = timeout
# 流控设置
if limit_interval is not None:
site_dict["limit_interval"] = limit_interval
if limit_count is not None:
site_dict["limit_count"] = limit_count
if limit_seconds is not None:
site_dict["limit_seconds"] = limit_seconds
# 状态和下载器
if is_active is not None:
site_dict["is_active"] = is_active
if downloader is not None:
site_dict["downloader"] = downloader
# 如果没有要更新的字段
if not site_dict:
return json.dumps({
"success": False,
"message": "没有提供要更新的字段"
}, ensure_ascii=False)
# 更新站点
await site.async_update(db, site_dict)
# 重新获取更新后的站点数据
updated_site = await Site.async_get(db, site_id)
# 发送站点更新事件
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": updated_site.domain if updated_site else site.domain
})
# 构建返回结果
result = {
"success": True,
"message": f"站点 #{site_id} 更新成功",
"site_id": site_id,
"updated_fields": list(site_dict.keys())
}
if updated_site:
result["site"] = {
"id": updated_site.id,
"name": updated_site.name,
"domain": updated_site.domain,
"url": updated_site.url,
"pri": updated_site.pri,
"is_active": updated_site.is_active,
"downloader": updated_site.downloader,
"proxy": updated_site.proxy,
"timeout": updated_site.timeout
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"更新站点失败: {str(e)}"
logger.error(f"更新站点失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"site_id": site_id
}, ensure_ascii=False)

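The URL correction above rebuilds the address as scheme://netloc/; the same idea with the standard library in place of StringUtils.get_url_netloc (the URL is made up).

from urllib.parse import urlparse

parsed = urlparse("https://example-tracker.org/torrents.php?cat=movie")
print(f"{parsed.scheme}://{parsed.netloc}/")  # https://example-tracker.org/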
View File

@@ -0,0 +1,88 @@
"""更新站点Cookie和UA工具"""
from typing import Optional, Type
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.chain.site import SiteChain
from app.db.site_oper import SiteOper
from app.log import logger
from app.utils.string import StringUtils
class UpdateSiteCookieInput(BaseModel):
"""更新站点Cookie和UA工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
site_identifier: str = Field(..., description="Site identifier: can be site ID (integer as string), site name, or site domain/URL")
username: str = Field(..., description="Site login username")
password: str = Field(..., description="Site login password")
two_step_code: Optional[str] = Field(None, description="Two-step verification code or secret key (optional, required for sites with 2FA enabled)")
class UpdateSiteCookieTool(MoviePilotTool):
name: str = "update_site_cookie"
description: str = "Update site Cookie and User-Agent by logging in with username and password. This tool can automatically obtain and update the site's authentication credentials. Supports two-step verification for sites that require it. Accepts site ID, site name, or site domain/URL as identifier."
args_schema: Type[BaseModel] = UpdateSiteCookieInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据更新参数生成友好的提示消息"""
site_identifier = kwargs.get("site_identifier", "")
username = kwargs.get("username", "")
two_step_code = kwargs.get("two_step_code")
message = f"正在更新站点Cookie: {site_identifier} (用户: {username})"
if two_step_code:
message += " [需要两步验证]"
return message
async def run(self, site_identifier: str, username: str, password: str,
two_step_code: Optional[str] = None, **kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: site_identifier={site_identifier}, username={username}")
try:
site_oper = SiteOper()
site_chain = SiteChain()
# 尝试解析为站点ID
site = None
if site_identifier.isdigit():
# 如果是数字,尝试作为站点ID查询
site = await site_oper.async_get(int(site_identifier))
# 如果不是ID或ID查询失败,尝试按名称或域名查询
if not site:
# 尝试按名称查询
sites = await site_oper.async_list()
for s in sites:
if (site_identifier.lower() in (s.name or "").lower()) or \
(site_identifier.lower() in (s.domain or "").lower()):
site = s
break
# 如果还是没找到,尝试从URL提取域名
if not site:
domain = StringUtils.get_url_domain(site_identifier)
if domain:
site = await site_oper.async_get_by_domain(domain)
if not site:
return f"未找到站点:{site_identifier},请使用 query_sites 工具查询可用的站点"
# 更新站点Cookie和UA
status, message = site_chain.update_cookie(
site_info=site,
username=username,
password=password,
two_step_code=two_step_code
)
if status:
return f"站点【{site.name}】Cookie和UA更新成功\n{message}"
else:
return f"站点【{site.name}】Cookie和UA更新失败\n错误原因:{message}"
except Exception as e:
logger.error(f"更新站点Cookie和UA失败: {e}", exc_info=True)
return f"更新站点Cookie和UA时发生错误: {str(e)}"

View File

@@ -0,0 +1,239 @@
"""更新订阅工具"""
import json
from typing import Optional, Type, List
from pydantic import BaseModel, Field
from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.db import AsyncSessionFactory
from app.db.models.subscribe import Subscribe
from app.log import logger
from app.schemas.types import EventType
class UpdateSubscribeInput(BaseModel):
"""更新订阅工具的输入参数模型"""
explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
subscribe_id: int = Field(..., description="The ID of the subscription to update")
name: Optional[str] = Field(None, description="Subscription name/title (optional)")
year: Optional[str] = Field(None, description="Release year (optional)")
season: Optional[int] = Field(None, description="Season number for TV shows (optional)")
total_episode: Optional[int] = Field(None, description="Total number of episodes (optional)")
lack_episode: Optional[int] = Field(None, description="Number of missing episodes (optional)")
start_episode: Optional[int] = Field(None, description="Starting episode number (optional)")
quality: Optional[str] = Field(None, description="Quality filter as regular expression (optional, e.g., 'BluRay|WEB-DL|HDTV')")
resolution: Optional[str] = Field(None, description="Resolution filter as regular expression (optional, e.g., '1080p|720p|2160p')")
effect: Optional[str] = Field(None, description="Effect filter as regular expression (optional, e.g., 'HDR|DV|SDR')")
include: Optional[str] = Field(None, description="Include filter as regular expression (optional)")
exclude: Optional[str] = Field(None, description="Exclude filter as regular expression (optional)")
filter: Optional[str] = Field(None, description="Filter rule as regular expression (optional)")
state: Optional[str] = Field(None, description="Subscription state: 'R' for enabled, 'P' for disabled, 'S' for paused (optional)")
sites: Optional[List[int]] = Field(None, description="List of site IDs to search from (optional)")
downloader: Optional[str] = Field(None, description="Downloader name (optional)")
save_path: Optional[str] = Field(None, description="Save path for downloaded files (optional)")
best_version: Optional[int] = Field(None, description="Whether to upgrade to best version: 0 for no, 1 for yes (optional)")
custom_words: Optional[str] = Field(None, description="Custom recognition words (optional)")
media_category: Optional[str] = Field(None, description="Custom media category (optional)")
episode_group: Optional[str] = Field(None, description="Episode group ID (optional)")
class UpdateSubscribeTool(MoviePilotTool):
name: str = "update_subscribe"
description: str = "Update subscription properties including filters, episode counts, state, and other settings. Supports updating quality/resolution filters, episode tracking, subscription state, and download configuration."
args_schema: Type[BaseModel] = UpdateSubscribeInput
def get_tool_message(self, **kwargs) -> Optional[str]:
"""根据更新参数生成友好的提示消息"""
subscribe_id = kwargs.get("subscribe_id")
fields_updated = []
if kwargs.get("name"):
fields_updated.append("名称")
if kwargs.get("total_episode") is not None:
fields_updated.append("总集数")
if kwargs.get("lack_episode") is not None:
fields_updated.append("缺失集数")
if kwargs.get("quality"):
fields_updated.append("质量过滤")
if kwargs.get("resolution"):
fields_updated.append("分辨率过滤")
if kwargs.get("state"):
state_map = {"R": "启用", "P": "禁用", "S": "暂停"}
fields_updated.append(f"状态({state_map.get(kwargs.get('state'), kwargs.get('state'))})")
if kwargs.get("sites"):
fields_updated.append("站点")
if kwargs.get("downloader"):
fields_updated.append("下载器")
if fields_updated:
return f"正在更新订阅 #{subscribe_id}: {', '.join(fields_updated)}"
return f"正在更新订阅 #{subscribe_id}"
async def run(self, subscribe_id: int,
name: Optional[str] = None,
year: Optional[str] = None,
season: Optional[int] = None,
total_episode: Optional[int] = None,
lack_episode: Optional[int] = None,
start_episode: Optional[int] = None,
quality: Optional[str] = None,
resolution: Optional[str] = None,
effect: Optional[str] = None,
include: Optional[str] = None,
exclude: Optional[str] = None,
filter: Optional[str] = None,
state: Optional[str] = None,
sites: Optional[List[int]] = None,
downloader: Optional[str] = None,
save_path: Optional[str] = None,
best_version: Optional[int] = None,
custom_words: Optional[str] = None,
media_category: Optional[str] = None,
episode_group: Optional[str] = None,
**kwargs) -> str:
logger.info(f"执行工具: {self.name}, 参数: subscribe_id={subscribe_id}")
try:
# 获取数据库会话
async with AsyncSessionFactory() as db:
# 获取订阅
subscribe = await Subscribe.async_get(db, subscribe_id)
if not subscribe:
return json.dumps({
"success": False,
"message": f"订阅不存在: {subscribe_id}"
}, ensure_ascii=False)
# 保存旧数据用于事件
old_subscribe_dict = subscribe.to_dict()
# 构建更新字典
subscribe_dict = {}
# 基本信息
if name is not None:
subscribe_dict["name"] = name
if year is not None:
subscribe_dict["year"] = year
if season is not None:
subscribe_dict["season"] = season
# 集数相关
if total_episode is not None:
subscribe_dict["total_episode"] = total_episode
# 如果总集数增加,缺失集数也要相应增加
if total_episode > (subscribe.total_episode or 0):
old_lack = subscribe.lack_episode or 0
subscribe_dict["lack_episode"] = old_lack + (total_episode - (subscribe.total_episode or 0))
# 标记为手动修改过总集数
subscribe_dict["manual_total_episode"] = 1
# 缺失集数处理(只有在没有提供总集数时才单独处理)
# 注意:如果 lack_episode 为 0,则不更新,避免误更新为 0
if lack_episode is not None and total_episode is None:
if lack_episode > 0:
subscribe_dict["lack_episode"] = lack_episode
# 如果 lack_episode 为 0,不添加到更新字典中,保持原值或由总集数逻辑处理
if start_episode is not None:
subscribe_dict["start_episode"] = start_episode
# 过滤规则
if quality is not None:
subscribe_dict["quality"] = quality
if resolution is not None:
subscribe_dict["resolution"] = resolution
if effect is not None:
subscribe_dict["effect"] = effect
if include is not None:
subscribe_dict["include"] = include
if exclude is not None:
subscribe_dict["exclude"] = exclude
if filter is not None:
subscribe_dict["filter"] = filter
# 状态
if state is not None:
valid_states = ["R", "P", "S", "N"]
if state not in valid_states:
return json.dumps({
"success": False,
"message": f"无效的订阅状态: {state},有效状态: {', '.join(valid_states)}"
}, ensure_ascii=False)
subscribe_dict["state"] = state
# 下载配置
if sites is not None:
subscribe_dict["sites"] = sites
if downloader is not None:
subscribe_dict["downloader"] = downloader
if save_path is not None:
subscribe_dict["save_path"] = save_path
if best_version is not None:
subscribe_dict["best_version"] = best_version
# 其他配置
if custom_words is not None:
subscribe_dict["custom_words"] = custom_words
if media_category is not None:
subscribe_dict["media_category"] = media_category
if episode_group is not None:
subscribe_dict["episode_group"] = episode_group
# 如果没有要更新的字段
if not subscribe_dict:
return json.dumps({
"success": False,
"message": "没有提供要更新的字段"
}, ensure_ascii=False)
# 更新订阅
await subscribe.async_update(db, subscribe_dict)
# 重新获取更新后的订阅数据
updated_subscribe = await Subscribe.async_get(db, subscribe_id)
# 发送订阅调整事件
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe_id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
# 构建返回结果
result = {
"success": True,
"message": f"订阅 #{subscribe_id} 更新成功",
"subscribe_id": subscribe_id,
"updated_fields": list(subscribe_dict.keys())
}
if updated_subscribe:
result["subscribe"] = {
"id": updated_subscribe.id,
"name": updated_subscribe.name,
"year": updated_subscribe.year,
"type": updated_subscribe.type,
"season": updated_subscribe.season,
"state": updated_subscribe.state,
"total_episode": updated_subscribe.total_episode,
"lack_episode": updated_subscribe.lack_episode,
"start_episode": updated_subscribe.start_episode,
"quality": updated_subscribe.quality,
"resolution": updated_subscribe.resolution,
"effect": updated_subscribe.effect
}
return json.dumps(result, ensure_ascii=False, indent=2)
except Exception as e:
error_message = f"更新订阅失败: {str(e)}"
logger.error(f"更新订阅失败: {e}", exc_info=True)
return json.dumps({
"success": False,
"message": error_message,
"subscribe_id": subscribe_id
}, ensure_ascii=False)

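The episode bookkeeping above, reduced to plain values: when total_episode grows, the newly added episodes are also counted as missing (no database involved).

old_total, old_lack = 10, 2
new_total = 12

new_lack = old_lack
if new_total > old_total:
    # the episodes added to the total are missing as well
    new_lack = old_lack + (new_total - old_total)

print(new_lack)  # 4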
View File

@@ -1,6 +1,6 @@
from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
from app.api.endpoints import login, user, webhook, message, site, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent

View File

@@ -11,63 +11,63 @@ router = APIRouter()
@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.MediaPerson])
def bangumi_credits(bangumiid: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_credits(bangumiid: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi演职员表
"""
persons = BangumiChain().bangumi_credits(bangumiid)
persons = await BangumiChain().async_bangumi_credits(bangumiid)
if persons:
return persons[(page - 1) * count: page * count]
return []
@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_recommend(bangumiid: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi推荐
"""
medias = BangumiChain().bangumi_recommend(bangumiid)
medias = await BangumiChain().async_bangumi_recommend(bangumiid)
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def bangumi_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return BangumiChain().person_detail(person_id=person_id)
return await BangumiChain().async_person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def bangumi_person_credits(person_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_person_credits(person_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = BangumiChain().person_credits(person_id=person_id)
medias = await BangumiChain().async_person_credits(person_id=person_id)
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/{bangumiid}", summary="查询Bangumi详情", response_model=schemas.MediaInfo)
def bangumi_info(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_info(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi详情
"""
info = BangumiChain().bangumi_info(bangumiid)
info = await BangumiChain().async_bangumi_info(bangumiid)
if info:
return MediaInfo(bangumi_info=info).to_dict()
else:

View File

@@ -111,7 +111,7 @@ def downloader2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
@router.get("/schedule", summary="后台服务", response_model=List[schemas.ScheduleInfo])
def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询后台服务信息
"""
@@ -119,24 +119,25 @@ def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/schedule2", summary="后台服务API_TOKEN", response_model=List[schemas.ScheduleInfo])
def schedule2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
async def schedule2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询后台服务信息 API_TOKEN认证?token=xxx
"""
return schedule()
return await schedule()
@router.get("/transfer", summary="文件整理统计", response_model=List[int])
def transfer(days: Optional[int] = 7, db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def transfer(days: Optional[int] = 7,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询文件整理统计信息
"""
transfer_stat = TransferHistory.statistic(db, days)
transfer_stat = await TransferHistory.async_statistic(db, days)
return [stat[1] for stat in transfer_stat]
@router.get("/cpu", summary="获取当前CPU使用率", response_model=int)
@router.get("/cpu", summary="获取当前CPU使用率", response_model=float)
def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前CPU使用率
@@ -144,7 +145,7 @@ def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return SystemUtils.cpu_usage()
@router.get("/cpu2", summary="获取当前CPU使用率API_TOKEN", response_model=int)
@router.get("/cpu2", summary="获取当前CPU使用率API_TOKEN", response_model=float)
def cpu2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前CPU使用率 API_TOKEN认证?token=xxx

View File

@@ -3,13 +3,13 @@ from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas import DiscoverSourceEventData
from app.schemas.types import ChainEventType, MediaType
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
router = APIRouter()
@@ -31,100 +31,100 @@ def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/bangumi", summary="探索Bangumi", response_model=List[schemas.MediaInfo])
def bangumi(type: Optional[int] = 2,
cat: Optional[int] = None,
sort: Optional[str] = 'rank',
year: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi(type: Optional[int] = 2,
cat: Optional[int] = None,
sort: Optional[str] = 'rank',
year: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
探索Bangumi
"""
medias = BangumiChain().discover(type=type, cat=cat, sort=sort, year=year,
limit=count, offset=(page - 1) * count)
medias = await BangumiChain().async_discover(type=type, cat=cat, sort=sort, year=year,
limit=count, offset=(page - 1) * count)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/douban_movies", summary="探索豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
movies = await DoubanChain().async_douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@router.get("/douban_tvs", summary="探索豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
tvs = await DoubanChain().async_douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@router.get("/tmdb_movies", summary="探索TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
movies = await TmdbChain().async_tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@router.get("/tmdb_tvs", summary="探索TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
tvs = await TmdbChain().async_tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []

View File

@@ -12,54 +12,54 @@ router = APIRouter()
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def douban_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return DoubanChain().person_detail(person_id=person_id)
return await DoubanChain().async_person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def douban_person_credits(person_id: int,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_person_credits(person_id: int,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = DoubanChain().person_credits(person_id=person_id, page=page)
medias = await DoubanChain().async_person_credits(person_id=person_id, page=page)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.MediaPerson])
def douban_credits(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_credits(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询演员阵容,type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
return DoubanChain().movie_credits(doubanid=doubanid)
return await DoubanChain().async_movie_credits(doubanid=doubanid)
elif mediatype == MediaType.TV:
return DoubanChain().tv_credits(doubanid=doubanid)
return await DoubanChain().async_tv_credits(doubanid=doubanid)
return []
@router.get("/recommend/{doubanid}/{type_name}", summary="豆瓣推荐电影/电视剧", response_model=List[schemas.MediaInfo])
def douban_recommend(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_recommend(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询推荐电影/电视剧,type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
medias = DoubanChain().movie_recommend(doubanid=doubanid)
medias = await DoubanChain().async_movie_recommend(doubanid=doubanid)
elif mediatype == MediaType.TV:
medias = DoubanChain().tv_recommend(doubanid=doubanid)
medias = await DoubanChain().async_tv_recommend(doubanid=doubanid)
else:
return []
if medias:
@@ -68,12 +68,12 @@ def douban_recommend(doubanid: str,
@router.get("/{doubanid}", summary="查询豆瓣详情", response_model=schemas.MediaInfo)
def douban_info(doubanid: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_info(doubanid: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询豆瓣媒体信息
"""
doubaninfo = DoubanChain().douban_info(doubanid=doubanid)
doubaninfo = await DoubanChain().async_douban_info(doubanid=doubanid)
if doubaninfo:
return MediaInfo(douban_info=doubaninfo).to_dict()
else:

View File

@@ -40,10 +40,10 @@ def download(
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
mediainfo.from_dict(media_in.model_dump())
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
torrentinfo.from_dict(torrent_in.model_dump())
# 手动下载始终使用选择的下载器
torrentinfo.site_downloader = downloader
# 上下文
@@ -64,6 +64,9 @@ def download(
@router.post("/add", summary="添加下载(不含媒体信息)", response_model=schemas.Response)
def add(
torrent_in: schemas.TorrentInfo,
tmdbid: Annotated[int | None, Body()] = None,
doubanid: Annotated[str | None, Body()] = None,
bangumiid: Annotated[int | None, Body()] = None,
downloader: Annotated[str | None, Body()] = None,
save_path: Annotated[str | None, Body()] = None,
current_user: User = Depends(get_current_active_user)) -> Any:
@@ -73,12 +76,12 @@ def add(
# 元数据
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
mediainfo = MediaChain().recognize_media(meta=metainfo)
mediainfo = MediaChain().recognize_media(meta=metainfo, tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid)
if not mediainfo:
return schemas.Response(success=False, message="无法识别媒体信息")
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
torrentinfo.from_dict(torrent_in.model_dump())
# 上下文
context = Context(
meta_info=metainfo,
@@ -116,7 +119,7 @@ def stop(hashString: str, name: Optional[str] = None,
@router.get("/clients", summary="查询可用下载器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询可用下载器
"""

View File

@@ -2,51 +2,52 @@ from typing import List, Any, Optional
import jieba
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
from app.chain.storage import StorageChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
from app.db import get_async_db, get_db
from app.db.models import User
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser
from app.schemas.types import EventType, MediaType
from app.db.user_oper import get_current_active_superuser_async, get_current_active_superuser
from app.schemas.types import EventType
router = APIRouter()
@router.get("/download", summary="查询下载历史记录", response_model=List[schemas.DownloadHistory])
def download_history(page: Optional[int] = 1,
count: Optional[int] = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def download_history(page: Optional[int] = 1,
count: Optional[int] = 30,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询下载历史记录
"""
return DownloadHistory.list_by_page(db, page, count)
return await DownloadHistory.async_list_by_page(db, page, count)
@router.delete("/download", summary="删除下载历史记录", response_model=schemas.Response)
def delete_download_history(history_in: schemas.DownloadHistory,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def delete_download_history(history_in: schemas.DownloadHistory,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除下载历史记录
"""
DownloadHistory.delete(db, history_in.id)
await DownloadHistory.async_delete(db, history_in.id)
return schemas.Response(success=True)
@router.get("/transfer", summary="查询整理记录", response_model=schemas.Response)
def transfer_history(title: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
status: Optional[bool] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def transfer_history(title: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
status: Optional[bool] = None,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询整理记录
"""
@@ -60,16 +61,16 @@ def transfer_history(title: Optional[str] = None,
if title:
words = jieba.cut(title, HMM=False)
title = "%".join(words)
total = TransferHistory.count_by_title(db, title=title, status=status)
result = TransferHistory.list_by_title(db, title=title, page=page,
count=count, status=status)
total = await TransferHistory.async_count_by_title(db, title=title, status=status)
result = await TransferHistory.async_list_by_title(db, title=title, page=page,
count=count, status=status)
else:
result = TransferHistory.list_by_page(db, page=page, count=count, status=status)
total = TransferHistory.count(db, status=status)
result = await TransferHistory.async_list_by_page(db, page=page, count=count, status=status)
total = await TransferHistory.async_count(db, status=status)
return schemas.Response(success=True,
data={
"list": result,
"list": [item.to_dict() for item in result],
"total": total,
})
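
The jieba step turns a free-text title into a %-joined pattern, so the title search matches each segmented word in order rather than the exact string. async_list_by_title is project code; the sketch below only shows the pattern construction:

import jieba

title = "流浪地球2"
words = jieba.cut(title, HMM=False)      # e.g. 流浪 / 地球 / 2
pattern = "%".join(words)                # "流浪%地球%2"
like_clause = f"%{pattern}%"             # assumed LIKE shape: words must appear in order
print(like_clause)
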
@@ -79,7 +80,7 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
deletesrc: Optional[bool] = False,
deletedest: Optional[bool] = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
删除整理记录
"""
@@ -89,7 +90,7 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
# 册除媒体库文件
if deletedest and history.dest_fileitem:
dest_fileitem = schemas.FileItem(**history.dest_fileitem)
StorageChain().delete_media_file(fileitem=dest_fileitem, mtype=MediaType(history.type))
StorageChain().delete_media_file(dest_fileitem)
# 删除源文件
if deletesrc and history.src_fileitem:
@@ -111,10 +112,10 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
@router.get("/empty/transfer", summary="清空整理记录", response_model=schemas.Response)
def delete_transfer_history(db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
async def empty_transfer_history(db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
清空整理记录
"""
TransferHistory.truncate(db)
await TransferHistory.async_truncate(db)
return schemas.Response(success=True)
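
These history endpoints now take an AsyncSession from get_async_db and await async_* helpers on the models. Those helpers are project code, so the following is only an assumed shape for one of them, written against SQLAlchemy 2.x:

from sqlalchemy import Integer, select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class DownloadHistoryDemo(Base):
    # Hypothetical table standing in for the real DownloadHistory model
    __tablename__ = "downloadhistory_demo"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)

    @classmethod
    async def async_list_by_page(cls, db: AsyncSession, page: int = 1, count: int = 30):
        # Offset/limit pagination, awaited so the event loop is never blocked
        result = await db.execute(
            select(cls).order_by(cls.id.desc()).offset((page - 1) * count).limit(count)
        )
        return result.scalars().all()
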

View File

@@ -8,8 +8,10 @@ from app import schemas
from app.chain.user import UserChain
from app.core import security
from app.core.config import settings
from app.helper.sites import SitesHelper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper # noqa
from app.helper.wallpaper import WallpaperHelper
from app.schemas.types import SystemConfigKey
router = APIRouter()
@@ -29,7 +31,10 @@ def login_access_token(
if not success:
raise HTTPException(status_code=401, detail=user_or_message)
# 用户等级
level = SitesHelper().auth_level
# 是否显示配置向导
show_wizard = not SystemConfigOper().get(SystemConfigKey.SetupWizardState) and not settings.ADVANCED_MODE
return schemas.Token(
access_token=security.create_access_token(
userid=user_or_message.id,
@@ -44,7 +49,8 @@ def login_access_token(
user_name=user_or_message.name,
avatar=user_or_message.avatar,
level=level,
permissions= user_or_message.permissions or {},
permissions=user_or_message.permissions or {},
widzard=show_wizard
)

View File

@@ -18,61 +18,61 @@ router = APIRouter()
@router.get("/recognize", summary="识别媒体信息(种子)", response_model=schemas.Context)
def recognize(title: str,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def recognize(title: str,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据标题、副标题识别媒体信息
"""
# 识别媒体信息
metainfo = MetaInfo(title, subtitle)
mediainfo = MediaChain().recognize_by_meta(metainfo)
mediainfo = await MediaChain().async_recognize_by_meta(metainfo)
if mediainfo:
return Context(meta_info=metainfo, media_info=mediainfo).to_dict()
return schemas.Context()
@router.get("/recognize2", summary="识别种子媒体信息API_TOKEN", response_model=schemas.Context)
def recognize2(_: Annotated[str, Depends(verify_apitoken)],
title: str,
subtitle: Optional[str] = None
) -> Any:
async def recognize2(_: Annotated[str, Depends(verify_apitoken)],
title: str,
subtitle: Optional[str] = None
) -> Any:
"""
根据标题、副标题识别媒体信息 API_TOKEN认证?token=xxx
"""
# 识别媒体信息
return recognize(title, subtitle)
return await recognize(title, subtitle)
@router.get("/recognize_file", summary="识别媒体信息(文件)", response_model=schemas.Context)
def recognize_file(path: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def recognize_file(path: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据文件路径识别媒体信息
"""
# 识别媒体信息
context = MediaChain().recognize_by_path(path)
context = await MediaChain().async_recognize_by_path(path)
if context:
return context.to_dict()
return schemas.Context()
@router.get("/recognize_file2", summary="识别文件媒体信息API_TOKEN", response_model=schemas.Context)
def recognize_file2(path: str,
_: Annotated[str, Depends(verify_apitoken)]) -> Any:
async def recognize_file2(path: str,
_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
根据文件路径识别媒体信息 API_TOKEN认证?token=xxx
"""
# 识别媒体信息
return recognize_file(path)
return await recognize_file(path)
@router.get("/search", summary="搜索媒体/人物信息", response_model=List[dict])
def search(title: str,
type: Optional[str] = "media",
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search(title: str,
type: Optional[str] = "media",
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
模糊搜索媒体/人物信息列表 media媒体信息person人物信息
"""
@@ -86,14 +86,15 @@ def search(title: str,
return obj.source
result = []
media_chain = MediaChain()
if type == "media":
_, medias = MediaChain().search(title=title)
_, medias = await media_chain.async_search(title=title)
if medias:
result = [media.to_dict() for media in medias]
elif type == "collection":
result = MediaChain().search_collections(name=title)
result = await media_chain.async_search_collections(name=title)
else:
result = MediaChain().search_persons(name=title)
result = await media_chain.async_search_persons(name=title)
if result:
# 按设置的顺序对结果进行排序
setting_order = settings.SEARCH_SOURCE.split(',') or []
@@ -101,7 +102,8 @@ def search(title: str,
for index, source in enumerate(setting_order):
sort_order[source] = index
result = sorted(result, key=lambda x: sort_order.get(__get_source(x), 4))
return result[(page - 1) * count:page * count]
return result[(page - 1) * count:page * count]
return []
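
The ordering step maps each configured search source to its index and sorts the mixed results accordingly, with unknown sources ranked 4 so they fall to the end. With illustrative data:

setting_order = "themoviedb,douban,bangumi".split(",")   # assumed SEARCH_SOURCE value
sort_order = {source: index for index, source in enumerate(setting_order)}

results = [
    {"title": "A", "source": "douban"},
    {"title": "B", "source": "themoviedb"},
    {"title": "C", "source": "unknown"},
]
results = sorted(results, key=lambda x: sort_order.get(x["source"], 4))
print([r["source"] for r in results])    # ['themoviedb', 'douban', 'unknown']
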
@router.post("/scrape/{storage}", summary="刮削媒体信息", response_model=schemas.Response)
@@ -123,13 +125,13 @@ def scrape(fileitem: schemas.FileItem,
if storage == "local":
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
# 手动刮削
# 手动刮削 (暂时使用同步版本,可以后续优化为异步)
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo, overwrite=True)
return schemas.Response(success=True, message=f"{fileitem.path} 刮削完成")
@router.get("/category", summary="查询自动分类配置", response_model=dict)
def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询自动分类配置
"""
@@ -137,37 +139,37 @@ def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/group/seasons/{episode_group}", summary="查询剧集组季信息", response_model=List[schemas.MediaSeason])
def group_seasons(episode_group: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def group_seasons(episode_group: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询剧集组季信息themoviedb
"""
return TmdbChain().tmdb_group_seasons(group_id=episode_group)
return await TmdbChain().async_tmdb_group_seasons(group_id=episode_group)
@router.get("/groups/{tmdbid}", summary="查询媒体剧集组", response_model=List[dict])
def seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def groups(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体剧集组列表themoviedb
"""
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
mediainfo = await MediaChain().async_recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
if not mediainfo:
return []
return mediainfo.episode_groups
@router.get("/seasons", summary="查询媒体季信息", response_model=List[schemas.MediaSeason])
def seasons(mediaid: Optional[str] = None,
title: Optional[str] = None,
year: str = None,
season: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def seasons(mediaid: Optional[str] = None,
title: Optional[str] = None,
year: str = None,
season: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体季信息
"""
if mediaid:
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
seasons_info = await TmdbChain().async_tmdb_seasons(tmdbid=tmdbid)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
@@ -176,17 +178,17 @@ def seasons(mediaid: Optional[str] = None,
meta = MetaInfo(title)
if year:
meta.year = year
mediainfo = MediaChain().recognize_media(meta, mtype=MediaType.TV)
mediainfo = await MediaChain().async_recognize_media(meta, mtype=MediaType.TV)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
seasons_info = TmdbChain().tmdb_seasons(tmdbid=mediainfo.tmdb_id)
seasons_info = await TmdbChain().async_tmdb_seasons(tmdbid=mediainfo.tmdb_id)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
return seasons_info
else:
sea = season or 1
return schemas.MediaSeason(
return [schemas.MediaSeason(
season_number=sea,
poster_path=mediainfo.poster_path,
name=f"{sea}",
@@ -194,39 +196,40 @@ def seasons(mediaid: Optional[str] = None,
overview=mediainfo.overview,
vote_average=mediainfo.vote_average,
episode_count=mediainfo.number_of_episodes
)
)]
return []
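
Wrapping the fallback MediaSeason in a list matters because the route declares response_model=List[schemas.MediaSeason]; a bare object would fail response validation. The same pitfall in isolation:

from typing import List
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Season(BaseModel):
    season_number: int
    name: str

@app.get("/seasons_demo", response_model=List[Season])
async def seasons_demo() -> List[Season]:
    # Must return a list: returning a bare Season here would fail response validation
    return [Season(season_number=1, name="1")]
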
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体ID查询themoviedb或豆瓣媒体信息type_name: 电影/电视剧
"""
mtype = MediaType(type_name)
mediainfo = None
mediachain = MediaChain()
if mediaid.startswith("tmdb:"):
mediainfo = MediaChain().recognize_media(tmdbid=int(mediaid[5:]), mtype=mtype)
mediainfo = await mediachain.async_recognize_media(tmdbid=int(mediaid[5:]), mtype=mtype)
elif mediaid.startswith("douban:"):
mediainfo = MediaChain().recognize_media(doubanid=mediaid[7:], mtype=mtype)
mediainfo = await mediachain.async_recognize_media(doubanid=mediaid[7:], mtype=mtype)
elif mediaid.startswith("bangumi:"):
mediainfo = MediaChain().recognize_media(bangumiid=int(mediaid[8:]), mtype=mtype)
mediainfo = await mediachain.async_recognize_media(bangumiid=int(mediaid[8:]), mtype=mtype)
else:
# 广播事件解析媒体信息
event_data = MediaRecognizeConvertEventData(
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
event = await eventmanager.async_send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data and event.event_data.media_dict:
event_data: MediaRecognizeConvertEventData = event.event_data
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = MediaChain().recognize_media(tmdbid=new_id, mtype=mtype)
mediainfo = await mediachain.async_recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = MediaChain().recognize_media(doubanid=new_id, mtype=mtype)
mediainfo = await mediachain.async_recognize_media(doubanid=new_id, mtype=mtype)
elif title:
# 使用名称识别兜底
meta = MetaInfo(title)
@@ -234,10 +237,10 @@ def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: str
meta.year = year
if mtype:
meta.type = mtype
mediainfo = MediaChain().recognize_media(meta=meta)
mediainfo = await mediachain.async_recognize_media(meta=meta)
# 识别
if mediainfo:
MediaChain().obtain_images(mediainfo)
await mediachain.async_obtain_images(mediainfo)
return mediainfo.to_dict()
return schemas.MediaInfo()
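
detail() branches on the mediaid prefix (tmdb:, douban:, bangumi:) before falling back to the MediaRecognizeConvert event and finally a title lookup. A hypothetical helper covering just the prefix-parsing part:

from typing import Optional, Tuple

def parse_mediaid(mediaid: str) -> Tuple[Optional[str], str]:
    # Hypothetical helper: split "tmdb:603"-style IDs into (source, raw id)
    for prefix in ("tmdb:", "douban:", "bangumi:"):
        if mediaid.startswith(prefix):
            return prefix[:-1], mediaid[len(prefix):]
    return None, mediaid     # unknown source: handled by the event broadcast / title fallback

print(parse_mediaid("tmdb:603"))        # ('tmdb', '603')
print(parse_mediaid("douban:1292052"))  # ('douban', '1292052')
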

View File

@@ -1,7 +1,7 @@
from typing import Any, List, Dict, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from sqlalchemy.ext.asyncio import AsyncSession
from app import schemas
from app.chain.download import DownloadChain
@@ -9,7 +9,7 @@ from app.chain.mediaserver import MediaServerChain
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
from app.db import get_async_db
from app.db.mediaserver_oper import MediaServerOper
from app.db.models import MediaServerItem
from app.db.systemconfig_oper import SystemConfigOper
@@ -43,13 +43,13 @@ def play_item(itemid: str, _: schemas.TokenPayload = Depends(verify_token)) -> s
@router.get("/exists", summary="查询本地是否存在(数据库)", response_model=schemas.Response)
def exists_local(title: Optional[str] = None,
year: Optional[str] = None,
mtype: Optional[str] = None,
tmdbid: Optional[int] = None,
season: Optional[int] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def exists_local(title: Optional[str] = None,
year: Optional[str] = None,
mtype: Optional[str] = None,
tmdbid: Optional[int] = None,
season: Optional[int] = None,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
判断本地是否存在
"""
@@ -59,7 +59,7 @@ def exists_local(title: Optional[str] = None,
# 返回对象
ret_info = {}
# 本地数据库是否存在
exist: MediaServerItem = MediaServerOper(db).exists(
exist: MediaServerItem = await MediaServerOper(db).async_exists(
title=meta.name, year=year, mtype=mtype, tmdbid=tmdbid, season=season
)
if exist:
@@ -79,7 +79,7 @@ def exists(media_in: schemas.MediaInfo,
"""
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
mediainfo.from_dict(media_in.model_dump())
existsinfo: schemas.ExistMediaInfo = MediaServerChain().media_exists(mediainfo=mediainfo)
if not existsinfo:
return []
@@ -108,7 +108,7 @@ def not_exists(media_in: schemas.MediaInfo,
meta.year = media_in.year
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
mediainfo.from_dict(media_in.model_dump())
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if mediainfo.type == MediaType.MOVIE:
@@ -148,7 +148,7 @@ def library(server: str, hidden: Optional[bool] = False,
@router.get("/clients", summary="查询可用媒体服务器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询可用媒体服务器
"""

View File

@@ -3,14 +3,14 @@ from typing import Union, Any, List, Optional
from fastapi import APIRouter, BackgroundTasks, Depends, Request
from pywebpush import WebPushException, webpush
from sqlalchemy.orm import Session
from sqlalchemy.ext.asyncio import AsyncSession
from starlette.responses import PlainTextResponse
from app import schemas
from app.chain.message import MessageChain
from app.core.config import settings, global_vars
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db import get_async_db
from app.db.models import User
from app.db.models.message import Message
from app.db.user_oper import get_current_active_superuser
@@ -58,15 +58,15 @@ def web_message(text: str, current_user: User = Depends(get_current_active_super
@router.get("/web", summary="获取WEB消息", response_model=List[dict])
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: Session = Depends(get_db),
page: Optional[int] = 1,
count: Optional[int] = 20):
async def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: AsyncSession = Depends(get_async_db),
page: Optional[int] = 1,
count: Optional[int] = 20):
"""
获取WEB消息列表
"""
ret_messages = []
messages = Message.list_by_page(db, page=page, count=count)
messages = await Message.async_list_by_page(db, page=page, count=count)
for message in messages:
try:
ret_messages.append(message.to_dict())
@@ -128,11 +128,11 @@ def incoming_verify(token: Optional[str] = None, echostr: Optional[str] = None,
@router.post("/webpush/subscribe", summary="客户端webpush通知订阅", response_model=schemas.Response)
def subscribe(subscription: schemas.Subscription, _: schemas.TokenPayload = Depends(verify_token)):
async def subscribe(subscription: schemas.Subscription, _: schemas.TokenPayload = Depends(verify_token)):
"""
客户端webpush通知订阅
"""
subinfo = subscription.dict()
subinfo = subscription.model_dump()
if subinfo not in global_vars.get_subscriptions():
global_vars.push_subscription(subinfo)
logger.debug(f"通知订阅成功: {subinfo}")
@@ -148,7 +148,7 @@ def send_notification(payload: schemas.SubscriptionMessage, _: schemas.TokenPayl
try:
webpush(
subscription_info=sub,
data=json.dumps(payload.dict()),
data=json.dumps(payload.model_dump()),
vapid_private_key=settings.VAPID.get("privateKey"),
vapid_claims={
"sub": settings.VAPID.get("subject")

View File

@@ -2,17 +2,21 @@ import mimetypes
import shutil
from typing import Annotated, Any, List, Optional
import aiofiles
from anyio import Path as AsyncPath
from fastapi import APIRouter, Depends, Header, HTTPException
from fastapi.concurrency import run_in_threadpool
from starlette import status
from starlette.responses import FileResponse
from starlette.responses import StreamingResponse
from app import schemas
from app.command import Command
from app.core.config import settings
from app.core.plugin import PluginManager
from app.core.security import verify_apikey, verify_token
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.factory import app
from app.helper.plugin import PluginHelper
from app.log import logger
@@ -136,13 +140,14 @@ def register_plugin(plugin_id: str):
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
state: Optional[str] = "all", force: bool = False) -> List[schemas.Plugin]:
async def all_plugins(_: User = Depends(get_current_active_superuser_async),
state: Optional[str] = "all", force: bool = False) -> List[schemas.Plugin]:
"""
查询所有插件清单包括本地插件和在线插件插件状态installed, market, all
"""
# 本地插件
local_plugins = PluginManager().get_local_plugins()
plugin_manager = PluginManager()
local_plugins = plugin_manager.get_local_plugins()
# 已安装插件
installed_plugins = [plugin for plugin in local_plugins if plugin.installed]
if state == "installed":
@@ -151,7 +156,7 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
# 未安装的本地插件
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
# 在线插件
online_plugins = PluginManager().get_online_plugins(force)
online_plugins = await plugin_manager.async_get_online_plugins(force)
if not online_plugins:
# 没有获取在线插件
if state == "market":
@@ -184,7 +189,7 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def installed(_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
查询用户已安装插件清单
"""
@@ -192,15 +197,15 @@ def installed(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -
@router.get("/statistic", summary="插件安装统计", response_model=dict)
def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
插件安装统计
"""
return PluginHelper().get_statistic()
return await PluginHelper().async_get_statistic()
@router.get("/reload/{plugin_id}", summary="重新加载插件", response_model=schemas.Response)
def reload_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
def reload_plugin(plugin_id: str, _: User = Depends(get_current_active_superuser)) -> Any:
"""
重新加载插件
"""
@@ -212,22 +217,23 @@ def reload_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install(plugin_id: str,
repo_url: Optional[str] = "",
force: Optional[bool] = False,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def install(plugin_id: str,
repo_url: Optional[str] = "",
force: Optional[bool] = False,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
安装插件
"""
# 已安装插件
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# 首先检查插件是否已经存在,并且是否强制安装,否则只进行安装统计
plugin_helper = PluginHelper()
if not force and plugin_id in PluginManager().get_plugin_ids():
PluginHelper().install_reg(pid=plugin_id)
await plugin_helper.async_install_reg(pid=plugin_id)
else:
# 插件不存在或需要强制安装,下载安装并注册插件
if repo_url:
state, msg = PluginHelper().install(pid=plugin_id, repo_url=repo_url)
state, msg = await plugin_helper.async_install(pid=plugin_id, repo_url=repo_url)
# 安装失败则直接响应
if not state:
return schemas.Response(success=False, message=msg)
@@ -238,14 +244,14 @@ def install(plugin_id: str,
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
# 保存设置
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
await SystemConfigOper().async_set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重新加载插件
reload_plugin(plugin_id)
await run_in_threadpool(reload_plugin, plugin_id)
return schemas.Response(success=True)
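
reload_plugin stays synchronous, so the now-async install pushes it to a worker thread via run_in_threadpool instead of calling it directly on the event loop. The pattern in isolation:

import time
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool

app = FastAPI()

def reload_demo(plugin_id: str) -> str:
    time.sleep(0.5)                     # stands in for blocking reload work
    return plugin_id

@app.get("/install_demo/{plugin_id}")
async def install_demo(plugin_id: str):
    # The blocking helper runs in Starlette's threadpool; the event loop stays free
    result = await run_in_threadpool(reload_demo, plugin_id)
    return {"reloaded": result}
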
@router.get("/remotes", summary="获取插件联邦组件列表", response_model=List[dict])
def remotes(token: str) -> Any:
async def remotes(token: str) -> Any:
"""
获取插件联邦组件列表
"""
@@ -256,11 +262,12 @@ def remotes(token: str) -> Any:
@router.get("/form/{plugin_id}", summary="获取插件表单页面")
def plugin_form(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
_: User = Depends(get_current_active_superuser)) -> dict:
"""
根据插件ID获取插件配置表单或Vue组件URL
"""
plugin_instance = PluginManager().running_plugins.get(plugin_id)
plugin_manager = PluginManager()
plugin_instance = plugin_manager.running_plugins.get(plugin_id)
if not plugin_instance:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"插件 {plugin_id} 不存在或未加载")
@@ -271,7 +278,7 @@ def plugin_form(plugin_id: str,
return {
"render_mode": render_mode,
"conf": conf,
"model": PluginManager().get_plugin_config(plugin_id) or model
"model": plugin_manager.get_plugin_config(plugin_id) or model
}
except Exception as e:
logger.error(f"插件 {plugin_id} 调用方法 get_form 出错: {str(e)}")
@@ -279,7 +286,7 @@ def plugin_form(plugin_id: str,
@router.get("/page/{plugin_id}", summary="获取插件数据页面")
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
def plugin_page(plugin_id: str, _: User = Depends(get_current_active_superuser)) -> dict:
"""
根据插件ID获取插件数据页面
"""
@@ -328,7 +335,7 @@ def plugin_dashboard(plugin_id: str, user_agent: Annotated[str | None, Header()]
@router.get("/reset/{plugin_id}", summary="重置插件配置及数据", response_model=schemas.Response)
def reset_plugin(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
根据插件ID重置插件配置及数据
"""
@@ -343,7 +350,7 @@ def reset_plugin(plugin_id: str,
@router.get("/file/{plugin_id}/{filepath:path}", summary="获取插件静态文件")
def plugin_static_file(plugin_id: str, filepath: str):
async def plugin_static_file(plugin_id: str, filepath: str):
"""
获取插件静态文件
"""
@@ -352,11 +359,11 @@ def plugin_static_file(plugin_id: str, filepath: str):
logger.warning(f"Static File API: Path traversal attempt detected: {plugin_id}/{filepath}")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Forbidden")
plugin_base_dir = settings.ROOT_PATH / "app" / "plugins" / plugin_id.lower()
plugin_base_dir = AsyncPath(settings.ROOT_PATH) / "app" / "plugins" / plugin_id.lower()
plugin_file_path = plugin_base_dir / filepath
if not plugin_file_path.exists():
if not await plugin_file_path.exists():
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"{plugin_file_path} 不存在")
if not plugin_file_path.is_file():
if not await plugin_file_path.is_file():
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail=f"{plugin_file_path} 不是文件")
# 判断 MIME 类型
@@ -371,14 +378,25 @@ def plugin_static_file(plugin_id: str, filepath: str):
response_type = 'application/octet-stream'
try:
return FileResponse(plugin_file_path, media_type=response_type)
# 异步生成器函数,用于流式读取文件
async def file_generator():
async with aiofiles.open(plugin_file_path, mode='rb') as file:
# 8KB 块大小
while chunk := await file.read(8192):
yield chunk
return StreamingResponse(
file_generator(),
media_type=response_type,
headers={"Content-Disposition": f"inline; filename={plugin_file_path.name}"}
)
except Exception as e:
logger.error(f"Error creating/sending FileResponse for {plugin_file_path}: {e}", exc_info=True)
logger.error(f"Error creating/sending StreamingResponse for {plugin_file_path}: {e}", exc_info=True)
raise HTTPException(status_code=500, detail="Internal Server Error")
@router.get("/folders", summary="获取插件文件夹配置", response_model=dict)
def get_plugin_folders(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
async def get_plugin_folders(_: User = Depends(get_current_active_superuser_async)) -> dict:
"""
获取插件文件夹分组配置
"""
@@ -391,7 +409,7 @@ def get_plugin_folders(_: schemas.TokenPayload = Depends(get_current_active_supe
@router.post("/folders", summary="保存插件文件夹配置", response_model=schemas.Response)
def save_plugin_folders(folders: dict, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def save_plugin_folders(folders: dict, _: User = Depends(get_current_active_superuser_async)) -> Any:
"""
保存插件文件夹分组配置
"""
@@ -404,7 +422,8 @@ def save_plugin_folders(folders: dict, _: schemas.TokenPayload = Depends(get_cur
@router.post("/folders/{folder_name}", summary="创建插件文件夹", response_model=schemas.Response)
def create_plugin_folder(folder_name: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def create_plugin_folder(folder_name: str,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
创建新的插件文件夹
"""
@@ -418,33 +437,65 @@ def create_plugin_folder(folder_name: str, _: schemas.TokenPayload = Depends(get
@router.delete("/folders/{folder_name}", summary="删除插件文件夹", response_model=schemas.Response)
def delete_plugin_folder(folder_name: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def delete_plugin_folder(folder_name: str,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
删除插件文件夹
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
if folder_name in folders:
del folders[folder_name]
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
await SystemConfigOper().async_set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 删除成功")
else:
return schemas.Response(success=False, message=f"文件夹 '{folder_name}' 不存在")
@router.put("/folders/{folder_name}/plugins", summary="更新文件夹中的插件", response_model=schemas.Response)
def update_folder_plugins(folder_name: str, plugin_ids: List[str], _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def update_folder_plugins(folder_name: str, plugin_ids: List[str],
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
更新指定文件夹中的插件列表
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
folders[folder_name] = plugin_ids
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
await SystemConfigOper().async_set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 中的插件已更新")
@router.post("/clone/{plugin_id}", summary="创建插件分身", response_model=schemas.Response)
def clone_plugin(plugin_id: str,
clone_data: dict,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
创建插件分身
"""
try:
success, message = PluginManager().clone_plugin(
plugin_id=plugin_id,
suffix=clone_data.get("suffix", ""),
name=clone_data.get("name", ""),
description=clone_data.get("description", ""),
version=clone_data.get("version", ""),
icon=clone_data.get("icon", "")
)
if success:
# 注册插件服务
reload_plugin(message)
# 将分身插件添加到原插件所在的文件夹中
_add_clone_to_plugin_folder(plugin_id, message)
return schemas.Response(success=True, message="插件分身创建成功")
else:
return schemas.Response(success=False, message=message)
except Exception as e:
logger.error(f"创建插件分身失败:{str(e)}")
return schemas.Response(success=False, message=f"创建插件分身失败:{str(e)}")
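
Relocating /clone/{plugin_id} ahead of the generic /{plugin_id} handlers keeps the specific paths grouped before the catch-all ones, which matters because FastAPI matches routes in declaration order (the same reason /installed, /statistic and /form/{plugin_id} are declared before GET /{plugin_id}). A minimal illustration of the rule:

from fastapi import FastAPI

app = FastAPI()

# Declared first, so GET /plugins/installed is matched literally...
@app.get("/plugins/installed")
async def installed_demo():
    return ["DemoPlugin"]

# ...otherwise this parameterised route would capture it with plugin_id="installed"
@app.get("/plugins/{plugin_id}")
async def plugin_config_demo(plugin_id: str):
    return {"id": plugin_id}
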
@router.get("/{plugin_id}", summary="获取插件配置")
def plugin_config(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
async def plugin_config(plugin_id: str,
_: User = Depends(get_current_active_superuser_async)) -> dict:
"""
根据插件ID获取插件配置信息
"""
@@ -453,7 +504,7 @@ def plugin_config(plugin_id: str,
@router.put("/{plugin_id}", summary="更新插件配置", response_model=schemas.Response)
def set_plugin_config(plugin_id: str, conf: dict,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
更新插件配置
"""
@@ -469,7 +520,7 @@ def set_plugin_config(plugin_id: str, conf: dict,
@router.delete("/{plugin_id}", summary="卸载插件", response_model=schemas.Response)
def uninstall_plugin(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
卸载插件
"""
@@ -507,36 +558,6 @@ def uninstall_plugin(plugin_id: str,
return schemas.Response(success=True)
@router.post("/clone/{plugin_id}", summary="创建插件分身", response_model=schemas.Response)
def clone_plugin(plugin_id: str,
clone_data: dict,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
创建插件分身
"""
try:
success, message = PluginManager().clone_plugin(
plugin_id=plugin_id,
suffix=clone_data.get("suffix", ""),
name=clone_data.get("name", ""),
description=clone_data.get("description", ""),
version=clone_data.get("version", ""),
icon=clone_data.get("icon", "")
)
if success:
# 注册插件服务
reload_plugin(message)
# 将分身插件添加到原插件所在的文件夹中
_add_clone_to_plugin_folder(plugin_id, message)
return schemas.Response(success=True, message="插件分身创建成功")
else:
return schemas.Response(success=False, message=message)
except Exception as e:
logger.error(f"创建插件分身失败:{str(e)}")
return schemas.Response(success=False, message=f"创建插件分身失败:{str(e)}")
def _add_clone_to_plugin_folder(original_plugin_id: str, clone_plugin_id: str):
"""
将分身插件添加到原插件所在的文件夹中

View File

@@ -3,11 +3,11 @@ from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.recommend import RecommendChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas.types import ChainEventType
from app.chain.recommend import RecommendChain
from app.schemas import RecommendSourceEventData
from app.schemas.types import ChainEventType
router = APIRouter()
@@ -29,163 +29,163 @@ def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/bangumi_calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def bangumi_calendar(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def bangumi_calendar(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
return RecommendChain().bangumi_calendar(page=page, count=count)
return await RecommendChain().async_bangumi_calendar(page=page, count=count)
@router.get("/douban_showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def douban_showing(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_showing(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣正在热映
"""
return RecommendChain().douban_movie_showing(page=page, count=count)
return await RecommendChain().async_douban_movie_showing(page=page, count=count)
@router.get("/douban_movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
return RecommendChain().douban_movies(sort=sort, tags=tags, page=page, count=count)
return await RecommendChain().async_douban_movies(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_tvs(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
def douban_movie_top250(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_movie_top250(page=page, count=count)
@router.get("/douban_tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_chinese(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
中国每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_chinese(page=page, count=count)
@router.get("/douban_tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_global(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
全球每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_global(page=page, count=count)
@router.get("/douban_tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
def douban_tv_animation(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
return RecommendChain().douban_tv_animation(page=page, count=count)
@router.get("/douban_movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
def douban_movie_hot(page: Optional[int] = 1,
async def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return await RecommendChain().async_douban_tvs(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
async def douban_movie_top250(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return await RecommendChain().async_douban_movie_top250(page=page, count=count)
@router.get("/douban_tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
async def douban_tv_weekly_chinese(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
中国每周剧集口碑榜
"""
return await RecommendChain().async_douban_tv_weekly_chinese(page=page, count=count)
@router.get("/douban_tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
async def douban_tv_weekly_global(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
全球每周剧集口碑榜
"""
return await RecommendChain().async_douban_tv_weekly_global(page=page, count=count)
@router.get("/douban_tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
async def douban_tv_animation(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
return await RecommendChain().async_douban_tv_animation(page=page, count=count)
@router.get("/douban_movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
async def douban_movie_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电影
"""
return RecommendChain().douban_movie_hot(page=page, count=count)
return await RecommendChain().async_douban_movie_hot(page=page, count=count)
@router.get("/douban_tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
def douban_tv_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def douban_tv_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电视剧
"""
return RecommendChain().douban_tv_hot(page=page, count=count)
return await RecommendChain().async_douban_tv_hot(page=page, count=count)
@router.get("/tmdb_movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
return RecommendChain().tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return await RecommendChain().async_tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
return RecommendChain().tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return await RecommendChain().async_tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
def tmdb_trending(page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_trending(page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
TMDB流行趋势
"""
return RecommendChain().tmdb_trending(page=page)
return await RecommendChain().async_tmdb_trending(page=page)
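
Every recommendation endpoint is now a thin async def wrapper over an awaited chain call. One practical consequence, sketched here with stand-in coroutines rather than project code, is that callers already on the event loop can fetch several sources concurrently:

import asyncio

async def fetch_source(name: str, delay: float) -> str:
    await asyncio.sleep(delay)          # stands in for an awaited RecommendChain call
    return f"{name}: done"

async def main():
    # The three "sources" run concurrently instead of one after another
    results = await asyncio.gather(
        fetch_source("bangumi_calendar", 0.3),
        fetch_source("douban_movie_hot", 0.2),
        fetch_source("tmdb_trending", 0.1),
    )
    print(results)

asyncio.run(main())
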

View File

@@ -16,23 +16,23 @@ router = APIRouter()
@router.get("/last", summary="查询搜索结果", response_model=List[schemas.Context])
def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询搜索结果
"""
torrents = SearchChain().last_search_results()
torrents = await SearchChain().async_last_search_results() or []
return [torrent.to_dict() for torrent in torrents]
@router.get("/media/{mediaid}", summary="精确搜索资源", response_model=schemas.Response)
def search_by_id(mediaid: str,
mtype: Optional[str] = None,
area: Optional[str] = "title",
title: Optional[str] = None,
year: Optional[str] = None,
season: Optional[str] = None,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search_by_id(mediaid: str,
mtype: Optional[str] = None,
area: Optional[str] = "title",
title: Optional[str] = None,
year: Optional[str] = None,
season: Optional[str] = None,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID/豆瓣ID精确搜索站点资源 tmdb:/douban:/bangumi:
"""
@@ -49,55 +49,59 @@ def search_by_id(mediaid: str,
else:
site_list = None
torrents = None
media_chain = MediaChain()
search_chain = SearchChain()
# 根据前缀识别媒体ID
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid.replace("tmdb:", ""))
if settings.RECOGNIZE_SOURCE == "douban":
# 通过TMDBID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=media_type)
doubaninfo = await media_chain.async_get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=media_type)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
torrents = await search_chain.async_search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
torrents = await search_chain.async_search_by_id(tmdbid=tmdbid, mtype=media_type, area=area,
season=media_season,
sites=site_list, cache_local=True)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过豆瓣ID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=media_type)
tmdbinfo = await media_chain.async_get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=media_type)
if tmdbinfo:
if tmdbinfo.get('season') and not media_season:
media_season = tmdbinfo.get('season')
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
torrents = await search_chain.async_search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
torrents = await search_chain.async_search_by_id(doubanid=doubanid, mtype=media_type, area=area,
season=media_season,
sites=site_list, cache_local=True)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过BangumiID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
tmdbinfo = await media_chain.async_get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
torrents = await search_chain.async_search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
# 通过BangumiID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
doubaninfo = await media_chain.async_get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
torrents = await search_chain.async_search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
@@ -106,18 +110,18 @@ def search_by_id(mediaid: str,
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
event = await eventmanager.async_send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
search_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
torrents = await search_chain.async_search_by_id(tmdbid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
elif event_data.convert_type == "douban":
torrents = SearchChain().search_by_id(doubanid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
torrents = await search_chain.async_search_by_id(doubanid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
else:
if not title:
return schemas.Response(success=False, message="未知的媒体ID")
@@ -130,14 +134,16 @@ def search_by_id(mediaid: str,
if media_season:
meta.type = MediaType.TV
meta.begin_season = media_season
mediainfo = MediaChain().recognize_media(meta=meta)
mediainfo = await media_chain.async_recognize_media(meta=meta)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
torrents = await search_chain.async_search_by_id(tmdbid=mediainfo.tmdb_id, mtype=media_type,
area=area,
season=media_season, cache_local=True)
else:
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
torrents = await search_chain.async_search_by_id(doubanid=mediainfo.douban_id, mtype=media_type,
area=area,
season=media_season, cache_local=True)
# 返回搜索结果
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
@@ -146,16 +152,18 @@ def search_by_id(mediaid: str,
@router.get("/title", summary="模糊搜索资源", response_model=schemas.Response)
def search_by_title(keyword: Optional[str] = None,
page: Optional[int] = 0,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def search_by_title(keyword: Optional[str] = None,
page: Optional[int] = 0,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据名称模糊搜索站点资源,支持分页,关键词为空是返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None,
cache_local=True)
torrents = await SearchChain().async_search_by_title(
title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None,
cache_local=True
)
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
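
Both search routes take sites as a comma-separated string of site IDs and turn it into a list of ints, skipping empty segments so a trailing comma is harmless. The parsing in isolation:

from typing import List, Optional

def parse_sites(sites: Optional[str]) -> Optional[List[int]]:
    # None or an empty string means "search all sites"
    return [int(site) for site in sites.split(",") if site] if sites else None

print(parse_sites("1,3,7"))   # [1, 3, 7]
print(parse_sites("5,"))      # [5]
print(parse_sites(None))      # None
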

View File

@@ -1,7 +1,7 @@
from typing import List, Any, Dict, Optional
from app.helper.sites import SitesHelper
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from starlette.background import BackgroundTasks
@@ -10,10 +10,10 @@ from app.api.endpoints.plugin import register_plugin_api
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.command import Command
from app.core.event import EventManager
from app.core.event import eventmanager
from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.db import get_db
from app.db import get_db, get_async_db
from app.db.models import User
from app.db.models.site import Site
from app.db.models.siteicon import SiteIcon
@@ -21,7 +21,8 @@ from app.db.models.sitestatistic import SiteStatistic
from app.db.models.siteuserdata import SiteUserData
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.helper.sites import SitesHelper # noqa
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey, EventType
from app.utils.string import StringUtils
@@ -30,20 +31,20 @@ router = APIRouter()
@router.get("/", summary="所有站点", response_model=List[schemas.Site])
def read_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> List[dict]:
async def read_sites(db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser)) -> List[dict]:
"""
获取站点列表
"""
return Site.list_order_by_pri(db)
return await Site.async_list_order_by_pri(db)
@router.post("/", summary="新增站点", response_model=schemas.Response)
def add_site(
async def add_site(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
site_in: schemas.Site,
_: schemas.TokenPayload = Depends(get_current_active_superuser)
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
新增站点
@@ -53,10 +54,10 @@ def add_site(
if SitesHelper().auth_level < 2:
return schemas.Response(success=False, message="用户未通过认证,无法使用站点功能!")
domain = StringUtils.get_url_domain(site_in.url)
site_info = SitesHelper().get_indexer(domain)
site_info = await SitesHelper().async_get_indexer(domain)
if not site_info:
return schemas.Response(success=False, message="该站点不支持,请检查站点域名是否正确")
if Site.get_by_domain(db, domain):
if await Site.async_get_by_domain(db, domain):
return schemas.Response(success=False, message=f"{domain} 站点己存在")
# 保存站点信息
site_in.domain = domain
@@ -66,42 +67,42 @@ def add_site(
site_in.name = site_info.get("name")
site_in.id = None
site_in.public = 1 if site_info.get("public") else 0
site = Site(**site_in.dict())
site = Site(**site_in.model_dump())
site.create(db)
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": domain
})
return schemas.Response(success=True)
@router.put("/", summary="更新站点", response_model=schemas.Response)
def update_site(
async def update_site(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
site_in: schemas.Site,
_: schemas.TokenPayload = Depends(get_current_active_superuser)
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
更新站点信息
"""
site = Site.get(db, site_in.id)
site = await Site.async_get(db, site_in.id)
if not site:
return schemas.Response(success=False, message="站点不存在")
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site.update(db, site_in.dict())
await site.async_update(db, site_in.model_dump())
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": site_in.domain
})
return schemas.Response(success=True)
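
update_site trims the submitted URL down to scheme plus host before saving. StringUtils.get_url_netloc is project code, but the effect can be sketched with the standard library:

from urllib.parse import urlparse

def normalize_site_url(url: str) -> str:
    parts = urlparse(url)
    # Keep only scheme and host, always ending in a single slash
    return f"{parts.scheme}://{parts.netloc}/"

print(normalize_site_url("https://example-tracker.org/torrents.php?cat=401"))
# -> https://example-tracker.org/
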
@router.get("/cookiecloud", summary="CookieCloud同步", response_model=schemas.Response)
def cookie_cloud_sync(background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def cookie_cloud_sync(background_tasks: BackgroundTasks,
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
运行CookieCloud同步站点信息
"""
@@ -110,7 +111,7 @@ def cookie_cloud_sync(background_tasks: BackgroundTasks,
@router.get("/reset", summary="重置站点", response_model=schemas.Response)
def reset(db: Session = Depends(get_db),
def reset(db: AsyncSession = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
"""
清空所有站点数据并重新同步CookieCloud站点信息
@@ -121,25 +122,25 @@ def reset(db: Session = Depends(get_db),
# 启动定时服务
Scheduler().start("cookiecloud", manual=True)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": "*"
})
eventmanager.send_event(EventType.SiteDeleted,
{
"site_id": "*"
})
return schemas.Response(success=True, message="站点已重置!")
@router.post("/priorities", summary="批量更新站点优先级", response_model=schemas.Response)
def update_sites_priority(
async def update_sites_priority(
priorities: List[dict],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
批量更新站点优先级
"""
for priority in priorities:
site = Site.get(db, priority.get("id"))
site = await Site.async_get(db, priority.get("id"))
if site:
site.update(db, {"pri": priority.get("pri")})
await site.async_update(db, {"pri": priority.get("pri")})
return schemas.Response(success=True)
@@ -150,7 +151,7 @@ def update_cookie(
password: str,
code: Optional[str] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
使用用户密码更新站点Cookie
"""
@@ -173,7 +174,7 @@ def update_cookie(
def refresh_userdata(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
刷新站点用户数据
"""
@@ -191,37 +192,37 @@ def refresh_userdata(
@router.get("/userdata/latest", summary="查询所有站点最新用户数据", response_model=List[schemas.SiteUserData])
def read_userdata_latest(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def read_userdata_latest(
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
查询所有站点最新用户数据
"""
user_datas = SiteUserData.get_latest(db)
user_datas = await SiteUserData.async_get_latest(db)
if not user_datas:
return []
return [user_data.to_dict() for user_data in user_datas]
@router.get("/userdata/{site_id}", summary="查询某站点用户数据", response_model=schemas.Response)
def read_userdata(
async def read_userdata(
site_id: int,
workdate: Optional[str] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
查询站点用户数据
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
user_data = SiteUserData.get_by_domain(db, domain=site.domain, workdate=workdate)
if not user_data:
user_datas = await SiteUserData.async_get_by_domain(db, domain=site.domain, workdate=workdate)
if not user_datas:
return schemas.Response(success=False, data=[])
return schemas.Response(success=True, data=user_data)
return schemas.Response(success=True, data=[data.to_dict() for data in user_datas])
@router.get("/test/{site_id}", summary="连接测试", response_model=schemas.Response)
@@ -242,19 +243,19 @@ def test_site(site_id: int,
@router.get("/icon/{site_id}", summary="站点图标", response_model=schemas.Response)
def site_icon(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def site_icon(site_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取站点图标base64或者url
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
icon = SiteIcon.get_by_domain(db, site.domain)
icon = await SiteIcon.async_get_by_domain(db, site.domain)
if not icon:
return schemas.Response(success=False, message="站点图标不存在!")
return schemas.Response(success=True, data={
@@ -263,19 +264,19 @@ def site_icon(site_id: int,
@router.get("/category/{site_id}", summary="站点分类", response_model=List[schemas.SiteCategory])
def site_category(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def site_category(site_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取站点分类
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
indexer = SitesHelper().get_indexer(site.domain)
indexer = await SitesHelper().async_get_indexer(site.domain)
if not indexer:
raise HTTPException(
status_code=404,
@@ -293,38 +294,38 @@ def site_category(site_id: int,
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int,
keyword: Optional[str] = None,
cat: Optional[str] = None,
page: Optional[int] = 0,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
async def site_resource(site_id: int,
keyword: Optional[str] = None,
cat: Optional[str] = None,
page: Optional[int] = 0,
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)) -> Any:
"""
浏览站点资源
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
torrents = TorrentsChain().browse(domain=site.domain, keyword=keyword, cat=cat, page=page)
torrents = await TorrentsChain().async_browse(domain=site.domain, keyword=keyword, cat=cat, page=page)
if not torrents:
return []
return [torrent.to_dict() for torrent in torrents]
@router.get("/domain/{site_url}", summary="站点详情", response_model=schemas.Site)
def read_site_by_domain(
async def read_site_by_domain(
site_url: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
通过域名获取站点信息
"""
domain = StringUtils.get_url_domain(site_url)
site = Site.get_by_domain(db, domain)
site = await Site.async_get_by_domain(db, domain)
if not site:
raise HTTPException(
status_code=404,
@@ -334,35 +335,35 @@ def read_site_by_domain(
@router.get("/statistic/{site_url}", summary="特定站点统计信息", response_model=schemas.SiteStatistic)
def read_statistic_by_domain(
async def read_statistic_by_domain(
site_url: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
通过域名获取站点统计信息
"""
domain = StringUtils.get_url_domain(site_url)
sitestatistic = SiteStatistic.get_by_domain(db, domain)
sitestatistic = await SiteStatistic.async_get_by_domain(db, domain)
if sitestatistic:
return sitestatistic
return schemas.SiteStatistic(domain=domain)
@router.get("/statistic", summary="所有站点统计信息", response_model=List[schemas.SiteStatistic])
def read_statistics(
db: Session = Depends(get_db),
async def read_statistics(
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
获取所有站点统计信息
"""
return SiteStatistic.list(db)
return await SiteStatistic.async_list(db)
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
async def read_rss_sites(db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
"""
获取站点列表
"""
@@ -370,7 +371,7 @@ def read_rss_sites(db: Session = Depends(get_db),
selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
# 所有站点
all_site = Site.list_order_by_pri(db)
all_site = await Site.async_list_order_by_pri(db)
if not selected_sites:
return all_site
@@ -380,7 +381,7 @@ def read_rss_sites(db: Session = Depends(get_db),
@router.get("/auth", summary="查询认证站点", response_model=dict)
def read_auth_sites(_: schemas.TokenPayload = Depends(verify_token)) -> dict:
async def read_auth_sites(_: schemas.TokenPayload = Depends(verify_token)) -> dict:
"""
获取可认证站点列表
"""
@@ -398,7 +399,7 @@ def auth_site(
if not auth_info or not auth_info.site or not auth_info.params:
return schemas.Response(success=False, message="请输入认证站点和认证参数")
status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.model_dump())
# 认证成功后,重新初始化插件
PluginManager().init_config()
Scheduler().init_plugin_jobs()
@@ -408,12 +409,12 @@ def auth_site(
@router.get("/mapping", summary="获取站点域名到名称的映射", response_model=schemas.Response)
def site_mapping(_: User = Depends(get_current_active_superuser)):
async def site_mapping(_: User = Depends(get_current_active_superuser_async)):
"""
获取站点域名到名称的映射关系
"""
try:
sites = SiteOper().list()
sites = await SiteOper().async_list()
mapping = {}
for site in sites:
mapping[site.domain] = site.name
@@ -422,16 +423,24 @@ def site_mapping(_: User = Depends(get_current_active_superuser)):
return schemas.Response(success=False, message=f"获取映射失败:{str(e)}")
@router.get("/supporting", summary="获取支持的站点列表", response_model=dict)
async def support_sites(_: User = Depends(get_current_active_superuser_async)):
"""
获取支持的站点列表
"""
return SitesHelper().get_indexsites()
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
async def read_site(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)
) -> Any:
"""
通过ID获取站点信息
"""
site = Site.get(db, site_id)
site = await Site.async_get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
@@ -441,18 +450,18 @@ def read_site(
@router.delete("/{site_id}", summary="删除站点", response_model=schemas.Response)
def delete_site(
async def delete_site(
site_id: int,
db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)
db: AsyncSession = Depends(get_async_db),
_: User = Depends(get_current_active_superuser_async)
) -> Any:
"""
删除站点
"""
Site.delete(db, site_id)
await Site.async_delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
await eventmanager.async_send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
return schemas.Response(success=True)
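
Note: the hunks above all apply the same sync-to-async recipe — the endpoint becomes `async def`, the `Session`/`get_db` dependency is swapped for `AsyncSession`/`get_async_db`, model helpers move to their `async_*` variants, and events go through `eventmanager.async_send_event`. A minimal, self-contained sketch of that endpoint shape (the engine, session factory and `DemoSite` model below are stand-ins for illustration, not MoviePilot's actual classes):

```python
from typing import Any, AsyncGenerator, Optional

from fastapi import APIRouter, Depends
from sqlalchemy import String
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

router = APIRouter()

# Hypothetical engine/session factory; in the real project this wiring lives in app.db.
engine = create_async_engine("sqlite+aiosqlite:///:memory:")
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)


class Base(DeclarativeBase):
    pass


class DemoSite(Base):  # stand-in for the real Site model
    __tablename__ = "demo_site"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[Optional[str]] = mapped_column(String(64))


@router.on_event("startup")
async def create_tables() -> None:
    # Create the demo table so the endpoint below is actually runnable.
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)


async def get_async_db() -> AsyncGenerator[AsyncSession, None]:
    # One AsyncSession per request, closed when the response is done.
    async with SessionLocal() as session:
        yield session


@router.get("/demo/{site_id}")
async def read_demo_site(site_id: int, db: AsyncSession = Depends(get_async_db)) -> Any:
    # The sync `Site.get(db, id)` helper becomes an awaited lookup on the async session.
    site = await db.get(DemoSite, site_id)
    return {"id": site.id, "name": site.name} if site else {}
```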

View File

@@ -12,9 +12,10 @@ from app.core.config import settings
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token
from app.db.models import User
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.helper.progress import ProgressHelper
from app.schemas.types import ProgressKey
from app.utils.string import StringUtils
router = APIRouter()
@@ -80,7 +81,7 @@ def list_files(fileitem: schemas.FileItem,
file_list = StorageChain().list_files(fileitem)
if file_list:
if sort == "name":
file_list.sort(key=lambda x: x.name or "")
file_list.sort(key=lambda x: StringUtils.natural_sort_key(x.name or ""))
else:
file_list.sort(key=lambda x: x.modify_time or datetime.min, reverse=True)
return file_list
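
For reference, the natural-sort change above makes `file2` sort before `file10`. `StringUtils.natural_sort_key` itself is not shown in this diff; one common way to implement such a key (an illustrative sketch, not necessarily the project's version) is:

```python
import re
from typing import List, Union


def natural_sort_key(name: str) -> List[Union[int, str]]:
    """Split a name into digit and non-digit runs so numeric parts compare as integers."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]


files = ["file10.mkv", "file2.mkv", "file1.mkv"]
print(sorted(files, key=natural_sort_key))
# ['file1.mkv', 'file2.mkv', 'file10.mkv'] instead of ['file1...', 'file10...', 'file2...']
```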
@@ -171,15 +172,14 @@ def rename(fileitem: schemas.FileItem,
sub_files: List[schemas.FileItem] = StorageChain().list_files(fileitem)
if sub_files:
# 开始进度
progress = ProgressHelper()
progress.start(ProgressKey.BatchRename)
progress = ProgressHelper(ProgressKey.BatchRename)
progress.start()
total = len(sub_files)
handled = 0
for sub_file in sub_files:
handled += 1
progress.update(value=handled / total * 100,
text=f"正在处理 {sub_file.name} ...",
key=ProgressKey.BatchRename)
text=f"正在处理 {sub_file.name} ...")
if sub_file.type == "dir":
continue
if not sub_file.extension:
@@ -190,19 +190,19 @@ def rename(fileitem: schemas.FileItem,
meta = MetaInfoPath(sub_path)
mediainfo = transferchain.recognize_media(meta)
if not mediainfo:
progress.end(ProgressKey.BatchRename)
progress.end()
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到媒体信息")
new_path = transferchain.recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
progress.end(ProgressKey.BatchRename)
progress.end()
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到新名称")
ret: schemas.Response = rename(fileitem=sub_file,
new_name=Path(new_path).name,
recursive=False)
if not ret.success:
progress.end(ProgressKey.BatchRename)
progress.end()
return schemas.Response(success=False, message=f"{sub_path.name} 重命名失败!")
progress.end(ProgressKey.BatchRename)
progress.end()
# 重命名自己
result = StorageChain().rename_file(fileitem, new_name)
if result:
@@ -222,7 +222,7 @@ def usage(name: str, _: User = Depends(get_current_active_superuser)) -> Any:
@router.get("/transtype/{name}", summary="支持的整理方式获取", response_model=schemas.StorageTransType)
def transtype(name: str, _: User = Depends(get_current_active_superuser)) -> Any:
async def transtype(name: str, _: User = Depends(get_current_active_superuser_async)) -> Any:
"""
查询支持的整理方式
"""

View File

@@ -2,6 +2,7 @@ from typing import List, Any, Annotated, Optional
import cn2an
from fastapi import APIRouter, Request, BackgroundTasks, Depends, HTTPException, Header
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
@@ -11,12 +12,12 @@ from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db import get_async_db, get_db
from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.db.user_oper import get_current_active_user_async
from app.helper.subscribe import SubscribeHelper
from app.scheduler import Scheduler
from app.schemas.types import MediaType, EventType, SystemConfigKey
@@ -34,28 +35,28 @@ def start_subscribe_add(title: str, year: str,
@router.get("/", summary="查询所有订阅", response_model=List[schemas.Subscribe])
def read_subscribes(
db: Session = Depends(get_db),
async def read_subscribes(
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询所有订阅
"""
return Subscribe.list(db)
return await Subscribe.async_list(db)
@router.get("/list", summary="查询所有订阅API_TOKEN", response_model=List[schemas.Subscribe])
def list_subscribes(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
async def list_subscribes(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询所有订阅 API_TOKEN认证?token=xxx
"""
return read_subscribes()
return await read_subscribes()
@router.post("/", summary="新增订阅", response_model=schemas.Response)
def create_subscribe(
async def create_subscribe(
*,
subscribe_in: schemas.Subscribe,
current_user: User = Depends(get_current_active_user),
current_user: User = Depends(get_current_active_user_async),
) -> schemas.Response:
"""
新增订阅
@@ -77,31 +78,35 @@ def create_subscribe(
title = None
# 订阅用户
subscribe_in.username = current_user.name
sid, message = SubscribeChain().add(mtype=mtype,
title=title,
exist_ok=True,
**subscribe_in.dict())
# 转化为字典
subscribe_dict = subscribe_in.model_dump()
if subscribe_in.id:
subscribe_dict.pop("id", None)
sid, message = await SubscribeChain().async_add(mtype=mtype,
title=title,
exist_ok=True,
**subscribe_dict)
return schemas.Response(
success=bool(sid), message=message, data={"id": sid}
)
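
Throughout this section, Pydantic v1's `.dict()` is replaced by v2's `.model_dump()` (the same swap appears later for `settings.dict()` and `auth_info.dict()`). The behaviour relied on here is unchanged; a quick self-contained illustration:

```python
from typing import Optional

from pydantic import BaseModel


class DemoSubscribe(BaseModel):  # stand-in for schemas.Subscribe
    id: Optional[int] = None
    name: Optional[str] = None
    year: Optional[str] = None


sub = DemoSubscribe(id=1, name="Example", year="2024")

data = sub.model_dump()   # Pydantic v2 spelling of the old .dict()
data.pop("id", None)      # drop the id before creating a new record, as in the hunk above
print(data)               # {'name': 'Example', 'year': '2024'}

print(sub.model_dump(exclude={"year"}))  # exclude=... behaves the same as before
```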
@router.put("/", summary="更新订阅", response_model=schemas.Response)
def update_subscribe(
async def update_subscribe(
*,
subscribe_in: schemas.Subscribe,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
更新订阅信息
"""
subscribe = Subscribe.get(db, subscribe_in.id)
subscribe = await Subscribe.async_get(db, subscribe_in.id)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
# 避免更新缺失集数
old_subscribe_dict = subscribe.to_dict()
subscribe_dict = subscribe_in.dict()
subscribe_dict = subscribe_in.model_dump()
if not subscribe_in.lack_episode:
# 没有缺失集数时缺失集数清空避免更新为0
subscribe_dict.pop("lack_episode")
@@ -114,50 +119,55 @@ def update_subscribe(
# 是否手动修改过总集数
if subscribe_in.total_episode != subscribe.total_episode:
subscribe_dict["manual_total_episode"] = 1
subscribe.update(db, subscribe_dict)
# 更新到数据库
await subscribe.async_update(db, subscribe_dict)
# 重新获取更新后的订阅数据
updated_subscribe = await Subscribe.async_get(db, subscribe_in.id)
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe_in.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
return schemas.Response(success=True)
@router.put("/status/{subid}", summary="更新订阅状态", response_model=schemas.Response)
def update_subscribe_status(
async def update_subscribe_status(
subid: int,
state: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
更新订阅状态
"""
subscribe = Subscribe.get(db, subid)
subscribe = await Subscribe.async_get(db, subid)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
valid_states = ["R", "P", "S"]
if state not in valid_states:
return schemas.Response(success=False, message="无效的订阅状态")
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
await subscribe.async_update(db, {
"state": state
})
# 重新获取更新后的订阅数据
updated_subscribe = await Subscribe.async_get(db, subid)
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subid,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
return schemas.Response(success=True)
@router.get("/media/{mediaid}", summary="查询订阅", response_model=schemas.Subscribe)
def subscribe_mediaid(
async def subscribe_mediaid(
mediaid: str,
season: Optional[int] = None,
title: Optional[str] = None,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据 TMDBID/豆瓣ID/BangumiId 查询订阅 tmdb:/douban:
@@ -167,23 +177,23 @@ def subscribe_mediaid(
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return Subscribe()
result = Subscribe.exists(db, tmdbid=int(tmdbid), season=season)
result = await Subscribe.async_exists(db, tmdbid=int(tmdbid), season=season)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not doubanid:
return Subscribe()
result = Subscribe.get_by_doubanid(db, doubanid)
result = await Subscribe.async_get_by_doubanid(db, doubanid)
if not result and title:
title_check = True
elif mediaid.startswith("bangumi:"):
bangumiid = mediaid[8:]
if not bangumiid or not str(bangumiid).isdigit():
return Subscribe()
result = Subscribe.get_by_bangumiid(db, int(bangumiid))
result = await Subscribe.async_get_by_bangumiid(db, int(bangumiid))
if not result and title:
title_check = True
else:
result = Subscribe.get_by_mediaid(db, mediaid)
result = await Subscribe.async_get_by_mediaid(db, mediaid)
if not result and title:
title_check = True
# 使用名称检查订阅
@@ -191,7 +201,7 @@ def subscribe_mediaid(
meta = MetaInfo(title)
if season:
meta.begin_season = season
result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
result = await Subscribe.async_get_by_title(db, title=meta.name, season=meta.begin_season)
return result if result else Subscribe()
@@ -207,26 +217,30 @@ def refresh_subscribes(
@router.get("/reset/{subid}", summary="重置订阅", response_model=schemas.Response)
def reset_subscribes(
async def reset_subscribes(
subid: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
重置订阅
"""
subscribe = Subscribe.get(db, subid)
subscribe = await Subscribe.async_get(db, subid)
if subscribe:
# 在更新之前获取旧数据
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
# 更新订阅
await subscribe.async_update(db, {
"note": [],
"lack_episode": subscribe.total_episode,
"state": "R"
})
# 重新获取更新后的订阅数据
updated_subscribe = await Subscribe.async_get(db, subid)
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
await eventmanager.async_send_event(EventType.SubscribeModified, {
"subscribe_id": subid,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
"subscribe_info": updated_subscribe.to_dict() if updated_subscribe else {},
})
return schemas.Response(success=True)
return schemas.Response(success=False, message="订阅不存在")
@@ -243,7 +257,7 @@ def check_subscribes(
@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
def search_subscribes(
async def search_subscribes(
background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -262,7 +276,7 @@ def search_subscribes(
@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
def search_subscribe(
async def search_subscribe(
subscribe_id: int,
background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@@ -282,10 +296,10 @@ def search_subscribe(
@router.delete("/media/{mediaid}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe_by_mediaid(
async def delete_subscribe_by_mediaid(
mediaid: str,
season: Optional[int] = None,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
@@ -296,25 +310,28 @@ def delete_subscribe_by_mediaid(
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return schemas.Response(success=False)
subscribes = Subscribe().get_by_tmdbid(db, int(tmdbid), season)
subscribes = await Subscribe.async_get_by_tmdbid(db, int(tmdbid), season)
delete_subscribes.extend(subscribes)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not doubanid:
return schemas.Response(success=False)
subscribe = Subscribe().get_by_doubanid(db, doubanid)
subscribe = await Subscribe.async_get_by_doubanid(db, doubanid)
if subscribe:
delete_subscribes.append(subscribe)
else:
subscribe = Subscribe().get_by_mediaid(db, mediaid)
subscribe = await Subscribe.async_get_by_mediaid(db, mediaid)
if subscribe:
delete_subscribes.append(subscribe)
for subscribe in delete_subscribes:
Subscribe().delete(db, subscribe.id)
# 在删除之前获取订阅信息
subscribe_info = subscribe.to_dict()
subscribe_id = subscribe.id
await Subscribe.async_delete(db, subscribe_id)
# 发送事件
eventmanager.send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe.id,
"subscribe_info": subscribe.to_dict()
await eventmanager.async_send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe_info
})
return schemas.Response(success=True)
@@ -373,42 +390,54 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
@router.get("/history/{mtype}", summary="查询订阅历史", response_model=List[schemas.Subscribe])
def subscribe_history(
async def subscribe_history(
mtype: str,
page: Optional[int] = 1,
count: Optional[int] = 30,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询电影/电视剧订阅历史
"""
return SubscribeHistory.list_by_type(db, mtype=mtype, page=page, count=count)
return await SubscribeHistory.async_list_by_type(db, mtype=mtype, page=page, count=count)
@router.delete("/history/{history_id}", summary="删除订阅历史", response_model=schemas.Response)
def delete_subscribe(
async def delete_subscribe(
history_id: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除订阅历史
"""
SubscribeHistory.delete(db, history_id)
await SubscribeHistory.async_delete(db, history_id)
return schemas.Response(success=True)
@router.get("/popular", summary="热门订阅(基于用户共享数据)", response_model=List[schemas.MediaInfo])
def popular_subscribes(
async def popular_subscribes(
stype: str,
page: Optional[int] = 1,
count: Optional[int] = 30,
min_sub: Optional[int] = None,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询热门订阅
"""
subscribes = SubscribeHelper().get_statistic(stype=stype, page=page, count=count)
subscribes = await SubscribeHelper().async_get_statistic(
stype=stype,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
if subscribes:
ret_medias = []
for sub in subscribes:
@@ -444,14 +473,14 @@ def popular_subscribes(
@router.get("/user/{username}", summary="用户订阅", response_model=List[schemas.Subscribe])
def user_subscribes(
async def user_subscribes(
username: str,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询用户订阅
"""
return Subscribe.list_by_username(db, username)
return await Subscribe.async_list_by_username(db, username)
@router.get("/files/{subscribe_id}", summary="订阅相关文件信息", response_model=schemas.SubscrbieInfo)
@@ -469,51 +498,51 @@ def subscribe_files(
@router.post("/share", summary="分享订阅", response_model=schemas.Response)
def subscribe_share(
async def subscribe_share(
sub: schemas.SubscribeShare,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
分享订阅
"""
state, errmsg = SubscribeHelper().sub_share(subscribe_id=sub.subscribe_id,
share_title=sub.share_title,
share_comment=sub.share_comment,
share_user=sub.share_user)
state, errmsg = await SubscribeHelper().async_sub_share(subscribe_id=sub.subscribe_id,
share_title=sub.share_title,
share_comment=sub.share_comment,
share_user=sub.share_user)
return schemas.Response(success=state, message=errmsg)
@router.delete("/share/{share_id}", summary="删除分享", response_model=schemas.Response)
def subscribe_share_delete(
async def subscribe_share_delete(
share_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除分享
"""
state, errmsg = SubscribeHelper().share_delete(share_id=share_id)
state, errmsg = await SubscribeHelper().async_share_delete(share_id=share_id)
return schemas.Response(success=state, message=errmsg)
@router.post("/fork", summary="复用订阅", response_model=schemas.Response)
def subscribe_fork(
async def subscribe_fork(
sub: schemas.SubscribeShare,
current_user: User = Depends(get_current_active_user)) -> Any:
current_user: User = Depends(get_current_active_user_async)) -> Any:
"""
复用订阅
"""
sub_dict = sub.dict()
sub_dict = sub.model_dump()
sub_dict.pop("id")
for key in list(sub_dict.keys()):
if not hasattr(schemas.Subscribe(), key):
sub_dict.pop(key)
result = create_subscribe(subscribe_in=schemas.Subscribe(**sub_dict),
current_user=current_user)
result = await create_subscribe(subscribe_in=schemas.Subscribe(**sub_dict),
current_user=current_user)
if result.success:
SubscribeHelper().sub_fork(share_id=sub.id)
await SubscribeHelper().async_sub_fork(share_id=sub.id)
return result
@router.get("/follow", summary="查询已Follow的订阅分享人", response_model=List[str])
def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询已Follow的订阅分享人
"""
@@ -521,7 +550,7 @@ def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any
@router.post("/follow", summary="Follow订阅分享人", response_model=schemas.Response)
def follow_subscriber(
async def follow_subscriber(
share_uid: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -530,12 +559,12 @@ def follow_subscriber(
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid not in subscribers:
subscribers.append(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
await SystemConfigOper().async_set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.delete("/follow", summary="取消Follow订阅分享人", response_model=schemas.Response)
def unfollow_subscriber(
async def unfollow_subscriber(
share_uid: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -544,51 +573,74 @@ def unfollow_subscriber(
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid in subscribers:
subscribers.remove(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
await SystemConfigOper().async_set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.get("/shares", summary="查询分享的订阅", response_model=List[schemas.SubscribeShare])
def popular_subscribes(
async def popular_subscribes(
name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
genre_id: Optional[int] = None,
min_rating: Optional[float] = None,
max_rating: Optional[float] = None,
sort_type: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询分享的订阅
"""
return SubscribeHelper().get_shares(name=name, page=page, count=count)
return await SubscribeHelper().async_get_shares(
name=name,
page=page,
count=count,
genre_id=genre_id,
min_rating=min_rating,
max_rating=max_rating,
sort_type=sort_type
)
@router.get("/share/statistics", summary="查询订阅分享统计", response_model=List[schemas.SubscribeShareStatistics])
async def subscribe_share_statistics(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询订阅分享统计
返回每个分享人分享的媒体数量以及总的复用人次
"""
return await SubscribeHelper().async_get_share_statistics()
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
async def read_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据订阅编号查询订阅信息
"""
if not subscribe_id:
return Subscribe()
return Subscribe.get(db, subscribe_id)
return await Subscribe.async_get(db, subscribe_id)
@router.delete("/{subscribe_id}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe(
async def delete_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除订阅信息
"""
subscribe = Subscribe.get(db, subscribe_id)
subscribe = await Subscribe.async_get(db, subscribe_id)
if subscribe:
subscribe.delete(db, subscribe_id)
# 在删除之前获取订阅信息
subscribe_info = subscribe.to_dict()
await Subscribe.async_delete(db, subscribe_id)
# 发送事件
eventmanager.send_event(EventType.SubscribeDeleted, {
await eventmanager.async_send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe.to_dict()
"subscribe_info": subscribe_info
})
# 统计订阅
SubscribeHelper().sub_done_async({

View File

@@ -2,7 +2,6 @@ import asyncio
import io
import json
import re
import tempfile
from collections import deque
from datetime import datetime
from pathlib import Path
@@ -11,25 +10,29 @@ from typing import Optional, Union, Annotated
import aiofiles
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from anyio import Path as AsyncPath
from app.helper.sites import SitesHelper # noqa
from fastapi import APIRouter, Body, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
from app import schemas
from app.chain.mediaserver import MediaServerChain
from app.chain.search import SearchChain
from app.chain.system import SystemChain
from app.core.cache import AsyncFileCache
from app.core.config import global_vars, settings
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.core.event import eventmanager
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async, \
get_current_active_user_async
from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.system import SystemHelper
from app.log import logger
@@ -37,7 +40,7 @@ from app.scheduler import Scheduler
from app.schemas import ConfigChangeEventData
from app.schemas.types import SystemConfigKey, EventType
from app.utils.crypto import HashUtils
from app.utils.http import RequestUtils
from app.utils.http import RequestUtils, AsyncRequestUtils
from app.utils.security import SecurityUtils
from app.utils.url import UrlUtils
from version import APP_VERSION
@@ -45,85 +48,84 @@ from version import APP_VERSION
router = APIRouter()
def fetch_image(
async def fetch_image(
url: str,
proxy: bool = False,
use_disk_cache: bool = False,
use_cache: bool = False,
if_none_match: Optional[str] = None,
allowed_domains: Optional[set[str]] = None) -> Response:
cookies: Optional[str | dict] = None,
allowed_domains: Optional[set[str]] = None) -> Optional[Response]:
"""
处理图片缓存逻辑支持HTTP缓存和磁盘缓存
"""
if not url:
raise HTTPException(status_code=404, detail="URL not provided")
return None
if allowed_domains is None:
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS)
# 验证URL安全性
if not SecurityUtils.is_safe_url(url, allowed_domains):
raise HTTPException(status_code=404, detail="Unsafe URL")
# 后续观察系统性能表现如果发现磁盘缓存和HTTP缓存无法满足高并发情况下的响应速度需求可以考虑重新引入内存缓存
cache_path = None
if use_disk_cache:
# 生成缓存路径
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
logger.warn(f"Blocked unsafe image URL: {url}")
return None
# 缓存路径
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = Path("images") / sanitized_path
if not cache_path.suffix:
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
raise HTTPException(status_code=400, detail="Invalid cache path or file type")
# 缓存对像,缓存过期时间为全局图片缓存天数
cache_backend = AsyncFileCache(base=settings.CACHE_PATH,
ttl=settings.GLOBAL_IMAGE_CACHE_DAYS * 24 * 3600)
# 目前暂不考虑磁盘缓存文件是否过期,后续通过缓存清理机制处理
if cache_path.exists():
try:
content = cache_path.read_bytes()
etag = HashUtils.md5(content)
headers = RequestUtils.generate_cache_headers(etag, max_age=86400 * 7)
if if_none_match == etag:
return Response(status_code=304, headers=headers)
return Response(content=content, media_type="image/jpeg", headers=headers)
except Exception as e:
# 如果读取磁盘缓存发生异常,这里仅记录日志,尝试再次请求远端进行处理
logger.debug(f"Failed to read cache file {cache_path}: {e}")
if use_cache:
content = await cache_backend.get(cache_path.as_posix(), region="images")
if content:
# 检查 If-None-Match
etag = HashUtils.md5(content)
headers = RequestUtils.generate_cache_headers(etag, max_age=86400 * 7)
if if_none_match == etag:
return Response(status_code=304, headers=headers)
# 返回缓存图片
return Response(
content=content,
media_type=UrlUtils.get_mime_type(url, "image/jpeg"),
headers=headers
)
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if proxy else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer,
accept_type="image/avif,image/webp,image/apng,*/*").get_res(url=url)
response = await AsyncRequestUtils(
ua=settings.NORMAL_USER_AGENT,
proxies=proxies,
referer=referer,
cookies=cookies,
accept_type="image/avif,image/webp,image/apng,*/*",
).get_res(url=url)
if not response:
raise HTTPException(status_code=502, detail="Failed to fetch the image from the remote server")
logger.warn(f"Failed to fetch image from URL: {url}")
return None
# 验证下载的内容是否为有效图片
try:
Image.open(io.BytesIO(response.content)).verify()
content = response.content
Image.open(io.BytesIO(content)).verify()
except Exception as e:
logger.debug(f"Invalid image format for URL {url}: {e}")
raise HTTPException(status_code=502, detail="Invalid image format")
logger.warn(f"Invalid image format for URL {url}: {e}")
return None
content = response.content
# 获取请求响应头
response_headers = response.headers
cache_control_header = response_headers.get("Cache-Control", "")
cache_directive, max_age = RequestUtils.parse_cache_control(cache_control_header)
# 如果需要使用磁盘缓存,则保存到磁盘
if use_disk_cache and cache_path:
try:
if not cache_path.parent.exists():
cache_path.parent.mkdir(parents=True, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=cache_path.parent, delete=False) as tmp_file:
tmp_file.write(content)
temp_path = Path(tmp_file.name)
temp_path.replace(cache_path)
except Exception as e:
logger.debug(f"Failed to write cache file {cache_path}: {e}")
# 保存缓存
if use_cache:
await cache_backend.set(cache_path.as_posix(), content, region="images")
logger.debug(f"Image cached at {cache_path.as_posix()}")
# 检查 If-None-Match
etag = HashUtils.md5(content)
@@ -131,8 +133,8 @@ def fetch_image(
headers = RequestUtils.generate_cache_headers(etag, cache_directive, max_age)
return Response(status_code=304, headers=headers)
# 响应
headers = RequestUtils.generate_cache_headers(etag, cache_directive, max_age)
return Response(
content=content,
media_type=response_headers.get("Content-Type") or UrlUtils.get_mime_type(url, "image/jpeg"),
@@ -141,10 +143,11 @@ def fetch_image(
@router.get("/img/{proxy}", summary="图片代理")
def proxy_img(
async def proxy_img(
imgurl: str,
proxy: bool = False,
cache: bool = False,
use_cookies: bool = False,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
@@ -155,12 +158,17 @@ def proxy_img(
hosts = [config.config.get("host") for config in MediaServerHelper().get_configs().values() if
config and config.config and config.config.get("host")]
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS) | set(hosts)
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=cache,
if_none_match=if_none_match, allowed_domains=allowed_domains)
cookies = (
MediaServerChain().get_image_cookies(server=None, image_url=imgurl)
if use_cookies
else None
)
return await fetch_image(url=imgurl, proxy=proxy, use_cache=cache, cookies=cookies,
if_none_match=if_none_match, allowed_domains=allowed_domains)
@router.get("/cache/image", summary="图片缓存")
def cache_img(
async def cache_img(
url: str,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
@@ -170,7 +178,8 @@ def cache_img(
"""
# 如果没有启用全局图片缓存,则不使用磁盘缓存
proxy = "doubanio.com" not in url
return fetch_image(url=url, proxy=proxy, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
return await fetch_image(url=url, proxy=proxy, use_cache=settings.GLOBAL_IMAGE_CACHE,
if_none_match=if_none_match)
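
The rewritten `fetch_image` keeps the same HTTP caching contract while moving storage to `AsyncFileCache`: hash the bytes, compare against `If-None-Match`, and answer `304` without a body on a match. A minimal sketch of just that ETag step (using `hashlib` in place of the project's `HashUtils.md5` and hand-built headers in place of `RequestUtils.generate_cache_headers`):

```python
import hashlib
from typing import Optional

from fastapi import Response


def image_response(content: bytes,
                   if_none_match: Optional[str] = None,
                   max_age: int = 7 * 24 * 3600) -> Response:
    # ETag is the MD5 of the bytes; a matching If-None-Match means the client
    # already holds this exact image, so answer 304 with no body.
    etag = hashlib.md5(content).hexdigest()
    headers = {"ETag": etag, "Cache-Control": f"public, max-age={max_age}"}
    if if_none_match == etag:
        return Response(status_code=304, headers=headers)
    return Response(content=content, media_type="image/jpeg", headers=headers)
```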
@router.get("/global", summary="查询非敏感系统设置", response_model=schemas.Response)
@@ -182,25 +191,28 @@ def get_global_setting(token: str):
raise HTTPException(status_code=403, detail="Forbidden")
# FIXME: 新增敏感配置项时要在此处添加排除项
info = settings.dict(
info = settings.model_dump(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "TMDB_API_KEY", "TVDB_API_KEY", "FANART_API_KEY",
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN", "U115_APP_ID",
"ALIPAN_APP_ID", "TVDB_V4_API_KEY", "TVDB_V4_API_PIN"}
)
# 追加用户唯一ID和订阅分享管理权限
share_admin = SubscribeHelper().is_admin_user()
info.update({
"USER_UNIQUE_ID": SubscribeHelper().get_user_uuid(),
"SUBSCRIBE_SHARE_MANAGE": SubscribeHelper().is_admin_user(),
"SUBSCRIBE_SHARE_MANAGE": share_admin,
"WORKFLOW_SHARE_MANAGE": share_admin
})
return schemas.Response(success=True,
data=info)
@router.get("/env", summary="查询系统配置", response_model=schemas.Response)
def get_env_setting(_: User = Depends(get_current_active_superuser)):
async def get_env_setting(_: User = Depends(get_current_active_user_async)):
"""
查询系统环境变量,包括当前版本号(仅管理员)
"""
info = settings.dict(
info = settings.model_dump(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY"}
)
info.update({
@@ -214,8 +226,8 @@ def get_env_setting(_: User = Depends(get_current_active_superuser)):
@router.post("/env", summary="更新系统配置", response_model=schemas.Response)
def set_env_setting(env: dict,
_: User = Depends(get_current_active_superuser)):
async def set_env_setting(env: dict,
_: User = Depends(get_current_active_superuser_async)):
"""
更新系统环境变量(仅管理员)
"""
@@ -237,7 +249,7 @@ def set_env_setting(env: dict,
if success_updates:
for key in success_updates.keys():
# 发送配置变更事件
eventmanager.send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
await eventmanager.async_send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=getattr(settings, key, None),
change_type="update"
@@ -257,16 +269,16 @@ async def get_progress(request: Request, process_type: str, _: schemas.TokenPayl
"""
实时获取处理进度返回格式为SSE
"""
progress = ProgressHelper()
progress = ProgressHelper(process_type)
async def event_generator():
try:
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
detail = progress.get(process_type)
detail = progress.get()
yield f"data: {json.dumps(detail)}\n\n"
await asyncio.sleep(0.2)
await asyncio.sleep(0.5)
except asyncio.CancelledError:
return
@@ -274,8 +286,8 @@ async def get_progress(request: Request, process_type: str, _: schemas.TokenPayl
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
_: User = Depends(get_current_active_superuser)):
async def get_setting(key: str,
_: User = Depends(get_current_active_user_async)):
"""
查询系统设置(仅管理员)
"""
@@ -289,10 +301,10 @@ def get_setting(key: str,
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser),
async def set_setting(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser_async),
):
"""
更新系统设置(仅管理员)
@@ -301,7 +313,7 @@ def set_setting(
success, message = settings.update_setting(key=key, value=value)
if success:
# 发送配置变更事件
eventmanager.send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
await eventmanager.async_send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=value,
change_type="update"
@@ -313,10 +325,10 @@ def set_setting(
if isinstance(value, list):
value = list(filter(None, value))
value = value if value else None
success = SystemConfigOper().set(key, value)
success = await SystemConfigOper().async_set(key, value)
if success:
# 发送配置变更事件
eventmanager.send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
await eventmanager.async_send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=value,
change_type="update"
@@ -356,60 +368,106 @@ async def get_logging(request: Request, length: Optional[int] = 50, logfile: Opt
length = -1 时, 返回text/plain
否则 返回格式SSE
"""
log_path = settings.LOG_PATH / logfile
base_path = AsyncPath(settings.LOG_PATH)
log_path = base_path / logfile
if not SecurityUtils.is_safe_path(settings.LOG_PATH, log_path, allowed_suffixes={".log"}):
if not await SecurityUtils.async_is_safe_path(base_path=base_path, user_path=log_path, allowed_suffixes={".log"}):
raise HTTPException(status_code=404, detail="Not Found")
if not log_path.exists() or not log_path.is_file():
if not await log_path.exists() or not await log_path.is_file():
raise HTTPException(status_code=404, detail="Not Found")
async def log_generator():
try:
# 使用固定大小的双向队列来限制内存使用
lines_queue = deque(maxlen=max(length, 50))
# 使用 aiofiles 异步读取文件
async with aiofiles.open(log_path, mode="r", encoding="utf-8") as f:
# 逐行读取文件,将每一行存入队列
file_content = await f.read()
for line in file_content.splitlines():
# 取文件大小
file_stat = await log_path.stat()
file_size = file_stat.st_size
# 读取历史日志
async with aiofiles.open(log_path, mode="r", encoding="utf-8", errors="ignore") as f:
# 优化大文件读取策略
if file_size > 100 * 1024:
# 只读取最后100KB的内容
bytes_to_read = min(file_size, 100 * 1024)
position = file_size - bytes_to_read
await f.seek(position)
content = await f.read()
# 找到第一个完整的行
first_newline = content.find('\n')
if first_newline != -1:
content = content[first_newline + 1:]
else:
# 小文件直接读取全部内容
content = await f.read()
# 按行分割并添加到队列,只保留非空行
lines = [line.strip() for line in content.splitlines() if line.strip()]
# 只取最后N行
for line in lines[-max(length, 50):]:
lines_queue.append(line)
for line in lines_queue:
yield f"data: {line}\n\n"
# 输出历史日志
for line in lines_queue:
yield f"data: {line}\n\n"
# 实时监听新日志
async with aiofiles.open(log_path, mode="r", encoding="utf-8", errors="ignore") as f:
# 移动文件指针到文件末尾,继续监听新增内容
await f.seek(0, 2)
# 记录初始文件大小
initial_stat = await log_path.stat()
initial_size = initial_stat.st_size
# 实时监听新日志,使用更短的轮询间隔
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
line = await f.readline()
if not line:
# 检查文件是否有新内容
current_stat = await log_path.stat()
current_size = current_stat.st_size
if current_size > initial_size:
# 文件有新内容,读取新行
line = await f.readline()
if line:
line = line.strip()
if line:
yield f"data: {line}\n\n"
initial_size = current_size
else:
# 没有新内容,短暂等待
await asyncio.sleep(0.5)
continue
yield f"data: {line}\n\n"
except asyncio.CancelledError:
return
except Exception as err:
logger.error(f"日志读取异常: {err}")
yield f"data: 日志读取异常: {err}\n\n"
# 根据length参数返回不同的响应
if length == -1:
# 返回全部日志作为文本响应
if not log_path.exists():
if not await log_path.exists():
return Response(content="日志文件不存在!", media_type="text/plain")
with open(log_path, "r", encoding='utf-8') as file:
text = file.read()
# 倒序输出
text = "\n".join(text.split("\n")[::-1])
return Response(content=text, media_type="text/plain")
try:
# 使用 aiofiles 异步读取文件
async with aiofiles.open(log_path, mode="r", encoding="utf-8", errors="ignore") as file:
text = await file.read()
# 倒序输出
text = "\n".join(text.split("\n")[::-1])
return Response(content=text, media_type="text/plain")
except Exception as e:
return Response(content=f"读取日志文件失败: {e}", media_type="text/plain")
else:
# 返回SSE流响应
return StreamingResponse(log_generator(), media_type="text/event-stream")
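
The reworked log endpoint avoids loading huge files by reading only the last ~100 KB and discarding the first partial line before splitting. The same idea in a synchronous, self-contained form (the endpoint itself uses `aiofiles` and `anyio.Path`):

```python
from pathlib import Path
from typing import List


def read_log_tail(log_path: Path, max_bytes: int = 100 * 1024, max_lines: int = 50) -> List[str]:
    """Return up to max_lines of the newest log lines without reading the whole file."""
    size = log_path.stat().st_size
    with open(log_path, "rb") as f:
        if size > max_bytes:
            f.seek(size - max_bytes)          # jump close to the end of the file
        content = f.read().decode("utf-8", errors="ignore")
    if size > max_bytes:
        first_newline = content.find("\n")
        if first_newline != -1:
            content = content[first_newline + 1:]  # drop the partial first line
    lines = [line.strip() for line in content.splitlines() if line.strip()]
    return lines[-max_lines:]
```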
@router.get("/versions", summary="查询Github所有Release版本", response_model=schemas.Response)
def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
async def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询Github所有Release版本
"""
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
version_res = await AsyncRequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
f"https://api.github.com/repos/jxxghp/MoviePilot/releases")
if version_res:
ver_json = version_res.json()
@@ -451,11 +509,11 @@ def ruletest(title: str,
@router.get("/nettest", summary="测试网络连通性")
def nettest(
url: str,
proxy: bool,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
async def nettest(
url: str,
proxy: bool,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
):
"""
测试网络连通性
@@ -463,43 +521,68 @@ def nettest(
# 记录开始的毫秒数
start_time = datetime.now()
headers = None
if "github" in url or "{GITHUB_PROXY}" in url:
# 当前使用的加速代理
proxy_name = ""
if "github" in url:
# 这是github的连通性测试
headers = settings.GITHUB_HEADERS
if "{GITHUB_PROXY}" in url:
url = url.replace(
"{GITHUB_PROXY}", UrlUtils.standardize_base_url(settings.GITHUB_PROXY or "")
)
headers = settings.GITHUB_HEADERS
if settings.GITHUB_PROXY:
proxy_name = "Github加速代理"
if "{PIP_PROXY}" in url:
url = url.replace(
"{PIP_PROXY}",
UrlUtils.standardize_base_url(
settings.PIP_PROXY or "https://pypi.org/simple/"
),
)
if settings.PIP_PROXY:
proxy_name = "PIP加速代理"
url = url.replace("{TMDBAPIKEY}", settings.TMDB_API_KEY)
url = url.replace(
"{PIP_PROXY}",
UrlUtils.standardize_base_url(settings.PIP_PROXY or "https://pypi.org/simple/"),
)
result = RequestUtils(
result = await AsyncRequestUtils(
proxies=settings.PROXY if proxy else None,
headers=headers,
timeout=10,
ua=settings.USER_AGENT,
ua=settings.NORMAL_USER_AGENT,
).get_res(url)
# 计时结束的毫秒数
end_time = datetime.now()
time = round((end_time - start_time).total_seconds() * 1000)
# 计算相关秒数
if result is None:
return schemas.Response(success=False, message="无法连接", data={"time": time})
return schemas.Response(
success=False, message=f"{proxy_name}无法连接", data={"time": time}
)
elif result.status_code == 200:
if include and not re.search(r"%s" % include, result.text, re.IGNORECASE):
# 通常是被加速代理跳转到其它页面了
logger.error(f"{url} 的响应内容不匹配包含规则 {include}")
if proxy_name:
message = f"{proxy_name}已失效,请检查配置"
else:
message = f"无效响应,不匹配 {include}"
return schemas.Response(
success=False,
message=f"无效响应,不匹配 {include}",
message=message,
data={"time": time},
)
return schemas.Response(success=True, data={"time": time})
else:
return schemas.Response(
success=False, message=f"错误码:{result.status_code}", data={"time": time}
)
if proxy_name:
# 加速代理失败
message = f"{proxy_name}已失效,错误码:{result.status_code}"
else:
message = f"错误码:{result.status_code}"
if "github" in url:
# 非加速代理访问github
if result.status_code == 401:
message = "Github Token已失效请检查配置"
elif result.status_code in {403, 429}:
message = "触发限流请配置Github Token"
return schemas.Response(success=False, message=message, data={"time": time})
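
For context, `{GITHUB_PROXY}` and `{PIP_PROXY}` in the test URLs are literal placeholders expanded before the request is made. A standalone sketch of that substitution (here `standardize_base_url` is assumed to do little more than guarantee a trailing slash, which may differ from `UrlUtils.standardize_base_url`):

```python
def standardize_base_url(url: str) -> str:
    # Assumed behaviour: make sure a non-empty base URL ends with a slash.
    return url if not url or url.endswith("/") else url + "/"


def expand_test_url(url: str, github_proxy: str = "", pip_proxy: str = "") -> str:
    if "{GITHUB_PROXY}" in url:
        url = url.replace("{GITHUB_PROXY}", standardize_base_url(github_proxy))
    if "{PIP_PROXY}" in url:
        url = url.replace("{PIP_PROXY}",
                          standardize_base_url(pip_proxy or "https://pypi.org/simple/"))
    return url


print(expand_test_url("{GITHUB_PROXY}https://raw.githubusercontent.com/org/repo/main/README.md",
                      github_proxy="https://mirror.example.com"))
# -> https://mirror.example.com/https://raw.githubusercontent.com/org/repo/main/README.md
```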
@router.get("/modulelist", summary="查询已加载的模块ID列表", response_model=schemas.Response)

View File

@@ -11,28 +11,28 @@ router = APIRouter()
@router.get("/seasons/{tmdbid}", summary="TMDB所有季", response_model=List[schemas.TmdbSeason])
def tmdb_seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询themoviedb所有季信息
"""
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
seasons_info = await TmdbChain().async_tmdb_seasons(tmdbid=tmdbid)
if seasons_info:
return seasons_info
return []
@router.get("/similar/{tmdbid}/{type_name}", summary="类似电影/电视剧", response_model=List[schemas.MediaInfo])
def tmdb_similar(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_similar(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询类似电影/电视剧type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
medias = TmdbChain().movie_similar(tmdbid=tmdbid)
medias = await TmdbChain().async_movie_similar(tmdbid=tmdbid)
elif mediatype == MediaType.TV:
medias = TmdbChain().tv_similar(tmdbid=tmdbid)
medias = await TmdbChain().async_tv_similar(tmdbid=tmdbid)
else:
return []
if medias:
@@ -41,17 +41,17 @@ def tmdb_similar(tmdbid: int,
@router.get("/recommend/{tmdbid}/{type_name}", summary="推荐电影/电视剧", response_model=List[schemas.MediaInfo])
def tmdb_recommend(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_recommend(tmdbid: int,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询推荐电影/电视剧type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
medias = TmdbChain().movie_recommend(tmdbid=tmdbid)
medias = await TmdbChain().async_movie_recommend(tmdbid=tmdbid)
elif mediatype == MediaType.TV:
medias = TmdbChain().tv_recommend(tmdbid=tmdbid)
medias = await TmdbChain().async_tv_recommend(tmdbid=tmdbid)
else:
return []
if medias:
@@ -60,63 +60,63 @@ def tmdb_recommend(tmdbid: int,
@router.get("/collection/{collection_id}", summary="系列合集详情", response_model=List[schemas.MediaInfo])
def tmdb_collection(collection_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_collection(collection_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据合集ID查询合集详情
"""
medias = TmdbChain().tmdb_collection(collection_id=collection_id)
medias = await TmdbChain().async_tmdb_collection(collection_id=collection_id)
if medias:
return [media.to_dict() for media in medias][(page - 1) * count:page * count]
return []
@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.MediaPerson])
def tmdb_credits(tmdbid: int,
type_name: str,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_credits(tmdbid: int,
type_name: str,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询演员阵容type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
persons = TmdbChain().movie_credits(tmdbid=tmdbid, page=page)
persons = await TmdbChain().async_movie_credits(tmdbid=tmdbid, page=page)
elif mediatype == MediaType.TV:
persons = TmdbChain().tv_credits(tmdbid=tmdbid, page=page)
persons = await TmdbChain().async_tv_credits(tmdbid=tmdbid, page=page)
else:
return []
return persons or []
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def tmdb_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return TmdbChain().person_detail(person_id=person_id)
return await TmdbChain().async_person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def tmdb_person_credits(person_id: int,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_person_credits(person_id: int,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = TmdbChain().person_credits(person_id=person_id, page=page)
medias = await TmdbChain().async_person_credits(person_id=person_id, page=page)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
def tmdb_season_episodes(tmdbid: int, season: int, episode_group: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def tmdb_season_episodes(tmdbid: int, season: int, episode_group: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询某季的所有集信息
"""
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season, episode_group=episode_group)
return await TmdbChain().async_tmdb_episodes(tmdbid=tmdbid, season=season, episode_group=episode_group)

View File

@@ -9,14 +9,14 @@ from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.db.models import User
from app.db.user_oper import get_current_active_superuser
from app.db.user_oper import get_current_active_superuser, get_current_active_superuser_async
from app.utils.crypto import HashUtils
router = APIRouter()
@router.get("/cache", summary="获取种子缓存", response_model=schemas.Response)
def torrents_cache(_: User = Depends(get_current_active_superuser)):
async def torrents_cache(_: User = Depends(get_current_active_superuser_async)):
"""
获取当前种子缓存数据
"""
@@ -24,9 +24,9 @@ def torrents_cache(_: User = Depends(get_current_active_superuser)):
# 获取spider和rss两种缓存
if settings.SUBSCRIBE_MODE == "rss":
cache_info = torrents_chain.get_torrents("rss")
cache_info = await torrents_chain.async_get_torrents("rss")
else:
cache_info = torrents_chain.get_torrents("spider")
cache_info = await torrents_chain.async_get_torrents("spider")
# 统计信息
torrent_count = sum(len(torrents) for torrents in cache_info.values())
@@ -62,9 +62,8 @@ def torrents_cache(_: User = Depends(get_current_active_superuser)):
})
@router.delete("/cache/{domain}/{torrent_hash}", summary="删除指定种子缓存",
response_model=schemas.Response)
def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_active_superuser)):
@router.delete("/cache/{domain}/{torrent_hash}", summary="删除指定种子缓存", response_model=schemas.Response)
async def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_active_superuser_async)):
"""
删除指定的种子缓存
:param domain: 站点域名
@@ -76,7 +75,7 @@ def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_a
try:
# 获取当前缓存
cache_data = torrents_chain.get_torrents()
cache_data = await torrents_chain.async_get_torrents()
if domain not in cache_data:
return schemas.Response(success=False, message=f"站点 {domain} 缓存不存在")
@@ -92,7 +91,7 @@ def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_a
return schemas.Response(success=False, message="未找到指定的种子")
# 保存更新后的缓存
torrents_chain.save_cache(cache_data, torrents_chain.cache_file)
await torrents_chain.async_save_cache(cache_data, torrents_chain.cache_file)
return schemas.Response(success=True, message="种子删除成功")
except Exception as e:
@@ -100,14 +99,14 @@ def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_a
@router.delete("/cache", summary="清理种子缓存", response_model=schemas.Response)
def clear_cache(_: User = Depends(get_current_active_superuser)):
async def clear_cache(_: User = Depends(get_current_active_superuser_async)):
"""
清理所有种子缓存
"""
torrents_chain = TorrentsChain()
try:
torrents_chain.clear_torrents()
await torrents_chain.async_clear_torrents()
return schemas.Response(success=True, message="种子缓存清理完成")
except Exception as e:
return schemas.Response(success=False, message=f"清理失败:{str(e)}")
@@ -135,9 +134,9 @@ def refresh_cache(_: User = Depends(get_current_active_superuser)):
@router.post("/cache/reidentify/{domain}/{torrent_hash}", summary="重新识别种子", response_model=schemas.Response)
def reidentify_cache(domain: str, torrent_hash: str,
tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
_: User = Depends(get_current_active_superuser)):
async def reidentify_cache(domain: str, torrent_hash: str,
tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
_: User = Depends(get_current_active_superuser_async)):
"""
重新识别指定的种子
:param domain: 站点域名
@@ -152,7 +151,7 @@ def reidentify_cache(domain: str, torrent_hash: str,
try:
# 获取当前缓存
cache_data = torrents_chain.get_torrents()
cache_data = await torrents_chain.async_get_torrents()
if domain not in cache_data:
return schemas.Response(success=False, message=f"站点 {domain} 缓存不存在")
@@ -168,14 +167,13 @@ def reidentify_cache(domain: str, torrent_hash: str,
return schemas.Response(success=False, message="未找到指定的种子")
# 重新识别
meta = MetaInfo(title=target_context.torrent_info.title,
subtitle=target_context.torrent_info.description)
meta = MetaInfo(title=target_context.torrent_info.title, subtitle=target_context.torrent_info.description)
if tmdbid or doubanid:
# 手动指定媒体信息
mediainfo = MediaChain().recognize_media(meta=meta, tmdbid=tmdbid, doubanid=doubanid)
mediainfo = await media_chain.async_recognize_media(meta=meta, tmdbid=tmdbid, doubanid=doubanid)
else:
# 自动重新识别
mediainfo = media_chain.recognize_by_meta(meta)
mediainfo = await media_chain.async_recognize_by_meta(meta)
if not mediainfo:
# 创建空的媒体信息
@@ -188,7 +186,7 @@ def reidentify_cache(domain: str, torrent_hash: str,
target_context.media_info = mediainfo
# 保存更新后的缓存
torrents_chain.save_cache(cache_data, TorrentsChain().cache_file)
await torrents_chain.async_save_cache(cache_data, TorrentsChain().cache_file)
return schemas.Response(success=True, message="重新识别完成", data={
"media_name": mediainfo.title if mediainfo else "",

View File

@@ -8,11 +8,14 @@ from app import schemas
from app.chain.media import MediaChain
from app.chain.storage import StorageChain
from app.chain.transfer import TransferChain
from app.core.config import settings, global_vars
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models import User
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser
from app.helper.directory import DirectoryHelper
from app.schemas import MediaType, FileItem, ManualTransferItem
router = APIRouter()
@@ -35,11 +38,19 @@ def query_name(path: str, filetype: str,
if not new_path:
return schemas.Response(success=False, message="未识别到新名称")
if filetype == "dir":
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
media_path = DirectoryHelper.get_media_root_path(
rename_format=settings.RENAME_FORMAT(mediainfo.type),
rename_path=Path(new_path),
)
if media_path:
new_name = media_path.name
else:
new_name = parents[0].name
# fallback
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
else:
new_name = parents[0].name
else:
new_name = Path(new_path).name
return schemas.Response(success=True, data={
@@ -48,7 +59,7 @@ def query_name(path: str, filetype: str,
@router.get("/queue", summary="查询整理队列", response_model=List[schemas.TransferJob])
def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询整理队列
:param _: Token校验
@@ -57,13 +68,15 @@ def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.delete("/queue", summary="从整理队列中删除任务", response_model=schemas.Response)
def remove_queue(fileitem: schemas.FileItem, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
async def remove_queue(fileitem: schemas.FileItem, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
从整理队列中删除任务
:param fileitem: 文件项
:param _: Token校验
"""
TransferChain().remove_from_queue(fileitem)
# 取消整理
global_vars.stop_transfer(fileitem.path)
return schemas.Response(success=True)
@@ -71,7 +84,7 @@ def remove_queue(fileitem: schemas.FileItem, _: schemas.TokenPayload = Depends(v
def manual_transfer(transer_item: ManualTransferItem,
background: Optional[bool] = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
手动转移,文件或历史记录,支持自定义剧集识别格式
:param transer_item: 手工整理项
@@ -98,7 +111,7 @@ def manual_transfer(transer_item: ManualTransferItem,
if history.dest_fileitem:
# 删除旧的已整理文件
dest_fileitem = FileItem(**history.dest_fileitem)
state = StorageChain().delete_media_file(dest_fileitem, mtype=MediaType(history.type))
state = StorageChain().delete_media_file(dest_fileitem)
if not state:
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")

View File

@@ -3,13 +3,14 @@ import re
from typing import Annotated, Any, List, Union
from fastapi import APIRouter, Body, Depends, HTTPException, UploadFile, File
from sqlalchemy.orm import Session
from sqlalchemy.ext.asyncio import AsyncSession
from app import schemas
from app.core.security import get_password_hash
from app.db import get_db
from app.db import get_async_db
from app.db.models.user import User
from app.db.user_oper import get_current_active_superuser, get_current_active_user
from app.db.user_oper import get_current_active_superuser_async, \
get_current_active_user_async, get_current_active_user
from app.db.userconfig_oper import UserConfigOper
from app.utils.otp import OtpUtils
@@ -17,50 +18,48 @@ router = APIRouter()
@router.get("/", summary="所有用户", response_model=List[schemas.User])
def list_users(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_superuser),
async def list_users(
db: AsyncSession = Depends(get_async_db),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
查询用户列表
"""
users = current_user.list(db)
return users
return await current_user.async_list(db)
@router.post("/", summary="新增用户", response_model=schemas.Response)
def create_user(
async def create_user(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_in: schemas.UserCreate,
current_user: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
新增用户
"""
user = current_user.get_by_name(db, name=user_in.name)
user = await current_user.async_get_by_name(db, name=user_in.name)
if user:
return schemas.Response(success=False, message="用户已存在")
user_info = user_in.dict()
user_info = user_in.model_dump()
if user_info.get("password"):
user_info["hashed_password"] = get_password_hash(user_info["password"])
user_info.pop("password")
user = User(**user_info)
user.create(db)
return schemas.Response(success=True)
user = await User(**user_info).async_create(db)
return schemas.Response(success=True if user else False)
@router.put("/", summary="更新用户", response_model=schemas.Response)
def update_user(
async def update_user(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_in: schemas.UserUpdate,
_: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
更新用户
"""
user_info = user_in.dict()
user_info = user_in.model_dump()
if user_info.get("password"):
# 正则表达式匹配密码包含字母、数字、特殊字符中的至少两项
pattern = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'
@@ -69,24 +68,24 @@ def update_user(
message="密码需要同时包含字母、数字、特殊字符中的至少两项且长度大于6位")
user_info["hashed_password"] = get_password_hash(user_info["password"])
user_info.pop("password")
user = User.get_by_id(db, user_id=user_info["id"])
user = await current_user.async_get_by_id(db, user_id=user_info["id"])
user_name = user_info.get("name")
if not user_name:
return schemas.Response(success=False, message="用户名不能为空")
# 新用户名去重
users = User.list(db)
users = await current_user.async_list(db)
for u in users:
if u.name == user_name and u.id != user_info["id"]:
return schemas.Response(success=False, message="用户名已被使用")
if not user:
return schemas.Response(success=False, message="用户不存在")
user.update(db, user_info)
await user.async_update(db, user_info)
return schemas.Response(success=True)
@router.get("/current", summary="当前登录用户信息", response_model=schemas.User)
def read_current_user(
current_user: User = Depends(get_current_active_user)
async def read_current_user(
current_user: User = Depends(get_current_active_user_async)
) -> Any:
"""
当前登录用户信息
@@ -95,18 +94,18 @@ def read_current_user(
@router.post("/avatar/{user_id}", summary="上传用户头像", response_model=schemas.Response)
def upload_avatar(user_id: int, db: Session = Depends(get_db), file: UploadFile = File(...),
_: User = Depends(get_current_active_user)):
async def upload_avatar(user_id: int, db: AsyncSession = Depends(get_async_db), file: UploadFile = File(...),
_: User = Depends(get_current_active_user_async)):
"""
上传用户头像
"""
# 将文件转换为Base64
file_base64 = base64.b64encode(file.file.read())
# 更新到用户表
user = User.get(db, user_id)
user = await User.async_get(db, user_id)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.update(db, {
await user.async_update(db, {
"avatar": f"data:image/ico;base64,{file_base64}"
})
return schemas.Response(success=True, message=file.filename)
@@ -121,31 +120,31 @@ def otp_generate(
@router.post('/otp/judge', summary='判断otp验证是否通过', response_model=schemas.Response)
def otp_judge(
async def otp_judge(
data: dict,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user)
db: AsyncSession = Depends(get_async_db),
current_user: User = Depends(get_current_active_user_async)
) -> Any:
uri = data.get("uri")
otp_password = data.get("otpPassword")
if not OtpUtils.is_legal(uri, otp_password):
return schemas.Response(success=False, message="验证码错误")
current_user.update_otp_by_name(db, current_user.name, True, OtpUtils.get_secret(uri))
await current_user.async_update_otp_by_name(db, current_user.name, True, OtpUtils.get_secret(uri))
return schemas.Response(success=True)
@router.post('/otp/disable', summary='关闭当前用户的otp验证', response_model=schemas.Response)
def otp_disable(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user)
async def otp_disable(
db: AsyncSession = Depends(get_async_db),
current_user: User = Depends(get_current_active_user_async)
) -> Any:
current_user.update_otp_by_name(db, current_user.name, False, "")
await current_user.async_update_otp_by_name(db, current_user.name, False, "")
return schemas.Response(success=True)
@router.get('/otp/{userid}', summary='判断当前用户是否开启otp验证', response_model=schemas.Response)
def otp_enable(userid: str, db: Session = Depends(get_db)) -> Any:
user: User = User.get_by_name(db, userid)
async def otp_enable(userid: str, db: AsyncSession = Depends(get_async_db)) -> Any:
user: User = await User.async_get_by_name(db, userid)
if not user:
return schemas.Response(success=False)
return schemas.Response(success=user.is_otp)
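
The user routes now pair `AsyncSession = Depends(get_async_db)` with awaited `async_*` model helpers, while Pydantic v2's `model_dump()` replaces the deprecated `dict()`. A self-contained sketch of the underlying SQLAlchemy 2.0 async pattern, assuming `sqlalchemy>=2.0` plus the `aiosqlite` driver (all names below are invented for illustration):

```python
import asyncio

from sqlalchemy import String, select
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class DemoUser(Base):
    __tablename__ = "demo_user"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(64), unique=True)


async def main() -> None:
    # In-memory SQLite through the aiosqlite driver; the real app injects
    # a session from get_async_db() instead of building its own engine.
    engine = create_async_engine("sqlite+aiosqlite:///:memory:")
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

    session_factory = async_sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
    async with session_factory() as db:
        db.add(DemoUser(name="admin"))
        await db.commit()
        result = await db.execute(select(DemoUser).where(DemoUser.name == "admin"))
        user = result.scalar_one_or_none()
        print(user.id if user else None)

    await engine.dispose()


if __name__ == "__main__":
    asyncio.run(main())
```
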
@@ -165,9 +164,9 @@ def get_config(key: str,
@router.post("/config/{key}", summary="更新用户配置", response_model=schemas.Response)
def set_config(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
current_user: User = Depends(get_current_active_user),
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
current_user: User = Depends(get_current_active_user),
):
"""
更新用户配置
@@ -177,49 +176,49 @@ def set_config(
@router.delete("/id/{user_id}", summary="删除用户", response_model=schemas.Response)
def delete_user_by_id(
async def delete_user_by_id(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_id: int,
current_user: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
通过唯一ID删除用户
"""
user = current_user.get_by_id(db, user_id=user_id)
user = await current_user.async_get_by_id(db, user_id=user_id)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.delete_by_id(db, user_id)
await current_user.async_delete(db, user_id)
return schemas.Response(success=True)
@router.delete("/name/{user_name}", summary="删除用户", response_model=schemas.Response)
def delete_user_by_name(
async def delete_user_by_name(
*,
db: Session = Depends(get_db),
db: AsyncSession = Depends(get_async_db),
user_name: str,
current_user: User = Depends(get_current_active_superuser),
current_user: User = Depends(get_current_active_superuser_async),
) -> Any:
"""
通过用户名删除用户
"""
user = current_user.get_by_name(db, name=user_name)
user = await current_user.async_get_by_name(db, name=user_name)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.delete_by_name(db, user_name)
await current_user.async_delete(db, user.id)
return schemas.Response(success=True)
@router.get("/{username}", summary="用户详情", response_model=schemas.User)
def read_user_by_name(
async def read_user_by_name(
username: str,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user_async),
db: AsyncSession = Depends(get_async_db),
) -> Any:
"""
查询用户详情
"""
user = current_user.get_by_name(db, name=username)
user = await current_user.async_get_by_name(db, name=username)
if not user:
raise HTTPException(
status_code=404,

View File

@@ -32,8 +32,8 @@ async def webhook_message(background_tasks: BackgroundTasks,
@router.get("/", summary="Webhook消息响应", response_model=schemas.Response)
def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: Annotated[str, Depends(verify_apitoken)]) -> Any:
async def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
Webhook响应配置请求中需要添加参数token=API_TOKEN&source=媒体服务器名
"""

View File

@@ -1,51 +1,59 @@
import json
from datetime import datetime
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
from app.chain.workflow import WorkflowChain
from app.core.config import global_vars
from app.core.plugin import PluginManager
from app.core.workflow import WorkFlowManager
from app.db import get_db
from app.db.models.workflow import Workflow
from app.core.security import verify_token
from app.workflow import WorkFlowManager
from app.db import get_async_db, get_db
from app.db.models import Workflow
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.chain.workflow import WorkflowChain
from app.db.workflow_oper import WorkflowOper
from app.helper.workflow import WorkflowHelper
from app.scheduler import Scheduler
from app.schemas.types import EventType, EVENT_TYPE_NAMES
router = APIRouter()
@router.get("/", summary="所有工作流", response_model=List[schemas.Workflow])
def list_workflows(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def list_workflows(db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取工作流列表
"""
return Workflow.list(db)
return await WorkflowOper(db).async_list()
@router.post("/", summary="创建工作流", response_model=schemas.Response)
def create_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def create_workflow(workflow: schemas.Workflow,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
创建工作流
"""
if Workflow.get_by_name(db, workflow.name):
if workflow.name and await WorkflowOper(db).async_get_by_name(workflow.name):
return schemas.Response(success=False, message="已存在相同名称的工作流")
if not workflow.add_time:
workflow.add_time = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
if not workflow.state:
workflow.state = "P"
Workflow(**workflow.dict()).create(db)
if not workflow.trigger_type:
workflow.trigger_type = "timer"
workflow_obj = Workflow(**workflow.model_dump())
await workflow_obj.async_create(db)
return schemas.Response(success=True, message="创建工作流成功")
@router.get("/plugin/actions", summary="查询插件动作", response_model=List[dict])
def list_plugin_actions(plugin_id: str = None, _: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
def list_plugin_actions(plugin_id: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取所有动作
"""
@@ -53,60 +61,124 @@ def list_plugin_actions(plugin_id: str = None, _: schemas.TokenPayload = Depends
@router.get("/actions", summary="所有动作", response_model=List[dict])
def list_actions(_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def list_actions(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取所有动作
"""
return WorkFlowManager().list_actions()
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
def get_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.get("/event_types", summary="获取所有事件类型", response_model=List[dict])
async def get_event_types(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取工作流详情
获取所有事件类型
"""
return Workflow.get(db, workflow_id)
return [{
"title": EVENT_TYPE_NAMES.get(event_type, event_type.name),
"value": event_type.value
} for event_type in EventType]
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.post("/share", summary="分享工作流", response_model=schemas.Response)
async def workflow_share(
workflow: schemas.WorkflowShare,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
更新工作流
分享工作流
"""
wf = Workflow.get(db, workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
wf.update(db, workflow.dict())
return schemas.Response(success=True, message="更新成功")
if not workflow.id or not workflow.share_title or not workflow.share_user:
return schemas.Response(success=False, message="请填写工作流ID、分享标题和分享人")
state, errmsg = await WorkflowHelper().async_workflow_share(workflow_id=workflow.id,
share_title=workflow.share_title or "",
share_comment=workflow.share_comment or "",
share_user=workflow.share_user or "")
return schemas.Response(success=state, message=errmsg)
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.delete("/share/{share_id}", summary="删除分享", response_model=schemas.Response)
async def workflow_share_delete(
share_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除工作流
删除分享
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
Scheduler().remove_workflow_job(workflow)
# 删除工作流
Workflow.delete(db, workflow_id)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True, message="删除成功")
state, errmsg = await WorkflowHelper().async_share_delete(share_id=share_id)
return schemas.Response(success=state, message=errmsg)
@router.post("/fork", summary="复用工作流", response_model=schemas.Response)
async def workflow_fork(
workflow: schemas.WorkflowShare,
db: AsyncSession = Depends(get_async_db),
_: schemas.User = Depends(verify_token)) -> Any:
"""
复用工作流
"""
if not workflow.name:
return schemas.Response(success=False, message="工作流名称不能为空")
# 解析JSON数据添加错误处理
try:
actions = json.loads(workflow.actions or "[]")
except json.JSONDecodeError:
return schemas.Response(success=False, message="actions字段JSON格式错误")
try:
flows = json.loads(workflow.flows or "[]")
except json.JSONDecodeError:
return schemas.Response(success=False, message="flows字段JSON格式错误")
try:
context = json.loads(workflow.context or "{}")
except json.JSONDecodeError:
return schemas.Response(success=False, message="context字段JSON格式错误")
# 创建工作流
workflow_dict = {
"name": workflow.name,
"description": workflow.description,
"timer": workflow.timer,
"trigger_type": workflow.trigger_type or "timer",
"event_type": workflow.event_type,
"event_conditions": json.loads(workflow.event_conditions or "{}") if workflow.event_conditions else {},
"actions": actions,
"flows": flows,
"context": context,
"state": "P" # 默认暂停状态
}
# 检查名称是否重复
workflow_oper = WorkflowOper(db)
if await workflow_oper.async_get_by_name(workflow_dict["name"]):
return schemas.Response(success=False, message="已存在相同名称的工作流")
# 创建新工作流
workflow = await Workflow(**workflow_dict).async_create(db)
# 更新复用次数
if workflow:
await WorkflowHelper().async_workflow_fork(share_id=workflow.id)
return schemas.Response(success=True, message="复用成功")
@router.get("/shares", summary="查询分享的工作流", response_model=List[schemas.WorkflowShare])
async def workflow_shares(
name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询分享的工作流
"""
return await WorkflowHelper().async_get_shares(name=name, page=page, count=count)
@router.post("/{workflow_id}/run", summary="执行工作流", response_model=schemas.Response)
def run_workflow(workflow_id: int,
from_begin: Optional[bool] = True,
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
执行工作流
"""
@@ -119,15 +191,19 @@ def run_workflow(workflow_id: int,
@router.post("/{workflow_id}/start", summary="启用工作流", response_model=schemas.Response)
def start_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
启用工作流
"""
workflow = Workflow.get(db, workflow_id)
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 添加定时任务
Scheduler().update_workflow_job(workflow)
if not workflow.trigger_type or workflow.trigger_type == "timer":
# 添加定时任务
Scheduler().update_workflow_job(workflow)
else:
# 事件触发:添加到事件触发器
WorkFlowManager().load_workflow_events(workflow_id)
# 更新状态
workflow.update_state(db, workflow_id, "W")
return schemas.Response(success=True)
@@ -136,15 +212,20 @@ def start_workflow(workflow_id: int,
@router.post("/{workflow_id}/pause", summary="停用工作流", response_model=schemas.Response)
def pause_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
停用工作流
"""
workflow = Workflow.get(db, workflow_id)
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
Scheduler().remove_workflow_job(workflow)
# 根据触发类型进行不同处理
if workflow.trigger_type == "timer":
# 定时触发:移除定时任务
Scheduler().remove_workflow_job(workflow)
elif workflow.trigger_type == "event":
# 事件触发:从事件触发器中移除
WorkFlowManager().remove_workflow_event(workflow_id, workflow.event_type)
# 停止工作流
global_vars.stop_workflow(workflow_id)
# 更新状态
@@ -153,19 +234,77 @@ def pause_workflow(workflow_id: int,
@router.post("/{workflow_id}/reset", summary="重置工作流", response_model=schemas.Response)
def reset_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
async def reset_workflow(workflow_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
重置工作流
"""
workflow = Workflow.get(db, workflow_id)
workflow = await WorkflowOper(db).async_get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 停止工作流
global_vars.stop_workflow(workflow_id)
# 重置工作流
workflow.reset(db, workflow_id, reset_count=True)
await Workflow.async_reset(db, workflow_id, reset_count=True)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True)
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
async def get_workflow(workflow_id: int,
db: AsyncSession = Depends(get_async_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取工作流详情
"""
return await WorkflowOper(db).async_get(workflow_id)
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
更新工作流
"""
if not workflow.id:
return schemas.Response(success=False, message="工作流ID不能为空")
workflow_oper = WorkflowOper(db)
wf = workflow_oper.get(workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
if not wf.trigger_type:
workflow.trigger_type = "timer"
wf.update(db, workflow.model_dump())
# 更新后的工作流对象
updated_workflow = workflow_oper.get(workflow.id)
# 更新定时任务
Scheduler().update_workflow_job(updated_workflow)
# 更新事件注册
WorkFlowManager().update_workflow_event(updated_workflow)
return schemas.Response(success=True, message="更新成功")
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除工作流
"""
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
if not workflow.trigger_type or workflow.trigger_type == "timer":
# 定时触发:删除定时任务
Scheduler().remove_workflow_job(workflow)
else:
# 事件触发:从事件触发器中移除
WorkFlowManager().remove_workflow_event(workflow_id, workflow.event_type)
# 删除工作流
Workflow.delete(db, workflow_id)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True, message="删除成功")

View File

@@ -1,15 +1,16 @@
from typing import Any, List, Annotated
from fastapi import APIRouter, HTTPException, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.tvdb import TvdbChain
from app.chain.subscribe import SubscribeChain
from app.chain.tvdb import TvdbChain
from app.core.metainfo import MetaInfo
from app.core.security import verify_apikey
from app.db import get_db
from app.db import get_db, get_async_db
from app.db.models.subscribe import Subscribe
from app.schemas import RadarrMovie, SonarrSeries
from app.schemas.types import MediaType
@@ -19,7 +20,7 @@ arr_router = APIRouter(tags=['servarr'])
@arr_router.get("/system/status", summary="系统状态")
def arr_system_status(_: Annotated[str, Depends(verify_apikey)]) -> Any:
async def arr_system_status(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr系统状态
"""
@@ -73,7 +74,7 @@ def arr_system_status(_: Annotated[str, Depends(verify_apikey)]) -> Any:
@arr_router.get("/qualityProfile", summary="质量配置")
def arr_qualityProfile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
async def arr_qualityProfile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr质量配置
"""
@@ -114,7 +115,7 @@ def arr_qualityProfile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
@arr_router.get("/rootfolder", summary="根目录")
def arr_rootfolder(_: Annotated[str, Depends(verify_apikey)]) -> Any:
async def arr_rootfolder(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr根目录
"""
@@ -130,7 +131,7 @@ def arr_rootfolder(_: Annotated[str, Depends(verify_apikey)]) -> Any:
@arr_router.get("/tag", summary="标签")
def arr_tag(_: Annotated[str, Depends(verify_apikey)]) -> Any:
async def arr_tag(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr标签
"""
@@ -143,7 +144,7 @@ def arr_tag(_: Annotated[str, Depends(verify_apikey)]) -> Any:
@arr_router.get("/languageprofile", summary="语言")
def arr_languageprofile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
async def arr_languageprofile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr语言
"""
@@ -169,7 +170,7 @@ def arr_languageprofile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
@arr_router.get("/movie", summary="所有订阅电影", response_model=List[schemas.RadarrMovie])
def arr_movies(_: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
async def arr_movies(_: Annotated[str, Depends(verify_apikey)], db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Radarr电影
"""
@@ -240,7 +241,7 @@ def arr_movies(_: Annotated[str, Depends(verify_apikey)], db: Session = Depends(
"""
# 查询所有电影订阅
result = []
subscribes = Subscribe.list(db)
subscribes = await Subscribe.async_list(db)
for subscribe in subscribes:
if subscribe.type != MediaType.MOVIE.value:
continue
@@ -306,11 +307,12 @@ def arr_movie_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db: S
@arr_router.get("/movie/{mid}", summary="电影订阅详情", response_model=schemas.RadarrMovie)
def arr_movie(mid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
async def arr_movie(mid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Radarr电影订阅
"""
subscribe = Subscribe.get(db, mid)
subscribe = await Subscribe.async_get(db, mid)
if subscribe:
return RadarrMovie(
id=subscribe.id,
@@ -332,25 +334,25 @@ def arr_movie(mid: int, _: Annotated[str, Depends(verify_apikey)], db: Session =
@arr_router.post("/movie", summary="新增电影订阅")
def arr_add_movie(_: Annotated[str, Depends(verify_apikey)],
movie: RadarrMovie,
db: Session = Depends(get_db)
) -> Any:
async def arr_add_movie(_: Annotated[str, Depends(verify_apikey)],
movie: RadarrMovie,
db: AsyncSession = Depends(get_async_db)
) -> Any:
"""
新增Radarr电影订阅
"""
# 检查订阅是否已存在
subscribe = Subscribe.get_by_tmdbid(db, movie.tmdbId)
subscribe = await Subscribe.async_get_by_tmdbid(db, movie.tmdbId)
if subscribe:
return {
"id": subscribe.id
}
# 添加订阅
sid, message = SubscribeChain().add(title=movie.title,
year=movie.year,
mtype=MediaType.MOVIE,
tmdbid=movie.tmdbId,
username="Seerr")
sid, message = await SubscribeChain().async_add(title=movie.title,
year=movie.year,
mtype=MediaType.MOVIE,
tmdbid=movie.tmdbId,
username="Seerr")
if sid:
return {
"id": sid
@@ -363,13 +365,14 @@ def arr_add_movie(_: Annotated[str, Depends(verify_apikey)],
@arr_router.delete("/movie/{mid}", summary="删除电影订阅", response_model=schemas.Response)
def arr_remove_movie(mid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
async def arr_remove_movie(mid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
删除Radarr电影订阅
"""
subscribe = Subscribe.get(db, mid)
subscribe = await Subscribe.async_get(db, mid)
if subscribe:
subscribe.delete(db, mid)
await subscribe.async_delete(db, mid)
return schemas.Response(success=True)
else:
raise HTTPException(
@@ -379,7 +382,7 @@ def arr_remove_movie(mid: int, _: Annotated[str, Depends(verify_apikey)], db: Se
@arr_router.get("/series", summary="所有剧集", response_model=List[schemas.SonarrSeries])
def arr_series(_: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
async def arr_series(_: Annotated[str, Depends(verify_apikey)], db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Sonarr剧集
"""
@@ -487,7 +490,7 @@ def arr_series(_: Annotated[str, Depends(verify_apikey)], db: Session = Depends(
"""
# 查询所有电视剧订阅
result = []
subscribes = Subscribe.list(db)
subscribes = await Subscribe.async_list(db)
for subscribe in subscribes:
if subscribe.type != MediaType.TV.value:
continue
@@ -605,11 +608,12 @@ def arr_series_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db:
@arr_router.get("/series/{tid}", summary="剧集详情")
def arr_serie(tid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
async def arr_serie(tid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
查询Sonarr剧集
"""
subscribe = Subscribe.get(db, tid)
subscribe = await Subscribe.async_get(db, tid)
if subscribe:
return SonarrSeries(
id=subscribe.id,
@@ -639,17 +643,17 @@ def arr_serie(tid: int, _: Annotated[str, Depends(verify_apikey)], db: Session =
@arr_router.post("/series", summary="新增剧集订阅")
def arr_add_series(tv: schemas.SonarrSeries,
_: Annotated[str, Depends(verify_apikey)],
db: Session = Depends(get_db)) -> Any:
async def arr_add_series(tv: schemas.SonarrSeries,
_: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
新增Sonarr剧集订阅
"""
# 检查订阅是否存在
left_seasons = []
for season in tv.seasons:
subscribe = Subscribe.get_by_tmdbid(db, tmdbid=tv.tmdbId,
season=season.get("seasonNumber"))
subscribe = await Subscribe.async_get_by_tmdbid(db, tmdbid=tv.tmdbId,
season=season.get("seasonNumber"))
if subscribe:
continue
left_seasons.append(season)
@@ -664,12 +668,12 @@ def arr_add_series(tv: schemas.SonarrSeries,
for season in left_seasons:
if not season.get("monitored"):
continue
sid, message = SubscribeChain().add(title=tv.title,
year=tv.year,
season=season.get("seasonNumber"),
tmdbid=tv.tmdbId,
mtype=MediaType.TV,
username="Seerr")
sid, message = await SubscribeChain().async_add(title=tv.title,
year=tv.year,
season=season.get("seasonNumber"),
tmdbid=tv.tmdbId,
mtype=MediaType.TV,
username="Seerr")
if sid:
return {
@@ -683,21 +687,22 @@ def arr_add_series(tv: schemas.SonarrSeries,
@arr_router.put("/series", summary="更新剧集订阅")
def arr_update_series(tv: schemas.SonarrSeries) -> Any:
async def arr_update_series(tv: schemas.SonarrSeries, _: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
更新Sonarr剧集订阅
"""
return arr_add_series(tv)
return await arr_add_series(tv)
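
`arr_update_series` simply awaits `arr_add_series`, which in turn awaits `SubscribeChain().async_add(...)`; per the diff the call returns an `(id, message)` tuple. A minimal stand-in sketch of that contract (the chain class below is invented for illustration):

```python
import asyncio
from typing import Optional, Tuple


class FakeSubscribeChain:
    # Hypothetical stand-in: the diff shows SubscribeChain().async_add(...)
    # returning an (id, message) tuple.
    async def async_add(self, title: str, year: str, tmdbid: int, **_: object) -> Tuple[Optional[int], str]:
        return 42, "订阅已添加"


async def add_movie(title: str, year: str, tmdbid: int) -> dict:
    sid, message = await FakeSubscribeChain().async_add(title=title, year=year, tmdbid=tmdbid)
    return {"id": sid} if sid else {"success": False, "message": message}


if __name__ == "__main__":
    print(asyncio.run(add_movie("Demo Movie", "2024", 12345)))
```
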
@arr_router.delete("/series/{tid}", summary="删除剧集订阅")
def arr_remove_series(tid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
async def arr_remove_series(tid: int, _: Annotated[str, Depends(verify_apikey)],
db: AsyncSession = Depends(get_async_db)) -> Any:
"""
删除Sonarr剧集订阅
"""
subscribe = Subscribe.get(db, tid)
subscribe = await Subscribe.async_get(db, tid)
if subscribe:
subscribe.delete(db, tid)
await subscribe.async_delete(db, tid)
return schemas.Response(success=True)
else:
raise HTTPException(

View File

@@ -2,6 +2,8 @@ import gzip
import json
from typing import Annotated, Callable, Any, Dict, Optional
import aiofiles
from anyio import Path as AsyncPath
from fastapi import APIRouter, Depends, HTTPException, Path, Request, Response
from fastapi.responses import PlainTextResponse
from fastapi.routing import APIRoute
@@ -19,7 +21,7 @@ class GzipRequest(Request):
body = await super().body()
if "gzip" in self.headers.getlist("Content-Encoding"):
body = gzip.decompress(body)
self._body = body # noqa
self._body = body # noqa
return self._body
@@ -50,12 +52,12 @@ cookie_router = APIRouter(route_class=GzipRoute,
@cookie_router.get("/", response_class=PlainTextResponse)
def get_root():
async def get_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@cookie_router.post("/", response_class=PlainTextResponse)
def post_root():
async def post_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@@ -64,31 +66,31 @@ async def update_cookie(req: schemas.CookieData):
"""
上传Cookie数据
"""
file_path = settings.COOKIE_PATH / f"{req.uuid}.json"
file_path = AsyncPath(settings.COOKIE_PATH) / f"{req.uuid}.json"
content = json.dumps({"encrypted": req.encrypted})
with open(file_path, encoding="utf-8", mode="w") as file:
file.write(content)
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
async with aiofiles.open(file_path, encoding="utf-8", mode="w") as file:
await file.write(content)
async with aiofiles.open(file_path, encoding="utf-8", mode="r") as file:
read_content = await file.read()
if read_content == content:
return {"action": "done"}
else:
return {"action": "error"}
def load_encrypt_data(uuid: str) -> Dict[str, Any]:
async def load_encrypt_data(uuid: str) -> Dict[str, Any]:
"""
加载本地加密原始数据
"""
file_path = settings.COOKIE_PATH / f"{uuid}.json"
file_path = AsyncPath(settings.COOKIE_PATH) / f"{uuid}.json"
# 检查文件是否存在
if not await file_path.exists():
raise HTTPException(status_code=404, detail="Item not found")
# 读取文件
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
async with aiofiles.open(file_path, encoding="utf-8", mode="r") as file:
read_content = await file.read()
data = json.loads(read_content.encode("utf-8"))
return data
@@ -120,7 +122,7 @@ async def get_cookie(
"""
GET 下载加密数据
"""
return load_encrypt_data(uuid)
return await load_encrypt_data(uuid)
@cookie_router.post("/get/{uuid}")
@@ -130,5 +132,5 @@ async def post_cookie(
"""
POST 下载加密数据
"""
data = load_encrypt_data(uuid)
data = await load_encrypt_data(uuid)
return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])
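
The CookieCloud endpoints switch from blocking `open()` calls to `aiofiles`, with `anyio.Path` for path handling. A reduced, runnable sketch of the write-then-verify cycle used by `update_cookie` (paths and payload are placeholders):

```python
import asyncio
import json

import aiofiles
from anyio import Path as AsyncPath


async def write_and_verify(directory: str, uuid: str, payload: dict) -> bool:
    # Write the JSON payload, read it back, and confirm the round trip,
    # without blocking the event loop on file I/O.
    file_path = AsyncPath(directory) / f"{uuid}.json"
    content = json.dumps(payload)
    async with aiofiles.open(file_path, encoding="utf-8", mode="w") as file:
        await file.write(content)
    async with aiofiles.open(file_path, encoding="utf-8", mode="r") as file:
        return await file.read() == content


if __name__ == "__main__":
    print(asyncio.run(write_and_verify(".", "demo-uuid", {"encrypted": "..."})))
```

Note that `anyio.Path` methods such as `exists()` are themselves coroutines and must be awaited.
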

View File

@@ -1,4 +1,5 @@
import copy
import inspect
import pickle
import traceback
from abc import ABCMeta
@@ -6,9 +7,11 @@ from collections.abc import Callable
from pathlib import Path
from typing import Optional, Any, Tuple, List, Set, Union, Dict
from fastapi.concurrency import run_in_threadpool
from qbittorrentapi import TorrentFilesList
from transmission_rpc import File
from app.core.cache import FileCache, AsyncFileCache, fresh, async_fresh
from app.core.config import settings
from app.core.context import Context, MediaInfo, TorrentInfo
from app.core.event import EventManager
@@ -43,58 +46,127 @@ class ChainBase(metaclass=ABCMeta):
send_callback=self.run_module
)
self.pluginmanager = PluginManager()
self.filecache = FileCache()
self.async_filecache = AsyncFileCache()
@staticmethod
def load_cache(filename: str) -> Any:
def load_cache(self, filename: str) -> Any:
"""
从本地加载缓存
加载缓存
"""
cache_path = settings.TEMP_PATH / filename
if cache_path.exists():
try:
with open(cache_path, 'rb') as f:
return pickle.load(f)
except Exception as err:
logger.error(f"加载缓存 {filename} 出错:{str(err)}")
return None
content = self.filecache.get(filename)
if not content:
return None
try:
return pickle.loads(content)
except Exception as err:
logger.error(f"加载缓存 {filename} 出错:{str(err)}")
return None
@staticmethod
def save_cache(cache: Any, filename: str) -> None:
async def async_load_cache(self, filename: str) -> Any:
"""
保存缓存到本地
异步加载缓存
"""
content = await self.async_filecache.get(filename)
if not content:
return None
try:
return pickle.loads(content)
except Exception as err:
logger.error(f"异步加载缓存 {filename} 出错:{str(err)}")
return None
async def async_save_cache(self, cache: Any, filename: str) -> None:
"""
异步保存缓存
"""
try:
with open(settings.TEMP_PATH / filename, 'wb') as f:
pickle.dump(cache, f) # noqa
await self.async_filecache.set(filename, pickle.dumps(cache))
except Exception as err:
logger.error(f"异步保存缓存 {filename} 出错:{str(err)}")
return
def save_cache(self, cache: Any, filename: str) -> None:
"""
保存缓存
"""
try:
self.filecache.set(filename, pickle.dumps(cache))
except Exception as err:
logger.error(f"保存缓存 {filename} 出错:{str(err)}")
return
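
The cache helpers now pickle objects to bytes and hand them to a `FileCache` backend (Redis or local, per the comments) instead of writing files under `TEMP_PATH` directly. A standalone sketch of that round trip with an invented in-memory backend:

```python
import pickle
from typing import Any, Dict, Optional


class FakeFileCache:
    # Invented byte-oriented backend standing in for FileCache / Redis.
    def __init__(self) -> None:
        self._store: Dict[str, bytes] = {}

    def set(self, key: str, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: str) -> Optional[bytes]:
        return self._store.get(key)


def save_cache(cache: FakeFileCache, obj: Any, filename: str) -> None:
    # Objects are serialized with pickle before entering the cache backend.
    cache.set(filename, pickle.dumps(obj))


def load_cache(cache: FakeFileCache, filename: str) -> Any:
    content = cache.get(filename)
    if not content:
        return None
    try:
        return pickle.loads(content)
    except Exception:  # defensive load, mirroring the diff
        return None


if __name__ == "__main__":
    c = FakeFileCache()
    save_cache(c, {"torrents": [1, 2, 3]}, "__torrents_cache__")
    print(load_cache(c, "__torrents_cache__"))
```
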
def remove_cache(self, filename: str) -> None:
"""
删除缓存同时删除Redis和本地缓存
"""
self.filecache.delete(filename)
async def async_remove_cache(self, filename: str) -> None:
"""
异步删除缓存同时删除Redis和本地缓存
"""
await self.async_filecache.delete(filename)
@staticmethod
def remove_cache(filename: str) -> None:
def __is_valid_empty(ret):
"""
删除本地缓存
判断结果是否为空
"""
cache_path = settings.TEMP_PATH / filename
if cache_path.exists():
cache_path.unlink()
if isinstance(ret, tuple):
return all(value is None for value in ret)
else:
return ret is None
def run_module(self, method: str, *args, **kwargs) -> Any:
def __handle_plugin_error(self, err: Exception, plugin_id: str, plugin_name: str, method: str, **kwargs):
"""
运行包含该方法的所有模块,然后返回结果
当kwargs包含命名参数raise_exception时如模块方法抛出异常且raise_exception为True则同步抛出异常
处理插件模块执行错误
"""
if kwargs.get("raise_exception"):
raise
logger.error(
f"运行插件 {plugin_id} 模块 {method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{plugin_name} 发生了错误",
message=str(err),
role="plugin")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "plugin",
"plugin_id": plugin_id,
"plugin_name": plugin_name,
"plugin_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
def is_result_empty(ret):
"""
判断结果是否为空
"""
if isinstance(ret, tuple):
return all(value is None for value in ret)
else:
return ret is None
def __handle_system_error(self, err: Exception, module_id: str, module_name: str, method: str, **kwargs):
"""
处理系统模块执行错误
"""
if kwargs.get("raise_exception"):
raise
logger.error(
f"运行模块 {module_id}.{method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{module_name}发生了错误",
message=str(err),
role="system")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "module",
"module_id": module_id,
"module_name": module_name,
"module_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
result = None
# 插件模块
def __execute_plugin_modules(self, method: str, result: Any, *args, **kwargs) -> Any:
"""
执行插件模块
"""
for plugin, module_dict in self.pluginmanager.get_plugin_modules().items():
plugin_id, plugin_name = plugin
if method in module_dict:
@@ -102,7 +174,7 @@ class ChainBase(metaclass=ABCMeta):
if func:
try:
logger.info(f"请求插件 {plugin_name} 执行:{method} ...")
if is_result_empty(result):
if self.__is_valid_empty(result):
# 返回None第一次执行或者需继续执行下一模块
result = func(*args, **kwargs)
elif isinstance(result, list):
@@ -113,29 +185,46 @@ class ChainBase(metaclass=ABCMeta):
else:
break
except Exception as err:
if kwargs.get("raise_exception"):
raise
logger.error(
f"运行插件 {plugin_id} 模块 {method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{plugin_name} 发生了错误",
message=str(err),
role="plugin")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "plugin",
"plugin_id": plugin_id,
"plugin_name": plugin_name,
"plugin_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
if not is_result_empty(result) and not isinstance(result, list):
# 插件模块返回结果不为空且不是列表,直接返回
return result
self.__handle_plugin_error(err, plugin_id, plugin_name, method, **kwargs)
return result
# 系统模块
async def __async_execute_plugin_modules(self, method: str, result: Any, *args, **kwargs) -> Any:
"""
异步执行插件模块
"""
for plugin, module_dict in self.pluginmanager.get_plugin_modules().items():
plugin_id, plugin_name = plugin
if method in module_dict:
func = module_dict[method]
if func:
try:
logger.info(f"请求插件 {plugin_name} 执行:{method} ...")
if self.__is_valid_empty(result):
# 返回None第一次执行或者需继续执行下一模块
if inspect.iscoroutinefunction(func):
result = await func(*args, **kwargs)
else:
# 插件同步函数在异步环境中运行,避免阻塞
result = await run_in_threadpool(func, *args, **kwargs)
elif isinstance(result, list):
# 返回为列表,有多个模块运行结果时进行合并
if inspect.iscoroutinefunction(func):
temp = await func(*args, **kwargs)
else:
# 插件同步函数在异步环境中运行,避免阻塞
temp = await run_in_threadpool(func, *args, **kwargs)
if isinstance(temp, list):
result.extend(temp)
else:
break
except Exception as err:
self.__handle_plugin_error(err, plugin_id, plugin_name, method, **kwargs)
return result
def __execute_system_modules(self, method: str, result: Any, *args, **kwargs) -> Any:
"""
执行系统模块
"""
logger.debug(f"请求系统模块执行:{method} ...")
for module in sorted(self.modulemanager.get_running_modules(method), key=lambda x: x.get_priority()):
module_id = module.__class__.__name__
@@ -146,7 +235,7 @@ class ChainBase(metaclass=ABCMeta):
module_name = module_id
try:
func = getattr(module, method)
if is_result_empty(result):
if self.__is_valid_empty(result):
# 返回None第一次执行或者需继续执行下一模块
result = func(*args, **kwargs)
elif ObjectUtils.check_signature(func, result):
@@ -161,26 +250,85 @@ class ChainBase(metaclass=ABCMeta):
# 中止继续执行
break
except Exception as err:
if kwargs.get("raise_exception"):
raise
logger.error(
f"运行模块 {module_id}.{method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{module_name}发生了错误",
message=str(err),
role="system")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "module",
"module_id": module_id,
"module_name": module_name,
"module_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
self.__handle_system_error(err, module_id, module_name, method, **kwargs)
return result
async def __async_execute_system_modules(self, method: str, result: Any, *args, **kwargs) -> Any:
"""
异步执行系统模块
"""
logger.debug(f"请求系统模块执行:{method} ...")
for module in sorted(self.modulemanager.get_running_modules(method), key=lambda x: x.get_priority()):
module_id = module.__class__.__name__
try:
module_name = module.get_name()
except Exception as err:
logger.debug(f"获取模块名称出错:{str(err)}")
module_name = module_id
try:
func = getattr(module, method)
if self.__is_valid_empty(result):
# 返回None第一次执行或者需继续执行下一模块
if inspect.iscoroutinefunction(func):
result = await func(*args, **kwargs)
else:
result = func(*args, **kwargs)
elif ObjectUtils.check_signature(func, result):
# 返回结果与方法签名一致,将结果传入
if inspect.iscoroutinefunction(func):
result = await func(result)
else:
result = func(result)
elif isinstance(result, list):
# 返回为列表,有多个模块运行结果时进行合并
if inspect.iscoroutinefunction(func):
temp = await func(*args, **kwargs)
else:
temp = func(*args, **kwargs)
if isinstance(temp, list):
result.extend(temp)
else:
# 中止继续执行
break
except Exception as err:
self.__handle_system_error(err, module_id, module_name, method, **kwargs)
return result
def run_module(self, method: str, *args, **kwargs) -> Any:
"""
运行包含该方法的所有模块,然后返回结果
当kwargs包含命名参数raise_exception时如模块方法抛出异常且raise_exception为True则同步抛出异常
"""
result = None
# 执行插件模块
result = self.__execute_plugin_modules(method, result, *args, **kwargs)
if not self.__is_valid_empty(result) and not isinstance(result, list):
# 插件模块返回结果不为空且不是列表,直接返回
return result
# 执行系统模块
return self.__execute_system_modules(method, result, *args, **kwargs)
async def async_run_module(self, method: str, *args, **kwargs) -> Any:
"""
异步运行包含该方法的所有模块,然后返回结果
当kwargs包含命名参数raise_exception时如模块方法抛出异常且raise_exception为True则同步抛出异常
支持异步和同步方法的混合调用
"""
result = None
# 执行插件模块
result = await self.__async_execute_plugin_modules(method, result, *args, **kwargs)
if not self.__is_valid_empty(result) and not isinstance(result, list):
# 插件模块返回结果不为空且不是列表,直接返回
return result
# 执行系统模块
return await self.__async_execute_system_modules(method, result, *args, **kwargs)
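
`async_run_module` has to cope with modules that are still synchronous: the dispatch reduces to checking `inspect.iscoroutinefunction` and pushing plain callables to a worker thread via the same `fastapi.concurrency` helper the diff imports. A reduced sketch of that bridge:

```python
import asyncio
import inspect
import time
from typing import Any, Callable

from fastapi.concurrency import run_in_threadpool


async def call_module(func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    # Coroutine functions are awaited directly; plain functions run in a
    # worker thread so a slow, blocking module does not stall the event loop.
    if inspect.iscoroutinefunction(func):
        return await func(*args, **kwargs)
    return await run_in_threadpool(func, *args, **kwargs)


def legacy_search(keyword: str) -> str:
    time.sleep(0.1)  # simulate a blocking legacy module
    return f"sync:{keyword}"


async def new_search(keyword: str) -> str:
    await asyncio.sleep(0.1)
    return f"async:{keyword}"


async def main() -> None:
    print(await call_module(legacy_search, "dune"))
    print(await call_module(new_search, "dune"))


if __name__ == "__main__":
    asyncio.run(main())
```

Running legacy module code in the threadpool keeps backwards compatibility, while fully async modules are awaited without any thread hop.
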
def recognize_media(self, meta: MetaBase = None,
mtype: Optional[MediaType] = None,
tmdbid: Optional[int] = None,
@@ -210,9 +358,44 @@ class ChainBase(metaclass=ABCMeta):
if tmdbid:
doubanid = None
bangumiid = None
return self.run_module("recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid,
episode_group=episode_group, cache=cache)
with fresh(not cache):
return self.run_module("recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid,
episode_group=episode_group, cache=cache)
async def async_recognize_media(self, meta: MetaBase = None,
mtype: Optional[MediaType] = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
bangumiid: Optional[int] = None,
episode_group: Optional[str] = None,
cache: bool = True) -> Optional[MediaInfo]:
"""
识别媒体信息,不含Fanart图片(异步版本)
:param meta: 识别的元数据
:param mtype: 识别的媒体类型与tmdbid配套
:param tmdbid: tmdbid
:param doubanid: 豆瓣ID
:param bangumiid: BangumiID
:param episode_group: 剧集组
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
# 识别用名中含指定信息情形
if not mtype and meta and meta.type in [MediaType.TV, MediaType.MOVIE]:
mtype = meta.type
if not tmdbid and hasattr(meta, "tmdbid"):
tmdbid = meta.tmdbid
if not doubanid and hasattr(meta, "doubanid"):
doubanid = meta.doubanid
# 有tmdbid时不使用其它ID
if tmdbid:
doubanid = None
bangumiid = None
async with async_fresh(not cache):
return await self.async_run_module("async_recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid,
episode_group=episode_group, cache=cache)
def match_doubaninfo(self, name: str, imdbid: Optional[str] = None,
mtype: Optional[MediaType] = None, year: Optional[str] = None, season: Optional[int] = None,
@@ -229,6 +412,22 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("match_doubaninfo", name=name, imdbid=imdbid,
mtype=mtype, year=year, season=season, raise_exception=raise_exception)
async def async_match_doubaninfo(self, name: str, imdbid: Optional[str] = None,
mtype: Optional[MediaType] = None, year: Optional[str] = None,
season: Optional[int] = None,
raise_exception: bool = False) -> Optional[dict]:
"""
搜索和匹配豆瓣信息(异步版本)
:param name: 标题
:param imdbid: imdbid
:param mtype: 类型
:param year: 年份
:param season: 季
:param raise_exception: 触发速率限制时是否抛出异常
"""
return await self.async_run_module("async_match_doubaninfo", name=name, imdbid=imdbid,
mtype=mtype, year=year, season=season, raise_exception=raise_exception)
def match_tmdbinfo(self, name: str, mtype: Optional[MediaType] = None,
year: Optional[str] = None, season: Optional[int] = None) -> Optional[dict]:
"""
@@ -241,6 +440,18 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("match_tmdbinfo", name=name,
mtype=mtype, year=year, season=season)
async def async_match_tmdbinfo(self, name: str, mtype: Optional[MediaType] = None,
year: Optional[str] = None, season: Optional[int] = None) -> Optional[dict]:
"""
搜索和匹配TMDB信息异步版本
:param name: 标题
:param mtype: 类型
:param year: 年份
:param season: 季
"""
return await self.async_run_module("async_match_tmdbinfo", name=name,
mtype=mtype, year=year, season=season)
def obtain_images(self, mediainfo: MediaInfo) -> Optional[MediaInfo]:
"""
补充抓取媒体信息图片
@@ -249,6 +460,14 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("obtain_images", mediainfo=mediainfo)
async def async_obtain_images(self, mediainfo: MediaInfo) -> Optional[MediaInfo]:
"""
补充抓取媒体信息图片(异步版本)
:param mediainfo: 识别的媒体信息
:return: 更新后的媒体信息
"""
return await self.async_run_module("async_obtain_images", mediainfo=mediainfo)
def obtain_specific_image(self, mediaid: Union[str, int], mtype: MediaType,
image_type: MediaImageType, image_prefix: Optional[str] = None,
season: Optional[int] = None, episode: Optional[int] = None) -> Optional[str]:
@@ -276,6 +495,18 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("douban_info", doubanid=doubanid, mtype=mtype, raise_exception=raise_exception)
async def async_douban_info(self, doubanid: str, mtype: Optional[MediaType] = None,
raise_exception: bool = False) -> Optional[dict]:
"""
获取豆瓣信息(异步版本)
:param doubanid: 豆瓣ID
:param mtype: 媒体类型
:return: 豆瓣信息
:param raise_exception: 触发速率限制时是否抛出异常
"""
return await self.async_run_module("async_douban_info", doubanid=doubanid, mtype=mtype,
raise_exception=raise_exception)
def tvdb_info(self, tvdbid: int) -> Optional[dict]:
"""
获取TVDB信息
@@ -294,6 +525,16 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("tmdb_info", tmdbid=tmdbid, mtype=mtype, season=season)
async def async_tmdb_info(self, tmdbid: int, mtype: MediaType, season: Optional[int] = None) -> Optional[dict]:
"""
获取TMDB信息异步版本
:param tmdbid: int
:param mtype: 媒体类型
:param season: 季
:return: TMDB信息
"""
return await self.async_run_module("async_tmdb_info", tmdbid=tmdbid, mtype=mtype, season=season)
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
@@ -302,6 +543,14 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("bangumi_info", bangumiid=bangumiid)
async def async_bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息异步版本
:param bangumiid: int
:return: Bangumi信息
"""
return await self.async_run_module("async_bangumi_info", bangumiid=bangumiid)
def message_parser(self, source: str, body: Any, form: Any,
args: Any) -> Optional[CommingMessage]:
"""
@@ -335,6 +584,14 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("search_medias", meta=meta)
async def async_search_medias(self, meta: MetaBase) -> Optional[List[MediaInfo]]:
"""
搜索媒体信息(异步版本)
:param meta: 识别的元数据
:return: 媒体信息列表
"""
return await self.async_run_module("async_search_medias", meta=meta)
def search_persons(self, name: str) -> Optional[List[MediaPerson]]:
"""
搜索人物信息
@@ -342,6 +599,13 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("search_persons", name=name)
async def async_search_persons(self, name: str) -> Optional[List[MediaPerson]]:
"""
搜索人物信息(异步版本)
:param name: 人物名称
"""
return await self.async_run_module("async_search_persons", name=name)
def search_collections(self, name: str) -> Optional[List[MediaInfo]]:
"""
搜索集合信息
@@ -349,21 +613,43 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("search_collections", name=name)
async def async_search_collections(self, name: str) -> Optional[List[MediaInfo]]:
"""
搜索集合信息(异步版本)
:param name: 集合名称
"""
return await self.async_run_module("async_search_collections", name=name)
def search_torrents(self, site: dict,
keywords: List[str],
keyword: str,
mtype: Optional[MediaType] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
搜索一个站点的种子资源
:param site: 站点
:param keywords: 搜索关键词列表
:param keyword: 搜索关键词
:param mtype: 媒体类型
:param page: 页码
:return: 资源列表
"""
return self.run_module("search_torrents", site=site, keywords=keywords,
return self.run_module("search_torrents", site=site, keyword=keyword,
mtype=mtype, page=page)
async def async_search_torrents(self, site: dict,
keyword: str,
mtype: Optional[MediaType] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
异步搜索一个站点的种子资源
:param site: 站点
:param keyword: 搜索关键词
:param mtype: 媒体类型
:param page: 页码
:return: 资源列表
"""
return await self.async_run_module("async_search_torrents", site=site, keyword=keyword,
mtype=mtype, page=page)
def refresh_torrents(self, site: dict, keyword: Optional[str] = None,
cat: Optional[str] = None, page: Optional[int] = 0) -> List[TorrentInfo]:
"""
@@ -376,6 +662,19 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("refresh_torrents", site=site, keyword=keyword, cat=cat, page=page)
async def async_refresh_torrents(self, site: dict, keyword: Optional[str] = None,
cat: Optional[str] = None, page: Optional[int] = 0) -> List[TorrentInfo]:
"""
异步获取站点最新一页的种子,多个站点需要多线程处理
:param site: 站点
:param keyword: 标题
:param cat: 分类
:param page: 页码
:return: 种子资源列表
"""
return await self.async_run_module("async_refresh_torrents",
site=site, keyword=keyword, cat=cat, page=page)
def filter_torrents(self, rule_groups: List[str],
torrent_list: List[TorrentInfo],
mediainfo: MediaInfo = None) -> List[TorrentInfo]:
@@ -389,13 +688,13 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("filter_torrents", rule_groups=rule_groups,
torrent_list=torrent_list, mediainfo=mediainfo)
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
def download(self, content: Union[Path, str, bytes], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: Optional[str] = None, label: Optional[str] = None,
downloader: Optional[str] = None
) -> Optional[Tuple[Optional[str], Optional[str], Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param content: 种子文件地址或者磁力链接
:param content: 种子文件地址或者磁力链接或者种子内容
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
@@ -408,15 +707,16 @@ class ChainBase(metaclass=ABCMeta):
cookie=cookie, episodes=episodes, category=category, label=label,
downloader=downloader)
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
def download_added(self, context: Context, download_dir: Path, torrent_content: Union[str, bytes] = None) -> None:
"""
添加下载任务成功后,从站点下载字幕,保存到下载目录
:param context: 上下文,包括识别信息、媒体信息、种子信息
:param download_dir: 下载目录
:param torrent_path: 种子文件地址
:param torrent_content: 种子内容如果有则直接使用该内容否则从context中获取种子文件路径
:return: None该方法可被多个模块同时处理
"""
return self.run_module("download_added", context=context, torrent_path=torrent_path,
return self.run_module("download_added", context=context,
torrent_content=torrent_content,
download_dir=download_dir)
def list_torrents(self, status: TorrentStatus = None,
@@ -552,9 +852,13 @@ class ChainBase(metaclass=ABCMeta):
# 渲染消息
message = MessageTemplateHelper.render(message=message, meta=meta, mediainfo=mediainfo,
torrentinfo=torrentinfo, transferinfo=transferinfo, **kwargs)
# 检查消息是否有效
if not message:
logger.warning("消息为空,跳过发送")
return
# 保存消息
self.messagehelper.put(message, role="user", title=message.title)
self.messageoper.add(**message.dict())
self.messageoper.add(**message.model_dump())
# 发送消息,按设置隔离
if not message.userid and message.mtype:
# 消息隔离设置
@@ -601,15 +905,99 @@ class ChainBase(metaclass=ABCMeta):
break
# 按设定发送
self.eventmanager.send_event(etype=EventType.NoticeMessage,
data={**send_message.dict(), "type": send_message.mtype})
self.messagequeue.send_message("post_message", message=send_message)
data={**send_message.model_dump(), "type": send_message.mtype})
self.messagequeue.send_message("post_message", message=send_message, **kwargs)
if not send_orignal:
return
# 发送消息事件
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.dict(), "type": message.mtype})
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.model_dump(), "type": message.mtype})
# 按原消息发送
self.messagequeue.send_message("post_message", message=message,
immediately=True if message.userid else False)
immediately=True if message.userid else False, **kwargs)
async def async_post_message(self,
message: Optional[Notification] = None,
meta: Optional[MetaBase] = None,
mediainfo: Optional[MediaInfo] = None,
torrentinfo: Optional[TorrentInfo] = None,
transferinfo: Optional[TransferInfo] = None,
**kwargs) -> None:
"""
异步发送消息
:param message: Notification实例
:param meta: 元数据
:param mediainfo: 媒体信息
:param torrentinfo: 种子信息
:param transferinfo: 文件整理信息
:param kwargs: 其他参数(覆盖业务对象属性值)
:return: 成功或失败
"""
# 渲染消息
message = MessageTemplateHelper.render(message=message, meta=meta, mediainfo=mediainfo,
torrentinfo=torrentinfo, transferinfo=transferinfo, **kwargs)
# 检查消息是否有效
if not message:
logger.warning("消息为空,跳过发送")
return
# 保存消息
self.messagehelper.put(message, role="user", title=message.title)
await self.messageoper.async_add(**message.model_dump())
# 发送消息,按设置隔离
if not message.userid and message.mtype:
# 消息隔离设置
notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
if notify_action:
# 'admin' 'user,admin' 'user' 'all'
actions = notify_action.split(",")
# 是否已发送管理员标志
admin_sended = False
send_orignal = False
useroper = UserOper()
for action in actions:
send_message = copy.deepcopy(message)
if action == "admin" and not admin_sended:
# 仅发送管理员
logger.info(f"{send_message.mtype} 的消息已设置发送给管理员")
# 读取管理员消息IDS
send_message.targets = useroper.get_settings(settings.SUPERUSER)
admin_sended = True
elif action == "user" and send_message.username:
# 发送对应用户
logger.info(f"{send_message.mtype} 的消息已设置发送给用户 {send_message.username}")
# 读取用户消息IDS
send_message.targets = useroper.get_settings(send_message.username)
if send_message.targets is None:
# 没有找到用户
if not admin_sended:
# 回滚发送管理员
logger.info(f"用户 {send_message.username} 不存在,消息将发送给管理员")
# 读取管理员消息IDS
send_message.targets = useroper.get_settings(settings.SUPERUSER)
admin_sended = True
else:
# 管理员发过了,此消息不发了
logger.info(f"用户 {send_message.username} 不存在,消息无法发送到对应用户")
continue
elif send_message.username == settings.SUPERUSER:
# 管理员同名已发送
admin_sended = True
else:
# 按原消息发送全体
if not admin_sended:
send_orignal = True
break
# 按设定发送
await self.eventmanager.async_send_event(etype=EventType.NoticeMessage,
data={**send_message.model_dump(), "type": send_message.mtype})
await self.messagequeue.async_send_message("post_message", message=send_message, **kwargs)
if not send_orignal:
return
# 发送消息事件
await self.eventmanager.async_send_event(etype=EventType.NoticeMessage,
data={**message.model_dump(), "type": message.mtype})
# 按原消息发送
await self.messagequeue.async_send_message("post_message", message=message,
immediately=True if message.userid else False, **kwargs)
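For illustration only, not part of the patch: the isolation logic above splits the notification switch value (e.g. 'user,admin') and picks targets per action. The standalone sketch below reproduces only that branching; UserOper lookups are replaced by a plain dict and SUPERUSER is a hypothetical stand-in for settings.SUPERUSER.
# Standalone sketch of the per-action routing used by post_message/async_post_message.
from typing import Dict, List, Optional, Tuple
SUPERUSER = "admin"  # hypothetical stand-in for settings.SUPERUSER
def resolve_targets(notify_action: str, username: Optional[str],
                    user_settings: Dict[str, list]) -> List[Tuple[str, list]]:
    """Return (audience, targets) pairs for a switch value such as 'user,admin'."""
    results: List[Tuple[str, list]] = []
    admin_sent = False
    for action in notify_action.split(","):
        if action == "admin" and not admin_sent:
            results.append(("admin", user_settings.get(SUPERUSER) or []))
            admin_sent = True
        elif action == "user" and username:
            targets = user_settings.get(username)
            if targets is not None:
                results.append(("user", targets))
            elif not admin_sent:
                # unknown user: fall back to the administrator once
                results.append(("admin", user_settings.get(SUPERUSER) or []))
                admin_sent = True
    return results
print(resolve_targets("user,admin", "alice", {"admin": ["tg:1"], "alice": ["tg:2"]}))
# [('user', ['tg:2']), ('admin', ['tg:1'])]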
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
"""
@@ -620,7 +1008,7 @@ class ChainBase(metaclass=ABCMeta):
"""
note_list = [media.to_dict() for media in medias]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
self.messageoper.add(**message.model_dump(), note=note_list)
return self.messagequeue.send_message("post_medias_message", message=message, medias=medias,
immediately=True if message.userid else False)
@@ -633,7 +1021,7 @@ class ChainBase(metaclass=ABCMeta):
"""
note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
self.messageoper.add(**message.model_dump(), note=note_list)
return self.messagequeue.send_message("post_torrents_message", message=message, torrents=torrents,
immediately=True if message.userid else False)

View File

@@ -57,3 +57,51 @@ class BangumiChain(ChainBase):
:param person_id: 人物ID
"""
return self.run_module("bangumi_person_credits", person_id=person_id)
async def async_calendar(self) -> Optional[List[MediaInfo]]:
"""
获取Bangumi每日放送(异步版本)
"""
return await self.async_run_module("async_bangumi_calendar")
async def async_discover(self, **kwargs) -> Optional[List[MediaInfo]]:
"""
发现Bangumi番剧(异步版本)
"""
return await self.async_run_module("async_bangumi_discover", **kwargs)
async def async_bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息(异步版本)
:param bangumiid: BangumiID
:return: Bangumi信息
"""
return await self.async_run_module("async_bangumi_info", bangumiid=bangumiid)
async def async_bangumi_credits(self, bangumiid: int) -> List[schemas.MediaPerson]:
"""
根据BangumiID查询电影演职员表(异步版本)
:param bangumiid: BangumiID
"""
return await self.async_run_module("async_bangumi_credits", bangumiid=bangumiid)
async def async_bangumi_recommend(self, bangumiid: int) -> Optional[List[MediaInfo]]:
"""
根据BangumiID查询推荐电影(异步版本)
:param bangumiid: BangumiID
"""
return await self.async_run_module("async_bangumi_recommend", bangumiid=bangumiid)
async def async_person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据人物ID查询Bangumi人物详情(异步版本)
:param person_id: 人物ID
"""
return await self.async_run_module("async_bangumi_person_detail", person_id=person_id)
async def async_person_credits(self, person_id: int) -> Optional[List[MediaInfo]]:
"""
根据人物ID查询人物参演作品(异步版本)
:param person_id: 人物ID
"""
return await self.async_run_module("async_bangumi_person_credits", person_id=person_id)

View File

@@ -111,3 +111,111 @@ class DoubanChain(ChainBase):
:param doubanid: 豆瓣ID
"""
return self.run_module("douban_tv_recommend", doubanid=doubanid)
async def async_person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据人物ID查询豆瓣人物详情(异步版本)
:param person_id: 人物ID
"""
return await self.async_run_module("async_douban_person_detail", person_id=person_id)
async def async_person_credits(self, person_id: int, page: Optional[int] = 1) -> List[MediaInfo]:
"""
根据人物ID查询人物参演作品(异步版本)
:param person_id: 人物ID
:param page: 页码
"""
return await self.async_run_module("async_douban_person_credits", person_id=person_id, page=page)
async def async_movie_top250(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取豆瓣电影TOP250(异步版本)
:param page: 页码
:param count: 每页数量
"""
return await self.async_run_module("async_movie_top250", page=page, count=count)
async def async_movie_showing(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取正在上映的电影(异步版本)
"""
return await self.async_run_module("async_movie_showing", page=page, count=count)
async def async_tv_weekly_chinese(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周中国剧集榜(异步版本)
"""
return await self.async_run_module("async_tv_weekly_chinese", page=page, count=count)
async def async_tv_weekly_global(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周全球剧集榜(异步版本)
"""
return await self.async_run_module("async_tv_weekly_global", page=page, count=count)
async def async_douban_discover(self, mtype: MediaType, sort: str, tags: str,
page: Optional[int] = 0, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
发现豆瓣电影、剧集(异步版本)
:param mtype: 媒体类型
:param sort: 排序方式
:param tags: 标签
:param page: 页码
:param count: 数量
:return: 媒体信息列表
"""
return await self.async_run_module("async_douban_discover", mtype=mtype, sort=sort, tags=tags,
page=page, count=count)
async def async_tv_animation(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取动画剧集(异步版本)
"""
return await self.async_run_module("async_tv_animation", page=page, count=count)
async def async_movie_hot(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门电影(异步版本)
"""
return await self.async_run_module("async_movie_hot", page=page, count=count)
async def async_tv_hot(self, page: Optional[int] = 1,
count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门剧集(异步版本)
"""
return await self.async_run_module("async_tv_hot", page=page, count=count)
async def async_movie_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
"""
根据豆瓣ID查询电影演职人员(异步版本)
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_movie_credits", doubanid=doubanid)
async def async_tv_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
"""
根据豆瓣ID查询电视剧演职人员(异步版本)
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_tv_credits", doubanid=doubanid)
async def async_movie_recommend(self, doubanid: str) -> List[MediaInfo]:
"""
根据豆瓣ID查询推荐电影(异步版本)
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_movie_recommend", doubanid=doubanid)
async def async_tv_recommend(self, doubanid: str) -> List[MediaInfo]:
"""
根据豆瓣ID查询推荐电视剧(异步版本)
:param doubanid: 豆瓣ID
"""
return await self.async_run_module("async_douban_tv_recommend", doubanid=doubanid)

View File

@@ -8,6 +8,7 @@ from typing import List, Optional, Tuple, Set, Dict, Union
from app import schemas
from app.chain import ChainBase
from app.core.cache import FileCache
from app.core.config import settings, global_vars
from app.core.context import MediaInfo, TorrentInfo, Context
from app.core.event import eventmanager, Event
@@ -35,10 +36,10 @@ class DownloadChain(ChainBase):
channel: MessageChannel = None,
source: Optional[str] = None,
userid: Union[str, int] = None
) -> Tuple[Optional[Union[Path, str]], str, list]:
) -> Tuple[Optional[Union[str, bytes]], str, list]:
"""
下载种子文件,如果是磁力链,会返回磁力链接本身
:return: 种子路径,种子目录名,种子文件清单
:return: 种子内容,种子目录名,种子文件清单
"""
def __get_redict_url(url: str, ua: Optional[str] = None, cookie: Optional[str] = None) -> Optional[str]:
@@ -60,6 +61,8 @@ class DownloadChain(ChainBase):
# 是否使用cookie
if not req_params.get('cookie'):
cookie = None
# 代理
proxy = req_params.get('proxy')
# 请求头
if req_params.get('header'):
headers = req_params.get('header')
@@ -70,14 +73,16 @@ class DownloadChain(ChainBase):
res = RequestUtils(
ua=ua,
cookies=cookie,
headers=headers
headers=headers,
proxies=settings.PROXY if proxy else None
).get_res(url, params=req_params.get('params'))
else:
# POST请求
res = RequestUtils(
ua=ua,
cookies=cookie,
headers=headers
headers=headers,
proxies=settings.PROXY if proxy else None
).post_res(url, params=req_params.get('params'))
if not res:
return None
@@ -113,7 +118,7 @@ class DownloadChain(ChainBase):
logger.error(f"{torrent.title} 无法获取下载地址:{torrent.enclosure}")
return None, "", []
# 下载种子文件
torrent_file, content, download_folder, files, error_msg = TorrentHelper().download_torrent(
_, content, download_folder, files, error_msg = TorrentHelper().download_torrent(
url=torrent_url,
cookie=site_cookie,
ua=torrent.site_ua or settings.USER_AGENT,
@@ -123,7 +128,7 @@ class DownloadChain(ChainBase):
# 磁力链
return content, "", []
if not torrent_file:
if not content:
logger.error(f"下载种子文件失败:{torrent.title} - {torrent_url}")
self.post_message(Notification(
channel=channel,
@@ -135,9 +140,11 @@ class DownloadChain(ChainBase):
return None, "", []
# 返回 种子文件路径,种子目录名,种子文件清单
return torrent_file, download_folder, files
return content, download_folder, files
def download_single(self, context: Context, torrent_file: Path = None,
def download_single(self, context: Context,
torrent_file: Path = None,
torrent_content: Optional[Union[str, bytes]] = None,
episodes: Set[int] = None,
channel: MessageChannel = None,
source: Optional[str] = None,
@@ -150,6 +157,7 @@ class DownloadChain(ChainBase):
下载及发送通知
:param context: 资源上下文
:param torrent_file: 种子文件路径
:param torrent_content: 种子内容(磁力链或种子文件内容)
:param episodes: 需要下载的集数
:param channel: 通知渠道
:param source: 来源:消息通知、Subscribe、Manual等
@@ -188,6 +196,9 @@ class DownloadChain(ChainBase):
f"Resource download canceled by event: {event_data.source},"
f"Reason: {event_data.reason}")
return None
# 如果事件修改了下载路径,使用新路径
if event_data.options and event_data.options.get("save_path"):
save_path = event_data.options.get("save_path")
# 补充完整的media数据
if not _media.genre_ids:
@@ -200,18 +211,26 @@ class DownloadChain(ChainBase):
# 实际下载的集数
download_episodes = StringUtils.format_ep(list(episodes)) if episodes else None
_folder_name = ""
if not torrent_file:
if not torrent_file and not torrent_content:
# 下载种子文件,得到的可能是文件也可能是磁力链
content, _folder_name, _file_list = self.download_torrent(_torrent,
channel=channel,
source=source,
userid=userid)
if not content:
return None
else:
content = torrent_file
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = TorrentHelper().get_torrent_info(torrent_file)
torrent_content, _folder_name, _file_list = self.download_torrent(_torrent,
channel=channel,
source=source,
userid=userid)
elif torrent_file:
if torrent_file.exists():
torrent_content = torrent_file.read_bytes()
else:
# 缓存处理器
cache_backend = FileCache()
# 读取缓存的种子文件
torrent_content = cache_backend.get(torrent_file.as_posix(), region="torrents")
if not torrent_content:
return None
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = TorrentHelper().get_fileinfo_from_torrent_content(torrent_content)
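For illustration only, not part of the patch: the branch above resolves the torrent payload in a fixed fallback order. This standalone sketch mirrors that order with a hypothetical cache_get callable standing in for FileCache.get(..., region="torrents").
from pathlib import Path
from typing import Callable, Optional, Union
def resolve_torrent_content(torrent_content: Optional[Union[str, bytes]],
                            torrent_file: Optional[Path],
                            cache_get: Callable[[str], Optional[bytes]]
                            ) -> Optional[Union[str, bytes]]:
    """Mirror the fallback order above: in-memory content, local file, cached bytes."""
    if torrent_content:
        return torrent_content                      # magnet link or raw .torrent bytes
    if torrent_file and torrent_file.exists():
        return torrent_file.read_bytes()            # read the .torrent file from disk
    if torrent_file:
        return cache_get(torrent_file.as_posix())   # fall back to the torrents cache region
    return None
print(resolve_torrent_content("magnet:?xt=urn:btih:abc", None, lambda _: None))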
# 下载目录
if save_path:
@@ -242,7 +261,7 @@ class DownloadChain(ChainBase):
return None
# 添加下载
result: Optional[tuple] = self.download(content=content,
result: Optional[tuple] = self.download(content=torrent_content,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir,
@@ -271,7 +290,7 @@ class DownloadChain(ChainBase):
# 登记下载记录
downloadhis = DownloadHistoryOper()
downloadhis.add(
path=str(download_path),
path=download_path.as_posix(),
type=_media.type.value,
title=_media.title,
year=_media.year,
@@ -312,8 +331,8 @@ class DownloadChain(ChainBase):
files_to_add.append({
"download_hash": _hash,
"downloader": _downloader,
"fullpath": str(_save_path / file),
"savepath": str(_save_path),
"fullpath": (_save_path / file).as_posix(),
"savepath": _save_path.as_posix(),
"filepath": file,
"torrentname": _meta.org_string,
})
@@ -339,7 +358,7 @@ class DownloadChain(ChainBase):
username=username,
)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
self.download_added(context=context, download_dir=download_dir, torrent_content=torrent_content)
# 广播事件
self.eventmanager.send_event(EventType.DownloadAdded, {
"hash": _hash,
@@ -553,7 +572,7 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
torrent_content=content,
save_path=save_path,
channel=channel,
source=source,
@@ -720,7 +739,7 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
torrent_content=content,
episodes=selected_episodes,
save_path=save_path,
channel=channel,
@@ -975,7 +994,7 @@ class DownloadChain(ChainBase):
# 发出下载任务删除事件,如需处理辅种,可监听该事件
self.eventmanager.send_event(EventType.DownloadDeleted, {
"hash": hash_str,
"torrents": [torrent.dict() for torrent in torrents]
"torrents": [torrent.model_dump() for torrent in torrents]
})
else:
logger.info(f"没有在下载器中查询到 {hash_str} 对应的下载任务")

View File

@@ -1,4 +1,6 @@
import os
from pathlib import Path
from tempfile import NamedTemporaryFile
from threading import Lock
from typing import Optional, List, Tuple, Union
@@ -19,7 +21,9 @@ from app.utils.string import StringUtils
recognize_lock = Lock()
scraping_lock = Lock()
scraping_files = []
current_umask = os.umask(0)
os.umask(current_umask)
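For illustration only, not part of the patch: os.umask can only be queried by setting it, so the module sets it to 0 and immediately restores the previous value; the captured mask is later combined with 0o666 when chmod-ing scraped files. A minimal runnable sketch of that arithmetic:
import os
# os.umask() returns the previous mask, so set-and-restore is the portable way to read it.
current_umask = os.umask(0)
os.umask(current_umask)
# Scraped files only need read/write bits, minus whatever the umask strips.
target_mode = 0o666 & ~current_umask
print(f"umask={oct(current_umask)}, resulting file mode={oct(target_mode)}")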
class MediaChain(ChainBase):
@@ -35,25 +39,25 @@ class MediaChain(ChainBase):
switchs = SystemConfigOper().get(SystemConfigKey.ScrapingSwitchs) or {}
# 默认配置
default_switchs = {
'movie_nfo': True, # 电影NFO
'movie_poster': True, # 电影海报
'movie_backdrop': True, # 电影背景图
'movie_logo': True, # 电影Logo
'movie_disc': True, # 电影光盘图
'movie_banner': True, # 电影横幅图
'movie_thumb': True, # 电影缩略图
'tv_nfo': True, # 电视剧NFO
'tv_poster': True, # 电视剧海报
'tv_backdrop': True, # 电视剧背景图
'tv_banner': True, # 电视剧横幅图
'tv_logo': True, # 电视剧Logo
'tv_thumb': True, # 电视剧缩略图
'season_nfo': True, # 季NFO
'season_poster': True, # 季海报
'season_banner': True, # 季横幅图
'season_thumb': True, # 季缩略图
'episode_nfo': True, # 集NFO
'episode_thumb': True # 集缩略图
'movie_nfo': True, # 电影NFO
'movie_poster': True, # 电影海报
'movie_backdrop': True, # 电影背景图
'movie_logo': True, # 电影Logo
'movie_disc': True, # 电影光盘图
'movie_banner': True, # 电影横幅图
'movie_thumb': True, # 电影缩略图
'tv_nfo': True, # 电视剧NFO
'tv_poster': True, # 电视剧海报
'tv_backdrop': True, # 电视剧背景图
'tv_banner': True, # 电视剧横幅图
'tv_logo': True, # 电视剧Logo
'tv_thumb': True, # 电视剧缩略图
'season_nfo': True, # 季NFO
'season_poster': True, # 季海报
'season_banner': True, # 季横幅图
'season_thumb': True, # 季缩略图
'episode_nfo': True, # 集NFO
'episode_thumb': True # 集缩略图
}
# 合并用户配置和默认配置
for key, default_value in default_switchs.items():
@@ -231,17 +235,15 @@ class MediaChain(ChainBase):
meta_names = list(dict.fromkeys([k for k in [meta_org.name,
meta.cn_name,
meta.en_name] if k]))
for name in meta_names:
tmdbinfo = self.match_tmdbinfo(
name=name,
year=meta.year,
mtype=mtype or meta.type,
season=meta.begin_season
)
if tmdbinfo:
# 合并季后返回
tmdbinfo['season'] = meta.begin_season
break
tmdbinfo = self._match_tmdb_with_names(
meta_names=meta_names,
year=meta.year,
mtype=mtype or meta.type,
season=meta.begin_season
)
if tmdbinfo:
# 合并季后返回
tmdbinfo['season'] = meta.begin_season
return tmdbinfo
def get_tmdbinfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
@@ -257,23 +259,17 @@ class MediaChain(ChainBase):
else:
meta_cn = meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
year = release_date[:4]
else:
year = None
year = self._extract_year_from_bangumi(bangumiinfo)
# 识别TMDB媒体信息
meta_names = list(dict.fromkeys([k for k in [meta_cn.name,
meta.name] if k]))
for name in meta_names:
tmdbinfo = self.match_tmdbinfo(
name=name,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
if tmdbinfo:
return tmdbinfo
tmdbinfo = self._match_tmdb_with_names(
meta_names=meta_names,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
return tmdbinfo
return None
def get_doubaninfo_by_tmdbid(self, tmdbid: int,
@@ -286,19 +282,7 @@ class MediaChain(ChainBase):
# 名称
name = tmdbinfo.get("title") or tmdbinfo.get("name")
# 年份
year = None
if tmdbinfo.get('release_date'):
year = tmdbinfo['release_date'][:4]
elif tmdbinfo.get('seasons') and season:
for seainfo in tmdbinfo['seasons']:
# 季
season_number = seainfo.get("season_number")
if not season_number:
continue
air_date = seainfo.get("air_date")
if air_date and season_number == season:
year = air_date[:4]
break
year = self._extract_year_from_tmdb(tmdbinfo, season)
# IMDBID
imdbid = tmdbinfo.get("external_ids", {}).get("imdb_id")
return self.match_doubaninfo(
@@ -321,11 +305,7 @@ class MediaChain(ChainBase):
else:
meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
year = release_date[:4]
else:
year = None
year = self._extract_year_from_bangumi(bangumiinfo)
# 使用名称识别豆瓣媒体信息
return self.match_doubaninfo(
name=meta.name,
@@ -335,6 +315,21 @@ class MediaChain(ChainBase):
)
return None
@staticmethod
def is_bluray_folder(fileitem: schemas.FileItem) -> bool:
"""
判断是否为原盘目录
"""
if not fileitem or fileitem.type != "dir":
return False
# 蓝光原盘目录必备的文件或文件夹
required_files = ['BDMV', 'CERTIFICATE']
# 检查目录下是否存在所需文件或文件夹
for item in StorageChain().list_files(fileitem):
if item.name in required_files:
return True
return False
@eventmanager.register(EventType.MetadataScrape)
def scrape_metadata_event(self, event: Event):
"""
@@ -343,29 +338,101 @@ class MediaChain(ChainBase):
if not event:
return
event_data = event.event_data or {}
# 媒体根目录
fileitem: FileItem = event_data.get("fileitem")
# 媒体文件列表
file_list: List[str] = event_data.get("file_list", [])
# 媒体元数据
meta: MetaBase = event_data.get("meta")
# 媒体信息
mediainfo: MediaInfo = event_data.get("mediainfo")
# 是否覆盖
overwrite = event_data.get("overwrite", False)
# 检查媒体根目录
if not fileitem:
return
# 刮削锁
with scraping_lock:
if fileitem.path in scraping_files:
# 检查文件项是否存在
storagechain = StorageChain()
if not storagechain.get_item(fileitem):
logger.warn(f"文件项不存在:{fileitem.path}")
return
scraping_files.append(fileitem.path)
try:
# 执行刮削
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo, overwrite=overwrite)
finally:
# 释放锁
with scraping_lock:
scraping_files.remove(fileitem.path)
# 检查是否为目录
if fileitem.type == "file":
# 单个文件刮削
self.scrape_metadata(fileitem=fileitem,
mediainfo=mediainfo,
init_folder=False,
parent=storagechain.get_parent_item(fileitem),
overwrite=overwrite)
else:
if file_list:
# 如果是BDMV原盘目录,只对根目录进行刮削,不处理子目录
if self.is_bluray_folder(fileitem):
logger.info(f"检测到BDMV原盘目录只对根目录进行刮削{fileitem.path}")
self.scrape_metadata(fileitem=fileitem,
mediainfo=mediainfo,
init_folder=True,
recursive=False,
overwrite=overwrite)
else:
# 1. 收集fileitem和file_list中每个文件之间所有子目录
all_dirs = set()
root_path = Path(fileitem.path)
logger.debug(f"开始收集目录,根目录:{root_path}")
# 收集根目录
all_dirs.add(root_path)
# 收集所有目录(包括所有层级)
for sub_file in file_list:
sub_path = Path(sub_file)
# 收集从根目录到文件的所有父目录
current_path = sub_path.parent
while current_path != root_path and current_path.is_relative_to(root_path):
all_dirs.add(current_path)
current_path = current_path.parent
logger.debug(f"共收集到 {len(all_dirs)} 个目录")
# 2. 初始化一遍子目录,但不处理文件
for sub_dir in all_dirs:
sub_dir_item = storagechain.get_file_item(storage=fileitem.storage, path=sub_dir)
if sub_dir_item:
logger.info(f"为目录生成海报和nfo{sub_dir}")
# 初始化目录元数据,但不处理文件
self.scrape_metadata(fileitem=sub_dir_item,
mediainfo=mediainfo,
init_folder=True,
recursive=False,
overwrite=overwrite)
else:
logger.warn(f"无法获取目录项:{sub_dir}")
# 3. 刮削每个文件
logger.info(f"开始刮削 {len(file_list)} 个文件")
for sub_file_path in file_list:
sub_file_item = storagechain.get_file_item(storage=fileitem.storage,
path=Path(sub_file_path))
if sub_file_item:
self.scrape_metadata(fileitem=sub_file_item,
mediainfo=mediainfo,
init_folder=False,
overwrite=overwrite)
else:
logger.warn(f"无法获取文件项:{sub_file_path}")
else:
# 执行全量刮削
logger.info(f"开始刮削目录 {fileitem.path} ...")
self.scrape_metadata(fileitem=fileitem, meta=meta, init_folder=True,
mediainfo=mediainfo, overwrite=overwrite)
def scrape_metadata(self, fileitem: schemas.FileItem,
meta: MetaBase = None, mediainfo: MediaInfo = None,
init_folder: bool = True, parent: schemas.FileItem = None,
overwrite: bool = False):
overwrite: bool = False, recursive: bool = True):
"""
手动刮削媒体信息
:param fileitem: 刮削目录或文件
@@ -374,24 +441,11 @@ class MediaChain(ChainBase):
:param init_folder: 是否刮削根目录
:param parent: 上级目录
:param overwrite: 是否覆盖已有文件
:param recursive: 是否递归处理目录内文件
"""
storagechain = StorageChain()
def is_bluray_folder(_fileitem: schemas.FileItem) -> bool:
"""
判断是否为原盘目录
"""
if not _fileitem or _fileitem.type != "dir":
return False
# 蓝光原盘目录必备的文件或文件夹
required_files = ['BDMV', 'CERTIFICATE']
# 检查目录下是否存在所需文件或文件夹
for item in storagechain.list_files(_fileitem):
if item.name in required_files:
return True
return False
def __list_files(_fileitem: schemas.FileItem):
"""
列出下级文件
@@ -407,34 +461,68 @@ class MediaChain(ChainBase):
"""
if not _fileitem or not _content or not _path:
return
# 保存文件到临时目录,文件名随机
tmp_file = settings.TEMP_PATH / f"{_path.name}.{StringUtils.generate_random_str(10)}"
tmp_file.write_bytes(_content)
# 获取文件的父目录
try:
item = storagechain.upload_file(fileitem=_fileitem, path=tmp_file, new_name=_path.name)
# 使用tempfile创建临时文件,自动删除
with NamedTemporaryFile(delete=True, delete_on_close=False, suffix=_path.suffix) as tmp_file:
tmp_file_path = Path(tmp_file.name)
# 写入内容
if isinstance(_content, bytes):
tmp_file.write(_content)
else:
tmp_file.write(_content.encode('utf-8'))
tmp_file.flush()
tmp_file.close() # 关闭文件句柄
# 刮削文件只需要读写权限
tmp_file_path.chmod(0o666 & ~current_umask)
# 上传文件
item = storagechain.upload_file(fileitem=_fileitem, path=tmp_file_path, new_name=_path.name)
if item:
logger.info(f"已保存文件:{item.path}")
else:
logger.warn(f"文件保存失败:{_path}")
finally:
if tmp_file.exists():
tmp_file.unlink()
def __download_image(_url: str) -> Optional[bytes]:
def __download_and_save_image(_fileitem: schemas.FileItem, _path: Path, _url: str):
"""
下载图片并保存
流式下载图片并直接保存到文件(减少内存占用)
:param _fileitem: 关联的媒体文件项
:param _path: 图片文件路径
:param _url: 图片下载URL
"""
if not _fileitem or not _url or not _path:
return
try:
logger.info(f"正在下载图片:{_url} ...")
r = RequestUtils(proxies=settings.PROXY, ua=settings.USER_AGENT).get_res(url=_url)
if r:
return r.content
else:
logger.info(f"{_url} 图片下载失败,请检查网络连通性!")
request_utils = RequestUtils(proxies=settings.PROXY, ua=settings.NORMAL_USER_AGENT)
with request_utils.get_stream(url=_url) as r:
if r and r.status_code == 200:
# 使用tempfile创建临时文件,自动删除
with NamedTemporaryFile(delete=True, delete_on_close=False, suffix=_path.suffix) as tmp_file:
tmp_file_path = Path(tmp_file.name)
# 流式写入文件
for chunk in r.iter_content(chunk_size=8192):
if chunk:
tmp_file.write(chunk)
tmp_file.flush()
tmp_file.close() # 关闭文件句柄
# 刮削的图片只需要读写权限
tmp_file_path.chmod(0o666 & ~current_umask)
# 上传文件
item = storagechain.upload_file(fileitem=_fileitem, path=tmp_file_path,
new_name=_path.name)
if item:
logger.info(f"已保存图片:{item.path}")
else:
logger.warn(f"图片保存失败:{_path}")
else:
logger.info(f"{_url} 图片下载失败")
except Exception as err:
logger.error(f"{_url} 图片下载失败:{str(err)}")
return None
if not fileitem:
return
# 当前文件路径
filepath = Path(fileitem.path)
@@ -464,6 +552,8 @@ class MediaChain(ChainBase):
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到上级目录
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
@@ -473,30 +563,36 @@ class MediaChain(ChainBase):
logger.info("电影NFO刮削已关闭跳过")
else:
# 电影目录
if is_bluray_folder(fileitem):
# 原盘目录
if scraping_switchs.get('movie_nfo', True):
nfo_path = filepath / (filepath.name + ".nfo")
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 生成原盘nfo
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到当前目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
if recursive:
# 处理文件
if self.is_bluray_folder(fileitem):
# 原盘目录
if scraping_switchs.get('movie_nfo', True):
nfo_path = filepath / (filepath.name + ".nfo")
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 生成原盘nfo
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到当前目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
logger.info(f"已存在nfo文件{nfo_path}")
else:
logger.info(f"已存在nfo文件{nfo_path}")
logger.info("电影NFO刮削已关闭跳过")
else:
logger.info("电影NFO刮削已关闭跳过")
else:
# 处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem,
overwrite=overwrite)
# 处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
if file.type == "dir":
# 电影不处理子目录
continue
self.scrape_metadata(fileitem=file,
mediainfo=mediainfo,
init_folder=False,
parent=fileitem,
overwrite=overwrite)
# 生成目录内图片文件
if init_folder:
# 图片
@@ -525,11 +621,8 @@ class MediaChain(ChainBase):
image_path = filepath.with_name(image_name)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 写入图片到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
# 流式下载图片并直接保存
__download_and_save_image(_fileitem=fileitem, _path=image_path, _url=image_url)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
@@ -575,26 +668,27 @@ class MediaChain(ChainBase):
for episode, image_url in image_dict.items():
image_path = filepath.with_suffix(Path(image_url).suffix)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 流式下载图片并直接保存
if not parent:
parent = storagechain.get_parent_item(fileitem)
__download_and_save_image(_fileitem=parent, _path=image_path, _url=image_url)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info("集缩略图刮削已关闭,跳过")
else:
# 当前为目录,处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
parent=fileitem if file.type == "file" else None,
init_folder=True if file.type == "dir" else False,
overwrite=overwrite)
# 当前为电视剧目录,处理目录内的文件
if recursive:
files = __list_files(_fileitem=fileitem)
for file in files:
if file.type == "dir" and not file.name.lower().startswith("season"):
# 电视剧不处理非季子目录
continue
self.scrape_metadata(fileitem=file,
mediainfo=mediainfo,
parent=fileitem if file.type == "file" else None,
init_folder=True if file.type == "dir" else False,
overwrite=overwrite)
# 生成目录的nfo和图片
if init_folder:
# 识别文件夹名称
@@ -628,13 +722,10 @@ class MediaChain(ChainBase):
image_path = filepath.with_name(image_name)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到剧集目录
if content:
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 流式下载图片并直接保存
if not parent:
parent = storagechain.get_parent_item(fileitem)
__download_and_save_image(_fileitem=parent, _path=image_path, _url=image_url)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
@@ -659,17 +750,16 @@ class MediaChain(ChainBase):
# 只下载当前刮削季的图片
image_season = "00" if "specials" in image_name else image_name[6:8]
if image_season != str(season_meta.begin_season).rjust(2, '0'):
logger.info(f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
logger.info(
f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
continue
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 流式下载图片并直接保存
if not parent:
parent = storagechain.get_parent_item(fileitem)
__download_and_save_image(_fileitem=parent, _path=image_path,
_url=image_url)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
@@ -719,13 +809,302 @@ class MediaChain(ChainBase):
image_path = filepath / image_name
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
# 流式下载图片并直接保存
__download_and_save_image(_fileitem=fileitem, _path=image_path, _url=image_url)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info(f"电视剧图片刮削已关闭,跳过:{image_name}")
logger.info(f"{filepath.name} 刮削完成")
async def async_recognize_by_meta(self, metainfo: MetaBase,
episode_group: Optional[str] = None) -> Optional[MediaInfo]:
"""
根据主副标题识别媒体信息(异步版本)
"""
title = metainfo.title
# 识别媒体信息
mediainfo: MediaInfo = await self.async_recognize_media(meta=metainfo, episode_group=episode_group)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{title} ...')
mediainfo = await self.async_recognize_help(title=title, org_meta=metainfo)
if not mediainfo:
logger.warn(f'{title} 未识别到媒体信息')
return None
# 识别成功
logger.info(f'{title} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
await self.async_obtain_images(mediainfo=mediainfo)
# 返回上下文
return mediainfo
async def async_recognize_help(self, title: str, org_meta: MetaBase) -> Optional[MediaInfo]:
"""
请求辅助识别,返回媒体信息(异步版本)
:param title: 标题
:param org_meta: 原始元数据
"""
# 发送请求事件,等待结果
result: Event = await eventmanager.async_send_event(
ChainEventType.NameRecognize,
{
'title': title,
}
)
if not result:
return None
# 获取返回事件数据
event_data = result.event_data or {}
logger.info(f'获取到辅助识别结果:{event_data}')
# 处理数据格式
title, year, season_number, episode_number = None, None, None, None
if event_data.get("name"):
title = str(event_data["name"]).split("/")[0].strip().replace(".", " ")
if event_data.get("year"):
year = str(event_data["year"]).split("/")[0].strip()
if event_data.get("season") and str(event_data["season"]).isdigit():
season_number = int(event_data["season"])
if event_data.get("episode") and str(event_data["episode"]).isdigit():
episode_number = int(event_data["episode"])
if not title:
return None
if title == 'Unknown':
return None
if not str(year).isdigit():
year = None
# 结果赋值
if title == org_meta.name and year == org_meta.year:
logger.info(f'辅助识别与原始识别结果一致,无需重新识别媒体信息')
return None
logger.info(f'辅助识别结果与原始识别结果不一致,重新匹配媒体信息 ...')
org_meta.name = title
org_meta.year = year
org_meta.begin_season = season_number
org_meta.begin_episode = episode_number
if org_meta.begin_season or org_meta.begin_episode:
org_meta.type = MediaType.TV
# 重新识别
return await self.async_recognize_media(meta=org_meta)
async def async_recognize_by_path(self, path: str, episode_group: Optional[str] = None) -> Optional[Context]:
"""
根据文件路径识别媒体信息(异步版本)
"""
logger.info(f'开始识别媒体信息,文件:{path} ...')
file_path = Path(path)
# 元数据
file_meta = MetaInfoPath(file_path)
# 识别媒体信息
mediainfo = await self.async_recognize_media(meta=file_meta, episode_group=episode_group)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{file_path.name} ...')
mediainfo = await self.async_recognize_help(title=path, org_meta=file_meta)
if not mediainfo:
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.info(f'{path} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
await self.async_obtain_images(mediainfo=mediainfo)
# 返回上下文
return Context(meta_info=file_meta, media_info=mediainfo)
async def async_search(self, title: str) -> Tuple[Optional[MetaBase], List[MediaInfo]]:
"""
搜索媒体/人物信息(异步版本)
:param title: 搜索内容
:return: 识别元数据,媒体信息列表
"""
# 提取要素
mtype, key_word, season_num, episode_num, year, content = StringUtils.get_keyword(title)
# 识别
meta = MetaInfo(content)
if not meta.name:
meta.cn_name = content
# 合并信息
if mtype:
meta.type = mtype
if season_num:
meta.begin_season = season_num
if episode_num:
meta.begin_episode = episode_num
if year:
meta.year = year
# 开始搜索
logger.info(f"开始搜索媒体信息:{meta.name}")
medias: Optional[List[MediaInfo]] = await self.async_search_medias(meta=meta)
if not medias:
logger.warn(f"{meta.name} 没有找到对应的媒体信息!")
return meta, []
logger.info(f"{content} 搜索到 {len(medias)} 条相关媒体信息")
# 识别的元数据,媒体信息列表
return meta, medias
@staticmethod
def _extract_year_from_bangumi(bangumiinfo: dict) -> Optional[str]:
"""
从Bangumi信息中提取年份
"""
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
return release_date[:4]
return None
@staticmethod
def _extract_year_from_tmdb(tmdbinfo: dict, season: Optional[int] = None) -> Optional[str]:
"""
从TMDB信息中提取年份
"""
year = None
if tmdbinfo.get('release_date'):
year = tmdbinfo['release_date'][:4]
elif tmdbinfo.get('seasons') and season:
for seainfo in tmdbinfo['seasons']:
season_number = seainfo.get("season_number")
if not season_number:
continue
air_date = seainfo.get("air_date")
if air_date and season_number == season:
year = air_date[:4]
break
return year
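For illustration only, not part of the patch: the two helpers above just slice the first four characters of a date string, consulting release_date first and then the matching season's air_date. The payload below is invented for the sake of the example.
# Hypothetical TMDB payload; only release_date / seasons[].air_date are consulted.
sample_tmdb = {
    "name": "Some Show",
    "seasons": [
        {"season_number": 1, "air_date": "2021-04-01"},
        {"season_number": 2, "air_date": "2023-10-12"},
    ],
}
wanted_season = 2
# No release_date, so the matching season's air_date is sliced to its year,
# which is what _extract_year_from_tmdb(sample_tmdb, season=2) would return.
year = next((s["air_date"][:4] for s in sample_tmdb["seasons"]
             if s.get("season_number") == wanted_season and s.get("air_date")), None)
print(year)  # 2023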
def _match_tmdb_with_names(self, meta_names: list, year: Optional[str],
mtype: MediaType, season: Optional[int] = None) -> Optional[dict]:
"""
使用名称列表匹配TMDB信息
"""
for name in meta_names:
tmdbinfo = self.match_tmdbinfo(
name=name,
year=year,
mtype=mtype,
season=season
)
if tmdbinfo:
return tmdbinfo
return None
async def _async_match_tmdb_with_names(self, meta_names: list, year: Optional[str],
mtype: MediaType, season: Optional[int] = None) -> Optional[dict]:
"""
使用名称列表匹配TMDB信息(异步版本)
"""
for name in meta_names:
tmdbinfo = await self.async_match_tmdbinfo(
name=name,
year=year,
mtype=mtype,
season=season
)
if tmdbinfo:
return tmdbinfo
return None
async def async_get_tmdbinfo_by_doubanid(self, doubanid: str, mtype: MediaType = None) -> Optional[dict]:
"""
根据豆瓣ID获取TMDB信息(异步版本)
"""
tmdbinfo = None
doubaninfo = await self.async_douban_info(doubanid=doubanid, mtype=mtype)
if doubaninfo:
# 优先使用原标题匹配
if doubaninfo.get("original_title"):
meta = MetaInfo(title=doubaninfo.get("title"))
meta_org = MetaInfo(title=doubaninfo.get("original_title"))
else:
meta_org = meta = MetaInfo(title=doubaninfo.get("title"))
# 年份
if doubaninfo.get("year"):
meta.year = doubaninfo.get("year")
# 处理类型
if isinstance(doubaninfo.get('media_type'), MediaType):
meta.type = doubaninfo.get('media_type')
else:
meta.type = MediaType.MOVIE if doubaninfo.get("type") == "movie" else MediaType.TV
# 匹配TMDB信息
meta_names = list(dict.fromkeys([k for k in [meta_org.name,
meta.cn_name,
meta.en_name] if k]))
tmdbinfo = await self._async_match_tmdb_with_names(
meta_names=meta_names,
year=meta.year,
mtype=mtype or meta.type,
season=meta.begin_season
)
if tmdbinfo:
# 合并季后返回
tmdbinfo['season'] = meta.begin_season
return tmdbinfo
async def async_get_tmdbinfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
"""
根据BangumiID获取TMDB信息(异步版本)
"""
bangumiinfo = await self.async_bangumi_info(bangumiid=bangumiid)
if bangumiinfo:
# 优先使用原标题匹配
if bangumiinfo.get("name_cn"):
meta = MetaInfo(title=bangumiinfo.get("name"))
meta_cn = MetaInfo(title=bangumiinfo.get("name_cn"))
else:
meta_cn = meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
year = self._extract_year_from_bangumi(bangumiinfo)
# 识别TMDB媒体信息
meta_names = list(dict.fromkeys([k for k in [meta_cn.name,
meta.name] if k]))
tmdbinfo = await self._async_match_tmdb_with_names(
meta_names=meta_names,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
return tmdbinfo
return None
async def async_get_doubaninfo_by_tmdbid(self, tmdbid: int, mtype: MediaType = None,
season: Optional[int] = None) -> Optional[dict]:
"""
根据TMDBID获取豆瓣信息(异步版本)
"""
tmdbinfo = await self.async_tmdb_info(tmdbid=tmdbid, mtype=mtype)
if tmdbinfo:
# 名称
name = tmdbinfo.get("title") or tmdbinfo.get("name")
# 年份
year = self._extract_year_from_tmdb(tmdbinfo, season)
# IMDBID
imdbid = tmdbinfo.get("external_ids", {}).get("imdb_id")
return await self.async_match_doubaninfo(
name=name,
year=year,
mtype=mtype,
imdbid=imdbid
)
return None
async def async_get_doubaninfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
"""
根据BangumiID获取豆瓣信息(异步版本)
"""
bangumiinfo = await self.async_bangumi_info(bangumiid=bangumiid)
if bangumiinfo:
# 优先使用中文标题匹配
if bangumiinfo.get("name_cn"):
meta = MetaInfo(title=bangumiinfo.get("name_cn"))
else:
meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
year = self._extract_year_from_bangumi(bangumiinfo)
# 使用名称识别豆瓣媒体信息
return await self.async_match_doubaninfo(
name=meta.name,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
return None

View File

@@ -113,6 +113,16 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_play_url", server=server, item_id=item_id)
def get_image_cookies(
self, server: Optional[str], image_url: str
) -> Optional[str | dict]:
"""
获取图片的Cookies
"""
return self.run_module(
"mediaserver_image_cookies", server=server, image_url=image_url
)
def sync(self):
"""
同步媒体库所有数据到本地数据库
@@ -167,7 +177,7 @@ class MediaServerChain(ChainBase):
for episode in espisodes_info:
seasoninfo[episode.season] = episode.episodes
# 插入数据
item_dict = item.dict()
item_dict = item.model_dump()
item_dict["seasoninfo"] = seasoninfo
item_dict["item_type"] = item_type
dboper.add(**item_dict)

View File

@@ -1,6 +1,10 @@
import asyncio
import re
import time
from datetime import datetime, timedelta
from typing import Any, Optional, Dict, Union, List
from app.agent import agent_manager
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
@@ -33,6 +37,10 @@ class MessageChain(ChainBase):
_cache_file = "__user_messages__"
# 每页数据量
_page_size: int = 8
# 用户会话信息 {userid: (session_id, last_time)}
_user_sessions: Dict[Union[str, int], tuple] = {}
# 会话超时时间(分钟)
_session_timeout_minutes: int = 15
@staticmethod
def __get_noexits_info(
@@ -163,6 +171,10 @@ class MessageChain(ChainBase):
original_message_id=original_message_id, original_chat_id=original_chat_id)
else:
logger.warning(f"渠道 {channel.value} 不支持回调,但收到了回调消息:{text}")
elif text.startswith('/ai') or text.startswith('/AI'):
# AI智能体处理
self._handle_ai_message(text=text, channel=channel, source=source,
userid=userid, username=username)
elif text.startswith('/'):
# 执行命令
self.eventmanager.send_event(
@@ -815,3 +827,162 @@ class MessageChain(ChainBase):
buttons.append(page_buttons)
return buttons
@staticmethod
def _get_or_create_session_id(userid: Union[str, int]) -> str:
"""
获取或创建会话ID
如果用户上次会话在15分钟内,则复用相同的会话ID,否则创建新的会话ID
"""
current_time = datetime.now()
# 检查用户是否有已存在的会话
if userid in MessageChain._user_sessions:
session_id, last_time = MessageChain._user_sessions[userid]
# 计算时间差
time_diff = current_time - last_time
# 如果时间差小于等于15分钟复用会话ID
if time_diff <= timedelta(minutes=MessageChain._session_timeout_minutes):
# 更新最后使用时间
MessageChain._user_sessions[userid] = (session_id, current_time)
logger.info(f"复用会话ID: {session_id}, 用户: {userid}, 距离上次会话: {time_diff.total_seconds() / 60:.1f}分钟")
return session_id
# 创建新的会话ID
new_session_id = f"user_{userid}_{int(time.time())}"
MessageChain._user_sessions[userid] = (new_session_id, current_time)
logger.info(f"创建新会话ID: {new_session_id}, 用户: {userid}")
return new_session_id
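For illustration only, not part of the patch: the method above reuses a session id while the idle gap between messages stays within the timeout, otherwise it mints a fresh one. The standalone sketch below reproduces that rule with injected timestamps so the behaviour is visible; the id format loosely follows the real implementation but is not the same code path.
from datetime import datetime, timedelta
from typing import Dict, Tuple, Union
SESSIONS: Dict[Union[str, int], Tuple[str, datetime]] = {}
TIMEOUT = timedelta(minutes=15)
def get_or_create_session_id(userid: Union[str, int], now: datetime) -> str:
    """Reuse the previous session id while the idle gap stays within the timeout."""
    if userid in SESSIONS:
        session_id, last_time = SESSIONS[userid]
        if now - last_time <= TIMEOUT:
            SESSIONS[userid] = (session_id, now)   # refresh the timestamp, keep the id
            return session_id
    new_id = f"user_{userid}_{int(now.timestamp())}"
    SESSIONS[userid] = (new_id, now)
    return new_id
t0 = datetime.now()
first = get_or_create_session_id(42, t0)
second = get_or_create_session_id(42, t0 + timedelta(minutes=10))   # within 15 min: reused
third = get_or_create_session_id(42, t0 + timedelta(minutes=40))    # idle too long: new id
print(first == second, second == third)  # True False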
@staticmethod
def clear_user_session(userid: Union[str, int]) -> bool:
"""
清除指定用户的会话信息
返回是否成功清除
"""
if userid in MessageChain._user_sessions:
session_id, _ = MessageChain._user_sessions.pop(userid)
logger.info(f"已清除用户 {userid} 的会话: {session_id}")
return True
return False
def remote_clear_session(self, channel: MessageChannel, userid: Union[str, int], source: Optional[str] = None):
"""
清除用户会话(远程命令接口)
"""
# 获取并清除会话信息
session_id = None
if userid in MessageChain._user_sessions:
session_id, _ = MessageChain._user_sessions.pop(userid)
logger.info(f"已清除用户 {userid} 的会话: {session_id}")
# 如果有会话ID同时清除智能体的会话记忆
if session_id:
try:
try:
loop = asyncio.get_event_loop()
loop.run_until_complete(
agent_manager.clear_session(
session_id=session_id,
user_id=str(userid)
)
)
except RuntimeError:
asyncio.run(
agent_manager.clear_session(
session_id=session_id,
user_id=str(userid)
)
)
except Exception as e:
logger.warning(f"清除智能体会话记忆失败: {e}")
self.post_message(Notification(
channel=channel,
source=source,
title="智能体会话已清除,下次将创建新的会话",
userid=userid
))
else:
self.post_message(Notification(
channel=channel,
source=source,
title="您当前没有活跃的智能体会话",
userid=userid
))
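For illustration only, not part of the patch: both remote_clear_session and _handle_ai_message bridge from synchronous message handling into the async agent_manager by trying the current event loop first and falling back to asyncio.run when none exists. The sketch below mirrors that pattern with a trivial coroutine standing in for agent_manager.clear_session.
import asyncio
async def fake_clear_session(session_id: str, user_id: str) -> str:
    # stand-in for agent_manager.clear_session(session_id=..., user_id=...)
    return f"cleared {session_id} for user {user_id}"
def run_from_sync(coro):
    """Prefer the thread's existing event loop, fall back to asyncio.run()."""
    try:
        loop = asyncio.get_event_loop()
        return loop.run_until_complete(coro)
    except RuntimeError:
        # no usable loop in this thread: let asyncio manage a fresh one
        return asyncio.run(coro)
print(run_from_sync(fake_clear_session("user_42_1700000000", "42")))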
def _handle_ai_message(self, text: str, channel: MessageChannel, source: str,
userid: Union[str, int], username: str) -> None:
"""
处理AI智能体消息
"""
try:
# 检查AI智能体是否启用
if not settings.AI_AGENT_ENABLE:
self.post_message(Notification(
channel=channel,
source=source,
userid=userid,
username=username,
title="MoviePilot智能助手未启用请在系统设置中启用"
))
return
# 检查LLM配置
if not settings.LLM_API_KEY:
self.post_message(Notification(
channel=channel,
source=source,
userid=userid,
username=username,
title="MoviePilot智能助未配置请在系统设置中配置"
))
return
# 提取用户消息
user_message = text[3:].strip() # 移除 "/ai" 前缀
if not user_message:
self.post_message(Notification(
channel=channel,
source=source,
userid=userid,
username=username,
title="请输入您的问题或需求"
))
return
# 生成或复用会话ID
session_id = self._get_or_create_session_id(userid)
# 在事件循环中处理
try:
loop = asyncio.get_event_loop()
loop.run_until_complete(
agent_manager.process_message(
session_id=session_id,
user_id=str(userid),
message=user_message,
channel=channel.value if channel else None,
source=source,
username=username
)
)
except RuntimeError:
# 如果没有事件循环,创建新的
asyncio.run(
agent_manager.process_message(
session_id=session_id,
user_id=str(userid),
message=user_message,
channel=channel.value if channel else None,
source=source,
username=username
)
)
except Exception as e:
logger.error(f"处理AI智能体消息失败: {e}")
self.messagehelper.put(f"AI智能体处理失败: {str(e)}", role="system", title="MoviePilot助手")

View File

@@ -1,5 +1,4 @@
import io
import tempfile
from pathlib import Path
from typing import List, Optional
@@ -10,7 +9,7 @@ from app.chain import ChainBase
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.cache import cache_backend, cached
from app.core.cache import cached, FileCache
from app.core.config import settings, global_vars
from app.log import logger
from app.schemas import MediaType
@@ -19,26 +18,24 @@ from app.utils.http import RequestUtils
from app.utils.security import SecurityUtils
from app.utils.singleton import Singleton
# 推荐相关的专用缓存
recommend_ttl = 24 * 3600
recommend_cache_region = "recommend"
class RecommendChain(ChainBase, metaclass=Singleton):
"""
推荐处理链,单例运行
"""
# 推荐数据的缓存页数
# 推荐缓存时间
recommend_ttl = 24 * 3600
# 推荐缓存页数
cache_max_pages = 5
# 推荐缓存区域
recommend_cache_region = "recommend"
def refresh_recommend(self):
"""
刷新推荐
"""
logger.debug("Starting to refresh Recommend data.")
cache_backend.clear(region=recommend_cache_region)
logger.debug("Recommend Cache has been cleared.")
# 推荐来源方法
recommend_methods = [
@@ -106,31 +103,26 @@ class RecommendChain(ChainBase, metaclass=Singleton):
请求并保存图片
:param url: 图片路径
"""
if not settings.GLOBAL_IMAGE_CACHE or not url:
return
# 生成缓存路径
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
cache_path = Path("images") / sanitized_path
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
logger.debug(f"Invalid cache path or file type for URL: {url}, sanitized path: {sanitized_path}")
return
# 获取缓存后端,并设置缓存时间为全局配置的缓存天数
cache_backend = FileCache(base=settings.CACHE_PATH,
ttl=settings.GLOBAL_IMAGE_CACHE_DAYS * 24 * 3600)
# 本地存在缓存图片,则直接跳过
if cache_path.exists():
if cache_backend.get(cache_path.as_posix(), region="images"):
logger.debug(f"Cache hit: Image already exists at {cache_path}")
return
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if not referer else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
response = RequestUtils(ua=settings.NORMAL_USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
if not response:
logger.debug(f"Empty response for URL: {url}")
return
@@ -142,19 +134,9 @@ class RecommendChain(ChainBase, metaclass=Singleton):
logger.debug(f"Invalid image format for URL {url}: {e}")
return
if not cache_path:
return
try:
if not cache_path.parent.exists():
cache_path.parent.mkdir(parents=True, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=cache_path.parent, delete=False) as tmp_file:
tmp_file.write(response.content)
temp_path = Path(tmp_file.name)
temp_path.replace(cache_path)
logger.debug(f"Successfully cached image at {cache_path} for URL: {url}")
except Exception as e:
logger.debug(f"Failed to write cache file {cache_path} for URL {url}: {e}")
# 保存缓存
cache_backend.set(cache_path.as_posix(), response.content, region="images")
logger.debug(f"Successfully cached image at {cache_path} for URL: {url}")
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
@@ -310,3 +292,158 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
tvs = DoubanChain().tv_hot(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_tmdb_movies(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
异步TMDB热门电影
"""
movies = await TmdbChain().async_run_module("async_tmdb_discover", mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_tmdb_tvs(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "zh|en|ja|ko",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
异步TMDB热门电视剧
"""
tvs = await TmdbChain().async_run_module("async_tmdb_discover", mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_tmdb_trending(self, page: Optional[int] = 1) -> List[dict]:
"""
异步TMDB流行趋势
"""
infos = await TmdbChain().async_run_module("async_tmdb_trending", page=page)
return [info.to_dict() for info in infos] if infos else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_bangumi_calendar(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步Bangumi每日放送
"""
medias = await BangumiChain().async_run_module("async_bangumi_calendar")
return [media.to_dict() for media in medias[(page - 1) * count: page * count]] if medias else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movie_showing(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣正在热映
"""
movies = await DoubanChain().async_run_module("async_movie_showing", page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movies(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣最新电影
"""
movies = await DoubanChain().async_run_module("async_douban_discover", mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tvs(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣最新电视剧
"""
tvs = await DoubanChain().async_run_module("async_douban_discover", mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movie_top250(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣电影TOP250
"""
movies = await DoubanChain().async_run_module("async_movie_top250", page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_weekly_chinese(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣国产剧集榜
"""
tvs = await DoubanChain().async_run_module("async_tv_weekly_chinese", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_weekly_global(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣全球剧集榜
"""
tvs = await DoubanChain().async_run_module("async_tv_weekly_global", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_animation(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣热门动漫
"""
tvs = await DoubanChain().async_run_module("async_tv_animation", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_movie_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣热门电影
"""
movies = await DoubanChain().async_run_module("async_movie_hot", page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
async def async_douban_tv_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
异步豆瓣热门电视剧
"""
tvs = await DoubanChain().async_run_module("async_tv_hot", page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []

View File

@@ -1,19 +1,22 @@
import pickle
import traceback
import asyncio
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from datetime import datetime
from typing import Dict
from typing import Dict, Tuple
from typing import List, Optional
from app.helper.sites import SitesHelper # noqa
from fastapi.concurrency import run_in_threadpool
from app.chain import ChainBase
from app.core.config import global_vars
from app.core.config import global_vars, settings
from app.core.context import Context
from app.core.context import MediaInfo, TorrentInfo
from app.core.event import eventmanager, Event
from app.core.metainfo import MetaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.progress import ProgressHelper
from app.helper.sites import SitesHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo
@@ -54,7 +57,7 @@ class SearchChain(ChainBase):
results = self.process(mediainfo=mediainfo, sites=sites, area=area, no_exists=no_exists)
# 保存到本地文件
if cache_local:
self.save_cache(pickle.dumps(results), self.__result_temp_file)
self.save_cache(results, self.__result_temp_file)
return results
def search_by_title(self, title: str, page: Optional[int] = 0,
@@ -71,7 +74,7 @@ class SearchChain(ChainBase):
else:
logger.info(f'开始浏览资源,站点:{sites} ...')
# 搜索
torrents = self.__search_all_sites(keywords=[title], sites=sites, page=page) or []
torrents = self.__search_all_sites(keyword=title, sites=sites, page=page) or []
if not torrents:
logger.warn(f'{title} 未搜索到资源')
return []
@@ -80,67 +83,85 @@ class SearchChain(ChainBase):
torrent_info=torrent) for torrent in torrents]
# 保存到本地文件
if cache_local:
self.save_cache(pickle.dumps(contexts), self.__result_temp_file)
self.save_cache(contexts, self.__result_temp_file)
return contexts
def last_search_results(self) -> List[Context]:
def last_search_results(self) -> Optional[List[Context]]:
"""
获取上次搜索结果
"""
# 读取本地文件缓存
content = self.load_cache(self.__result_temp_file)
if not content:
return []
try:
return pickle.loads(content)
except Exception as e:
logger.error(f'加载搜索结果失败:{str(e)} - {traceback.format_exc()}')
return []
return self.load_cache(self.__result_temp_file)
def process(self, mediainfo: MediaInfo,
keyword: Optional[str] = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
sites: List[int] = None,
rule_groups: List[str] = None,
area: Optional[str] = "title",
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
async def async_last_search_results(self) -> Optional[List[Context]]:
"""
根据媒体信息,搜索种子资源,精确匹配,应用过滤规则,同时根据no_exists过滤本地已存在的资源
:param mediainfo: 媒体信息
:param keyword: 搜索关键词
:param no_exists: 缺失的媒体信息
:param sites: 站点ID列表,为空时搜索所有站点
:param rule_groups: 过滤规则组名称列表
异步获取上次搜索结果
"""
return await self.async_load_cache(self.__result_temp_file)
async def async_search_by_id(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
mtype: MediaType = None, area: Optional[str] = "title", season: Optional[int] = None,
sites: List[int] = None, cache_local: bool = False) -> List[Context]:
"""
根据TMDBID/豆瓣ID异步搜索资源,精确匹配,不过滤本地存在的资源
:param tmdbid: TMDB ID
:param doubanid: 豆瓣 ID
:param mtype: 媒体类型,电影 or 电视剧
:param area: 搜索范围title or imdbid
:param custom_words: 自定义识别词列表
:param filter_params: 过滤参数
:param season: 季数
:param sites: 站点ID列表
:param cache_local: 是否缓存到本地
"""
mediainfo = await self.async_recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
if not mediainfo:
logger.error(f'{tmdbid} 媒体信息识别失败!')
return []
no_exists = None
if season:
no_exists = {
tmdbid or doubanid: {
season: NotExistMediaInfo(episodes=[])
}
}
results = await self.async_process(mediainfo=mediainfo, sites=sites, area=area, no_exists=no_exists)
# 保存到本地文件
if cache_local:
await self.async_save_cache(results, self.__result_temp_file)
return results
def __do_filter(torrent_list: List[TorrentInfo]) -> List[TorrentInfo]:
"""
执行优先级过滤
"""
return self.filter_torrents(rule_groups=rule_groups,
torrent_list=torrent_list,
mediainfo=mediainfo) or []
# 豆瓣标题处理
if not mediainfo.tmdb_id:
meta = MetaInfo(title=mediainfo.title)
mediainfo.title = meta.name
mediainfo.season = meta.begin_season
logger.info(f'开始搜索资源,关键词:{keyword or mediainfo.title} ...')
# 补充媒体信息
if not mediainfo.names:
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
if not mediainfo:
logger.error(f'媒体信息识别失败!')
return []
async def async_search_by_title(self, title: str, page: Optional[int] = 0,
sites: List[int] = None, cache_local: Optional[bool] = False) -> List[Context]:
"""
根据标题异步搜索资源,不识别不过滤,直接返回站点内容
:param title: 标题,为空时返回所有站点首页内容
:param page: 页码
:param sites: 站点ID列表
:param cache_local: 是否缓存到本地
"""
if title:
logger.info(f'开始搜索资源,关键词:{title} ...')
else:
logger.info(f'开始浏览资源,站点:{sites} ...')
# 搜索
torrents = await self.__async_search_all_sites(keyword=title, sites=sites, page=page) or []
if not torrents:
logger.warn(f'{title} 未搜索到资源')
return []
# 组装上下文
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
torrent_info=torrent) for torrent in torrents]
# 保存到本地文件
if cache_local:
await self.async_save_cache(contexts, self.__result_temp_file)
return contexts
@staticmethod
def __prepare_params(mediainfo: MediaInfo,
keyword: Optional[str] = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None
) -> Tuple[Dict[int, List[int]], List[str]]:
"""
准备搜索参数
"""
# 缺失的季集
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if no_exists and no_exists.get(mediakey):
@@ -164,25 +185,41 @@ class SearchChain(ChainBase):
mediainfo.hk_title,
mediainfo.tw_title,
mediainfo.sg_title] if k]))
# 限制搜索关键词数量
if settings.MAX_SEARCH_NAME_LIMIT:
keywords = keywords[:settings.MAX_SEARCH_NAME_LIMIT]
return season_episodes, keywords
def __parse_result(self, torrents: List[TorrentInfo],
mediainfo: MediaInfo,
keyword: Optional[str] = None,
rule_groups: List[str] = None,
season_episodes: Dict[int, List[int]] = None,
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
"""
处理搜索结果
"""
def __do_filter(torrent_list: List[TorrentInfo]) -> List[TorrentInfo]:
"""
执行优先级过滤
"""
return self.filter_torrents(rule_groups=rule_groups,
torrent_list=torrent_list,
mediainfo=mediainfo) or []
# 执行搜索
torrents: List[TorrentInfo] = self.__search_all_sites(
mediainfo=mediainfo,
keywords=keywords,
sites=sites,
area=area
)
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 未搜索到资源')
return []
# 开始新进度
progress = ProgressHelper()
progress.start(ProgressKey.Search)
progress = ProgressHelper(ProgressKey.Search)
progress.start()
# 开始过滤
progress.update(value=0, text=f'开始过滤,总 {len(torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
progress.update(value=0, text=f'开始过滤,总 {len(torrents)} 个资源,请稍候...')
# 匹配订阅附加参数
if filter_params:
logger.info(f'开始附加参数过滤,附加参数:{filter_params} ...')
@@ -200,7 +237,7 @@ class SearchChain(ChainBase):
logger.info(f"过滤规则/剧集过滤完成,剩余 {len(torrents)} 个资源")
# 过滤完成
progress.update(value=50, text=f'过滤完成,剩余 {len(torrents)} 个资源', key=ProgressKey.Search)
progress.update(value=50, text=f'过滤完成,剩余 {len(torrents)} 个资源')
# 总数
_total = len(torrents)
@@ -213,14 +250,13 @@ class SearchChain(ChainBase):
try:
# 英文标题应该在别名/原标题中,不需要再匹配
logger.info(f"开始匹配结果 标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
progress.update(value=51, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
progress.update(value=51, text=f'开始匹配,总 {_total} 个资源 ...')
for torrent in torrents:
if global_vars.is_system_stopped:
break
_count += 1
progress.update(value=(_count / _total) * 96,
text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...',
key=ProgressKey.Search)
text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...')
if not torrent.title:
continue
@@ -253,8 +289,7 @@ class SearchChain(ChainBase):
# 匹配完成
logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
progress.update(value=97,
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
key=ProgressKey.Search)
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源')
# 去掉mediainfo中多余的数据
mediainfo.clear()
@@ -270,21 +305,192 @@ class SearchChain(ChainBase):
# 排序
progress.update(value=99,
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
key=ProgressKey.Search)
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...')
contexts = torrenthelper.sort_torrents(contexts)
# 结束进度
logger.info(f'搜索完成,共 {len(contexts)} 个资源')
progress.update(value=100,
text=f'搜索完成,共 {len(contexts)} 个资源',
key=ProgressKey.Search)
progress.end(ProgressKey.Search)
text=f'搜索完成,共 {len(contexts)} 个资源')
progress.end()
# 返回
return contexts
# 去重后返回
return self.__remove_duplicate(contexts)
def __search_all_sites(self, keywords: List[str],
@staticmethod
def __remove_duplicate(_torrents: List[Context]) -> List[Context]:
"""
去除重复的种子
:param _torrents: 种子列表
:return: 去重后的种子列表
"""
return list({f"{t.torrent_info.site_name}_{t.torrent_info.title}_{t.torrent_info.description}": t
for t in _torrents}.values())
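`__remove_duplicate` collapses duplicates by building a dict keyed on site name, title and description, so later entries overwrite earlier ones and `values()` yields one context per key. A standalone worked example of the same dict-comprehension trick (toy dataclass, not the project's Context):

from dataclasses import dataclass
from typing import List

@dataclass
class Torrent:
    site_name: str
    title: str
    description: str

def remove_duplicate(torrents: List[Torrent]) -> List[Torrent]:
    # Keyed on site + title + description; for a duplicate key the last item is kept.
    return list({f"{t.site_name}_{t.title}_{t.description}": t for t in torrents}.values())

items = [
    Torrent("SiteA", "Show.S01E01", "1080p"),
    Torrent("SiteA", "Show.S01E01", "1080p"),  # duplicate, collapsed
    Torrent("SiteB", "Show.S01E01", "1080p"),  # different site, kept
]
print(len(remove_duplicate(items)))  # -> 2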
def process(self, mediainfo: MediaInfo,
keyword: Optional[str] = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
sites: List[int] = None,
rule_groups: List[str] = None,
area: Optional[str] = "title",
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
"""
根据媒体信息,搜索种子资源,精确匹配,应用过滤规则,同时根据no_exists过滤本地已存在的资源
:param mediainfo: 媒体信息
:param keyword: 搜索关键词
:param no_exists: 缺失的媒体信息
:param sites: 站点ID列表,为空时搜索所有站点
:param rule_groups: 过滤规则组名称列表
:param area: 搜索范围title or imdbid
:param custom_words: 自定义识别词列表
:param filter_params: 过滤参数
"""
# 豆瓣标题处理
if not mediainfo.tmdb_id:
meta = MetaInfo(title=mediainfo.title)
mediainfo.title = meta.name
mediainfo.season = meta.begin_season
logger.info(f'开始搜索资源,关键词:{keyword or mediainfo.title} ...')
# 补充媒体信息
if not mediainfo.names:
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
if not mediainfo:
logger.error(f'媒体信息识别失败!')
return []
# 准备搜索参数
season_episodes, keywords = self.__prepare_params(
mediainfo=mediainfo,
keyword=keyword,
no_exists=no_exists
)
# 站点搜索结果
torrents: List[TorrentInfo] = []
# 站点搜索次数
search_count = 0
# 多关键字执行搜索
for search_word in keywords:
# 强制休眠 1-10 秒
if search_count > 0:
logger.info(f"已搜索 {search_count} 次,强制休眠 1-10 秒 ...")
time.sleep(random.randint(1, 10))
# 搜索站点
results = self.__search_all_sites(
mediainfo=mediainfo,
keyword=search_word,
sites=sites,
area=area
) or []
# 合并结果
search_count += 1
torrents.extend(results)
# 有结果则停止
if not settings.SEARCH_MULTIPLE_NAME and torrents:
logger.info(f"共搜索到 {len(torrents)} 个资源,停止搜索")
break
# 处理结果
return self.__parse_result(
torrents=torrents,
mediainfo=mediainfo,
keyword=keyword,
rule_groups=rule_groups,
season_episodes=season_episodes,
custom_words=custom_words,
filter_params=filter_params
)
async def async_process(self, mediainfo: MediaInfo,
keyword: Optional[str] = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
sites: List[int] = None,
rule_groups: List[str] = None,
area: Optional[str] = "title",
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
"""
根据媒体信息,异步搜索种子资源,精确匹配,应用过滤规则,同时根据no_exists过滤本地已存在的资源
:param mediainfo: 媒体信息
:param keyword: 搜索关键词
:param no_exists: 缺失的媒体信息
:param sites: 站点ID列表,为空时搜索所有站点
:param rule_groups: 过滤规则组名称列表
:param area: 搜索范围title or imdbid
:param custom_words: 自定义识别词列表
:param filter_params: 过滤参数
"""
# 豆瓣标题处理
if not mediainfo.tmdb_id:
meta = MetaInfo(title=mediainfo.title)
mediainfo.title = meta.name
mediainfo.season = meta.begin_season
logger.info(f'开始搜索资源,关键词:{keyword or mediainfo.title} ...')
# 补充媒体信息
if not mediainfo.names:
mediainfo: MediaInfo = await self.async_recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
if not mediainfo:
logger.error(f'媒体信息识别失败!')
return []
# 准备搜索参数
season_episodes, keywords = self.__prepare_params(
mediainfo=mediainfo,
keyword=keyword,
no_exists=no_exists
)
# 站点搜索结果
torrents: List[TorrentInfo] = []
# 站点搜索次数
search_count = 0
# 多关键字执行搜索
for search_word in keywords:
# 强制休眠 1-10 秒
if search_count > 0:
logger.info(f"已搜索 {search_count} 次,强制休眠 1-10 秒 ...")
await asyncio.sleep(random.randint(1, 10))
# 搜索站点
torrents.extend(
await self.__async_search_all_sites(
mediainfo=mediainfo,
keyword=search_word,
sites=sites,
area=area
) or []
)
search_count += 1
# 有结果则停止
if torrents:
logger.info(f"共搜索到 {len(torrents)} 个资源,停止搜索")
break
# 处理结果
return await run_in_threadpool(self.__parse_result,
torrents=torrents,
mediainfo=mediainfo,
keyword=keyword,
rule_groups=rule_groups,
season_episodes=season_episodes,
custom_words=custom_words,
filter_params=filter_params
)
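`async_process` keeps the event loop responsive by pushing the synchronous, CPU-bound `__parse_result` step into a worker thread with FastAPI's `run_in_threadpool`. As a minimal standalone sketch, the standard-library `asyncio.to_thread` is used below to illustrate the same offloading idea (names and data are made up; the FastAPI helper is awaited the same way, with positional and keyword arguments forwarded to the sync function):

import asyncio
import time

def parse_results(torrents: list, keyword: str) -> list:
    # Stand-in for a synchronous, CPU-bound filtering/matching step.
    time.sleep(0.5)
    return [t for t in torrents if keyword in t]

async def main():
    torrents = ["Show.S01E01.1080p", "Other.Movie.2160p"]
    # Offload the blocking call; the event loop keeps serving other tasks meanwhile.
    matched = await asyncio.to_thread(parse_results, torrents, keyword="Show")
    print(matched)

asyncio.run(main())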
def __search_all_sites(self, keyword: str,
mediainfo: Optional[MediaInfo] = None,
sites: List[int] = None,
page: Optional[int] = 0,
@@ -292,7 +498,7 @@ class SearchChain(ChainBase):
"""
多线程搜索多个站点
:param mediainfo: 识别的媒体信息
:param keywords: 搜索关键词列表
:param keyword: 搜索关键词
:param sites: 指定站点ID列表,如有则只搜索指定站点,否则搜索所有站点
:param page: 搜索页码
:param area: 搜索区域 title or imdbid
@@ -314,8 +520,8 @@ class SearchChain(ChainBase):
return []
# 开始进度
progress = ProgressHelper()
progress.start(ProgressKey.Search)
progress = ProgressHelper(ProgressKey.Search)
progress.start()
# 开始计时
start_time = datetime.now()
# 总数
@@ -324,8 +530,7 @@ class SearchChain(ChainBase):
finish_count = 0
# 更新进度
progress.update(value=0,
text=f"开始搜索,共 {total_num} 个站点 ...",
key=ProgressKey.Search)
text=f"开始搜索,共 {total_num} 个站点 ...")
# 结果集
results = []
# 多线程
@@ -335,13 +540,13 @@ class SearchChain(ChainBase):
if area == "imdbid":
# 搜索IMDBID
task = executor.submit(self.search_torrents, site=site,
keywords=[mediainfo.imdb_id] if mediainfo else None,
keyword=mediainfo.imdb_id if mediainfo else None,
mtype=mediainfo.type if mediainfo else None,
page=page)
else:
# 搜索标题
task = executor.submit(self.search_torrents, site=site,
keywords=keywords,
keyword=keyword,
mtype=mediainfo.type if mediainfo else None,
page=page)
all_task.append(task)
@@ -354,17 +559,100 @@ class SearchChain(ChainBase):
results.extend(result)
logger.info(f"站点搜索进度:{finish_count} / {total_num}")
progress.update(value=finish_count / total_num * 100,
text=f"正在搜索{keywords or ''},已完成 {finish_count} / {total_num} 个站点 ...",
key=ProgressKey.Search)
text=f"正在搜索{keyword or ''},已完成 {finish_count} / {total_num} 个站点 ...")
# 计算耗时
end_time = datetime.now()
# 更新进度
progress.update(value=100,
text=f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}",
key=ProgressKey.Search)
text=f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}")
logger.info(f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}")
# 结束进度
progress.end(ProgressKey.Search)
progress.end()
# 返回
return results
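A recurring change in this file is that the progress key is now bound once at construction (`ProgressHelper(ProgressKey.Search)`) instead of being passed as `key=` on every `start`/`update`/`end` call. A toy stand-in showing the shape of the new call pattern — this is an illustration only, not the project's ProgressHelper implementation:

from enum import Enum

class ProgressKey(Enum):
    Search = "search"

class KeyedProgress:
    """Toy progress reporter bound to one key at construction."""
    def __init__(self, key: ProgressKey):
        self.key = key
    def start(self):
        print(f"[{self.key.value}] start")
    def update(self, value: float, text: str = ""):
        print(f"[{self.key.value}] {value:.0f}% {text}")
    def end(self):
        print(f"[{self.key.value}] end")

progress = KeyedProgress(ProgressKey.Search)
progress.start()
progress.update(value=50, text="filtering done")
progress.end()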
async def __async_search_all_sites(self, keyword: str,
mediainfo: Optional[MediaInfo] = None,
sites: List[int] = None,
page: Optional[int] = 0,
area: Optional[str] = "title") -> Optional[List[TorrentInfo]]:
"""
异步搜索多个站点
:param mediainfo: 识别的媒体信息
:param keyword: 搜索关键词
:param sites: 指定站点ID列表,如有则只搜索指定站点,否则搜索所有站点
:param page: 搜索页码
:param area: 搜索区域 title or imdbid
:return: 资源列表
"""
# 未开启的站点不搜索
indexer_sites = []
# 配置的索引站点
if not sites:
sites = SystemConfigOper().get(SystemConfigKey.IndexerSites) or []
for indexer in await SitesHelper().async_get_indexers():
# 检查站点索引开关
if not sites or indexer.get("id") in sites:
indexer_sites.append(indexer)
if not indexer_sites:
logger.warn('未开启任何有效站点,无法搜索资源')
return []
# 开始进度
progress = ProgressHelper(ProgressKey.Search)
progress.start()
# 开始计时
start_time = datetime.now()
# 总数
total_num = len(indexer_sites)
# 完成数
finish_count = 0
# 更新进度
progress.update(value=0,
text=f"开始搜索,共 {total_num} 个站点 ...")
# 结果集
results = []
# 创建异步任务列表
tasks = []
for site in indexer_sites:
if area == "imdbid":
# 搜索IMDBID
task = self.async_search_torrents(site=site,
keyword=mediainfo.imdb_id if mediainfo else None,
mtype=mediainfo.type if mediainfo else None,
page=page)
else:
# 搜索标题
task = self.async_search_torrents(site=site,
keyword=keyword,
mtype=mediainfo.type if mediainfo else None,
page=page)
tasks.append(task)
# 使用asyncio.as_completed来处理并发任务
for future in asyncio.as_completed(tasks):
if global_vars.is_system_stopped:
break
finish_count += 1
result = await future
if result:
results.extend(result)
logger.info(f"站点搜索进度:{finish_count} / {total_num}")
progress.update(value=finish_count / total_num * 100,
text=f"正在搜索{keyword or ''},已完成 {finish_count} / {total_num} 个站点 ...")
# 计算耗时
end_time = datetime.now()
# 更新进度
progress.update(value=100,
text=f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}")
logger.info(f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}")
# 结束进度
progress.end()
# 返回
return results
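The new `__async_search_all_sites` fans out one coroutine per site and consumes the results with `asyncio.as_completed`, bumping a counter as each site finishes so progress is reported in completion order. A standalone sketch of that pattern (site names and timings are invented):

import asyncio
import random

async def search_site(name: str) -> list:
    await asyncio.sleep(random.random())  # simulated network latency
    return [f"{name}-torrent-{i}" for i in range(2)]

async def search_all(sites: list) -> list:
    tasks = [asyncio.create_task(search_site(s)) for s in sites]
    results, finished = [], 0
    # Tasks are yielded in completion order, not submission order.
    for future in asyncio.as_completed(tasks):
        result = await future
        finished += 1
        results.extend(result)
        print(f"progress: {finished} / {len(tasks)} sites")
    return results

print(len(asyncio.run(search_all(["SiteA", "SiteB", "SiteC"]))))  # -> 6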

View File

@@ -1,5 +1,4 @@
import base64
import gc
import re
from datetime import datetime
from typing import Optional, Tuple, Union, Dict
@@ -9,7 +8,7 @@ from lxml import etree
from app.chain import ChainBase
from app.core.config import global_vars, settings
from app.core.event import Event, EventManager, eventmanager
from app.core.event import Event, eventmanager
from app.db.models.site import Site
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
@@ -18,7 +17,7 @@ from app.helper.cloudflare import under_challenge
from app.helper.cookie import CookieHelper
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.helper.sites import SitesHelper # noqa
from app.log import logger
from app.schemas import MessageChannel, Notification, SiteUserData
from app.schemas.types import EventType, NotificationType
@@ -57,9 +56,9 @@ class SiteChain(ChainBase):
if userdata:
SiteOper().update_userdata(domain=StringUtils.get_url_domain(site.get("domain")),
name=site.get("name"),
payload=userdata.dict())
payload=userdata.model_dump())
# 发送事件
EventManager().send_event(EventType.SiteRefreshed, {
eventmanager.send_event(EventType.SiteRefreshed, {
"site_id": site.get("id")
})
# 发送站点消息
@@ -104,14 +103,10 @@ class SiteChain(ChainBase):
any_site_updated = True
result[site.get("name")] = userdata
if any_site_updated:
EventManager().send_event(EventType.SiteRefreshed, {
eventmanager.send_event(EventType.SiteRefreshed, {
"site_id": "*"
})
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
return result
def is_special_site(self, domain: str) -> bool:
@@ -271,16 +266,20 @@ class SiteChain(ChainBase):
logger.error(f"获取站点页面失败:{url}")
return favicon_url, None
html = etree.HTML(html_text)
if StringUtils.is_valid_html_element(html):
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
try:
if StringUtils.is_valid_html_element(html):
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=15, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
res = RequestUtils(cookies=cookie, timeout=15, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
finally:
if html is not None:
del html
return favicon_url, None
def sync_cookies(self, manual=False) -> Tuple[bool, str]:
@@ -314,11 +313,16 @@ class SiteChain(ChainBase):
siteoper = SiteOper()
rsshelper = RssHelper()
for domain, cookie in cookies.items():
# 检查系统是否停止
if global_vars.is_system_stopped:
logger.info("系统正在停止,中断CookieCloud同步")
return False, "系统正在停止,同步被中断"
# 索引器信息
indexer = siteshelper.get_indexer(domain)
# 数据库的站点信息
site_info = siteoper.get_by_domain(domain)
if site_info and site_info.is_active == 1:
if site_info and site_info.is_active:
# 站点已存在,检查站点连通性
status, msg = self.test(domain)
# 更新站点Cookie
@@ -331,7 +335,8 @@ class SiteChain(ChainBase):
url=site_info.url,
cookie=cookie,
ua=site_info.ua or settings.USER_AGENT,
proxy=True if site_info.proxy else False
proxy=True if site_info.proxy else False,
timeout=site_info.timeout or 15
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
@@ -416,7 +421,7 @@ class SiteChain(ChainBase):
# 通知站点更新
if indexer:
EventManager().send_event(EventType.SiteUpdated, {
eventmanager.send_event(EventType.SiteUpdated, {
"domain": domain,
})
# 处理完成
@@ -559,13 +564,15 @@ class SiteChain(ChainBase):
public = site_info.public
proxies = settings.PROXY if site_info.proxy else None
proxy_server = settings.PROXY_SERVER if site_info.proxy else None
timeout = site_info.timeout or 60
# 访问链接
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
proxies=proxy_server,
timeout=timeout)
if not public and not SiteUtils.is_logged_in(page_source):
if under_challenge(page_source):
return False, f"无法通过Cloudflare"
@@ -698,7 +705,8 @@ class SiteChain(ChainBase):
username=username,
password=password,
two_step_code=two_step_code,
proxies=settings.PROXY_HOST if site_info.proxy else None
proxies=settings.PROXY_SERVER if site_info.proxy else None,
timeout=site_info.timeout or 60
)
if result:
cookie, ua, msg = result

View File

@@ -6,7 +6,6 @@ from app.chain import ChainBase
from app.core.config import settings
from app.helper.directory import DirectoryHelper
from app.log import logger
from app.schemas import MediaType
class StorageChain(ChainBase):
@@ -134,8 +133,7 @@ class StorageChain(ChainBase):
"""
return self.run_module("support_transtype", storage=storage)
def delete_media_file(self, fileitem: schemas.FileItem,
mtype: MediaType = None, delete_self: bool = True) -> bool:
def delete_media_file(self, fileitem: schemas.FileItem, delete_self: bool = True) -> bool:
"""
删除媒体文件,以及不含媒体文件的目录
"""
@@ -152,7 +150,8 @@ class StorageChain(ChainBase):
return False
media_exts = settings.RMT_MEDIAEXT + settings.DOWNLOAD_TMPEXT
if fileitem.path == "/" or len(Path(fileitem.path).parts) <= 2:
fileitem_path = Path(fileitem.path) if fileitem.path else Path("")
if len(fileitem_path.parts) <= 2:
logger.warn(f"{fileitem.storage}{fileitem.path} 根目录或一级目录不允许删除")
return False
if fileitem.type == "dir":
@@ -162,13 +161,7 @@ class StorageChain(ChainBase):
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
elif self.any_files(fileitem, extensions=media_exts) is False:
logger.warn(f"{fileitem.storage}{fileitem.path} 不存在其它媒体文件,正在删除空目录")
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
# 不处理父目录
return True
elif delete_self:
# 本身是文件,需要删除文件
logger.warn(f"正在删除文件【{fileitem.storage}{fileitem.path}】")
@@ -176,36 +169,43 @@ class StorageChain(ChainBase):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
if mtype:
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mtype == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
return True
# 处理媒体文件根目录
dir_item = self.get_file_item(storage=fileitem.storage,
path=Path(fileitem.path).parents[rename_format_level - 1])
else:
# 处理上级目录
dir_item = self.get_parent_item(fileitem)
# 检查和删除上级空目录
dir_item = fileitem if fileitem.type == "dir" else self.get_parent_item(fileitem)
if not dir_item:
logger.warn(f"{fileitem.storage}{fileitem.path} 上级目录不存在")
return True
# 检查和删除上级目录
if dir_item and len(Path(dir_item.path).parts) > 2:
# 如果目录是所有下载目录、媒体库目录的上级,则不处理
for d in DirectoryHelper().get_dirs():
if d.download_path and Path(d.download_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是下载目录本级或上级目录,不删除")
return True
if d.library_path and Path(d.library_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是媒体库目录本级或上级目录,不删除")
return True
# 不存在其他媒体文件,删除空目录
if self.any_files(dir_item, extensions=media_exts) is False:
logger.warn(f"{dir_item.storage}{dir_item.path} 不存在其它媒体文件,正在删除空目录")
if not self.delete_file(dir_item):
logger.warn(f"{dir_item.storage}{dir_item.path} 删除失败")
return False
# 查找操作文件项匹配的配置目录(资源目录、媒体库目录)
associated_dir = max(
(
Path(p)
for d in DirectoryHelper().get_dirs()
for p in (d.download_path, d.library_path)
if p and fileitem_path.is_relative_to(p)
),
key=lambda path: len(path.parts),
default=None,
)
while dir_item and len(Path(dir_item.path).parts) > 2:
# 目录是资源目录、媒体库目录的上级,则不处理
if associated_dir and associated_dir.is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 位于资源或媒体库目录结构中,不删除")
break
elif not associated_dir and self.list_files(dir_item, recursion=False):
logger.debug(f"{dir_item.storage}{dir_item.path} 不是空目录,不删除")
break
if self.any_files(dir_item, extensions=media_exts) is not False:
logger.debug(f"{dir_item.storage}{dir_item.path} 存在媒体文件,不删除")
break
# 删除空目录并继续处理父目录
logger.warn(f"{dir_item.storage}{dir_item.path} 不存在其它媒体文件,正在删除空目录")
if not self.delete_file(dir_item):
logger.warn(f"{dir_item.storage}{dir_item.path} 删除失败")
return False
dir_item = self.get_parent_item(dir_item)
return True
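The rewritten cleanup first resolves the deepest configured download/library directory containing the file, then walks up parent directories, stopping as soon as the walk reaches that configured tree. A standalone pathlib sketch of the matching step and the stop condition (paths are invented):

from pathlib import Path

file_path = Path("/media/library/tv/Show/Season 01/E01.mkv")
configured = [
    Path("/media/library"),
    Path("/media/library/tv"),  # deeper match: this one should win
    Path("/downloads"),
]

# Deepest configured directory that contains the file (None if nothing matches).
associated_dir = max(
    (p for p in configured if file_path.is_relative_to(p)),
    key=lambda p: len(p.parts),
    default=None,
)
print(associated_dir)  # -> /media/library/tv

# Walk up from the file's parent; stop once the configured tree is reached.
for candidate in [file_path.parent, *file_path.parent.parents]:
    if associated_dir and associated_dir.is_relative_to(candidate):
        print(f"stop at {candidate}")  # the configured directory sits at or below here
        break
    print(f"empty-directory cleanup candidate: {candidate}")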

View File

@@ -1,5 +1,4 @@
import copy
import gc
import json
import random
import threading
@@ -16,7 +15,7 @@ from app.chain.tmdb import TmdbChain
from app.chain.torrents import TorrentsChain
from app.core.config import settings, global_vars
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.event import eventmanager, Event, EventManager
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.meta.words import WordsMatcher
from app.core.metainfo import MetaInfo
@@ -39,6 +38,84 @@ class SubscribeChain(ChainBase):
"""
_rlock = threading.RLock()
# 避免莫名原因导致长时间持有锁
_LOCK_TIMOUT = 3600 * 2
@staticmethod
def __get_event_meida(_mediaid: str, _meta: MetaBase) -> Optional[MediaInfo]:
"""
广播事件解析媒体信息
"""
event_data = MediaRecognizeConvertEventData(
mediaid=_mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
mediachain = MediaChain()
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
return mediachain.recognize_media(meta=_meta, tmdbid=new_id)
elif event_data.convert_type == "douban":
return mediachain.recognize_media(meta=_meta, doubanid=new_id)
return None
@staticmethod
async def __async_get_event_meida(_mediaid: str, _meta: MetaBase) -> Optional[MediaInfo]:
"""
广播事件解析媒体信息
"""
event_data = MediaRecognizeConvertEventData(
mediaid=_mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = await eventmanager.async_send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
mediachain = MediaChain()
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
return await mediachain.async_recognize_media(meta=_meta, tmdbid=new_id)
elif event_data.convert_type == "douban":
return await mediachain.async_recognize_media(meta=_meta, doubanid=new_id)
return None
def __get_default_kwargs(self, mtype: MediaType, **kwargs) -> dict:
"""
获取订阅默认配置
:param mtype: 媒体类型
:param kwargs: 用户传入的订阅参数
:return: 合并默认配置后的参数字典
"""
return {
'quality': self.__get_default_subscribe_config(mtype, "quality") if not kwargs.get(
"quality") else kwargs.get("quality"),
'resolution': self.__get_default_subscribe_config(mtype, "resolution") if not kwargs.get(
"resolution") else kwargs.get("resolution"),
'effect': self.__get_default_subscribe_config(mtype, "effect") if not kwargs.get(
"effect") else kwargs.get("effect"),
'include': self.__get_default_subscribe_config(mtype, "include") if not kwargs.get(
"include") else kwargs.get("include"),
'exclude': self.__get_default_subscribe_config(mtype, "exclude") if not kwargs.get(
"exclude") else kwargs.get("exclude"),
'best_version': self.__get_default_subscribe_config(mtype, "best_version") if not kwargs.get(
"best_version") else kwargs.get("best_version"),
'search_imdbid': self.__get_default_subscribe_config(mtype, "search_imdbid") if not kwargs.get(
"search_imdbid") else kwargs.get("search_imdbid"),
'sites': self.__get_default_subscribe_config(mtype, "sites") or None if not kwargs.get(
"sites") else kwargs.get("sites"),
'downloader': self.__get_default_subscribe_config(mtype, "downloader") if not kwargs.get(
"downloader") else kwargs.get("downloader"),
'save_path': self.__get_default_subscribe_config(mtype, "save_path") if not kwargs.get(
"save_path") else kwargs.get("save_path"),
'filter_groups': self.__get_default_subscribe_config(mtype, "filter_groups") if not kwargs.get(
"filter_groups") else kwargs.get("filter_groups")
}
def add(self, title: str, year: str,
mtype: MediaType = None,
@@ -59,27 +136,6 @@ class SubscribeChain(ChainBase):
识别媒体信息并添加订阅
"""
def __get_event_meida(_mediaid: str, _meta: MetaBase) -> Optional[MediaInfo]:
"""
广播事件解析媒体信息
"""
event_data = MediaRecognizeConvertEventData(
mediaid=_mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
mediachain = MediaChain()
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
return mediachain.recognize_media(meta=_meta, tmdbid=new_id)
elif event_data.convert_type == "douban":
return mediachain.recognize_media(meta=_meta, doubanid=new_id)
return None
logger.info(f'开始添加订阅,标题:{title} ...')
mediainfo = None
@@ -102,7 +158,7 @@ class SubscribeChain(ChainBase):
mediainfo = MediaInfo(tmdb_info=tmdbinfo)
elif mediaid:
# 未知前缀,广播事件解析媒体信息
mediainfo = __get_event_meida(mediaid, metainfo)
mediainfo = self.__get_event_meida(mediaid, metainfo)
else:
# 使用TMDBID识别
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid,
@@ -113,7 +169,7 @@ class SubscribeChain(ChainBase):
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, doubanid=doubanid, cache=False)
elif mediaid:
# 未知前缀,广播事件解析媒体信息
mediainfo = __get_event_meida(mediaid, metainfo)
mediainfo = self.__get_event_meida(mediaid, metainfo)
if mediainfo:
# 豆瓣标题处理
meta = MetaInfo(mediainfo.title)
@@ -175,30 +231,8 @@ class SubscribeChain(ChainBase):
mediainfo.bangumi_id = bangumiid
# 添加订阅
kwargs.update({
'quality': self.__get_default_subscribe_config(mediainfo.type, "quality") if not kwargs.get(
"quality") else kwargs.get("quality"),
'resolution': self.__get_default_subscribe_config(mediainfo.type, "resolution") if not kwargs.get(
"resolution") else kwargs.get("resolution"),
'effect': self.__get_default_subscribe_config(mediainfo.type, "effect") if not kwargs.get(
"effect") else kwargs.get("effect"),
'include': self.__get_default_subscribe_config(mediainfo.type, "include") if not kwargs.get(
"include") else kwargs.get("include"),
'exclude': self.__get_default_subscribe_config(mediainfo.type, "exclude") if not kwargs.get(
"exclude") else kwargs.get("exclude"),
'best_version': self.__get_default_subscribe_config(mediainfo.type, "best_version") if not kwargs.get(
"best_version") else kwargs.get("best_version"),
'search_imdbid': self.__get_default_subscribe_config(mediainfo.type, "search_imdbid") if not kwargs.get(
"search_imdbid") else kwargs.get("search_imdbid"),
'sites': self.__get_default_subscribe_config(mediainfo.type, "sites") or None if not kwargs.get(
"sites") else kwargs.get("sites"),
'downloader': self.__get_default_subscribe_config(mediainfo.type, "downloader") if not kwargs.get(
"downloader") else kwargs.get("downloader"),
'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path") if not kwargs.get(
"save_path") else kwargs.get("save_path"),
'filter_groups': self.__get_default_subscribe_config(mediainfo.type, "filter_groups") if not kwargs.get(
"filter_groups") else kwargs.get("filter_groups")
})
kwargs.update(self.__get_default_kwargs(mediainfo.type, **kwargs))
# 操作数据库
sid, err_msg = SubscribeOper().add(mediainfo=mediainfo, season=season, username=username, **kwargs)
if not sid:
@@ -236,7 +270,7 @@ class SubscribeChain(ChainBase):
username=username
)
# 发送事件
EventManager().send_event(EventType.SubscribeAdded, {
eventmanager.send_event(EventType.SubscribeAdded, {
"subscribe_id": sid,
"username": username,
"mediainfo": mediainfo.to_dict(),
@@ -260,6 +294,183 @@ class SubscribeChain(ChainBase):
# 返回结果
return sid, ""
async def async_add(self, title: str, year: str,
mtype: MediaType = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
bangumiid: Optional[int] = None,
mediaid: Optional[str] = None,
episode_group: Optional[str] = None,
season: Optional[int] = None,
channel: MessageChannel = None,
source: Optional[str] = None,
userid: Optional[str] = None,
username: Optional[str] = None,
message: Optional[bool] = True,
exist_ok: Optional[bool] = False,
**kwargs) -> Tuple[Optional[int], str]:
"""
异步识别媒体信息并添加订阅
"""
logger.info(f'开始添加订阅,标题:{title} ...')
mediainfo = None
metainfo = MetaInfo(title)
if year:
metainfo.year = year
if mtype:
metainfo.type = mtype
if season:
metainfo.type = MediaType.TV
metainfo.begin_season = season
# 识别媒体信息
if settings.RECOGNIZE_SOURCE == "themoviedb":
# TMDB识别模式
if not tmdbid:
if doubanid:
# 将豆瓣信息转换为TMDB信息
tmdbinfo = await MediaChain().async_get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
if tmdbinfo:
mediainfo = MediaInfo(tmdb_info=tmdbinfo)
elif mediaid:
# 未知前缀,广播事件解析媒体信息
mediainfo = await self.__async_get_event_meida(mediaid, metainfo)
else:
# 使用TMDBID识别
mediainfo = await self.async_recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid,
episode_group=episode_group, cache=False)
else:
if doubanid:
# 豆瓣识别模式,不使用缓存
mediainfo = await self.async_recognize_media(meta=metainfo, mtype=mtype, doubanid=doubanid, cache=False)
elif mediaid:
# 未知前缀,广播事件解析媒体信息
mediainfo = await self.__async_get_event_meida(mediaid, metainfo)
if mediainfo:
# 豆瓣标题处理
meta = MetaInfo(mediainfo.title)
mediainfo.title = meta.name
if not season:
season = meta.begin_season
# 使用名称识别兜底
if not mediainfo:
mediainfo = await self.async_recognize_media(meta=metainfo, episode_group=episode_group)
# 识别失败
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{title},tmdbid:{tmdbid},doubanid:{doubanid}')
return None, "未识别到媒体信息"
# 总集数
if mediainfo.type == MediaType.TV:
if not season:
season = 1
# 总集数
if not kwargs.get('total_episode'):
if not mediainfo.seasons or episode_group:
# 补充媒体信息
mediainfo = await self.async_recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
episode_group=episode_group,
cache=False)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return None, "媒体信息识别失败"
if not mediainfo.seasons:
logger.error(f"媒体信息中没有季集信息,标题:{title},tmdbid:{tmdbid},doubanid:{doubanid}")
return None, "媒体信息中没有季集信息"
total_episode = len(mediainfo.seasons.get(season) or [])
if not total_episode:
logger.error(f'未获取到总集数,标题:{title},tmdbid:{tmdbid},doubanid:{doubanid}')
return None, f"未获取到第 {season} 季的总集数"
kwargs.update({
'total_episode': total_episode
})
# 缺失集
if not kwargs.get('lack_episode'):
kwargs.update({
'lack_episode': kwargs.get('total_episode')
})
else:
# 避免season为0的问题
season = None
# 更新媒体图片
await self.async_obtain_images(mediainfo=mediainfo)
# 合并信息
if doubanid:
mediainfo.douban_id = doubanid
if bangumiid:
mediainfo.bangumi_id = bangumiid
# 更新默认参数
kwargs.update(self.__get_default_kwargs(mediainfo.type, **kwargs))
# 操作数据库
sid, err_msg = await SubscribeOper().async_add(mediainfo=mediainfo, season=season, username=username, **kwargs)
if not sid:
logger.error(f'{mediainfo.title_year} {err_msg}')
if not exist_ok and message:
# 失败发回原用户
await self.async_post_message(schemas.Notification(channel=channel,
source=source,
mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year} {metainfo.season} "
f"添加订阅失败!",
text=f"{err_msg}",
image=mediainfo.get_message_image(),
userid=userid))
return None, err_msg
elif message:
if mediainfo.type == MediaType.TV:
link = settings.MP_DOMAIN('#/subscribe/tv?tab=mysub')
else:
link = settings.MP_DOMAIN('#/subscribe/movie?tab=mysub')
# 订阅成功按规则发送消息
await self.async_post_message(
schemas.Notification(
channel=channel,
source=source,
mtype=NotificationType.Subscribe,
ctype=ContentType.SubscribeAdded,
image=mediainfo.get_message_image(),
link=link,
userid=userid,
username=username
),
meta=metainfo,
mediainfo=mediainfo,
username=username
)
# 发送事件
await eventmanager.async_send_event(EventType.SubscribeAdded, {
"subscribe_id": sid,
"username": username,
"mediainfo": mediainfo.to_dict(),
})
# 统计订阅
await SubscribeHelper().async_sub_reg({
"name": title,
"year": year,
"type": metainfo.type.value,
"tmdbid": mediainfo.tmdb_id,
"imdbid": mediainfo.imdb_id,
"tvdbid": mediainfo.tvdb_id,
"doubanid": mediainfo.douban_id,
"bangumiid": mediainfo.bangumi_id,
"season": metainfo.begin_season,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"vote": mediainfo.vote_average,
"description": mediainfo.overview
})
# 返回结果
return sid, ""
@staticmethod
def exists(mediainfo: MediaInfo, meta: MetaBase = None):
"""
@@ -279,8 +490,15 @@ class SubscribeChain(ChainBase):
:param manual: 是否手动搜索
:return: 更新订阅状态为R或删除订阅
"""
with self._rlock:
logger.debug(f"search lock acquired at {datetime.now()}")
lock_acquired = False
try:
if lock_acquired := self._rlock.acquire(
blocking=True, timeout=self._LOCK_TIMOUT
):
logger.debug(f"search lock acquired at {datetime.now()}")
else:
logger.warn("search上锁超时")
subscribeoper = SubscribeOper()
if sid:
subscribe = subscribeoper.get(sid)
@@ -437,12 +655,10 @@ class SubscribeChain(ChainBase):
finally:
subscribes.clear()
del subscribes
logger.debug(f"search Lock released at {datetime.now()}")
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
finally:
if lock_acquired:
self._rlock.release()
logger.debug(f"search Lock released at {datetime.now()}")
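Replacing the bare `with self._rlock:` with an explicit `acquire(blocking=True, timeout=...)` means a long-held lock can no longer stall the search thread indefinitely, and the walrus assignment records whether the acquire succeeded so only a held lock is released in `finally`. A standalone sketch of the pattern with a short timeout (the chain uses a much larger value):

import threading
from datetime import datetime

_rlock = threading.RLock()
_LOCK_TIMEOUT = 5  # seconds, shortened for illustration

def do_search():
    lock_acquired = False
    try:
        if lock_acquired := _rlock.acquire(blocking=True, timeout=_LOCK_TIMEOUT):
            print(f"lock acquired at {datetime.now()}")
            # ... critical section: read subscriptions, search, update state ...
        else:
            print("timed out waiting for the lock; skipping this run")
    finally:
        # Release only if this thread actually obtained the lock.
        if lock_acquired:
            _rlock.release()
            print(f"lock released at {datetime.now()}")

do_search()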
def update_subscribe_priority(self, subscribe: Subscribe, meta: MetaBase,
mediainfo: MediaInfo, downloads: Optional[List[Context]]):
@@ -515,9 +731,6 @@ class SubscribeChain(ChainBase):
self.match(
TorrentsChain().refresh(sites=sites)
)
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
@staticmethod
def get_sub_sites(subscribe: Subscribe) -> List[int]:
@@ -547,10 +760,15 @@ class SubscribeChain(ChainBase):
:return: 返回[]代表所有站点命中返回None代表没有订阅
"""
ret_sites = []
subscribes = SubscribeOper().list()
if not subscribes:
# 没有订阅
return None
# 刷新订阅选中的Rss站点
for subscribe in SubscribeOper().list(self.get_states_for_search('R')):
for subscribe in subscribes:
# 刷新选中的站点
ret_sites.extend(self.get_sub_sites(subscribe))
if subscribe.state in self.get_states_for_search('R'):
ret_sites.extend(self.get_sub_sites(subscribe))
# 去重
if ret_sites:
ret_sites = list(set(ret_sites))
@@ -565,8 +783,14 @@ class SubscribeChain(ChainBase):
logger.warn('没有缓存资源,无法匹配订阅')
return
with self._rlock:
logger.debug(f"match lock acquired at {datetime.now()}")
lock_acquired = False
try:
if lock_acquired := self._rlock.acquire(
blocking=True, timeout=self._LOCK_TIMOUT
):
logger.debug(f"match lock acquired at {datetime.now()}")
else:
logger.warn("match上锁超时")
# 预识别所有未识别的种子
processed_torrents: Dict[str, List[Context]] = {}
@@ -577,15 +801,27 @@ class SubscribeChain(ChainBase):
for context in contexts:
if global_vars.is_system_stopped:
break
# 如果种子未识别,尝试识别
if not context.media_info or (not context.media_info.tmdb_id
and not context.media_info.douban_id):
# 如果种子未识别且失败次数未超过3次,尝试识别
if (not context.media_info or (not context.media_info.tmdb_id
and not context.media_info.douban_id)) and context.media_recognize_fail_count < 3:
logger.debug(
f'尝试重新识别种子:{context.torrent_info.title},当前失败次数:{context.media_recognize_fail_count}/3')
re_mediainfo = self.recognize_media(meta=context.meta_info)
if re_mediainfo:
# 清理多余信息
re_mediainfo.clear()
# 更新种子缓存
context.media_info = re_mediainfo
# 重置失败次数
context.media_recognize_fail_count = 0
logger.debug(f'种子 {context.torrent_info.title} 重新识别成功')
else:
# 识别失败,增加失败次数
context.media_recognize_fail_count += 1
logger.debug(
f'种子 {context.torrent_info.title} 媒体识别失败,失败次数:{context.media_recognize_fail_count}/3')
elif context.media_recognize_fail_count >= 3:
logger.debug(f'种子 {context.torrent_info.title} 已达到最大识别失败次数(3次),跳过识别')
# 添加已预处理
processed_torrents[domain].append(context)
@@ -688,7 +924,7 @@ class SubscribeChain(ChainBase):
# 如果仍然没有识别到媒体信息,尝试标题匹配
if not torrent_mediainfo or (
not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
logger.info(
logger.debug(
f'{torrent_info.site_name} - {torrent_info.title} 重新识别失败,尝试通过标题匹配...')
if torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
@@ -807,7 +1043,8 @@ class SubscribeChain(ChainBase):
username=subscribe.username,
save_path=subscribe.save_path,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
source=self.get_subscribe_source_keyword(
subscribe)
)
# 同步外部修改,更新订阅信息
@@ -822,8 +1059,10 @@ class SubscribeChain(ChainBase):
del processed_torrents
subscribes.clear()
del subscribes
logger.debug(f"match Lock released at {datetime.now()}")
finally:
if lock_acquired:
self._rlock.release()
logger.debug(f"match Lock released at {datetime.now()}")
def check(self):
"""
@@ -945,6 +1184,42 @@ class SubscribeChain(ChainBase):
logger.error(f'follow用户分享订阅 {title} 添加失败:{message}')
logger.info(f'follow用户分享订阅刷新完成共添加 {success_count} 个订阅')
async def cache_calendar(self):
"""
预缓存订阅日历,实际上就是查询一遍所有订阅的媒体信息
前端请求是异步的,所以需要使用异步缓存方法
"""
logger.info(f'开始预缓存订阅日历 ...')
for subscribe in await SubscribeOper().async_list():
if global_vars.is_system_stopped:
break
try:
mtype = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 识别媒体信息
if mtype == MediaType.MOVIE:
mediainfo: MediaInfo = await self.async_recognize_media(mtype=mtype,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
bangumiid=subscribe.bangumiid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
else:
episodes = await TmdbChain().async_tmdb_episodes(tmdbid=subscribe.tmdbid,
season=subscribe.season,
episode_group=subscribe.episode_group)
if not episodes:
logger.warn(
f'未识别到季集信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},豆瓣ID:{subscribe.doubanid},季:{subscribe.season}')
continue
logger.info(f'订阅日历预缓存完成')
@staticmethod
def __update_subscribe_note(subscribe: Subscribe, downloads: Optional[List[Context]]):
"""
@@ -1073,7 +1348,7 @@ class SubscribeChain(ChainBase):
username=subscribe.username
)
# 发送事件
EventManager().send_event(EventType.SubscribeComplete, {
eventmanager.send_event(EventType.SubscribeComplete, {
"subscribe_id": subscribe.id,
"subscribe_info": subscribe.to_dict(),
"mediainfo": mediainfo.to_dict(),

View File

@@ -6,12 +6,12 @@ from typing import Union, Optional
from app.chain import ChainBase
from app.core.config import settings
from app.core.plugin import PluginManager
from app.helper.system import SystemHelper
from app.log import logger
from app.schemas import Notification, MessageChannel
from app.utils.http import RequestUtils
from app.utils.system import SystemUtils
from app.helper.system import SystemHelper
from app.helper.plugin import PluginHelper
from version import FRONTEND_VERSION, APP_VERSION
@@ -136,13 +136,6 @@ class SystemChain(ChainBase):
shutil.rmtree(target_path)
shutil.copytree(item, target_path)
logger.info(f"已恢复插件目录: {item.name}")
# 安装依赖
requirements_file = target_path / "requirements.txt"
if requirements_file.exists():
logger.info(f"正在安装插件 {item.name} 的依赖...")
success, message = PluginHelper.pip_install_with_fallback(requirements_file)
if not success:
logger.warn(f"插件 {item.name} 依赖安装失败: {message}")
restored_count += 1
# 如果是文件
elif item.is_file():
@@ -155,6 +148,9 @@ class SystemChain(ChainBase):
logger.info(f"插件恢复完成,共恢复 {restored_count} 个项目")
# 安装缺少的依赖
PluginManager.install_plugin_missing_dependencies()
# 删除备份目录
try:
shutil.rmtree(backup_dir)

View File

@@ -164,3 +164,159 @@ class TmdbChain(ChainBase):
if infos:
return [info.backdrop_path for info in infos if info and info.backdrop_path][:num]
return []
async def async_tmdb_discover(self, mtype: MediaType,
sort_by: str,
with_genres: str,
with_original_language: str,
with_keywords: str,
with_watch_providers: str,
vote_average: float,
vote_count: int,
release_date: str,
page: Optional[int] = 1) -> Optional[List[MediaInfo]]:
"""
发现TMDB电影、剧集异步版本
:param mtype: 媒体类型
:param sort_by: 排序方式
:param with_genres: 类型
:param with_original_language: 语言
:param with_keywords: 关键字
:param with_watch_providers: 提供商
:param vote_average: 评分
:param vote_count: 评分人数
:param release_date: 上映日期
:param page: 页码
:return: 媒体信息列表
"""
return await self.async_run_module("async_tmdb_discover", mtype=mtype,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
async def async_tmdb_trending(self, page: Optional[int] = 1) -> Optional[List[MediaInfo]]:
"""
TMDB流行趋势异步版本
:param page: 第几页
:return: TMDB信息列表
"""
return await self.async_run_module("async_tmdb_trending", page=page)
async def async_tmdb_collection(self, collection_id: int) -> Optional[List[MediaInfo]]:
"""
根据合集ID查询集合异步版本
:param collection_id: 合集ID
"""
return await self.async_run_module("async_tmdb_collection", collection_id=collection_id)
async def async_tmdb_seasons(self, tmdbid: int) -> List[schemas.TmdbSeason]:
"""
根据TMDBID查询themoviedb所有季信息异步版本
:param tmdbid: TMDBID
"""
return await self.async_run_module("async_tmdb_seasons", tmdbid=tmdbid)
async def async_tmdb_group_seasons(self, group_id: str) -> List[schemas.TmdbSeason]:
"""
根据剧集组ID查询themoviedb所有季集信息异步版本
:param group_id: 剧集组ID
"""
return await self.async_run_module("async_tmdb_group_seasons", group_id=group_id)
async def async_tmdb_episodes(self, tmdbid: int, season: int,
episode_group: Optional[str] = None) -> List[schemas.TmdbEpisode]:
"""
根据TMDBID查询某季的所有集信息(异步版本)
:param tmdbid: TMDBID
:param season: 季
:param episode_group: 剧集组
"""
return await self.async_run_module("async_tmdb_episodes", tmdbid=tmdbid, season=season,
episode_group=episode_group)
async def async_movie_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询类似电影异步版本
:param tmdbid: TMDBID
"""
return await self.async_run_module("async_tmdb_movie_similar", tmdbid=tmdbid)
async def async_tv_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询类似电视剧异步版本
:param tmdbid: TMDBID
"""
return await self.async_run_module("async_tmdb_tv_similar", tmdbid=tmdbid)
async def async_movie_recommend(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询推荐电影异步版本
:param tmdbid: TMDBID
"""
return await self.async_run_module("async_tmdb_movie_recommend", tmdbid=tmdbid)
async def async_tv_recommend(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询推荐电视剧异步版本
:param tmdbid: TMDBID
"""
return await self.async_run_module("async_tmdb_tv_recommend", tmdbid=tmdbid)
async def async_movie_credits(self, tmdbid: int, page: Optional[int] = 1) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电影演职人员异步版本
:param tmdbid: TMDBID
:param page: 页码
"""
return await self.async_run_module("async_tmdb_movie_credits", tmdbid=tmdbid, page=page)
async def async_tv_credits(self, tmdbid: int, page: Optional[int] = 1) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电视剧演职人员异步版本
:param tmdbid: TMDBID
:param page: 页码
"""
return await self.async_run_module("async_tmdb_tv_credits", tmdbid=tmdbid, page=page)
async def async_person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据TMDBID查询演职员详情异步版本
:param person_id: 人物ID
"""
return await self.async_run_module("async_tmdb_person_detail", person_id=person_id)
async def async_person_credits(self, person_id: int, page: Optional[int] = 1) -> Optional[List[MediaInfo]]:
"""
根据人物ID查询人物参演作品异步版本
:param person_id: 人物ID
:param page: 页码
"""
return await self.async_run_module("async_tmdb_person_credits", person_id=person_id, page=page)
async def async_get_random_wallpager(self) -> Optional[str]:
"""
获取随机壁纸(异步版本),缓存1个小时
"""
infos = await self.async_tmdb_trending()
if infos:
# 随机一个电影
while True:
info = random.choice(infos)
if info and info.backdrop_path:
return info.backdrop_path
return None
async def async_get_trending_wallpapers(self, num: Optional[int] = 10) -> List[str]:
"""
获取所有流行壁纸(异步版本)
"""
infos = await self.async_tmdb_trending()
if infos:
return [info.backdrop_path for info in infos if info and info.backdrop_path][:num]
return []

View File

@@ -10,7 +10,7 @@ from app.core.metainfo import MetaInfo
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.helper.sites import SitesHelper # noqa
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import Notification
@@ -56,9 +56,34 @@ class TorrentsChain(ChainBase):
# 读取缓存
if stype == 'spider':
return self.load_cache(self._spider_file) or {}
torrents_cache = self.load_cache(self._spider_file) or {}
else:
return self.load_cache(self._rss_file) or {}
torrents_cache = self.load_cache(self._rss_file) or {}
# 兼容性处理为旧版本的Context对象添加失败次数字段
self._ensure_context_compatibility(torrents_cache)
return torrents_cache
async def async_get_torrents(self, stype: Optional[str] = None) -> Dict[str, List[Context]]:
"""
异步获取当前缓存的种子
:param stype: 强制指定缓存类型,spider:爬虫缓存,rss:rss缓存
"""
if not stype:
stype = settings.SUBSCRIBE_MODE
# 异步读取缓存
if stype == 'spider':
torrents_cache = await self.async_load_cache(self._spider_file) or {}
else:
torrents_cache = await self.async_load_cache(self._rss_file) or {}
# 兼容性处理为旧版本的Context对象添加失败次数字段
self._ensure_context_compatibility(torrents_cache)
return torrents_cache
def clear_torrents(self):
"""
@@ -69,6 +94,15 @@ class TorrentsChain(ChainBase):
self.remove_cache(self._rss_file)
logger.info(f'种子缓存数据清理完成')
async def async_clear_torrents(self):
"""
异步清理种子缓存数据
"""
logger.info(f'开始异步清理种子缓存数据 ...')
await self.async_remove_cache(self._spider_file)
await self.async_remove_cache(self._rss_file)
logger.info(f'异步种子缓存数据清理完成')
def browse(self, domain: str, keyword: Optional[str] = None, cat: Optional[str] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
@@ -85,6 +119,22 @@ class TorrentsChain(ChainBase):
return []
return self.refresh_torrents(site=site, keyword=keyword, cat=cat, page=page)
async def async_browse(self, domain: str, keyword: Optional[str] = None, cat: Optional[str] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
异步浏览站点首页内容,返回种子清单,TTL缓存5分钟
:param domain: 站点域名
:param keyword: 搜索标题
:param cat: 搜索分类
:param page: 页码
"""
logger.info(f'开始获取站点 {domain} 最新种子 ...')
site = await SitesHelper().async_get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return await self.async_refresh_torrents(site=site, keyword=keyword, cat=cat, page=page)
def rss(self, domain: str) -> List[TorrentInfo]:
"""
获取站点RSS内容,返回种子清单,TTL缓存3分钟
@@ -100,7 +150,7 @@ class TorrentsChain(ChainBase):
return []
# 解析RSS
rss_items = RssHelper().parse(site.get("rss"), True if site.get("proxy") else False,
timeout=int(site.get("timeout") or 30))
timeout=int(site.get("timeout") or 30), ua=site.get("ua") if site.get("ua") else None)
if rss_items is None:
# rss过期尝试保留原配置生成新的rss
self.__renew_rss_url(domain=domain, site=site)
@@ -140,6 +190,16 @@ class TorrentsChain(ChainBase):
:param stype: 强制指定缓存类型,spider:爬虫缓存,rss:rss缓存
:param sites: 强制指定站点ID列表,为空则读取设置的订阅站点
"""
def __is_no_cache_site(_domain: str) -> bool:
"""
判断站点是否不需要缓存
"""
for url_key in settings.NO_CACHE_SITE_KEY.split(','):
if url_key in _domain:
return True
return False
# 刷新类型
if not stype:
stype = settings.SUBSCRIBE_MODE
@@ -169,7 +229,15 @@ class TorrentsChain(ChainBase):
domains.append(domain)
if stype == "spider":
# 刷新首页种子
torrents: List[TorrentInfo] = self.browse(domain=domain)
torrents: List[TorrentInfo] = []
# 读取第0页和第1页
for page in range(2):
page_torrents = self.browse(domain=domain, page=page)
if page_torrents:
torrents.extend(page_torrents)
else:
# 如果某一页没有数据,说明已经到最后一页,停止获取
break
else:
# 刷新RSS种子
torrents: List[TorrentInfo] = self.rss(domain=domain)
@@ -178,11 +246,16 @@ class TorrentsChain(ChainBase):
# 取前N条
torrents = torrents[:settings.CONF.refresh]
if torrents:
# 过滤出没有处理过的种子 - 优化:使用集合查找,避免重复创建字符串列表
cached_signatures = {f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []}
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}' not in cached_signatures]
if __is_no_cache_site(domain):
# 不需要缓存的站点,直接处理
logger.info(f'{indexer.get("name")}{len(torrents)} 个种子 (不缓存)')
torrents_cache[domain] = []
else:
# 过滤出没有处理过的种子 - 优化:使用集合查找,避免重复创建字符串列表
cached_signatures = {f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []}
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}' not in cached_signatures]
if torrents:
logger.info(f'{indexer.get("name")}{len(torrents)} 个新种子')
else:
@@ -211,6 +284,9 @@ class TorrentsChain(ChainBase):
mediainfo.clear()
# 上下文
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# 如果未识别到媒体信息设置初始失败次数为1
if not mediainfo or (not mediainfo.tmdb_id and not mediainfo.douban_id):
context.media_recognize_fail_count = 1
# 添加到缓存
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
@@ -237,6 +313,21 @@ class TorrentsChain(ChainBase):
return torrents_cache
@staticmethod
def _ensure_context_compatibility(torrents_cache: Dict[str, List[Context]]):
"""
确保Context对象的兼容性为旧版本添加缺失的字段
"""
for domain, contexts in torrents_cache.items():
for context in contexts:
# 如果Context对象没有media_recognize_fail_count字段添加默认值
if not hasattr(context, 'media_recognize_fail_count'):
context.media_recognize_fail_count = 0
# 如果媒体信息未识别,设置初始失败次数
if (not context.media_info or
(not context.media_info.tmdb_id and not context.media_info.douban_id)):
context.media_recognize_fail_count = 1
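Since the torrent cache is persisted and reloaded across versions, contexts written before `media_recognize_fail_count` existed need the attribute backfilled after loading, which is what `_ensure_context_compatibility` does. A standalone sketch of the same hasattr-based backfill (toy class, not the project's Context):

class OldContext:
    """Simulates a cached object pickled before the new field existed."""
    def __init__(self, title: str, recognized: bool):
        self.title = title
        self.recognized = recognized

def ensure_compatibility(contexts: list) -> None:
    for ctx in contexts:
        # Backfill the counter if the object predates the field.
        if not hasattr(ctx, "media_recognize_fail_count"):
            ctx.media_recognize_fail_count = 0
        # Unrecognized entries start at 1 so retries are bounded from the first pass.
        if not ctx.recognized:
            ctx.media_recognize_fail_count = 1

cache = [OldContext("Show.S01E01", recognized=True),
         OldContext("Unknown.Release", recognized=False)]
ensure_compatibility(cache)
print([c.media_recognize_fail_count for c in cache])  # -> [0, 1]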
def __renew_rss_url(self, domain: str, site: dict):
"""
保留原配置生成新的rss地址
@@ -249,7 +340,8 @@ class TorrentsChain(ChainBase):
url=site.get("url"),
cookie=site.get("cookie"),
ua=site.get("ua") or settings.USER_AGENT,
proxy=True if site.get("proxy") else False
proxy=True if site.get("proxy") else False,
timeout=site.get("timeout"),
)
if rss_url:
# 获取新的日期的passkey

View File

@@ -1,11 +1,9 @@
import gc
import queue
import re
import threading
import traceback
from copy import deepcopy
from pathlib import Path
from queue import Queue
from time import sleep
from typing import List, Optional, Tuple, Union, Dict, Callable
@@ -16,9 +14,9 @@ from app.chain.storage import StorageChain
from app.chain.tmdb import TmdbChain
from app.core.config import settings, global_vars
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfoPath
from app.core.event import eventmanager
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
@@ -28,13 +26,14 @@ from app.helper.directory import DirectoryHelper
from app.helper.format import FormatParser
from app.helper.progress import ProgressHelper
from app.log import logger
from app.schemas import StorageOperSelectionEventData
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat, FileItem, TransferDirectoryConf, \
TransferTask, TransferQueue, TransferJob, TransferJobTask
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey, ChainEventType, ContentType
from app.schemas import StorageOperSelectionEventData
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
downloader_lock = threading.Lock()
job_lock = threading.Lock()
@@ -213,6 +212,7 @@ class JobManager:
set(self._season_episodes[mediaid]) - set(task.meta.episode_list)
)
return task
return None
def remove_job(self, task: TransferTask) -> Optional[TransferJob]:
"""
@@ -226,6 +226,7 @@ class JobManager:
if __mediaid__ in self._season_episodes:
self._season_episodes.pop(__mediaid__)
return self._job_view.pop(__mediaid__)
return None
def is_done(self, task: TransferTask) -> bool:
"""
@@ -311,7 +312,7 @@ class JobManager:
def count(self, media: MediaInfo, season: Optional[int] = None) -> int:
"""
获取某项任务总数
获取某项任务成功总数
"""
__mediaid__ = self.__get_media_id(media=media, season=season)
with job_lock:
@@ -322,15 +323,19 @@ class JobManager:
def size(self, media: MediaInfo, season: Optional[int] = None) -> int:
"""
获取某项任务总大小
获取某项任务成功文件总大小
"""
__mediaid__ = self.__get_media_id(media=media, season=season)
with job_lock:
# 计算状态为完成的任务数
if __mediaid__ not in self._job_view:
return 0
return sum([task.fileitem.size for task in self._job_view[__mediaid__].tasks if
task.state == "completed" and task.fileitem.size is not None])
return sum([
task.fileitem.size if task.fileitem.size is not None
else (SystemUtils.get_directory_size(Path(task.fileitem.path)) if task.fileitem.storage == "local" else 0)
for task in self._job_view[__mediaid__].tasks
if task.state == "completed"
])
def total(self) -> int:
"""
@@ -359,22 +364,20 @@ class TransferChain(ChainBase, metaclass=Singleton):
文件整理处理链
"""
# 可处理的文件后缀
all_exts = settings.RMT_MEDIAEXT
# 待整理任务队列
_queue = Queue()
# 文件整理线程
_transfer_thread = None
# 队列间隔时间(秒)
_transfer_interval = 15
def __init__(self):
super().__init__()
# 可处理的文件后缀
self.all_exts = settings.RMT_MEDIAEXT
# 待整理任务队列
self._queue = queue.Queue()
# 文件整理线程
self._transfer_thread = None
# 队列间隔时间(秒)
self._transfer_interval = 15
# 整理任务管理器
self.jobview = JobManager()
# 转移成功的文件清单
self._success_target_files: Dict[str, List[str]] = {}
# 启动整理任务
self.__init()
@@ -391,6 +394,44 @@ class TransferChain(ChainBase, metaclass=Singleton):
"""
整理完成后处理
"""
def __do_finished():
"""
完成时发送消息、刮削事件、移除任务等
"""
# 更新文件数量
transferinfo.file_count = self.jobview.count(task.mediainfo, task.meta.begin_season) or 1
# 更新文件大小
transferinfo.total_size = self.jobview.size(task.mediainfo,
task.meta.begin_season) or task.fileitem.size
# 更新文件清单
transferinfo.file_list_new = self._success_target_files.pop(transferinfo.target_diritem.path, [])
# 发送通知,实时手动整理时不发
if transferinfo.need_notify and (task.background or not task.manual):
se_str = None
if task.mediainfo.type == MediaType.TV:
season_episodes = self.jobview.season_episodes(task.mediainfo, task.meta.begin_season)
if season_episodes:
se_str = f"{task.meta.season} {StringUtils.format_ep(season_episodes)}"
else:
se_str = f"{task.meta.season}"
self.send_transfer_message(meta=task.meta,
mediainfo=task.mediainfo,
transferinfo=transferinfo,
season_episode=se_str,
username=task.username)
# 刮削事件
if transferinfo.need_scrape:
self.eventmanager.send_event(EventType.MetadataScrape, {
'meta': task.meta,
'mediainfo': task.mediainfo,
'fileitem': transferinfo.target_diritem,
'file_list': transferinfo.file_list_new,
'overwrite': False
})
# 移除已完成的任务
self.jobview.remove_job(task)
transferhis = TransferHistoryOper()
if not transferinfo.success:
# 转移失败
@@ -416,6 +457,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
))
# 整理失败
self.jobview.fail_task(task)
with task_lock:
# 整理完成且有成功的任务时
if self.jobview.is_finished(task):
__do_finished()
return False, transferinfo.message
# 转移成功
@@ -444,55 +489,32 @@ class TransferChain(ChainBase, metaclass=Singleton):
})
with task_lock:
# 登记转移成功文件清单
target_dir_path = transferinfo.target_diritem.path
target_files = transferinfo.file_list_new
if self._success_target_files.get(target_dir_path):
self._success_target_files[target_dir_path].extend(target_files)
else:
self._success_target_files[target_dir_path] = target_files
# 全部整理成功时
if self.jobview.is_success(task):
# 移动模式删除空目录
if transferinfo.transfer_type in ["move"]:
# 所有成功的任务
tasks = self.jobview.success_tasks(task.mediainfo, task.meta.begin_season)
# 记录已处理的种子hash
processed_hashes = set()
storagechain = StorageChain()
# 获取整理屏蔽词
transfer_exclude_words = SystemConfigOper().get(SystemConfigKey.TransferExcludeWords)
for t in tasks:
# 下载器hash
if t.download_hash and t.download_hash not in processed_hashes:
processed_hashes.add(t.download_hash)
if t.download_hash and self._can_delete_torrent(t.download_hash, t.downloader,
transfer_exclude_words):
if self.remove_torrents(t.download_hash, downloader=t.downloader):
logger.info(f"移动模式删除种子成功:{t.download_hash} ")
# 删除残留目录
logger.info(f"移动模式删除种子成功:{t.download_hash}")
if t.fileitem:
storagechain.delete_media_file(t.fileitem, delete_self=False)
# 整理完成且有成功的任务时
if self.jobview.is_finished(task):
# 发送通知,实时手动整理时不发
if transferinfo.need_notify and (task.background or not task.manual):
se_str = None
if task.mediainfo.type == MediaType.TV:
season_episodes = self.jobview.season_episodes(task.mediainfo, task.meta.begin_season)
if season_episodes:
se_str = f"{task.meta.season} {StringUtils.format_ep(season_episodes)}"
else:
se_str = f"{task.meta.season}"
# 更新文件数量
transferinfo.file_count = self.jobview.count(task.mediainfo, task.meta.begin_season) or 1
# 更新文件大小
transferinfo.total_size = self.jobview.size(task.mediainfo,
task.meta.begin_season) or task.fileitem.size
self.send_transfer_message(meta=task.meta,
mediainfo=task.mediainfo,
transferinfo=transferinfo,
season_episode=se_str,
username=task.username)
# 刮削事件
if transferinfo.need_scrape:
self.eventmanager.send_event(EventType.MetadataScrape, {
'meta': task.meta,
'mediainfo': task.mediainfo,
'fileitem': transferinfo.target_diritem
})
# 移除已完成的任务
self.jobview.remove_job(task)
__do_finished()
return True, ""
@@ -538,8 +560,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
processed_num = 0
# 失败数量
fail_num = 0
# 已完成文件
finished_files = []
progress = ProgressHelper()
progress = ProgressHelper(ProgressKey.FileTransfer)
while not global_vars.is_system_stopped:
try:
@@ -554,7 +578,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
if __queue_start:
logger.info("开始整理队列处理...")
# 启动进度
progress.start(ProgressKey.FileTransfer)
progress.start()
# 重置计数
processed_num = 0
fail_num = 0
@@ -562,8 +586,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
__process_msg = f"开始整理队列处理,当前共 {total_num} 个文件 ..."
logger.info(__process_msg)
progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
text=__process_msg)
# 队列已开始
__queue_start = False
# 更新进度
@@ -571,7 +594,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(__process_msg)
progress.update(value=processed_num / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
data={
"current": Path(fileitem.path).as_posix(),
"finished": finished_files
})
# 整理
state, err_msg = self.__handle_transfer(task=task, callback=item.callback)
if not state:
@@ -579,20 +605,20 @@ class TransferChain(ChainBase, metaclass=Singleton):
fail_num += 1
# 更新进度
processed_num += 1
finished_files.append(Path(fileitem.path).as_posix())
__process_msg = f"{fileitem.name} 整理完成"
logger.info(__process_msg)
progress.update(value=processed_num / total_num * 100,
progress.update(value=(processed_num / total_num) * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
data={})
except queue.Empty:
if not __queue_start:
# 结束进度
__end_msg = f"整理队列处理完成,共整理 {processed_num} 个文件,失败 {fail_num}"
logger.info(__end_msg)
progress.update(value=100,
text=__end_msg,
key=ProgressKey.FileTransfer)
progress.end(ProgressKey.FileTransfer)
text=__end_msg)
progress.end()
# 重置计数
processed_num = 0
fail_num = 0
@@ -847,7 +873,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
state, errmsg = self.do_transfer(
fileitem=FileItem(
storage="local",
path=str(file_path).replace("\\", "/"),
path=file_path.as_posix(),
type="dir" if not file_path.is_file() else "file",
name=file_path.name,
size=file_path.stat().st_size,
@@ -867,10 +893,6 @@ class TransferChain(ChainBase, metaclass=Singleton):
torrents.clear()
del torrents
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
# 结束
logger.info("所有下载器中下载完成的文件已整理完成")
return True
@@ -1058,16 +1080,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
if self._is_blocked_by_exclude_words(file_item.path, transfer_exclude_words):
continue
# 整理成功的不再处理
@@ -1099,9 +1112,11 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_name=file_path.name, file_meta=file_meta)
begin_ep, end_ep, part = formaterHandler.split_episode(file_name=file_path.name,
file_meta=file_meta)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
if part is not None:
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
@@ -1111,10 +1126,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
downloadhis = DownloadHistoryOper()
if bluray_dir:
# 蓝光原盘,按目录名查询
download_history = downloadhis.get_by_path(str(file_path))
download_history = downloadhis.get_by_path(file_path.as_posix())
else:
# 按文件全路径查询
download_file = downloadhis.get_file_by_fullpath(str(file_path))
download_file = downloadhis.get_file_by_fullpath(file_path.as_posix())
if download_file:
download_history = downloadhis.get_by_hash(download_file.download_hash)
@@ -1160,15 +1175,16 @@ class TransferChain(ChainBase, metaclass=Singleton):
processed_num = 0
# 失败数量
fail_num = 0
# 已完成文件
finished_files = []
# 启动进度
progress = ProgressHelper()
progress.start(ProgressKey.FileTransfer)
progress = ProgressHelper(ProgressKey.FileTransfer)
progress.start()
__process_msg = f"开始整理,共 {total_num} 个文件 ..."
logger.info(__process_msg)
progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
text=__process_msg)
try:
for transfer_task in transfer_tasks:
if global_vars.is_system_stopped:
@@ -1180,7 +1196,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(__process_msg)
progress.update(value=(processed_num + fail_num) / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
data={
"current": Path(transfer_task.fileitem.path).as_posix(),
"finished": finished_files,
})
state, err_msg = self.__handle_transfer(
task=transfer_task,
callback=self.__default_callback
@@ -1192,6 +1211,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
fail_num += 1
else:
processed_num += 1
# 记录已完成
finished_files.append(Path(transfer_task.fileitem.path).as_posix())
finally:
transfer_tasks.clear()
del transfer_tasks
@@ -1201,8 +1222,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(__end_msg)
progress.update(value=100,
text=__end_msg,
key=ProgressKey.FileTransfer)
progress.end(ProgressKey.FileTransfer)
data={})
progress.end()
error_msg = "".join(err_msgs[:2]) + (f",等{len(err_msgs)}个文件错误!" if len(err_msgs) > 2 else "")
return all_success, error_msg
@@ -1342,16 +1363,12 @@ class TransferChain(ChainBase, metaclass=Singleton):
mediainfo: MediaInfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid,
mtype=mtype, episode_group=episode_group)
if not mediainfo:
return False, f"媒体信息识别失败tmdbid{tmdbid}doubanid{doubanid}type: {mtype.value}"
return (False,
f"媒体信息识别失败tmdbid{tmdbid}doubanid{doubanid}type: {mtype.value if mtype else None}")
else:
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 开始进度
progress = ProgressHelper()
progress.start(ProgressKey.FileTransfer)
progress.update(value=0,
text=f"开始整理 {fileitem.path} ...",
key=ProgressKey.FileTransfer)
# 开始整理
state, errmsg = self.do_transfer(
fileitem=fileitem,
@@ -1372,7 +1389,6 @@ class TransferChain(ChainBase, metaclass=Singleton):
if not state:
return False, errmsg
progress.end(ProgressKey.FileTransfer)
logger.info(f"{fileitem.path} 整理完成")
return True, ""
else:
@@ -1412,3 +1428,63 @@ class TransferChain(ChainBase, metaclass=Singleton):
season_episode=season_episode,
username=username
)
@staticmethod
def _is_blocked_by_exclude_words(file_path: str, exclude_words: list) -> bool:
"""
检查文件是否被整理屏蔽词阻止处理
:param file_path: 文件路径
:param exclude_words: 整理屏蔽词列表
:return: 如果被屏蔽返回True否则返回False
"""
if not exclude_words:
return False
for keyword in exclude_words:
if keyword and re.search(r"%s" % keyword, file_path, re.IGNORECASE):
logger.warn(f"{file_path} 命中屏蔽词 {keyword}")
return True
return False
def _can_delete_torrent(self, download_hash: str, downloader: str, transfer_exclude_words) -> bool:
"""
检查是否可以删除种子文件
:param download_hash: 种子Hash
:param downloader: 下载器名称
:param transfer_exclude_words: 整理屏蔽词
:return: 如果可以删除返回True否则返回False
"""
try:
# 获取种子信息
torrents = self.list_torrents(hashs=download_hash, downloader=downloader)
if not torrents:
return False
# 未下载完成
if torrents[0].progress < 100:
return False
# 获取种子文件列表
torrent_files = self.torrent_files(download_hash, downloader)
if not torrent_files:
return False
if not isinstance(torrent_files, list):
torrent_files = torrent_files.data
# 检查是否有媒体文件未被屏蔽且存在
save_path = torrents[0].path.parent
for file in torrent_files:
file_path = save_path / file.name
# 如果存在未被屏蔽的媒体文件,则不删除种子
if (file_path.suffix in self.all_exts
and not self._is_blocked_by_exclude_words(file_path.as_posix(), transfer_exclude_words)
and file_path.exists()):
return False
# 所有媒体文件都被屏蔽或不存在,可以删除种子
return True
except Exception as e:
logger.error(f"检查种子 {download_hash} 是否需要删除失败:{e}")
return False
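For context, a rough standalone sketch of the exclude-word matching that _is_blocked_by_exclude_words performs above; the keywords and path are hypothetical. Each keyword is treated as a regular expression and matched case-insensitively against the full path, and in _can_delete_torrent a blocked media file no longer prevents the torrent from being removed.

import re

exclude_words = [r"sample", r"\btrailer\b"]              # hypothetical keywords
path = "/downloads/Movie.2024.1080p/Sample/movie-sample.mkv"

blocked = any(
    keyword and re.search(keyword, path, re.IGNORECASE)  # same check as above
    for keyword in exclude_words
)
print(blocked)  # True -> this file is skipped when deciding whether the torrent can be deleted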

View File

@@ -10,11 +10,13 @@ from pydantic.fields import Callable
from app.chain import ChainBase
from app.core.config import global_vars
from app.core.workflow import WorkFlowManager
from app.core.event import Event, eventmanager
from app.workflow import WorkFlowManager
from app.db.models import Workflow
from app.db.workflow_oper import WorkflowOper
from app.log import logger
from app.schemas import ActionContext, ActionFlow, Action, ActionExecution
from app.schemas.types import EventType
class WorkflowExecutor:
@@ -178,7 +180,7 @@ class WorkflowExecutor:
"""
合并上下文
"""
for key, value in context.dict().items():
for key, value in context.model_dump().items():
if not getattr(self.context, key, None):
setattr(self.context, key, value)
@@ -188,6 +190,16 @@ class WorkflowChain(ChainBase):
工作流链
"""
@eventmanager.register(EventType.WorkflowExecute)
def event_process(self, event: Event):
"""
事件触发工作流执行
"""
workflow_id = event.event_data.get('workflow_id')
if not workflow_id:
return
self.process(workflow_id, from_begin=False)
@staticmethod
def process(workflow_id: int, from_begin: Optional[bool] = True) -> Tuple[bool, str]:
"""
@@ -225,7 +237,7 @@ class WorkflowChain(ChainBase):
logger.warn(f"工作流 {workflow.name} 无流程")
return False, "工作流无流程"
logger.info(f"开始处理 {workflow.name},共 {len(workflow.actions)} 个动作 ...")
logger.info(f"开始执行工作流 {workflow.name},共 {len(workflow.actions)} 个动作 ...")
workflowoper.start(workflow_id)
# 执行工作流
@@ -247,3 +259,17 @@ class WorkflowChain(ChainBase):
获取工作流列表
"""
return WorkflowOper().list_enabled()
@staticmethod
def get_timer_workflows() -> List[Workflow]:
"""
获取定时触发的工作流列表
"""
return WorkflowOper().get_timer_triggered_workflows()
@staticmethod
def get_event_workflows() -> List[Workflow]:
"""
获取事件触发的工作流列表
"""
return WorkflowOper().get_event_triggered_workflows()
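A minimal sketch of how the new event-triggered path could be driven, assuming the usual EventManager usage elsewhere in the project; the workflow_id is a placeholder. WorkflowChain.event_process reads workflow_id from the event data and calls process(workflow_id, from_begin=False).

from app.core.event import eventmanager
from app.schemas.types import EventType

# Fire the broadcast event; the registered event_process handler resumes the workflow
eventmanager.send_event(EventType.WorkflowExecute, {"workflow_id": 1})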

View File

@@ -5,6 +5,7 @@ from typing import Any, Union, Dict, Optional
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.message import MessageChain
from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
@@ -140,6 +141,12 @@ class Command(metaclass=Singleton):
"description": "当前版本",
"category": "管理",
"data": {}
},
"/clear_session": {
"func": MessageChain().remote_clear_session,
"description": "清除会话",
"category": "管理",
"data": {}
}
}
# 插件命令集合
@@ -215,7 +222,7 @@ class Command(metaclass=Singleton):
except Exception as e:
logger.error(f"Error occurred during command initialization in background: {e}", exc_info=True)
def __trigger_register_commands_event(self) -> (Optional[Event], dict):
def __trigger_register_commands_event(self) -> tuple[Optional[Event], dict]:
"""
触发事件,允许调整命令数据
"""

File diff suppressed because it is too large

View File

@@ -1,18 +1,24 @@
import copy
import json
import os
import platform
import re
import secrets
import sys
import threading
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Type
from urllib.parse import urlparse
from dotenv import set_key
from pydantic import BaseModel, BaseSettings, validator, Field
from pydantic import BaseModel, Field, ConfigDict, model_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
from app.log import logger, log_settings, LogConfigModel
from app.schemas import MediaType
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
from version import APP_VERSION
class SystemConfModel(BaseModel):
@@ -37,10 +43,6 @@ class SystemConfModel(BaseModel):
scheduler: int = 0
# 线程池大小
threadpool: int = 0
# 数据库连接池大小
dbpool: int = 0
# 数据库连接池溢出数量
dbpooloverflow: int = 0
class ConfigModel(BaseModel):
@@ -48,9 +50,9 @@ class ConfigModel(BaseModel):
Pydantic 配置模型,描述所有配置项及其类型和默认值
"""
class Config:
extra = "ignore" # 忽略未定义的配置项
model_config = ConfigDict(extra="ignore") # 忽略未定义的配置项
# ==================== 基础应用配置 ====================
# 项目名称
PROJECT_NAME: str = "MoviePilot"
# 域名 格式https://movie-pilot.org
@@ -59,6 +61,24 @@ class ConfigModel(BaseModel):
API_V1_STR: str = "/api/v1"
# 前端资源路径
FRONTEND_PATH: str = "/public"
# 时区
TZ: str = "Asia/Shanghai"
# API监听地址
HOST: str = "0.0.0.0"
# API监听端口
PORT: int = 3001
# 前端监听端口
NGINX_PORT: int = 3000
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 是否调试模式
DEBUG: bool = False
# 是否开发模式
DEV: bool = False
# 高级设置模式
ADVANCED_MODE: bool = True
# ==================== 安全认证配置 ====================
# 密钥
SECRET_KEY: str = secrets.token_urlsafe(32)
# RESOURCE密钥
@@ -69,20 +89,26 @@ class ConfigModel(BaseModel):
ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
# RESOURCE_TOKEN过期时间
RESOURCE_ACCESS_TOKEN_EXPIRE_SECONDS: int = 60 * 30
# 时区
TZ: str = "Asia/Shanghai"
# API监听地址
HOST: str = "0.0.0.0"
# API监听端口
PORT: int = 3001
# 前端监听端口
NGINX_PORT: int = 3000
# 是否调试模式
DEBUG: bool = False
# 是否开发模式
DEV: bool = False
# 超级管理员初始用户名
SUPERUSER: str = "admin"
# 超级管理员初始密码
SUPERUSER_PASSWORD: Optional[str] = None
# 辅助认证,允许通过外部服务进行认证、单点登录以及自动创建用户
AUXILIARY_AUTH_ENABLE: bool = False
# API密钥需要更换
API_TOKEN: Optional[str] = None
# 用户认证站点
AUTH_SITE: str = ""
# ==================== 数据库配置 ====================
# 数据库类型,支持 sqlite 和 postgresql,默认使用 sqlite
DB_TYPE: str = "sqlite"
# 是否在控制台输出 SQL 语句,默认关闭
DB_ECHO: bool = False
# 数据库连接超时时间(秒),默认为 60 秒
DB_TIMEOUT: int = 60
# 是否启用 WAL 模式仅适用于SQLite默认开启
DB_WAL_ENABLE: bool = True
# 数据库连接池类型QueuePool, NullPool
DB_POOL_TYPE: str = "QueuePool"
# 是否在获取连接时进行预先 ping 操作
@@ -91,71 +117,44 @@ class ConfigModel(BaseModel):
DB_POOL_RECYCLE: int = 300
# 数据库连接池获取连接的超时时间(秒)
DB_POOL_TIMEOUT: int = 30
# SQLite 的 busy_timeout 参数,默认为 60 秒
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认开启
DB_WAL_ENABLE: bool = True
# SQLite 连接池大小
DB_SQLITE_POOL_SIZE: int = 10
# SQLite 连接池溢出数量
DB_SQLITE_MAX_OVERFLOW: int = 50
# PostgreSQL 主机地址
DB_POSTGRESQL_HOST: str = "localhost"
# PostgreSQL 端口
DB_POSTGRESQL_PORT: int = 5432
# PostgreSQL 数据库名
DB_POSTGRESQL_DATABASE: str = "moviepilot"
# PostgreSQL 用户名
DB_POSTGRESQL_USERNAME: str = "moviepilot"
# PostgreSQL 密码
DB_POSTGRESQL_PASSWORD: str = "moviepilot"
# PostgreSQL 连接池大小
DB_POSTGRESQL_POOL_SIZE: int = 10
# PostgreSQL 连接池溢出数量
DB_POSTGRESQL_MAX_OVERFLOW: int = 50
# ==================== 缓存配置 ====================
# 缓存类型,支持 cachetools 和 redis默认使用 cachetools
CACHE_BACKEND_TYPE: str = "cachetools"
# 缓存连接字符串,仅外部缓存(如 Redis、Memcached)需要
CACHE_BACKEND_URL: Optional[str] = None
CACHE_BACKEND_URL: Optional[str] = "redis://localhost:6379"
# Redis 缓存最大内存限制,未配置时,如开启大内存模式时为 "1024mb",未开启时为 "256mb"
CACHE_REDIS_MAXMEMORY: Optional[str] = None
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 超级管理员
SUPERUSER: str = "admin"
# 辅助认证,允许通过外部服务进行认证、单点登录以及自动创建用户
AUXILIARY_AUTH_ENABLE: bool = False
# API密钥需要更换
API_TOKEN: Optional[str] = None
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 全局图片缓存保留天数
GLOBAL_IMAGE_CACHE_DAYS: int = 7
# 临时文件保留天数
TEMP_FILE_DAYS: int = 3
# 元数据识别缓存过期时间(小时),0为自动
META_CACHE_EXPIRE: int = 0
# ==================== 网络代理配置 ====================
# 网络代理服务器地址
PROXY_HOST: Optional[str] = None
# 登录页面电影海报,tmdb/bing/mediaserver
WALLPAPER: str = "tmdb"
# 自定义壁纸api地址
CUSTOMIZE_WALLPAPER_API_URL: Optional[str] = None
# 媒体搜索来源 themoviedb/douban/bangumi,多个用,分隔
SEARCH_SOURCE: str = "themoviedb,douban,bangumi"
# 媒体识别来源 themoviedb/douban
RECOGNIZE_SOURCE: str = "themoviedb"
# 刮削来源 themoviedb/douban
SCRAP_SOURCE: str = "themoviedb"
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# TMDB图片地址
TMDB_IMAGE_DOMAIN: str = "image.tmdb.org"
# TMDB API地址
TMDB_API_DOMAIN: str = "api.themoviedb.org"
# TMDB元数据语言
TMDB_LOCALE: str = "zh"
# 刮削使用TMDB原始语种图片
TMDB_SCRAP_ORIGINAL_IMAGE: bool = False
# TMDB API Key
TMDB_API_KEY: str = "db55323b8d3e4154498498a75642b381"
# TVDB API Key
TVDB_V4_API_KEY: str = "ed2aa66b-7899-4677-92a7-67bc9ce3d93a"
TVDB_V4_API_PIN: str = ""
# Fanart开关
FANART_ENABLE: bool = True
# Fanart语言
FANART_LANG: str = "zh,en"
# Fanart API Key
FANART_API_KEY: str = "d2d31f9ecabea050fc7d68aa3146015f"
# 115 AppId
U115_APP_ID: str = "100196807"
# Alipan AppId
ALIPAN_APP_ID: str = "ac1bf04dc9fd4d9aaabb65b4a668d403"
# 元数据识别缓存过期时间(小时)
META_CACHE_EXPIRE: int = 0
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS: List[int] = Field(default=[16])
# 用户认证站点
AUTH_SITE: str = ""
# 重启自动升级
MOVIEPILOT_AUTO_UPDATE: str = 'release'
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 是否启用DOH解析域名
DOH_ENABLE: bool = False
# 使用 DOH 解析的域名列表
@@ -169,6 +168,55 @@ class ConfigModel(BaseModel):
"api.telegram.org")
# DOH 解析服务器列表
DOH_RESOLVERS: str = "1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112"
# ==================== 媒体元数据配置 ====================
# 媒体搜索来源 themoviedb/douban/bangumi,多个用,分隔
SEARCH_SOURCE: str = "themoviedb"
# 媒体识别来源 themoviedb/douban
RECOGNIZE_SOURCE: str = "themoviedb"
# 刮削来源 themoviedb/douban
SCRAP_SOURCE: str = "themoviedb"
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS: List[int] = Field(default=[16])
# ==================== TMDB配置 ====================
# TMDB图片地址
TMDB_IMAGE_DOMAIN: str = "image.tmdb.org"
# TMDB API地址
TMDB_API_DOMAIN: str = "api.themoviedb.org"
# TMDB元数据语言
TMDB_LOCALE: str = "zh"
# 刮削使用TMDB原始语种图片
TMDB_SCRAP_ORIGINAL_IMAGE: bool = False
# TMDB API Key
TMDB_API_KEY: str = "db55323b8d3e4154498498a75642b381"
# ==================== TVDB配置 ====================
# TVDB API Key
TVDB_V4_API_KEY: str = "ed2aa66b-7899-4677-92a7-67bc9ce3d93a"
TVDB_V4_API_PIN: str = ""
# ==================== Fanart配置 ====================
# Fanart开关
FANART_ENABLE: bool = True
# Fanart语言
FANART_LANG: str = "zh,en"
# Fanart API Key
FANART_API_KEY: str = "d2d31f9ecabea050fc7d68aa3146015f"
# ==================== 云盘配置 ====================
# 115 AppId
U115_APP_ID: str = "100196807"
# Alipan AppId
ALIPAN_APP_ID: str = "ac1bf04dc9fd4d9aaabb65b4a668d403"
# ==================== 系统升级配置 ====================
# 重启自动升级
MOVIEPILOT_AUTO_UPDATE: str = 'release'
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# ==================== 媒体文件格式配置 ====================
# 支持的后缀格式
RMT_MEDIAEXT: list = Field(
default_factory=lambda: ['.mp4', '.mkv', '.ts', '.iso',
@@ -191,10 +239,12 @@ class ConfigModel(BaseModel):
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
)
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = Field(default_factory=lambda: ['.!qb', '.part'])
# ==================== 媒体服务器配置 ====================
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: int = 6
# ==================== 订阅配置 ====================
# 订阅模式
SUBSCRIBE_MODE: str = "spider"
# RSS订阅模式刷新时间间隔(分钟)
@@ -203,20 +253,42 @@ class ConfigModel(BaseModel):
SUBSCRIBE_STATISTIC_SHARE: bool = True
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 订阅搜索时间间隔(小时)
SUBSCRIBE_SEARCH_INTERVAL: int = 24
# 检查本地媒体库是否存在资源开关
LOCAL_EXISTS_SEARCH: bool = False
# 搜索多个名称
SEARCH_MULTIPLE_NAME: bool = False
LOCAL_EXISTS_SEARCH: bool = True
# ==================== 站点配置 ====================
# 站点数据刷新间隔(小时)
SITEDATA_REFRESH_INTERVAL: int = 6
# 读取和发送站点消息
SITE_MESSAGE: bool = True
# 不能缓存站点资源的站点域名,多个使用,分隔
NO_CACHE_SITE_KEY: str = "m-team"
# OCR服务器地址用于识别站点验证码
OCR_HOST: str = "https://movie-pilot.org"
# 仿真类型playwright 或 flaresolverr
BROWSER_EMULATION: str = "playwright"
# FlareSolverr 服务地址,例如 http://127.0.0.1:8191
FLARESOLVERR_URL: Optional[str] = None
# ==================== 搜索配置 ====================
# 搜索多个名称
SEARCH_MULTIPLE_NAME: bool = False
# 最大搜索名称数量
MAX_SEARCH_NAME_LIMIT: int = 2
# ==================== 下载配置 ====================
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载站点字幕
DOWNLOAD_SUBTITLE: bool = True
# 交互搜索自动下载用户ID使用,分割
AUTO_DOWNLOAD_USER: Optional[str] = None
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = Field(default_factory=lambda: ['.!qb', '.part'])
# ==================== CookieCloud配置 ====================
# CookieCloud是否启动本地服务
COOKIECLOUD_ENABLE_LOCAL: Optional[bool] = False
# CookieCloud服务器地址
@@ -229,8 +301,8 @@ class ConfigModel(BaseModel):
COOKIECLOUD_INTERVAL: Optional[int] = 60 * 24
# CookieCloud同步黑名单多个域名,分割
COOKIECLOUD_BLACKLIST: Optional[str] = None
# CookieCloud对应的浏览器UA
USER_AGENT: str = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# ==================== 整理配置 ====================
# 电影重命名格式
MOVIE_RENAME_FORMAT: str = "{{title}}{% if year %} ({{year}}){% endif %}" \
"/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}" \
@@ -240,10 +312,24 @@ class ConfigModel(BaseModel):
"/Season {{season}}" \
"/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}} 集{% endif %}" \
"{{fileExt}}"
# OCR服务器地址
OCR_HOST: str = "https://movie-pilot.org"
# 重命名时支持的S0别名
RENAME_FORMAT_S0_NAMES: list = Field(default=["Specials", "SPs"])
# 为指定默认字幕添加.default后缀
DEFAULT_SUB: Optional[str] = "zh-cn"
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# ==================== 服务地址配置 ====================
# 服务器地址,对应 https://github.com/jxxghp/MoviePilot-Server 项目
MP_SERVER_HOST: str = "https://movie-pilot.org"
# ==================== 个性化 ====================
# 登录页面电影海报,tmdb/bing/mediaserver
WALLPAPER: str = "tmdb"
# 自定义壁纸api地址
CUSTOMIZE_WALLPAPER_API_URL: Optional[str] = None
# ==================== 插件配置 ====================
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = ("https://github.com/jxxghp/MoviePilot-Plugins,"
"https://github.com/thsrite/MoviePilot-Plugins,"
@@ -264,6 +350,8 @@ class ConfigModel(BaseModel):
PLUGIN_STATISTIC_SHARE: bool = True
# 是否开启插件热加载
PLUGIN_AUTO_RELOAD: bool = False
# ==================== Github & PIP ====================
# Github token提高请求api限流阈值 ghp_****
GITHUB_TOKEN: Optional[str] = None
# Github代理服务器格式https://mirror.ghproxy.com/
@@ -272,20 +360,18 @@ class ConfigModel(BaseModel):
PIP_PROXY: Optional[str] = ''
# 指定的仓库Github token多个仓库使用,分隔,格式:{user1}/{repo1}:ghp_****,{user2}/{repo2}:github_pat_****
REPO_GITHUB_TOKEN: Optional[str] = None
# ==================== 性能配置 ====================
# 大内存模式
BIG_MEMORY_MODE: bool = False
# 是否启用内存监控
MEMORY_ANALYSIS: bool = False
# 内存快照间隔(分钟)
MEMORY_SNAPSHOT_INTERVAL: int = 30
# 保留的内存快照文件数量
MEMORY_SNAPSHOT_KEEP_COUNT: int = 20
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 是否启用编码探测的性能模式
ENCODING_DETECTION_PERFORMANCE_MODE: bool = True
# 编码探测的最低置信度阈值
ENCODING_DETECTION_MIN_CONFIDENCE: float = 0.8
# 主动内存回收时间间隔(分钟),0为不启用
MEMORY_GC_INTERVAL: int = 30
# ==================== 安全配置 ====================
# 允许的图片缓存域名
SECURITY_IMAGE_DOMAINS: list = Field(default=[
"image.tmdb.org",
@@ -305,23 +391,58 @@ class ConfigModel(BaseModel):
])
# 允许的图片文件后缀格式
SECURITY_IMAGE_SUFFIXES: list = Field(default=[".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg", ".avif"])
# 重命名时支持的S0别名
RENAME_FORMAT_S0_NAMES: list = Field(default=["Specials", "SPs"])
# 为指定默认字幕添加.default后缀
DEFAULT_SUB: Optional[str] = "zh-cn"
# ==================== 工作流配置 ====================
# 工作流数据共享
WORKFLOW_STATISTIC_SHARE: bool = True
# ==================== 存储配置 ====================
# 对rclone进行快照对比时是否检查文件夹的修改时间
RCLONE_SNAPSHOT_CHECK_FOLDER_MODTIME: bool = True
# 对OpenList进行快照对比时是否检查文件夹的修改时间
OPENLIST_SNAPSHOT_CHECK_FOLDER_MODTIME: bool = True
# ==================== Docker配置 ====================
# Docker Client API地址
DOCKER_CLIENT_API: Optional[str] = "tcp://127.0.0.1:38379"
# ==================== AI智能体配置 ====================
# AI智能体开关
AI_AGENT_ENABLE: bool = False
# LLM提供商 (openai/google/deepseek)
LLM_PROVIDER: str = "deepseek"
# LLM模型名称
LLM_MODEL: str = "deepseek-chat"
# LLM API密钥
LLM_API_KEY: Optional[str] = None
# LLM基础URL用于自定义API端点
LLM_BASE_URL: Optional[str] = "https://api.deepseek.com"
# LLM温度参数
LLM_TEMPERATURE: float = 0.1
# LLM最大迭代次数
LLM_MAX_ITERATIONS: int = 15
# LLM工具调用超时时间
LLM_TOOL_TIMEOUT: int = 300
# 是否启用详细日志
LLM_VERBOSE: bool = False
# 最大记忆消息数量
LLM_MAX_MEMORY_MESSAGES: int = 50
# 记忆保留天数
LLM_MEMORY_RETENTION_DAYS: int = 30
# Redis记忆保留天数如果使用Redis
LLM_REDIS_MEMORY_RETENTION_DAYS: int = 7
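As a rough illustration only, the new AI-agent fields above are ordinary settings read from app.env (for example AI_AGENT_ENABLE=true, LLM_PROVIDER=deepseek, LLM_API_KEY=sk-... as placeholder values) and can then be consumed like any other option:

from app.core.config import settings

if settings.AI_AGENT_ENABLE and settings.LLM_API_KEY:
    print(f"agent: {settings.LLM_PROVIDER}/{settings.LLM_MODEL}, "
          f"up to {settings.LLM_MAX_ITERATIONS} tool iterations")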
class Settings(BaseSettings, ConfigModel, LogConfigModel):
"""
系统配置类
"""
class Config:
case_sensitive = True
env_file = SystemUtils.get_env_path()
env_file_encoding = "utf-8"
model_config = SettingsConfigDict(
case_sensitive=True,
env_file=SystemUtils.get_env_path(),
env_file_encoding="utf-8",
)
def __init__(self, **kwargs):
super().__init__(**kwargs)
@@ -418,33 +539,54 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
f"配置项 '{field_name}' 的值 '{value}' 无法转换成正确的类型,使用默认值 '{default}',错误信息: {e}")
return default, True
@validator('*', pre=True, always=True)
def generic_type_validator(cls, value: Any, field): # noqa
@model_validator(mode='before')
@classmethod
def generic_type_validator(cls, data: Any): # noqa
"""
通用校验器,尝试将配置值转换为期望的类型
"""
if field.name == "API_TOKEN":
converted_value, needs_update = cls.validate_api_token(value, value)
else:
converted_value, needs_update = cls.generic_type_converter(value, value, field.type_, field.default,
field.name)
if needs_update:
cls.update_env_config(field, value, converted_value)
return converted_value
if not isinstance(data, dict):
return data
# 处理 API_TOKEN 特殊验证
if 'API_TOKEN' in data:
converted_value, needs_update = cls.validate_api_token(data['API_TOKEN'], data['API_TOKEN'])
if needs_update:
cls.update_env_config("API_TOKEN", data["API_TOKEN"], converted_value)
data['API_TOKEN'] = converted_value
# 对其他字段进行类型转换
for field_name, field_info in cls.model_fields.items():
if field_name not in data:
continue
value = data[field_name]
if value is None:
continue
field = cls.model_fields.get(field_name)
if field:
converted_value, needs_update = cls.generic_type_converter(
value, value, field.annotation, field.default, field_name
)
if needs_update:
cls.update_env_config(field_name, value, converted_value)
data[field_name] = converted_value
return data
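For readers newer to Pydantic v2, a minimal sketch of the before-mode validator pattern adopted above (the model and field are hypothetical): unlike the removed per-field @validator, a single model_validator(mode='before') receives the whole raw input mapping once and can normalize values before field validation runs.

from typing import Any, List
from pydantic import BaseModel, model_validator

class Example(BaseModel):                      # hypothetical model, not part of MoviePilot
    DOMAINS: List[str] = []

    @model_validator(mode="before")
    @classmethod
    def coerce(cls, data: Any) -> Any:
        # Runs once on the raw input dict, before per-field validation
        if isinstance(data, dict) and isinstance(data.get("DOMAINS"), str):
            data["DOMAINS"] = data["DOMAINS"].split(",")
        return data

print(Example(DOMAINS="a.org,b.org").DOMAINS)  # ['a.org', 'b.org']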
@staticmethod
def update_env_config(field: Any, original_value: Any, converted_value: Any) -> Tuple[bool, str]:
def update_env_config(field_name: str, original_value: Any, converted_value: Any) -> Tuple[bool, str]:
"""
更新 env 配置
"""
message = None
is_converted = original_value is not None and str(original_value) != str(converted_value)
if is_converted:
message = f"配置项 '{field.name}' 的值 '{original_value}' 无效,已替换为 '{converted_value}'"
message = f"配置项 '{field_name}' 的值 '{original_value}' 无效,已替换为 '{converted_value}'"
logger.warning(message)
if field.name in os.environ:
message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
if field_name in os.environ:
message = f"配置项 '{field_name}' 已在环境变量中设置,请手动更新以保持一致性"
logger.warning(message)
return False, message
else:
@@ -454,10 +596,10 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
else:
value_to_write = str(converted_value) if converted_value is not None else ""
set_key(dotenv_path=SystemUtils.get_env_path(), key_to_set=field.name, value_to_set=value_to_write,
set_key(dotenv_path=SystemUtils.get_env_path(), key_to_set=field_name, value_to_set=value_to_write,
quote_mode="always")
if is_converted:
logger.info(f"配置项 '{field.name}' 已自动修正并写入到 'app.env' 文件")
logger.info(f"配置项 '{field_name}' 已自动修正并写入到 'app.env' 文件")
return True, message
def update_setting(self, key: str, value: Any) -> Tuple[Optional[bool], str]:
@@ -471,19 +613,17 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return False, f"配置项 '{key}' 不存在"
try:
field = self.__fields__[key]
field = Settings.model_fields[key]
original_value = getattr(self, key)
if field.name == "API_TOKEN":
if key == "API_TOKEN":
converted_value, needs_update = self.validate_api_token(value, original_value)
else:
converted_value, needs_update = self.generic_type_converter(value,
original_value,
field.type_,
field.default,
key)
converted_value, needs_update = self.generic_type_converter(
value, original_value, field.annotation, field.default, key
)
# 如果没有抛出异常,则统一使用 converted_value 进行更新
if needs_update or str(value) != str(converted_value):
success, message = self.update_env_config(field, value, converted_value)
success, message = self.update_env_config(key, value, converted_value)
# 仅成功更新配置时,才更新内存
if success:
setattr(self, key, converted_value)
@@ -510,6 +650,20 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
"""
return "v2"
@property
def USER_AGENT(self) -> str:
"""
全局用户代理字符串
"""
return f"{self.PROJECT_NAME}/{APP_VERSION[1:]} ({platform.system()} {platform.release()}; {SystemUtils.cpu_arch()})"
@property
def NORMAL_USER_AGENT(self) -> str:
"""
默认浏览器用户代理字符串
"""
return "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
@property
def INNER_CONFIG_PATH(self):
return self.ROOT_PATH / "config"
@@ -561,11 +715,9 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
douban=512,
bangumi=512,
fanart=512,
meta=(self.META_CACHE_EXPIRE or 24) * 3600,
meta=(self.META_CACHE_EXPIRE or 72) * 3600,
scheduler=100,
threadpool=100,
dbpool=100,
dbpooloverflow=50
threadpool=100
)
return SystemConfModel(
torrents=100,
@@ -574,11 +726,9 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
douban=256,
bangumi=256,
fanart=128,
meta=(self.META_CACHE_EXPIRE or 2) * 3600,
meta=(self.META_CACHE_EXPIRE or 24) * 3600,
scheduler=50,
threadpool=50,
dbpool=50,
dbpooloverflow=20
threadpool=50
)
@property
@@ -593,9 +743,22 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
@property
def PROXY_SERVER(self):
if self.PROXY_HOST:
return {
"server": self.PROXY_HOST
}
try:
parsed = urlparse(self.PROXY_HOST)
if not parsed.scheme:
return {"server": self.PROXY_HOST}
host = parsed.hostname or ""
port = f":{parsed.port}" if parsed.port else ""
server = f"{parsed.scheme}://{host}{port}"
proxy = {"server": server}
if parsed.username:
proxy["username"] = parsed.username
if parsed.password:
proxy["password"] = parsed.password
return proxy
except Exception as err:
logger.error(f"解析代理服务器地址 '{self.PROXY_HOST}' 时出错: {err}")
return {"server": self.PROXY_HOST}
return None
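To make the parsing above concrete, a hedged example with a made-up proxy URL: credentials are split out of the URL, and the resulting dict has the server/username/password shape that browser-automation proxy settings typically expect.

from urllib.parse import urlparse

parsed = urlparse("http://user:secret@127.0.0.1:7890")   # hypothetical PROXY_HOST value
proxy = {"server": f"{parsed.scheme}://{parsed.hostname}:{parsed.port}",
         "username": parsed.username,
         "password": parsed.password}
print(proxy)
# {'server': 'http://127.0.0.1:7890', 'username': 'user', 'password': 'secret'}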
@property
@@ -606,7 +769,7 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
if self.GITHUB_TOKEN:
return {
"Authorization": f"Bearer {self.GITHUB_TOKEN}",
"User-Agent": self.USER_AGENT,
"User-Agent": self.NORMAL_USER_AGENT,
}
return {}
@@ -635,7 +798,7 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
continue
headers[repo_info] = {
"Authorization": f"Bearer {token}",
"User-Agent": self.USER_AGENT,
"User-Agent": self.NORMAL_USER_AGENT,
}
except Exception as e:
print(f"处理令牌对 '{token_pair}' 时出错: {e}")
@@ -655,6 +818,23 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return None
return UrlUtils.combine_url(host=self.APP_DOMAIN, path=url)
def RENAME_FORMAT(self, media_type: MediaType):
"""
获取指定类型的重命名格式
:param media_type: MediaType.TV 或 MediaType.Movie
:return: 重命名格式
"""
rename_format = (
self.TV_RENAME_FORMAT
if media_type == MediaType.TV
else self.MOVIE_RENAME_FORMAT
)
# 规范重命名格式
rename_format = rename_format.replace("\\", "/")
rename_format = re.sub(r'/+', '/', rename_format)
return rename_format.strip("/")
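A small worked example of the normalization RENAME_FORMAT applies, using a made-up template: backslashes become forward slashes, repeated separators collapse, and leading/trailing slashes are stripped.

import re

fmt = "\\{{title}}//Season {{season}}/{{title}} - {{season_episode}}{{fileExt}}/"
fmt = fmt.replace("\\", "/")        # backslashes -> forward slashes
fmt = re.sub(r"/+", "/", fmt)       # collapse duplicate separators
print(fmt.strip("/"))
# {{title}}/Season {{season}}/{{title}} - {{season_episode}}{{fileExt}}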
# 实例化配置
settings = Settings()
@@ -670,6 +850,8 @@ class GlobalVar(object):
SUBSCRIPTIONS: List[dict] = []
# 需应急停止的工作流
EMERGENCY_STOP_WORKFLOWS: List[int] = []
# 需应急停止文件整理
EMERGENCY_STOP_TRANSFER: List[str] = []
def stop_system(self):
"""
@@ -710,12 +892,30 @@ class GlobalVar(object):
if workflow_id in self.EMERGENCY_STOP_WORKFLOWS:
self.EMERGENCY_STOP_WORKFLOWS.remove(workflow_id)
def is_workflow_stopped(self, workflow_id: int):
def is_workflow_stopped(self, workflow_id: int) -> bool:
"""
是否停止工作流
"""
return self.is_system_stopped or workflow_id in self.EMERGENCY_STOP_WORKFLOWS
def stop_transfer(self, path: str):
"""
停止文件整理
"""
if path not in self.EMERGENCY_STOP_TRANSFER:
self.EMERGENCY_STOP_TRANSFER.append(path)
def is_transfer_stopped(self, path: str) -> bool:
"""
是否停止文件整理
"""
if self.is_system_stopped:
return True
if path in self.EMERGENCY_STOP_TRANSFER:
self.EMERGENCY_STOP_TRANSFER.remove(path)
return True
return False
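Note the one-shot semantics of the new emergency-stop flag above: checking a stopped path also clears it, so the same path is not blocked on the next run. A minimal sketch with a hypothetical path:

from app.core.config import global_vars

global_vars.stop_transfer("/downloads/Show.S01")               # request an emergency stop
print(global_vars.is_transfer_stopped("/downloads/Show.S01"))  # True, and the flag is consumed
print(global_vars.is_transfer_stopped("/downloads/Show.S01"))  # False on the next check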
# 全局标识
global_vars = GlobalVar()

View File

@@ -193,7 +193,7 @@ class MediaInfo:
# LOGO
logo_path: str = None
# 评分
vote_average: float = 0.0
vote_average: float = None
# 描述
overview: str = None
# 风格ID
@@ -237,9 +237,9 @@ class MediaInfo:
# 流媒体平台
networks: list = field(default_factory=list)
# 集数
number_of_episodes: int = 0
number_of_episodes: int = None
# 季数
number_of_seasons: int = 0
number_of_seasons: int = None
# 原产国
origin_country: list = field(default_factory=list)
# 原名
@@ -250,14 +250,16 @@ class MediaInfo:
production_countries: list = field(default_factory=list)
# 语种
spoken_languages: list = field(default_factory=list)
# 所有发行日期
release_dates: list = field(default_factory=list)
# 状态
status: str = None
# 标签
tagline: str = None
# 评价数量
vote_count: int = 0
vote_count: int = None
# 流行度
popularity: int = 0
popularity: float = None
# 时长
runtime: int = None
# 下一集
@@ -433,6 +435,18 @@ class MediaInfo:
if self.release_date:
# 年份
self.year = self.release_date[:4]
# 所有发行日期
self.release_dates = [
{
"date": release_date.get("release_date"),
"iso_code": result.get("iso_3166_1"),
"note": release_date.get("note"),
"type": release_date.get("type"),
}
for result in info.get("release_dates", {}).get("results", [])
for release_date in result.get("release_dates", [])
if release_date.get("release_date")
]
else:
# 电视剧
self.title = info.get('name')
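To illustrate the release_dates flattening added in the movie branch above, a sketch with a trimmed-down TMDB-style payload (values made up): each country entry contributes one flat record per release that actually has a date.

info = {"release_dates": {"results": [
    {"iso_3166_1": "US", "release_dates": [
        {"release_date": "2024-03-01T00:00:00.000Z", "note": "", "type": 3},
        {"release_date": "", "note": "premiere", "type": 1},   # skipped: no date
    ]},
]}}
release_dates = [
    {"date": rd.get("release_date"), "iso_code": result.get("iso_3166_1"),
     "note": rd.get("note"), "type": rd.get("type")}
    for result in info.get("release_dates", {}).get("results", [])
    for rd in result.get("release_dates", [])
    if rd.get("release_date")
]
print(release_dates)
# [{'date': '2024-03-01T00:00:00.000Z', 'iso_code': 'US', 'note': '', 'type': 3}]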
@@ -474,7 +488,16 @@ class MediaInfo:
self.names = info.get('names') or []
# 剩余属性赋值
for key, value in info.items():
if hasattr(self, key) and not getattr(self, key):
if not value:
continue
if not hasattr(self, key):
continue
current_value = getattr(self, key)
if current_value:
continue
if current_value is None:
setattr(self, key, value)
elif type(current_value) is type(value):
setattr(self, key, value)
def set_douban_info(self, info: dict):
@@ -606,7 +629,16 @@ class MediaInfo:
self.production_countries = [{"id": country, "name": country} for country in info.get("countries") or []]
# 剩余属性赋值
for key, value in info.items():
if not value:
continue
if not hasattr(self, key):
continue
current_value = getattr(self, key)
if current_value:
continue
if current_value is None:
setattr(self, key, value)
elif type(current_value) is type(value):
setattr(self, key, value)
def set_bangumi_info(self, info: dict):
@@ -796,6 +828,8 @@ class Context:
media_info: MediaInfo = None
# 种子信息
torrent_info: TorrentInfo = None
# 媒体识别失败次数
media_recognize_fail_count: int = 0
def to_dict(self):
"""
@@ -804,5 +838,6 @@ class Context:
return {
"meta_info": self.meta_info.to_dict() if self.meta_info else None,
"torrent_info": self.torrent_info.to_dict() if self.torrent_info else None,
"media_info": self.media_info.to_dict() if self.media_info else None
"media_info": self.media_info.to_dict() if self.media_info else None,
"media_recognize_fail_count": self.media_recognize_fail_count
}

View File

@@ -1,4 +1,4 @@
import copy
import asyncio
import importlib
import inspect
import random
@@ -7,7 +7,9 @@ import time
import traceback
import uuid
from queue import Empty, PriorityQueue
from typing import Callable, Dict, List, Optional, Union
from typing import Callable, Dict, List, Optional, Tuple, Union, Any
from fastapi.concurrency import run_in_threadpool
from app.helper.thread import ThreadHelper
from app.log import logger
@@ -69,18 +71,27 @@ class EventManager(metaclass=Singleton):
EventManager 负责管理和调度广播事件和链式事件,包括订阅、发送和处理事件
"""
# 退出事件
__event = threading.Event()
def __init__(self):
self.__executor = ThreadHelper() # 动态线程池,用于消费事件
self.__consumer_threads = [] # 用于保存启动的事件消费者线程
self.__event_queue = PriorityQueue() # 优先级队列
self.__broadcast_subscribers: Dict[EventType, Dict[str, Callable]] = {} # 广播事件的订阅者
self.__chain_subscribers: Dict[ChainEventType, Dict[str, tuple[int, Callable]]] = {} # 链式事件的订阅者
self.__disabled_handlers = set() # 禁用的事件处理器集合
self.__disabled_classes = set() # 禁用的事件处理器类集合
self.__lock = threading.Lock() # 线程锁
# 动态线程池,用于消费事件
self.__executor = ThreadHelper()
# 用于保存启动的事件消费者线程
self.__consumer_threads = []
# 优先级队列
self.__event_queue = PriorityQueue()
# 广播事件的订阅者
self.__broadcast_subscribers: Dict[EventType, Dict[str, Callable]] = {}
# 链式事件的订阅者
self.__chain_subscribers: Dict[ChainEventType, Dict[str, tuple[int, Callable]]] = {}
# 禁用的事件处理器集合
self.__disabled_handlers = set()
# 禁用的事件处理器类集合
self.__disabled_classes = set()
# 线程锁
self.__lock = threading.Lock()
# 退出事件
self.__event = threading.Event()
# 当前事件循环
self.loop = asyncio.get_event_loop()
def start(self):
"""
@@ -144,6 +155,25 @@ class EventManager(metaclass=Singleton):
logger.error(f"Unknown event type: {etype}")
return None
async def async_send_event(self, etype: Union[EventType, ChainEventType],
data: Optional[Union[Dict, ChainEventData]] = None,
priority: Optional[int] = DEFAULT_EVENT_PRIORITY) -> Optional[Event]:
"""
异步发送事件,根据事件类型决定是广播事件还是链式事件
:param etype: 事件类型 (EventType 或 ChainEventType)
:param data: 可选,事件数据
:param priority: 广播事件的优先级,默认为 10
:return: 如果是链式事件,返回处理后的事件数据;否则返回 None
"""
event = Event(etype, data, priority)
if isinstance(etype, EventType):
return self.__trigger_broadcast_event(event)
elif isinstance(etype, ChainEventType):
return await self.__trigger_chain_event_async(event)
else:
logger.error(f"Unknown event type: {etype}")
return None
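A hedged usage sketch for the new coroutine above: broadcast events are still queued as before, while chain events are awaited so their handlers run in priority order without blocking the event loop. The event type and payload below are assumptions for illustration.

from app.core.event import eventmanager
from app.schemas.types import ChainEventType

async def example():
    event = await eventmanager.async_send_event(
        ChainEventType.StorageOperSelection,   # assumed chain event type, for illustration
        data={"storage": "local"},             # placeholder payload
    )
    if event:
        print(event.event_data)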
def add_event_listener(self, event_type: Union[EventType, ChainEventType], handler: Callable,
priority: Optional[int] = DEFAULT_EVENT_PRIORITY):
"""
@@ -327,6 +357,14 @@ class EventManager(metaclass=Singleton):
dispatch = self.__dispatch_chain_event(event)
return event if dispatch else None
async def __trigger_chain_event_async(self, event: Event) -> Optional[Event]:
"""
异步触发链式事件,按顺序调用订阅的处理器,并记录处理耗时
"""
logger.debug(f"Triggering asynchronous chain event: {event}")
dispatch = await self.__dispatch_chain_event_async(event)
return event if dispatch else None
def __trigger_broadcast_event(self, event: Event):
"""
触发广播事件,将事件插入到优先级队列中
@@ -364,6 +402,35 @@ class EventManager(metaclass=Singleton):
self.__log_event_lifecycle(event, "Completed")
return True
async def __dispatch_chain_event_async(self, event: Event) -> bool:
"""
异步方式调度链式事件,按优先级顺序逐个调用事件处理器,并记录每个处理器的处理时间
:param event: 要调度的事件对象
"""
handlers = self.__chain_subscribers.get(event.event_type, {})
if not handlers:
logger.debug(f"No handlers found for chain event: {event}")
return False
# 过滤出启用的处理器
enabled_handlers = {handler_id: (priority, handler) for handler_id, (priority, handler) in handlers.items()
if self.__is_handler_enabled(handler)}
if not enabled_handlers:
logger.debug(f"No enabled handlers found for chain event: {event}. Skipping execution.")
return False
self.__log_event_lifecycle(event, "Started")
for handler_id, (priority, handler) in enabled_handlers.items():
start_time = time.time()
await self.__safe_invoke_handler_async(handler, event)
logger.debug(
f"{self.__get_handler_identifier(handler)} (Priority: {priority}), "
f"completed in {time.time() - start_time:.3f}s for event: {event}"
)
self.__log_event_lifecycle(event, "Completed")
return True
def __dispatch_broadcast_event(self, event: Event):
"""
异步方式调度广播事件,通过线程池逐个调用事件处理器
@@ -373,8 +440,25 @@ class EventManager(metaclass=Singleton):
if not handlers:
logger.debug(f"No handlers found for broadcast event: {event}")
return
# 为每个处理器提供独立的事件实例,防止某个处理器对 event_data 的修改影响其他处理器
for handler_id, handler in handlers.items():
self.__executor.submit(self.__safe_invoke_handler, handler, event)
# 仅浅拷贝顶层字典,避免不必要的深拷贝开销;这样可以隔离键级别的替换/赋值
if isinstance(event.event_data, dict):
event_data_copy = event.event_data.copy()
else:
event_data_copy = event.event_data
isolated_event = Event(event_type=event.event_type,
event_data=event_data_copy,
priority=event.priority)
if inspect.iscoroutinefunction(handler):
# 对于异步函数,直接在事件循环中运行
asyncio.run_coroutine_threadsafe(
self.__safe_invoke_handler_async(handler, isolated_event),
self.loop
)
else:
# 对于同步函数,在线程池中运行
self.__executor.submit(self.__safe_invoke_handler, handler, isolated_event)
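A small sketch of what the shallow copy above does and does not isolate (the payload is hypothetical): replacing a top-level key in one handler's copy stays local, but mutating a shared nested object is still visible to every handler.

original = {"title": "Demo", "nested": {"count": 1}}
handler_copy = original.copy()        # what each handler receives
handler_copy["title"] = "Changed"     # key-level replacement: isolated
handler_copy["nested"]["count"] = 2   # nested mutation: still shared

print(original["title"])              # Demo
print(original["nested"]["count"])    # 2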
def __safe_invoke_handler(self, handler: Callable, event: Event):
"""
@@ -386,48 +470,162 @@ class EventManager(metaclass=Singleton):
logger.debug(f"Handler {self.__get_handler_identifier(handler)} is disabled. Skipping execution")
return
# 根据事件类型判断是否需要深复制
is_broadcast_event = isinstance(event.event_type, EventType)
event_to_process = copy.deepcopy(event) if is_broadcast_event else event
self.__invoke_handler_by_type_sync(handler, event)
async def __safe_invoke_handler_async(self, handler: Callable, event: Event):
"""
异步调用处理器,处理链式事件
:param handler: 处理器
:param event: 事件对象
"""
if not self.__is_handler_enabled(handler):
logger.debug(f"Handler {self.__get_handler_identifier(handler)} is disabled. Skipping execution")
return
await self.__invoke_handler_by_type_async(handler, event)
def __invoke_handler_by_type_sync(self, handler: Callable, event: Event):
"""
同步方式根据处理器类型调用相应的方法
:param handler: 处理器
:param event: 要处理的事件对象
"""
class_name, method_name = self.__parse_handler_names(handler)
from app.core.plugin import PluginManager
from app.core.module import ModuleManager
plugin_manager = PluginManager()
module_manager = ModuleManager()
if class_name in plugin_manager.get_plugin_ids():
# 插件处理器
plugin = plugin_manager.running_plugins.get(class_name)
if not plugin:
return
method = getattr(plugin, method_name, None)
if not method:
return
try:
method(event)
except Exception as e:
self.__handle_event_error(event=event, module_name=plugin.name,
class_name=class_name, method_name=method_name, e=e)
elif class_name in module_manager.get_module_ids():
# 模块处理器
module = module_manager.get_running_module(class_name)
if not module:
return
method = getattr(module, method_name, None)
if not method:
return
try:
method(event)
except Exception as e:
self.__handle_event_error(event=event, module_name=module.get_name(),
class_name=class_name, method_name=method_name, e=e)
else:
# 全局处理器
class_obj = self.__get_class_instance(class_name)
if not class_obj or not hasattr(class_obj, method_name):
return
method = getattr(class_obj, method_name, None)
if not method:
return
try:
method(event)
except Exception as e:
self.__handle_event_error(event=event, module_name=class_name,
class_name=class_name, method_name=method_name, e=e)
async def __invoke_handler_by_type_async(self, handler: Callable, event: Event):
"""
异步方式根据处理器类型调用相应的方法
:param handler: 处理器
:param event: 要处理的事件对象
"""
class_name, method_name = self.__parse_handler_names(handler)
from app.core.plugin import PluginManager
from app.core.module import ModuleManager
plugin_manager = PluginManager()
module_manager = ModuleManager()
if class_name in plugin_manager.get_plugin_ids():
await self.__invoke_plugin_method_async(plugin_manager, class_name, method_name, event)
elif class_name in module_manager.get_module_ids():
await self.__invoke_module_method_async(module_manager, class_name, method_name, event)
else:
await self.__invoke_global_method_async(class_name, method_name, event)
@staticmethod
def __parse_handler_names(handler: Callable) -> Tuple[str, str]:
"""
解析处理器的类名和方法名
:param handler: 处理器
:return: (class_name, method_name)
"""
names = handler.__qualname__.split(".")
class_name, method_name = names[0], names[1]
return class_name, method_name
async def __invoke_plugin_method_async(self, handler: Any, class_name: str, method_name: str, event: Event):
"""
异步调用插件方法
"""
plugin = handler.running_plugins.get(class_name)
if not plugin:
return
method = getattr(plugin, method_name, None)
if not method:
return
try:
from app.core.plugin import PluginManager
from app.core.module import ModuleManager
if class_name in PluginManager().get_plugin_ids():
def plugin_callable():
"""
插件调用函数
"""
PluginManager().run_plugin_method(class_name, method_name, event_to_process)
if is_broadcast_event:
self.__executor.submit(plugin_callable)
else:
plugin_callable()
elif class_name in ModuleManager().get_module_ids():
module = ModuleManager().get_running_module(class_name)
if module:
method = getattr(module, method_name, None)
if method:
if is_broadcast_event:
self.__executor.submit(method, event_to_process)
else:
method(event_to_process)
if inspect.iscoroutinefunction(method):
await method(event)
else:
# 获取全局对象或模块类的实例
class_obj = self.__get_class_instance(class_name)
if class_obj and hasattr(class_obj, method_name):
method = getattr(class_obj, method_name)
if is_broadcast_event:
self.__executor.submit(method, event_to_process)
else:
method(event_to_process)
# 插件同步函数在异步环境中运行,避免阻塞
await run_in_threadpool(method, event)
except Exception as e:
self.__handle_event_error(event, handler, e)
self.__handle_event_error(event=event, module_name=plugin.name,
class_name=class_name, method_name=method_name, e=e)
async def __invoke_module_method_async(self, handler: Any, class_name: str, method_name: str, event: Event):
"""
异步调用模块方法
"""
module = handler.get_running_module(class_name)
if not module:
return
method = getattr(module, method_name, None)
if not method:
return
try:
if inspect.iscoroutinefunction(method):
await method(event)
else:
method(event)
except Exception as e:
self.__handle_event_error(event=event, module_name=module.get_name(),
class_name=class_name, method_name=method_name, e=e)
async def __invoke_global_method_async(self, class_name: str, method_name: str, event: Event):
"""
异步调用全局对象方法
"""
class_obj = self.__get_class_instance(class_name)
if not class_obj:
return
method = getattr(class_obj, method_name, None)
if not method:
return
try:
if inspect.iscoroutinefunction(method):
await method(event)
else:
method(event)
except Exception as e:
self.__handle_event_error(event=event, module_name=class_name,
class_name=class_name, method_name=method_name, e=e)
@staticmethod
def __get_class_instance(class_name: str):
@@ -454,7 +652,11 @@ class EventManager(metaclass=Singleton):
module_name = f"app.chain.{class_name[:-5].lower()}"
module = importlib.import_module(module_name)
elif class_name.endswith("Helper"):
module_name = f"app.helper.{class_name[:-6].lower()}"
# 特殊处理 Async 类
if class_name.startswith("Async"):
module_name = f"app.helper.{class_name[5:-6].lower()}"
else:
module_name = f"app.helper.{class_name[:-6].lower()}"
module = importlib.import_module(module_name)
else:
module_name = f"app.{class_name.lower()}"
@@ -494,18 +696,16 @@ class EventManager(metaclass=Singleton):
"""
logger.debug(f"{stage} - {event}")
def __handle_event_error(self, event: Event, handler: Callable, e: Exception):
def __handle_event_error(self, event: Event, module_name: str,
class_name: str, method_name: str, e: Exception):
"""
全局错误处理器,用于处理事件处理中的异常
"""
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
names = handler.__qualname__.split(".")
class_name, method_name = names[0], names[1]
logger.error(f"{module_name} 事件处理出错:{str(e)} - {traceback.format_exc()}")
# 发送系统错误通知
from app.helper.message import MessageHelper
MessageHelper().put(title=f"{event.event_type} 事件处理出错",
MessageHelper().put(title=f"{module_name} 处理事件 {event.event_type} 出错",
message=f"{class_name}.{method_name}{str(e)}",
role="system")
self.send_event(

View File

@@ -9,8 +9,6 @@ class CustomizationMatcher(metaclass=Singleton):
"""
识别自定义占位符
"""
customization = None
custom_separator = None
def __init__(self):
self.systemconfig = SystemConfigOper()

View File

@@ -94,7 +94,6 @@ class MetaVideo(MetaBase):
title = re.sub(r'\d{4}[\s._-]\d{1,2}[\s._-]\d{1,2}', "", title)
# 拆分tokens
tokens = Tokens(title)
self.tokens = tokens
# 实例化StreamingPlatforms对象
streaming_platforms = StreamingPlatforms()
# 解析名称、年份、季、集、资源类型、分辨率等
@@ -102,7 +101,7 @@ class MetaVideo(MetaBase):
while token:
self._index += 1 # 更新当前处理的token索引
# Part
self.__init_part(token)
self.__init_part(token, tokens)
# 标题
if self._continue_flag:
self.__init_name(token)
@@ -123,7 +122,7 @@ class MetaVideo(MetaBase):
self.__init_resource_type(token)
# 流媒体平台
if self._continue_flag:
self.__init_web_source(token, streaming_platforms)
self.__init_web_source(token, tokens, streaming_platforms)
# 视频编码
if self._continue_flag:
self.__init_video_encode(token)
@@ -200,7 +199,7 @@ class MetaVideo(MetaBase):
name = re.sub(r'%s' % self._name_nostring_re, '', name,
flags=re.IGNORECASE).strip()
name = re.sub(r'\s+', ' ', name)
if name.isdigit() \
if name.isdecimal() \
and int(name) < 1800 \
and not self.year \
and not self.begin_season \
@@ -311,7 +310,7 @@ class MetaVideo(MetaBase):
self.en_name = token
self._last_token_type = "enname"
def __init_part(self, token: str):
def __init_part(self, token: str, tokens: Tokens):
"""
识别Part
"""
@@ -327,12 +326,12 @@ class MetaVideo(MetaBase):
if re_res:
if not self.part:
self.part = re_res.group(1)
nextv = self.tokens.cur()
nextv = tokens.cur()
if nextv \
and ((nextv.isdigit() and (len(nextv) == 1 or len(nextv) == 2 and nextv.startswith('0')))
or nextv.upper() in ['A', 'B', 'C', 'I', 'II', 'III']):
self.part = "%s%s" % (self.part, nextv)
self.tokens.get_next()
tokens.get_next()
self._last_token_type = "part"
self._continue_flag = False
# self._stop_name_flag = False
@@ -582,7 +581,7 @@ class MetaVideo(MetaBase):
self._effect.append(effect)
self._last_token = effect.upper()
def __init_web_source(self, token: str, streaming_platforms: StreamingPlatforms):
def __init_web_source(self, token: str, tokens: Tokens, streaming_platforms: StreamingPlatforms):
"""
识别流媒体平台
"""
@@ -594,10 +593,10 @@ class MetaVideo(MetaBase):
prev_token = None
prev_idx = self._index - 2
if 0 <= prev_idx < len(self.tokens.tokens):
prev_token = self.tokens.tokens[prev_idx]
if 0 <= prev_idx < len(tokens.tokens):
prev_token = tokens.tokens[prev_idx]
next_token = self.tokens.peek()
next_token = tokens.peek()
if streaming_platforms.is_streaming_platform(token):
platform_name = streaming_platforms.get_streaming_platform_name(token)
@@ -616,7 +615,7 @@ class MetaVideo(MetaBase):
platform_name = streaming_platforms.get_streaming_platform_name(combined_token)
query_range = 2
if is_next:
self.tokens.get_next()
tokens.get_next()
break
if not platform_name:
@@ -626,8 +625,8 @@ class MetaVideo(MetaBase):
match_start_idx = self._index - query_range
match_end_idx = self._index - 1
start_index = max(0, match_start_idx - query_range)
end_index = min(len(self.tokens.tokens), match_end_idx + 1 + query_range)
tokens_to_check = self.tokens.tokens[start_index:end_index]
end_index = min(len(tokens.tokens), match_end_idx + 1 + query_range)
tokens_to_check = tokens.tokens[start_index:end_index]
if any(tok and tok.upper() in web_tokens for tok in tokens_to_check):
self.web_source = platform_name

View File

@@ -9,7 +9,6 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"""
识别制作组、字幕组
"""
__release_groups: str = None
# 内置组
RELEASE_GROUPS: dict = {
"0ff": ['FF(?:(?:A|WE)B|CD|E(?:DU|B)|TV)'],
@@ -48,7 +47,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"joyhd": [],
"keepfrds": ['FRDS', 'Yumi', 'cXcY'],
"lemonhd": ['L(?:eague(?:(?:C|H)D|(?:M|T)V|NF|WEB)|HD)', 'i18n', 'CiNT'],
"mteam": ['MTeam(?:TV|)', 'MPAD'],
"mteam": ['MTeam(?:TV|)', 'MPAD', 'MWeb'],
"nanyangpt": [],
"nicept": [],
"oshen": [],
@@ -70,7 +69,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"U2": [],
"ultrahd": [],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:yG|)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )',],
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )'],
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', 'SweetSub', 'MingY',
'(?:Lilith|NC)-Raws', '织梦字幕组', '枫叶字幕组', '猎户手抄部', '喵萌奶茶屋', '漫猫字幕社',
'霜庭云花Sub', '北宇治字幕组', '氢气烤肉架', '云歌字幕组', '萌樱字幕组', '极影字幕社',
@@ -106,10 +105,11 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
else:
groups = self.__release_groups
title = f"{title} "
groups_re = re.compile(r"(?<=[-@\[£【&])(?:%s)(?=[@.\s\S\]\[】&])" % groups, re.I)
# 处理一个制作组识别多次的情况,保留顺序
groups_re = re.compile(r"(?<=[-@\[£【&])(?:(?:%s))(?=[@.\s\S\]\[】&])" % groups, re.I)
unique_groups = []
for item in re.findall(groups_re, title):
if item not in unique_groups:
unique_groups.append(item)
item_str = item[0] if isinstance(item, tuple) else item
if item_str not in unique_groups:
unique_groups.append(item_str)
return "@".join(unique_groups)

View File

@@ -312,4 +312,3 @@ class StreamingPlatforms(metaclass=Singleton):
if name is None:
return False
return name.upper() in self._lookup_cache

View File

@@ -16,14 +16,14 @@ class ModuleManager(metaclass=Singleton):
模块管理器
"""
# 模块列表
_modules: dict = {}
# 运行态模块列表
_running_modules: dict = {}
# 子模块类型集合
SubType = Union[DownloaderType, MediaServerType, MessageChannel, StorageSchema, OtherModulesType]
def __init__(self):
# 模块列表
self._modules: dict = {}
# 运行态模块列表
self._running_modules: dict = {}
self.load_modules()
def load_modules(self):
@@ -48,7 +48,7 @@ class ModuleManager(metaclass=Singleton):
# 通过模板开关控制加载
_module.init_module()
self._running_modules[module_id] = _module
logger.info(f"Moudle Loaded{module_id}")
logger.debug(f"Moudle Loaded{module_id}")
except Exception as err:
logger.error(f"Load Moudle Error{module_id}{str(err)} - {traceback.format_exc()}", exc_info=True)
@@ -61,7 +61,7 @@ class ModuleManager(metaclass=Singleton):
if hasattr(module, "stop"):
try:
module.stop()
logger.info(f"Moudle Stoped{module_id}")
logger.debug(f"Moudle Stoped{module_id}")
except Exception as err:
logger.error(f"Stop Moudle Error{module_id}{str(err)} - {traceback.format_exc()}", exc_info=True)
logger.info("所有模块停止完成")

View File

@@ -1,3 +1,5 @@
import ast
import asyncio
import concurrent
import concurrent.futures
import importlib.util
@@ -8,96 +10,46 @@ import time
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
import threading
from typing import Any, Dict, List, Optional, Type, Union, Callable, Tuple
from fastapi import HTTPException
from starlette import status
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer
from watchfiles import watch
from app import schemas
from app.core.cache import fresh, async_fresh
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.db.plugindata_oper import PluginDataOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.plugin import PluginHelper
from app.helper.sites import SitesHelper
from app.helper.sites import SitesHelper # noqa
from app.log import logger
from app.schemas.types import EventType, SystemConfigKey
from app.utils.crypto import RSAUtils
from app.utils.limit import rate_limit_window
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class PluginMonitorHandler(FileSystemEventHandler):
def on_modified(self, event):
"""
插件文件修改后重载
"""
if event.is_directory:
return
# 使用 pathlib 处理文件路径,跳过非 .py 文件以及 pycache 目录中的文件
event_path = Path(event.src_path)
if not event_path.name.endswith(".py") or "pycache" in event_path.parts:
return
# 读取插件根目录下的__init__.py文件读取class XXXX(_PluginBase)的类名
try:
plugins_root = settings.ROOT_PATH / "app" / "plugins"
# 确保修改的文件在 plugins 目录下
if plugins_root not in event_path.parents:
return
# 获取插件目录路径没有找到__init__.py时说明不是有效包跳过插件重载
# 插件重载目前没有支持app/plugins/plugin/package/__init__.py的场景这里也不做支持
plugin_dir = event_path.parent
init_file = plugin_dir / "__init__.py"
if not init_file.exists():
logger.debug(f"{plugin_dir} 下没有找到 __init__.py跳过插件重载")
return
with open(init_file, "r", encoding="utf-8") as f:
lines = f.readlines()
pid = None
for line in lines:
if line.startswith("class") and "(_PluginBase)" in line:
pid = line.split("class ")[1].split("(_PluginBase)")[0].strip()
if pid:
self.__reload_plugin(pid)
except Exception as e:
logger.error(f"插件文件修改后重载出错:{str(e)}")
@staticmethod
@rate_limit_window(max_calls=1, window_seconds=2, source="PluginMonitor", enable_logging=False)
def __reload_plugin(pid):
"""
重新加载插件
"""
try:
logger.info(f"插件 {pid} 文件修改,重新加载...")
PluginManager().reload_plugin(pid)
except Exception as e:
logger.error(f"插件文件修改后重载出错:{str(e)}")
class PluginManager(metaclass=Singleton):
"""
插件管理器
"""
# 插件列表
_plugins: dict = {}
# 运行态插件列表
_running_plugins: dict = {}
# 配置Key
_config_key: str = "plugin.%s"
# 监听器
_observer: Observer = None
def __init__(self):
# 插件列表
self._plugins: dict = {}
# 运行态插件列表
self._running_plugins: dict = {}
# 配置Key
self._config_key: str = "plugin.%s"
# 监控线程
self._monitor_thread: Optional[threading.Thread] = None
# 监控停止事件
self._stop_monitor_event = threading.Event()
# 开发者模式监测插件修改
if settings.DEV or settings.PLUGIN_AUTO_RELOAD:
self.__start_monitor()
@@ -264,7 +216,6 @@ class PluginManager(metaclass=Singleton):
# 导入模块
module = importlib.import_module(module_name)
importlib.reload(module)
# 检查模块中的类
for name, obj in module.__dict__.items():
@@ -318,10 +269,9 @@ class PluginManager(metaclass=Singleton):
重新加载插件文件修改监测
"""
if settings.DEV or settings.PLUGIN_AUTO_RELOAD:
if self._observer and self._observer.is_alive():
logger.info("插件文件修改监测已经在运行中...")
else:
self.__start_monitor()
# 先关闭已有监测,再重新启动
self.stop_monitor()
self.__start_monitor()
else:
self.stop_monitor()
@@ -329,25 +279,123 @@ class PluginManager(metaclass=Singleton):
"""
启用监测插件文件修改监测
"""
if self._monitor_thread and self._monitor_thread.is_alive():
logger.info("插件文件修改监测已经在运行中...")
return
logger.info("开始监测插件文件修改...")
monitor_handler = PluginMonitorHandler()
self._observer = Observer()
self._observer.schedule(monitor_handler, str(settings.ROOT_PATH / "app" / "plugins"), recursive=True)
self._observer.start()
# 在启动新线程之前,确保停止事件是清除状态
self._stop_monitor_event.clear()
# 创建并启动监控线程
self._monitor_thread = threading.Thread(
target=self._run_file_watcher,
daemon=True
)
self._monitor_thread.start()
def stop_monitor(self):
"""
停止监测插件文件修改监测
"""
# 停止监测
if self._observer and self._observer.is_alive():
if self._monitor_thread and self._monitor_thread.is_alive():
logger.info("正在停止插件文件修改监测...")
self._observer.stop()
self._observer.join()
self._stop_monitor_event.set()
self._monitor_thread.join(timeout=5)
if self._monitor_thread.is_alive():
logger.warning("插件文件修改监测线程在5秒内未能正常停止。")
self._monitor_thread = None
logger.info("插件文件修改监测停止完成")
else:
logger.info("未启用插件文件修改监测,无需停止")
def _run_file_watcher(self):
"""
运行 watchfiles 监视器的主循环。
"""
# 监视插件目录
plugins_path = str(settings.ROOT_PATH / "app" / "plugins")
logger.info(">>> 监控线程已启动准备进入watch循环...")
# 使用 watchfiles 监视目录变化,并响应变化事件
# Todo: yield_on_timeout = True 时,每秒检查停止事件,会返回空集合;后续可以考虑用来做心跳之类的功能?
for changes in watch(plugins_path, stop_event=self._stop_monitor_event, rust_timeout=1000,
yield_on_timeout=True):
# 如果收到停止事件,退出循环
if not changes:
continue
# 处理变化事件
plugins_to_reload = set()
for _change_type, path_str in changes:
event_path = Path(path_str)
# 跳过非 .py 文件以及 pycache 目录中的文件
if not event_path.name.endswith(".py") or "__pycache__" in event_path.parts:
continue
# 解析插件ID
pid = self._get_plugin_id_from_path(event_path)
# 跳过无效插件文件
if pid:
# 收集需要重载的插件ID,自动去重,避免重复重载
plugins_to_reload.add(pid)
# 触发重载
if plugins_to_reload:
logger.info(f"检测到插件文件变化,准备重载: {list(plugins_to_reload)}")
for pid in plugins_to_reload:
try:
self.reload_plugin(pid)
except Exception as e:
logger.error(f"插件 {pid} 热重载失败: {e}", exc_info=True)
@staticmethod
def _get_plugin_id_from_path(event_path: Path) -> Optional[str]:
"""
根据文件路径解析出插件的ID。
:param event_path: 被修改文件的 Path 对象。
:return: 插件ID字符串,如果不是有效插件文件则返回 None。
"""
try:
plugins_root = settings.ROOT_PATH / "app" / "plugins"
# 确保修改的文件在 plugins 目录下
if not event_path.is_relative_to(plugins_root):
return None
try:
plugin_dir_name = event_path.relative_to(plugins_root).parts[0]
plugin_dir = plugins_root / plugin_dir_name
except (ValueError, IndexError):
return None
init_file = plugin_dir / "__init__.py"
if not init_file.exists():
return None
# 读取 __init__.py 文件,查找插件主类名
with open(init_file, "r", encoding="utf-8") as f:
source_code = f.read()
tree = ast.parse(source_code)
# 遍历AST查找继承自 _PluginBase 的类
for node in ast.walk(tree):
# 检查节点是否为类定义
if isinstance(node, ast.ClassDef):
# 遍历该类的所有基类
for base in node.bases:
# 检查基类是否是我们寻找的 _PluginBase
# ast.Name 用于处理简单的基类名
if isinstance(base, ast.Name) and base.id == '_PluginBase':
# 返回这个类的名字
return node.name
return None
except Exception as e:
logger.error(f"从路径解析插件ID时出错: {e}")
return None
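
The AST walk is more robust than the old line-by-line string match: it ignores comments, decorators and multi-line class headers. A self-contained demonstration of the same lookup; the sample source is invented:

import ast
from typing import Optional

def find_subclass_of(source_code: str, base_name: str) -> Optional[str]:
    """Return the first class in source_code whose bases include base_name, else None."""
    tree = ast.parse(source_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            for base in node.bases:
                # Handles `class Foo(_PluginBase):`; dotted bases (pkg._PluginBase) would need ast.Attribute
                if isinstance(base, ast.Name) and base.id == base_name:
                    return node.name
    return None

sample = """
class Helper:
    pass

class MyPlugin(_PluginBase):
    plugin_name = "demo"
"""
print(find_subclass_of(sample, "_PluginBase"))  # MyPlugin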
@staticmethod
def __stop_plugin(plugin: Any):
"""
@@ -410,6 +458,10 @@ class PluginManager(metaclass=Singleton):
except KeyError:
# 模块可能已经被删除
pass
importlib.invalidate_caches()
logger.debug("已清除查找器的缓存")
if plugin_id:
if modules_to_remove:
logger.info(f"插件 {plugin_id} 共清除 {len(modules_to_remove)} 个模块缓存:{modules_to_remove}")
@@ -693,6 +745,36 @@ class PluginManager(metaclass=Singleton):
logger.error(f"获取插件 {plugin_id} 动作出错:{str(e)}")
return ret_actions
def get_plugin_agent_tools(self, pid: Optional[str] = None) -> List[Dict[str, Any]]:
"""
获取插件智能体工具
[{
"plugin_id": "插件ID",
"plugin_name": "插件名称",
"tools": [ToolClass1, ToolClass2, ...]
}]
"""
ret_tools = []
# 创建字典快照避免并发修改
running_plugins_snapshot = dict(self._running_plugins)
for plugin_id, plugin in running_plugins_snapshot.items():
if pid and pid != plugin_id:
continue
if hasattr(plugin, "get_agent_tools") and ObjectUtils.check_method(plugin.get_agent_tools):
try:
if not plugin.get_state():
continue
tools = plugin.get_agent_tools()
if tools:
ret_tools.append({
"plugin_id": plugin_id,
"plugin_name": plugin.plugin_name,
"tools": tools
})
except Exception as e:
logger.error(f"获取插件 {plugin_id} 智能体工具出错:{str(e)}")
return ret_tools
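
On the plugin side, `get_agent_tools()` is expected to return tool classes, per the docstring above, and the manager only collects them when `get_state()` reports the plugin as enabled. A hypothetical plugin showing that contract (class and tool names are made up):

class ExampleTool:
    """Placeholder for an agent tool class exported by a plugin."""
    name = "example_tool"

class ExamplePlugin:
    plugin_name = "Example"

    def get_state(self) -> bool:
        # Only enabled plugins contribute tools
        return True

    def get_agent_tools(self) -> list:
        # Return tool *classes*; the agent framework presumably instantiates them as needed
        return [ExampleTool]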
@staticmethod
def get_plugin_remote_entry(plugin_id: str, dist_path: str) -> str:
"""
@@ -832,6 +914,25 @@ class PluginManager(metaclass=Singleton):
return None
return getattr(plugin, method)(*args, **kwargs)
async def async_run_plugin_method(self, pid: str, method: str, *args, **kwargs) -> Any:
"""
异步运行插件方法
:param pid: 插件ID
:param method: 方法名
:param args: 参数
:param kwargs: 关键字参数
"""
plugin = self._running_plugins.get(pid)
if not plugin:
return None
if not hasattr(plugin, method):
return None
method_func = getattr(plugin, method)
if asyncio.iscoroutinefunction(method_func):
return await method_func(*args, **kwargs)
else:
return method_func(*args, **kwargs)
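
Because the dispatcher checks `asyncio.iscoroutinefunction`, callers can invoke either sync or async plugin methods through one awaitable entry point. A self-contained sketch of that dispatch, using dummy functions instead of plugin methods:

import asyncio

async def call_any(func, *args, **kwargs):
    """Await coroutine functions, call plain functions directly, same dispatch as above."""
    if asyncio.iscoroutinefunction(func):
        return await func(*args, **kwargs)
    return func(*args, **kwargs)

def sync_method():
    return "sync result"

async def async_method():
    return "async result"

async def main():
    print(await call_any(sync_method))   # sync result
    print(await call_any(async_method))  # async result

asyncio.run(main())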
def get_plugin_ids(self) -> List[str]:
"""
获取所有插件ID
@@ -851,8 +952,6 @@ class PluginManager(metaclass=Singleton):
if not settings.PLUGIN_MARKET:
return []
# 返回值
all_plugins = []
# 用于存储高于 v1 版本的插件(如 v2, v3 等)
higher_version_plugins = []
# 用于存储 v1 版本插件
@@ -885,25 +984,7 @@ class PluginManager(metaclass=Singleton):
else:
base_version_plugins.extend(plugins) # 收集 v1 版本插件
# 优先处理高版本插件
all_plugins.extend(higher_version_plugins)
# 将未出现在高版本插件列表中的 v1 插件加入 all_plugins
higher_plugin_ids = {f"{p.id}{p.plugin_version}" for p in higher_version_plugins}
all_plugins.extend([p for p in base_version_plugins if f"{p.id}{p.plugin_version}" not in higher_plugin_ids])
# 去重
all_plugins = list({f"{p.id}{p.plugin_version}": p for p in all_plugins}.values())
# 所有插件按 repo 在设置中的顺序排序
all_plugins.sort(
key=lambda x: settings.PLUGIN_MARKET.split(",").index(x.repo_url) if x.repo_url else 0
)
# 相同 ID 的插件保留版本号最大的版本
max_versions = {}
for p in all_plugins:
if p.id not in max_versions or StringUtils.compare_version(p.plugin_version, ">", max_versions[p.id]):
max_versions[p.id] = p.plugin_version
result = [p for p in all_plugins if p.plugin_version == max_versions[p.id]]
logger.info(f"共获取到 {len(result)} 个线上插件")
return result
return self._process_plugins_list(higher_version_plugins, base_version_plugins)
def get_local_plugins(self) -> List[schemas.Plugin]:
"""
@@ -1025,7 +1106,8 @@ class PluginManager(metaclass=Singleton):
# 已安装插件
installed_apps = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# 获取在线插件
online_plugins = PluginHelper().get_plugins(market, package_version, force)
with fresh(force):
online_plugins = PluginHelper().get_plugins(market, package_version)
if online_plugins is None:
logger.warning(
f"获取{package_version if package_version else ''}插件库失败:{market},请检查 GitHub 网络连接")
@@ -1033,81 +1115,217 @@ class PluginManager(metaclass=Singleton):
ret_plugins = []
add_time = len(online_plugins)
for pid, plugin_info in online_plugins.items():
# 如 package_version 为空,则需要判断插件是否兼容当前版本
if not package_version:
if plugin_info.get(settings.VERSION_FLAG) is not True:
# 插件当前版本不兼容
continue
# 运行状插件
plugin_obj = self._running_plugins.get(pid)
# 非运行态插件
plugin_static = self._plugins.get(pid)
# 基本属性
plugin = schemas.Plugin()
# ID
plugin.id = pid
# 安装状态
if pid in installed_apps and plugin_static:
plugin.installed = True
else:
plugin.installed = False
# 是否有新版本
plugin.has_update = False
if plugin_static:
installed_version = getattr(plugin_static, "plugin_version")
if StringUtils.compare_version(installed_version, "<", plugin_info.get("version")):
# 需要更新
plugin.has_update = True
# 运行状态
if plugin_obj and hasattr(plugin_obj, "get_state"):
try:
state = plugin_obj.get_state()
except Exception as e:
logger.error(f"获取插件 {pid} 状态出错:{str(e)}")
state = False
plugin.state = state
else:
plugin.state = False
# 是否有详情页面
plugin.has_page = False
if plugin_obj and hasattr(plugin_obj, "get_page"):
if ObjectUtils.check_method(plugin_obj.get_page):
plugin.has_page = True
# 公钥
if plugin_info.get("key"):
plugin.plugin_public_key = plugin_info.get("key")
# 权限
if not self.__set_and_check_auth_level(plugin=plugin, source=plugin_info):
plugin = self._process_plugin_info(pid, plugin_info, market, installed_apps, add_time, package_version)
if plugin:
ret_plugins.append(plugin)
add_time -= 1
return ret_plugins
@staticmethod
def _process_plugins_list(higher_version_plugins: List[schemas.Plugin],
base_version_plugins: List[schemas.Plugin]) -> List[schemas.Plugin]:
"""
处理插件列表:合并、去重、排序、保留最高版本
:param higher_version_plugins: 高版本插件列表
:param base_version_plugins: 基础版本插件列表
:return: 处理后的插件列表
"""
# 优先处理高版本插件
all_plugins = []
all_plugins.extend(higher_version_plugins)
# 将未出现在高版本插件列表中的 v1 插件加入 all_plugins
higher_plugin_ids = {f"{p.id}{p.plugin_version}" for p in higher_version_plugins}
all_plugins.extend([p for p in base_version_plugins if f"{p.id}{p.plugin_version}" not in higher_plugin_ids])
# 去重
all_plugins = list({f"{p.id}{p.plugin_version}": p for p in all_plugins}.values())
# 所有插件按 repo 在设置中的顺序排序
all_plugins.sort(
key=lambda x: settings.PLUGIN_MARKET.split(",").index(x.repo_url) if x.repo_url else 0
)
# 相同 ID 的插件保留版本号最大的版本
max_versions = {}
for p in all_plugins:
if p.id not in max_versions or StringUtils.compare_version(p.plugin_version, ">", max_versions[p.id]):
max_versions[p.id] = p.plugin_version
result = [p for p in all_plugins if p.plugin_version == max_versions[p.id]]
logger.info(f"共获取到 {len(result)} 个线上插件")
return result
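
The helper merges both lists, de-duplicates on id plus version, orders by the repo's position in PLUGIN_MARKET, and finally keeps only the highest version per plugin ID. A compact rehearsal of that last step, using plain dicts and a naive version key in place of schemas.Plugin and StringUtils.compare_version (plugin IDs are invented):

def version_key(v: str) -> tuple:
    # Naive stand-in for StringUtils.compare_version
    return tuple(int(x) for x in v.lstrip("v").split("."))

all_plugins = [
    {"id": "DoubanSync", "version": "1.2"},
    {"id": "DoubanSync", "version": "1.4"},
    {"id": "CloudDrive", "version": "2.0"},
]

max_versions: dict = {}
for p in all_plugins:
    if p["id"] not in max_versions or version_key(p["version"]) > version_key(max_versions[p["id"]]):
        max_versions[p["id"]] = p["version"]

result = [p for p in all_plugins if p["version"] == max_versions[p["id"]]]
print(result)  # [{'id': 'DoubanSync', 'version': '1.4'}, {'id': 'CloudDrive', 'version': '2.0'}]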
def _process_plugin_info(self, pid: str, plugin_info: dict, market: str,
installed_apps: List[str], add_time: int,
package_version: Optional[str] = None) -> Optional[schemas.Plugin]:
"""
处理单个插件信息,创建 schemas.Plugin 对象
:param pid: 插件ID
:param plugin_info: 插件信息字典
:param market: 市场URL
:param installed_apps: 已安装插件列表
:param add_time: 添加顺序
:param package_version: 包版本
:return: 创建的插件对象如果验证失败返回None
"""
if not isinstance(plugin_info, dict):
return None
# 如 package_version 为空,则需要判断插件是否兼容当前版本
if not package_version:
if plugin_info.get(settings.VERSION_FLAG) is not True:
# 插件当前版本不兼容
return None
# 运行状插件
plugin_obj = self._running_plugins.get(pid)
# 非运行态插件
plugin_static = self._plugins.get(pid)
# 基本属性
plugin = schemas.Plugin()
# ID
plugin.id = pid
# 安装状态
if pid in installed_apps and plugin_static:
plugin.installed = True
else:
plugin.installed = False
# 是否有新版本
plugin.has_update = False
if plugin_static:
installed_version = getattr(plugin_static, "plugin_version")
if StringUtils.compare_version(installed_version, "<", plugin_info.get("version")):
# 需要更新
plugin.has_update = True
# 运行状态
if plugin_obj and hasattr(plugin_obj, "get_state"):
try:
state = plugin_obj.get_state()
except Exception as e:
logger.error(f"获取插件 {pid} 状态出错:{str(e)}")
state = False
plugin.state = state
else:
plugin.state = False
# 是否有详情页面
plugin.has_page = False
if plugin_obj and hasattr(plugin_obj, "get_page"):
if ObjectUtils.check_method(plugin_obj.get_page):
plugin.has_page = True
# 公钥
if plugin_info.get("key"):
plugin.plugin_public_key = plugin_info.get("key")
# 权限
if not self.__set_and_check_auth_level(plugin=plugin, source=plugin_info):
return None
# 名称
if plugin_info.get("name"):
plugin.plugin_name = plugin_info.get("name")
# 描述
if plugin_info.get("description"):
plugin.plugin_desc = plugin_info.get("description")
# 版本
if plugin_info.get("version"):
plugin.plugin_version = plugin_info.get("version")
# 图标
if plugin_info.get("icon"):
plugin.plugin_icon = plugin_info.get("icon")
# 标签
if plugin_info.get("labels"):
plugin.plugin_label = plugin_info.get("labels")
# 作者
if plugin_info.get("author"):
plugin.plugin_author = plugin_info.get("author")
# 更新历史
if plugin_info.get("history"):
plugin.history = plugin_info.get("history")
# 仓库链接
plugin.repo_url = market
# 本地标志
plugin.is_local = False
# 添加顺序
plugin.add_time = add_time
return plugin
async def async_get_online_plugins(self, force: bool = False) -> List[schemas.Plugin]:
"""
异步获取所有在线插件信息
:param force: 是否强制刷新(忽略缓存)
"""
if not settings.PLUGIN_MARKET:
return []
# 用于存储高于 v1 版本的插件(如 v2, v3 等)
higher_version_plugins = []
# 用于存储 v1 版本插件
base_version_plugins = []
# 使用异步并发获取线上插件
import asyncio
tasks = []
task_to_version = {}
for m in settings.PLUGIN_MARKET.split(","):
if not m:
continue
# 名称
if plugin_info.get("name"):
plugin.plugin_name = plugin_info.get("name")
# 描述
if plugin_info.get("description"):
plugin.plugin_desc = plugin_info.get("description")
# 版本
if plugin_info.get("version"):
plugin.plugin_version = plugin_info.get("version")
# 图标
if plugin_info.get("icon"):
plugin.plugin_icon = plugin_info.get("icon")
# 标签
if plugin_info.get("labels"):
plugin.plugin_label = plugin_info.get("labels")
# 作者
if plugin_info.get("author"):
plugin.plugin_author = plugin_info.get("author")
# 更新历史
if plugin_info.get("history"):
plugin.history = plugin_info.get("history")
# 仓库链接
plugin.repo_url = market
# 本地标志
plugin.is_local = False
# 添加顺序
plugin.add_time = add_time
# 汇总
ret_plugins.append(plugin)
# 创建任务获取 v1 版本插件
base_task = asyncio.create_task(self.async_get_plugins_from_market(m, None, force))
tasks.append(base_task)
task_to_version[base_task] = "base_version"
# 创建任务获取高版本插件(如 v2、v3)
if settings.VERSION_FLAG:
higher_version_task = asyncio.create_task(
self.async_get_plugins_from_market(m, settings.VERSION_FLAG, force))
tasks.append(higher_version_task)
task_to_version[higher_version_task] = "higher_version"
# 并发执行所有任务
if tasks:
completed_tasks = await asyncio.gather(*tasks, return_exceptions=True)
for i, result in enumerate(completed_tasks):
task = tasks[i]
version = task_to_version[task]
# 检查是否有异常
if isinstance(result, Exception):
logger.error(f"获取插件市场数据失败:{str(result)}")
continue
plugins = result
if plugins:
if version == "higher_version":
higher_version_plugins.extend(plugins) # 收集高版本插件
else:
base_version_plugins.extend(plugins) # 收集 v1 版本插件
return self._process_plugins_list(higher_version_plugins, base_version_plugins)
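
The concurrent fetch relies on `asyncio.gather(..., return_exceptions=True)` so one failing market cannot abort the others, plus a task-to-label map to know which result belongs to which version channel. A reduced sketch of the same pattern with dummy fetchers and URLs:

import asyncio

async def fetch_market(url: str, version: str) -> list:
    if "bad" in url:
        raise RuntimeError(f"cannot reach {url}")
    return [f"{version}-plugin-from-{url}"]

async def main():
    markets = ["https://example.com/repo1", "https://bad.example.com/repo2"]
    tasks, labels = [], {}
    for m in markets:
        t = asyncio.create_task(fetch_market(m, "v2"))
        tasks.append(t)
        labels[t] = "higher_version"

    results = await asyncio.gather(*tasks, return_exceptions=True)
    for task, result in zip(tasks, results):
        if isinstance(result, Exception):
            print(f"{labels[task]}: failed -> {result}")   # logged, not raised
        else:
            print(f"{labels[task]}: {result}")

asyncio.run(main())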
async def async_get_plugins_from_market(self, market: str,
package_version: Optional[str] = None,
force: bool = False) -> Optional[List[schemas.Plugin]]:
"""
异步从指定的市场获取插件信息
:param market: 市场的 URL 或标识
:param package_version: 首选插件版本 (如 "v2", "v3"),如果不指定则获取 v1 版本
:param force: 是否强制刷新(忽略缓存)
:return: 返回插件的列表,若获取失败返回 []
"""
if not market:
return []
# 已安装插件
installed_apps = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# 获取在线插件
async with async_fresh(force):
online_plugins = await PluginHelper().async_get_plugins(market, package_version)
if online_plugins is None:
logger.warning(
f"获取{package_version if package_version else ''}插件库失败:{market},请检查 GitHub 网络连接")
return []
ret_plugins = []
add_time = len(online_plugins)
for pid, plugin_info in online_plugins.items():
plugin = self._process_plugin_info(pid, plugin_info, market, installed_apps, add_time, package_version)
if plugin:
ret_plugins.append(plugin)
add_time -= 1
return ret_plugins
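
`fresh(force)` / `async_fresh(force)` replace the old `force` argument to PluginHelper: the context manager marks the surrounding block as cache-bypassing instead of threading a flag through every call. app.core.cache itself is not shown in this diff, so the sketch below is only an assumption about one common way to build such a switch with contextvars, not the project's code:

import contextvars
from contextlib import contextmanager, asynccontextmanager

_force_refresh = contextvars.ContextVar("force_refresh", default=False)

@contextmanager
def fresh(force: bool = True):
    token = _force_refresh.set(force)
    try:
        yield
    finally:
        _force_refresh.reset(token)

@asynccontextmanager
async def async_fresh(force: bool = True):
    token = _force_refresh.set(force)
    try:
        yield
    finally:
        _force_refresh.reset(token)

_store: dict = {}

def cached_get(key: str, loader):
    # Inside a `with fresh(True):` block the cache is bypassed and repopulated
    if _force_refresh.get() or key not in _store:
        _store[key] = loader()
    return _store[key]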

View File

@@ -252,19 +252,19 @@ def __verify_key(key: str, expected_key: str, key_type: str) -> str:
def verify_apitoken(token: Annotated[str, Security(__get_api_token)]) -> str:
"""
使用 API Token 进行身份认证
:param token: API Token从 URL 查询参数中获取
:param token: API Token从 URL 查询参数中获取 token=xxx
:return: 返回校验通过的 API Token
"""
return __verify_key(token, settings.API_TOKEN, "API_TOKEN")
return __verify_key(token, settings.API_TOKEN, "token")
def verify_apikey(apikey: Annotated[str, Security(__get_api_key)]) -> str:
"""
使用 API Key 进行身份认证
:param apikey: API Key从 URL 查询参数或请求头中获取
:param apikey: API Key从 URL 查询参数中获取 apikey=xxx
:return: 返回校验通过的 API Key
"""
return __verify_key(apikey, settings.API_TOKEN, "API_KEY")
return __verify_key(apikey, settings.API_TOKEN, "apikey")
def verify_password(plain_password: str, hashed_password: str) -> bool:
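
The updated docstrings make explicit that both credentials arrive as URL query parameters (`token=xxx` / `apikey=xxx`). A minimal FastAPI sketch of that style of dependency using APIKeyQuery; the token value and app wiring are illustrative, not the project's actual dependencies module:

from fastapi import Depends, FastAPI, HTTPException, Security, status
from fastapi.security import APIKeyQuery

API_TOKEN = "change-me"                                  # stand-in for settings.API_TOKEN
__get_api_token = APIKeyQuery(name="token", auto_error=False)

def verify_apitoken(token: str = Security(__get_api_token)) -> str:
    if not token or token != API_TOKEN:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Invalid or missing API token")
    return token

app = FastAPI()

@app.get("/ping")
def ping(_: str = Depends(verify_apitoken)):
    # GET /ping?token=change-me
    return {"status": "ok"}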

Some files were not shown because too many files have changed in this diff.