Compare commits

...

540 Commits

Author SHA1 Message Date
jxxghp
647c0929c5 v2.6.2 2025-07-06 08:28:33 +08:00
jxxghp
a61533a131 Merge pull request #4536 from cddjr/fix_local_exists 2025-07-05 22:02:16 +08:00
景大侠
bc5e682308 fix potential extra disk scanning during local media checks 2025-07-05 21:46:21 +08:00
jxxghp
25a481df12 Merge pull request #4534 from jxxghp/cursor/bc-55af1137-dea1-4191-9033-64ea5fcaa43a-d338
Fix file organization snapshot handling issue
2025-07-05 15:44:51 +08:00
Cursor Agent
764c10fae4 Fix snapshot handling logic to correctly process files during monitoring
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-05 07:22:44 +00:00
Cursor Agent
d8249d4e38 Fix snapshot handling logic to correctly process files during monitoring
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-05 07:19:53 +00:00
jxxghp
0e3e42b398 Merge pull request #4531 from Aqr-K/feat-process 2025-07-05 06:33:57 +08:00
Aqr-K
7d3b64dcf9 Update requirements.in 2025-07-05 03:16:49 +08:00
Aqr-K
2c8d525796 feat: add process name setting 2025-07-05 03:14:54 +08:00
jxxghp
4869f071ab fix error message 2025-07-04 21:34:31 +08:00
jxxghp
3029eeaf6f fix error message 2025-07-04 21:33:32 +08:00
jxxghp
33fb692aee Update plugin.py 2025-07-03 22:20:04 +08:00
jxxghp
6a075d144f Update version.py 2025-07-03 20:19:36 +08:00
jxxghp
aa23315599 rollback transmission-rpc 2025-07-03 19:16:36 +08:00
jxxghp
8d0bb35505 add network traffic API 2025-07-03 19:05:43 +08:00
jxxghp
32e76bc6ce Merge pull request #4529 from cddjr/add_ctx_mgr_proto 2025-07-03 18:47:08 +08:00
景大侠
6c02766000 AutoCloseResponse supports the context manager protocol, avoiding errors in some plugins 2025-07-03 18:38:48 +08:00
jxxghp
52ef390464 Add a cache parameter to the image proxy API 2025-07-03 17:07:54 +08:00
jxxghp
43a557601e fix local usage 2025-07-03 16:48:35 +08:00
jxxghp
82ff7fc090 fix SMB Usage 2025-07-03 15:21:41 +08:00
jxxghp
db40b5105b Fix directory monitoring pattern matching 2025-07-03 13:55:54 +08:00
jxxghp
b2a379b84b fix SMB Storage 2025-07-03 12:41:44 +08:00
jxxghp
97cbd816fe add SMB Storage 2025-07-03 12:31:59 +08:00
jxxghp
7de3bb2a91 v2.6.0 2025-07-02 21:36:02 +08:00
jxxghp
3a8a2bcab4 Merge pull request #4519 from Aqr-K/patch-2 2025-07-01 19:46:12 +08:00
Aqr-K
eb1adbe992 fix: correct error messages and unify wording format 2025-07-01 19:26:11 +08:00
jxxghp
b55966d42b Merge pull request #4516 from Aqr-K/feat-command
feat(command): add `show` to control whether a command is registered and shown in the menu
2025-07-01 17:20:59 +08:00
Aqr-K
451ca9cb5a feat(command): add show to control whether a command is registered and shown in the menu 2025-07-01 17:19:01 +08:00
jxxghp
1e2c607ced fix #4515 streaming platforms are not merged into existing tags; configure via the naming module if needed 2025-07-01 17:02:29 +08:00
jxxghp
5ff7da0d19 fix #4515 streaming platforms are not merged into existing tags; configure via the naming module if needed 2025-07-01 16:57:45 +08:00
jxxghp
8e06c6f8e6 remove openai 2025-07-01 14:48:16 +08:00
jxxghp
4497cd3904 add site stat api 2025-07-01 11:23:20 +08:00
jxxghp
2945679a94 - Fix Redis cache issue and site message reading issue 2025-07-01 09:20:08 +08:00
jxxghp
1eaf7e3c85 Merge pull request #4513 from cddjr/fix_4511 2025-07-01 06:56:11 +08:00
景大侠
8146b680c6 fix: infinite recursion when deserializing the AutoCloseResponse class 2025-07-01 01:29:01 +08:00
jxxghp
99e667382f fix #4509 2025-06-30 19:17:36 +08:00
jxxghp
4c03759d3f refactor: optimize directory monitoring 2025-06-30 13:16:05 +08:00
jxxghp
8593a6cdd0 refactor: optimize directory monitoring snapshots 2025-06-30 12:40:37 +08:00
jxxghp
cd18c31618 fix subscription matching 2025-06-30 10:55:10 +08:00
jxxghp
f29c918700 Merge pull request #4505 from wikrin/v2 2025-06-29 23:12:08 +08:00
Attente
0f0c3e660b style: clean up whitespace
Remove trailing whitespace and blank-line indentation to keep the code tidy
2025-06-29 22:49:58 +08:00
Attente
1cf4639db3 fix(download): fix downloader selection for manual downloads
- In manual download mode, always use the downloader selected by the user
2025-06-29 22:24:53 +08:00
jxxghp
f5da9b5780 fix log 2025-06-29 22:10:47 +08:00
jxxghp
e4c87c8a96 Update version.py 2025-06-29 21:56:37 +08:00
jxxghp
4b4bf153f0 fix plugin reload 2025-06-29 21:26:06 +08:00
jxxghp
ec227d0d56 Merge pull request #4500 from Miralia/v2
refactor(meta): unify web_source handling in MetaBase and add it to message templates
2025-06-29 11:11:35 +08:00
Miralia
53c8c50779 refactor(meta): unify web_source handling in MetaBase and add it to message templates 2025-06-29 11:08:34 +08:00
jxxghp
07b4c8b462 fix #4489 2025-06-29 11:06:36 +08:00
jxxghp
f3cfc5b9f0 fix plex 2025-06-29 08:27:48 +08:00
jxxghp
634e5a4c55 Merge pull request #4496 from wikrin/v2 2025-06-29 07:51:24 +08:00
Attente
332b154f15 fix(api): adapt to FastAPI request parameter compatibility issues
Fix the system config and user config endpoints not working.
2025-06-29 05:31:25 +08:00
jxxghp
b446d4db28 Update GitHub workflow configuration to exclude issues labeled RFC 2025-06-28 22:24:51 +08:00
jxxghp
ce0397a140 fix update.sh 2025-06-28 22:03:18 +08:00
jxxghp
f278cccef3 for test 2025-06-28 21:42:28 +08:00
jxxghp
cbf1dbcd2e fix install dependencies after restoring a plugin 2025-06-28 21:42:03 +08:00
jxxghp
037c6b02fa Merge pull request #4493 from Miralia/v2 2025-06-28 20:07:12 +08:00
Miralia
5f44e4322d Fix and add more 2025-06-28 19:47:33 +08:00
Miralia
6cebe97d6d add FPT Play 2025-06-28 19:12:00 +08:00
jxxghp
82ec146446 Update plugin.py 2025-06-28 16:49:09 +08:00
jxxghp
3928c352c6 fix update 2025-06-28 15:01:25 +08:00
jxxghp
0ba36d21a9 Revert "fix security"
This reverts commit c7800df801.
2025-06-28 14:37:22 +08:00
jxxghp
6152727e9b fix Dockerfile 2025-06-28 14:33:33 +08:00
jxxghp
53c02fa706 resource v2 2025-06-28 14:26:14 +08:00
jxxghp
c7800df801 fix security 2025-06-28 14:12:24 +08:00
jxxghp
562c1de0c9 aList => OpenList 2025-06-28 08:43:09 +08:00
jxxghp
e2c90639f3 Update message.py 2025-06-27 19:54:13 +08:00
jxxghp
92e175a8d1 Merge pull request #4488 from Miralia/v2 2025-06-27 17:29:10 +08:00
jxxghp
cf7bca75f6 fix res.text 2025-06-27 17:23:32 +08:00
Miralia
24a173f075 Update streamingplatform.py 2025-06-27 17:21:27 +08:00
jxxghp
8d695dda55 fix log 2025-06-27 17:16:08 +08:00
jxxghp
93eec6c4b8 fix cache 2025-06-27 15:24:57 +08:00
jxxghp
a2cc1a2926 upgrade packages 2025-06-27 14:34:35 +08:00
jxxghp
11729d0eca fix 2025-06-27 13:34:27 +08:00
jxxghp
978819be38 fix db pool size 2025-06-27 12:41:03 +08:00
jxxghp
23c9862eb3 fix site parser 2025-06-27 12:26:17 +08:00
jxxghp
a9f18ea3ef fix #4475 2025-06-27 10:05:19 +08:00
jxxghp
574257edf8 add SystemConfModel 2025-06-27 09:54:15 +08:00
jxxghp
bb4438ac42 feat: proactive gc when not in large-memory mode 2025-06-27 09:44:47 +08:00
jxxghp
0baf6e5fe7 fix SiteParser close session 2025-06-27 08:38:02 +08:00
jxxghp
d8a53da8ee auto close RequestUtils 2025-06-27 08:30:57 +08:00
jxxghp
9555ac6305 fix RequestUtils 2025-06-27 08:09:38 +08:00
jxxghp
4dd5ea8e2f add del 2025-06-27 07:53:10 +08:00
jxxghp
8068523d88 fix downloader 2025-06-26 20:52:17 +08:00
jxxghp
27dd681d9f fix RequestUtils 2025-06-26 17:36:22 +08:00
jxxghp
152f814fb6 fix base chain 2025-06-26 13:28:11 +08:00
jxxghp
2700e639f1 fix chain 2025-06-26 13:16:10 +08:00
jxxghp
c440ce3045 fix oper 2025-06-26 08:33:43 +08:00
jxxghp
2829a3cb4e fix 2025-06-26 08:18:37 +08:00
jxxghp
a487091be8 Revert "fix resource helper"
This reverts commit e7524774da.
2025-06-25 13:32:28 +08:00
jxxghp
e7524774da fix resource helper 2025-06-25 12:50:00 +08:00
jxxghp
3918c876c5 Merge pull request #4478 from Miralia/v2 2025-06-24 21:07:55 +08:00
Miralia
f07f87735c fix 2025-06-24 19:52:14 +08:00
Miralia
b7566e8fe8 feat(meta): 扩展流媒体平台列表,增加更多平台支持。 2025-06-24 19:46:01 +08:00
jxxghp
73eba90f2f 更新 version.py 2025-06-24 10:34:42 +08:00
jxxghp
62e74f6fd1 fix 2025-06-24 08:19:10 +08:00
jxxghp
4375e48840 Merge pull request #4476 from Miralia/v2 2025-06-23 20:52:15 +08:00
Miralia
a1d6e94e90 feat(meta): 新增 WEB 平台来源识别并支持更多音视频格式。 2025-06-23 20:36:58 +08:00
jxxghp
1f44e13ff0 add reload logging 2025-06-23 10:14:22 +08:00
jxxghp
d2992f9ced fix plugin load 2025-06-23 09:31:56 +08:00
jxxghp
950337bccc fix plugin load 2025-06-23 08:19:22 +08:00
jxxghp
757c3be359 更新 version.py 2025-06-22 10:08:17 +08:00
jxxghp
269ab9adfc fix: message deletion capability 2025-06-22 10:04:21 +08:00
jxxghp
bd241a5164 feat: message deletion capability 2025-06-22 09:37:01 +08:00
jxxghp
3d92b57f24 fix 2025-06-22 09:04:03 +08:00
jxxghp
70d8cb3697 fix #4461 2025-06-22 08:51:29 +08:00
jxxghp
9e4ec5841c fix #4470 2025-06-22 08:47:43 +08:00
jxxghp
682f4fe608 fix message cache 2025-06-20 17:33:08 +08:00
jxxghp
ce8a077e07 Optimize button callback data, simplified to use only index values 2025-06-19 15:54:07 +08:00
jxxghp
d5f63bcdb3 remove Commands DEV flag 2025-06-18 13:33:37 +08:00
jxxghp
5c3756fd1b v2.5.7-1 2025-06-17 20:02:45 +08:00
jxxghp
99939e1a3d fix 2025-06-17 19:42:16 +08:00
jxxghp
56742ace11 fix: download images with a UA header 2025-06-17 19:27:53 +08:00
jxxghp
742cb7a8da Update version.py 2025-06-17 18:56:47 +08:00
jxxghp
98327d1750 fix download message 2025-06-17 15:35:38 +08:00
jxxghp
b944306302 v2.5.7 2025-06-16 22:15:54 +08:00
jxxghp
02ab1d4111 fix settings 2025-06-16 21:29:57 +08:00
jxxghp
28552fb0ce Update transmission.py 2025-06-16 19:38:19 +08:00
jxxghp
bf52fcb2ec fix message 2025-06-16 11:45:26 +08:00
jxxghp
bab1f73480 fix: Slack message interaction 2025-06-16 09:49:01 +08:00
jxxghp
c06001d921 feat: proactively back up plugins before built-in restart 2025-06-16 08:57:21 +08:00
jxxghp
0fa49bb9c6 fix skip the message-type matching check when sending targeted messages 2025-06-16 08:06:47 +08:00
jxxghp
bf23fe6ce2 Update subscribe.py 2025-06-15 23:31:13 +08:00
jxxghp
7c6137b742 Update download.py 2025-06-15 23:30:01 +08:00
jxxghp
3823a7c9b6 fix: message sending scope 2025-06-15 23:18:07 +08:00
jxxghp
a944975be2 fix: send interactive messages immediately 2025-06-15 23:06:25 +08:00
jxxghp
6da65d3b03 add MessageAction 2025-06-15 21:25:14 +08:00
jxxghp
0d938f2dca refactor: reduce Alipan and 115 API calls 2025-06-15 20:41:32 +08:00
jxxghp
4fa9bb3c1f feat: event callback for plugin messages, format [PLUGIN]plugin ID|content 2025-06-15 19:47:04 +08:00
jxxghp
2f5b22a81f fix 2025-06-15 19:41:24 +08:00
jxxghp
fcd5ca3fda feat: Slack supports editing messages 2025-06-15 19:28:05 +08:00
jxxghp
c18247f3b1 Enhance message handling to support editing messages 2025-06-15 19:18:18 +08:00
jxxghp
f8fbfdbba7 Optimize message handling logic 2025-06-15 18:40:36 +08:00
jxxghp
21addfb947 Update message.py 2025-06-15 16:56:48 +08:00
jxxghp
8672bd12c4 fix bug 2025-06-15 16:31:09 +08:00
jxxghp
be8054e81e fix bug 2025-06-15 15:57:58 +08:00
jxxghp
82f46c6010 feat: route callback messages to plugins 2025-06-15 15:56:38 +08:00
jxxghp
95a827e8a2 feat: Telegram and Slack support buttons 2025-06-15 15:34:06 +08:00
jxxghp
c534e3dcb8 feat: do not load modules for uninstalled plugins 2025-06-15 09:55:20 +08:00
jxxghp
9f5e1b8dd7 Update version.py 2025-06-14 14:45:58 +08:00
jxxghp
c86ed20c34 fix 2025-06-14 08:23:48 +08:00
jxxghp
c32c37e66a Merge pull request #4444 from cddjr/fix_doh_reload 2025-06-14 08:22:13 +08:00
jxxghp
7b100d3cdb Merge pull request #4446 from wikrin/v2 2025-06-14 07:05:20 +08:00
Attente
95a2362885 fix(db): fix memory sharing issue when updating system config
- When updating system config, deepcopy the new value to avoid memory sharing
2025-06-13 23:03:13 +08:00
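The deepcopy fix described in this commit targets a classic pitfall: storing a caller-supplied mutable value by reference lets later mutations of that object leak into the shared config. A minimal sketch with illustrative names (not MoviePilot's actual classes):

```python
from copy import deepcopy

class ConfigStore:
    """Tiny in-memory store illustrating the shared-reference pitfall."""

    def __init__(self):
        self._data = {}

    def set_shared(self, key, value):
        # Buggy: stores a reference to the caller's object.
        self._data[key] = value

    def set_copied(self, key, value):
        # Fixed: deepcopy isolates the stored value from the caller.
        self._data[key] = deepcopy(value)

    def get(self, key):
        return self._data[key]

store = ConfigStore()
conf = {"dirs": ["/media"]}
store.set_shared("shared", conf)
store.set_copied("copied", conf)
conf["dirs"].append("/tmp")                 # caller mutates its object later
shared_dirs = store.get("shared")["dirs"]   # mutation leaked into the store
copied_dirs = store.get("copied")["dirs"]   # deepcopy kept the store isolated
```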
jxxghp
d8b14b9a9f Merge pull request #4445 from cddjr/feat_nettest 2025-06-13 19:06:02 +08:00
景大侠
c45953f63a feat network test supports acceleration proxies and GitHub Token
fix incorrect time-difference calculation when a test takes longer than 1 second
2025-06-13 18:35:49 +08:00
景大侠
e3d3087a5d fix add UA to GitHub request headers 2025-06-13 18:06:17 +08:00
景大侠
e162bd1168 fix DoH hot reload 2025-06-13 17:43:45 +08:00
jxxghp
db5d81d7f0 Merge pull request #4442 from wumode/fix_download_api 2025-06-13 14:24:22 +08:00
wumode
f737f1287b fix(api): unable to set the status of non-default downloaders 2025-06-13 08:43:34 +08:00
jxxghp
1ffa5178db Merge pull request #4440 from wikrin/v2 2025-06-13 06:35:24 +08:00
Attente
49cb43488c feat(plugin): optimize plugin sync and installation logic
- Optimize the sync function to take plugin versions into account
- Update the is_plugin_exists function to add version comparison
2025-06-13 00:19:08 +08:00
jxxghp
fd7a6f8ddd Merge pull request #4438 from H1dery/v2 2025-06-12 20:05:00 +08:00
Cais1
7979ce0f0a File reading fixes
2025-06-12 19:58:47 +08:00
Cais1
2ba5d9484d Update plugin.py
File reading fixes
2025-06-12 19:57:26 +08:00
jxxghp
23b981c5ac fix #4434 2025-06-12 18:41:46 +08:00
jxxghp
86ab2c8c05 Merge pull request #4434 from alfchao/v2 2025-06-12 16:14:59 +08:00
xuchao3
9ea0bc609a feat: add Telegram API proxy address
#4266
2025-06-12 13:56:36 +08:00
jxxghp
5366c2844a Merge pull request #4433 from wikrin/v2 2025-06-12 08:47:56 +08:00
Attente
eac4d703c7 fix(plugins_initializer): improve fault tolerance of plugin restore
- Add exception handling for a single plugin's restore failure, using continue to skip it
- Ensure one plugin's restore failure does not stop the remaining plugins from being restored
2025-06-12 07:56:44 +08:00
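The per-plugin fault tolerance described in this commit boils down to wrapping each restore in its own try/except and continuing. A hypothetical sketch (function names are illustrative, not the real plugins_initializer code):

```python
def restore_plugins(plugin_ids, restore_one):
    """Restore each plugin independently; one failure must not abort the rest."""
    restored, failed = [], []
    for pid in plugin_ids:
        try:
            restore_one(pid)
        except Exception:
            # Record the failure and continue with the remaining plugins.
            failed.append(pid)
            continue
        restored.append(pid)
    return restored, failed

def fake_restore(pid):
    # Stand-in for the real restore step; "broken" simulates a bad plugin.
    if pid == "broken":
        raise RuntimeError("restore failed")

ok, bad = restore_plugins(["a", "broken", "b"], fake_restore)
```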
jxxghp
8ed87294e2 v2.5.5-1
- Fix downloader monitoring issue
2025-06-12 07:08:19 +08:00
jxxghp
b343c601be v2.5.5
- Support finer-grained user permission control
- Add scraping content settings to Advanced Settings
2025-06-11 20:27:49 +08:00
jxxghp
e56d7006b4 init users 2025-06-11 20:24:59 +08:00
jxxghp
1b7bcd7784 init users 2025-06-11 19:57:21 +08:00
jxxghp
4cb9025b6c fix season_nfo 2025-06-11 19:48:02 +08:00
jxxghp
f8864ab053 fix reload 2025-06-11 07:11:50 +08:00
jxxghp
64eba46a67 fix 2025-06-11 07:07:55 +08:00
jxxghp
35d9cc1d40 remove jiaba 2025-06-11 00:00:08 +08:00
jxxghp
3036107dac fix user api 2025-06-10 23:42:57 +08:00
jxxghp
214089b4ea Merge pull request #4423 from lonelyman0108/v2 2025-06-10 18:04:13 +08:00
LM
95b7ba28e4 update: add fanart environment variables 2025-06-10 17:59:25 +08:00
LM
880272f96e update: improve fanart fetching logic, support setting the language 2025-06-10 17:59:03 +08:00
LM
7ed26fadb6 update: update fanart scraping logic to prefer Chinese and English content 2025-06-10 17:25:58 +08:00
jxxghp
f0d25a02a6 feat: support detailed scraping settings 2025-06-10 16:37:15 +08:00
jxxghp
162ba9307d fix restart 2025-06-10 07:09:59 +08:00
jxxghp
49dae92b8e fix flag path 2025-06-09 21:58:02 +08:00
jxxghp
b484a52b6d v2.5.4
- Plugin marketplace supports manual refresh
- Improved the restore strategy for installed plugins when the container is reset
2025-06-09 20:57:44 +08:00
jxxghp
d754091a7c fix log 2025-06-09 20:44:48 +08:00
jxxghp
e2febc24ae feat: plugin marketplace supports forced refresh 2025-06-09 20:33:06 +08:00
jxxghp
d0677edaaa fix graceful shutdown 2025-06-09 15:39:11 +08:00
jxxghp
f0aaecd0c7 fix #4413 2025-06-09 14:45:26 +08:00
jxxghp
3518940fec Merge pull request #4413 from cddjr/fix_plugin
Fix several bugs in plugin clones
2025-06-09 14:42:54 +08:00
jxxghp
2e5c92ae0c fix graceful shutdown 2025-06-09 13:09:16 +08:00
jxxghp
4ad699dbe6 fix graceful shutdown 2025-06-09 13:06:27 +08:00
景大侠
931be9e6aa fix clones reuse the original plugin's configuration 2025-06-09 09:54:55 +08:00
景大侠
9656d6fbd0 fix use a lowercase suffix for clone class names
Avoids a mismatch with the clone ID that caused the plugin to be wrongly treated as not installed
2025-06-09 09:51:06 +08:00
景大侠
c7cbb13044 fix remove a plugin from system modules after it is uninstalled
Avoids falsely reporting that the plugin already exists when creating a clone
2025-06-09 09:50:55 +08:00
jxxghp
327d30dcc2 feat: detect whether the container has been reset 2025-06-09 09:15:58 +08:00
jxxghp
e4e2079917 fix: plugin restore safety 2025-06-09 08:30:24 +08:00
jxxghp
0427506572 fix: remove static attributes from the Action class 2025-06-09 08:18:43 +08:00
jxxghp
ea168edb43 fix: remove static attributes from the Oper class 2025-06-09 08:08:55 +08:00
jxxghp
aa039c6c05 feat: automatic backup and restore when enabling or disabling plugins 2025-06-09 08:04:44 +08:00
jxxghp
3de998051a fix memory snapshot 2025-06-08 21:57:49 +08:00
jxxghp
69ade1ae37 Update the memory snapshot interval to 30 minutes and reduce retained snapshot files to 20 2025-06-08 21:48:37 +08:00
jxxghp
1d6133e3b1 fix plugins iteration 2025-06-08 21:39:37 +08:00
jxxghp
203a111d1a remove gc 2025-06-08 21:24:26 +08:00
jxxghp
0a20234268 remove gc 2025-06-08 21:19:15 +08:00
jxxghp
7f8e50f83d fix memory helper 2025-06-08 21:13:37 +08:00
jxxghp
443ef7d41b fix 2025-06-08 21:06:27 +08:00
jxxghp
059ae6595d fix 2025-06-08 20:37:42 +08:00
jxxghp
19c3dad338 fix 2025-06-08 19:41:46 +08:00
jxxghp
81bc51c972 fix pympler 2025-06-08 19:02:25 +08:00
jxxghp
6c17868744 add pympler 2025-06-08 18:55:02 +08:00
jxxghp
a18040ccfa add pympler 2025-06-08 18:54:35 +08:00
jxxghp
0835a75503 Update thread.py 2025-06-08 14:43:13 +08:00
jxxghp
3ee32757e5 rollback 2025-06-08 14:35:59 +08:00
jxxghp
344abfa8d8 fix memory helper 2025-06-08 14:03:01 +08:00
jxxghp
906b2a3485 fix memory statistics 2025-06-08 11:36:15 +08:00
jxxghp
e0d2b87ed3 wallpaper cache skip empty 2025-06-08 11:30:57 +08:00
jxxghp
83a8c8b42b fix memory threshold 2025-06-08 11:14:16 +08:00
jxxghp
d840ed6c5a fix memory log 2025-06-08 11:08:01 +08:00
jxxghp
0112087be4 refactor #4407 2025-06-08 10:51:59 +08:00
jxxghp
7320084e11 rollback #4379 2025-06-07 22:26:51 +08:00
jxxghp
23929f5eaa fix pool size 2025-06-07 22:00:09 +08:00
jxxghp
c002d4619a Update scheduler.py 2025-06-07 20:11:31 +08:00
jxxghp
f60a909bba Update version.py 2025-06-07 11:43:04 +08:00
jxxghp
c2c22e3968 Merge pull request #4399 from cddjr/fix_subscribe 2025-06-07 11:42:25 +08:00
jxxghp
f10299b2de Merge pull request #4403 from cddjr/fix_systemconfig 2025-06-07 11:41:36 +08:00
景大侠
1d3563ed97 fix(config): fix newly installed plugins disappearing 2025-06-07 11:33:28 +08:00
景大侠
f3eb2caa4e fix(subscribe): avoid re-downloading episodes already in the library 2025-06-07 02:48:22 +08:00
jxxghp
2364dacd52 Add GitHub container registry 2025-06-06 22:02:04 +08:00
jxxghp
883f7451c3 fix event log 2025-06-06 21:45:14 +08:00
jxxghp
a534c9bca1 fix prompt when saving settings fails 2025-06-06 21:30:11 +08:00
jxxghp
b14202a324 fix logger 2025-06-06 21:18:31 +08:00
jxxghp
a6fae48f07 Update system.py 2025-06-06 17:15:25 +08:00
jxxghp
963caf2afe fix logger reload 2025-06-06 16:31:00 +08:00
jxxghp
50b0268531 v2.5.3-1 2025-06-06 15:37:44 +08:00
jxxghp
f484b64be3 fix 2025-06-06 15:37:02 +08:00
jxxghp
349535557f Update subscribe.py 2025-06-06 14:04:12 +08:00
jxxghp
de4973a270 feat: memory monitoring switch 2025-06-06 13:49:52 +08:00
jxxghp
e42d2baf8a fix lint 2025-06-05 22:14:14 +08:00
jxxghp
eac435b233 fix lint 2025-06-05 22:13:33 +08:00
jxxghp
447b8564e9 Update GitHub Actions workflows 2025-06-05 22:02:52 +08:00
jxxghp
97cee657bd Update .gitignore to include Pylint-related files, and change the success-return logic in system.py 2025-06-05 21:58:50 +08:00
jxxghp
fe894754cf Update system.py 2025-06-05 21:39:13 +08:00
jxxghp
9ffb1d1931 Update wallpaper.py 2025-06-05 21:03:21 +08:00
jxxghp
a16bd30903 Update wallpaper.py 2025-06-05 21:00:18 +08:00
jxxghp
13f9ea8be4 v2.5.3 2025-06-05 20:28:43 +08:00
jxxghp
304af5e980 fix: dashboard memory now shows only the current program's usage 2025-06-05 17:09:11 +08:00
jxxghp
dc180c09e9 fix wallpaper 2025-06-05 17:03:29 +08:00
jxxghp
8e20e26565 fix: catch plugin stop exceptions 2025-06-05 14:07:31 +08:00
jxxghp
11075a4012 fix: add more memory controls 2025-06-05 13:33:39 +08:00
jxxghp
a9300faaf8 fix: optimize the singleton pattern and class references 2025-06-05 13:22:16 +08:00
jxxghp
504827b7e5 fix:memory use 2025-06-05 09:57:41 +08:00
jxxghp
e180130b38 fix:memory use 2025-06-05 08:32:24 +08:00
jxxghp
faaee09827 fix:memory use 2025-06-05 08:18:26 +08:00
jxxghp
99334795b6 fix rsshelper 2025-06-04 22:00:46 +08:00
jxxghp
8c9c59ef64 fix rsshelper 2025-06-04 21:42:03 +08:00
jxxghp
7a112000c9 Update memory.py 2025-06-04 18:46:55 +08:00
jxxghp
1424087d5a fix:memory use 2025-06-04 18:34:49 +08:00
jxxghp
984f4731cd Update log.py 2025-06-04 15:33:58 +08:00
jxxghp
3a3de64b0f fix: rework config hot reloading 2025-06-04 08:21:14 +08:00
jxxghp
0911854e9d fix Config reload 2025-06-04 07:17:47 +08:00
jxxghp
2af8b6f445 fix Config reload 2025-06-03 23:10:48 +08:00
jxxghp
bbfd8ca3f5 fix Config reload 2025-06-03 23:08:58 +08:00
jxxghp
b4ed2880f7 refactor: rework config hot reloading 2025-06-03 20:56:21 +08:00
jxxghp
5f18a21e86 fix: apply the '已整理' tag even when organizing fails 2025-06-03 17:48:30 +08:00
jxxghp
5d188e3877 fix module close 2025-06-03 17:11:44 +08:00
jxxghp
90f113a292 remove ttl cache 2025-06-03 16:31:16 +08:00
jxxghp
eecfe58297 fix memory manager startup 2025-06-03 16:27:51 +08:00
jxxghp
079a747210 fix memory manager startup 2025-06-03 16:19:38 +08:00
jxxghp
4be8c70f23 fix memory log 2025-06-03 16:05:49 +08:00
jxxghp
d9aee4df77 fix memory log 2025-06-03 16:03:05 +08:00
jxxghp
225de87d4d fix torrents chain 2025-06-03 15:48:43 +08:00
jxxghp
2ce7cedfbd fix 2025-06-03 12:30:26 +08:00
jxxghp
cfb163d904 fix 2025-06-03 12:27:50 +08:00
jxxghp
de7c9be11b Improve memory management: add a max-memory configuration option and refine the memory usage check logic. 2025-06-03 12:25:13 +08:00
jxxghp
841209adc9 fix 2025-06-03 11:49:16 +08:00
jxxghp
e48d51fe6e Optimize memory management and garbage collection 2025-06-03 11:45:17 +08:00
jxxghp
9d436ec7ed fix #4382 2025-06-03 08:19:15 +08:00
jxxghp
fb2b29d088 fix #4382 2025-06-03 07:07:40 +08:00
jxxghp
1c46b0bc20 Update subscribe.py 2025-06-02 16:23:09 +08:00
jxxghp
81d0e4696a Merge pull request #4379 from jtcymc/v2 2025-06-02 10:48:36 +08:00
shaw
f9a287b52b feat(core): add a minimum episode-intersection confidence setting
Adds a config option for the minimum episode-intersection confidence, used to filter out torrents containing too many unwanted episodes. It implements the following:

- Added the EPISODE_INTERSECTION_MIN_CONFIDENCE option to config.py, with a default of 0.0
- Changed the download logic in download.py, adding a function that computes the intersection ratio between a torrent and the target missing episodes
- Torrents are filtered and sorted by intersection ratio, preferring those with a larger overlap with the missing episodes
- The ratio threshold can be set via the config option; torrents below the threshold are skipped

This change improves download efficiency by avoiding the download of too many unneeded episodes.
2025-06-02 00:38:10 +08:00
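The intersection-ratio idea this commit describes can be sketched as follows; the names and the exact filtering and sorting rules are illustrative assumptions, not the actual download.py code:

```python
def intersection_confidence(torrent_episodes, missing_episodes):
    """Fraction of the torrent's episodes that are actually missing (0.0-1.0)."""
    torrent_set = set(torrent_episodes)
    if not torrent_set:
        return 0.0
    return len(torrent_set & set(missing_episodes)) / len(torrent_set)

def filter_and_rank(torrents, missing, min_confidence=0.0):
    """Drop torrents below the confidence threshold; best overlap first."""
    scored = [(intersection_confidence(episodes, missing), name)
              for name, episodes in torrents]
    kept = [(score, name) for score, name in scored
            if score > 0 and score >= min_confidence]
    return [name for score, name in sorted(kept, reverse=True)]

torrents = [
    ("season-pack-1-12", range(1, 13)),  # only 3 of 12 episodes are wanted
    ("ep5-6", [5, 6]),                   # everything in it is wanted
    ("ep1-4", [1, 2, 3, 4]),             # nothing in it is wanted
]
ranked = filter_and_rank(torrents, missing={5, 6, 7}, min_confidence=0.5)
```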
jxxghp
0f0072abea Merge pull request #4375 from awsl1110/v2 2025-05-31 20:08:10 +08:00
awsl1110
312933a259 fix(indexer): correct the DiscuzX site name
- Rename the Discuz! site name to DiscuzX
2025-05-31 19:18:25 +08:00
jxxghp
288854b8f1 Merge pull request #4374 from awsl1110/v2 2025-05-31 19:04:51 +08:00
awsl1110
7f5991aa34 refactor(core): improve config option and model definitions
- Add type annotations to config options to improve readability and safety
- Add default values to model fields to improve data handling
- Update validators to the new syntax to match changes in the Pydantic library
2025-05-31 16:38:06 +08:00
jxxghp
361df95d50 Merge pull request #4372 from cddjr/fix_4371 2025-05-31 13:34:48 +08:00
景大侠
fc1ade32d7 Update Blu-ray test cases 2025-05-31 11:05:02 +08:00
景大侠
b74c7531d9 fix #4371 recursively detect Blu-ray directories 2025-05-31 02:37:14 +08:00
景大侠
7e3be3325a fix #4294 update test cases 2025-05-31 01:52:31 +08:00
jxxghp
7dab7fbe66 Update transhandler.py 2025-05-30 21:42:50 +08:00
jxxghp
62c06b6593 fix #4216 2025-05-30 17:32:37 +08:00
jxxghp
000b62969f v2.5.2 2025-05-30 17:06:21 +08:00
jxxghp
b4473bb4a7 fix plugin clone service registration 2025-05-30 16:59:54 +08:00
jxxghp
2c0e06d599 fix plugin clone service registration 2025-05-30 13:37:40 +08:00
jxxghp
d2c55e8ed3 Merge remote-tracking branch 'origin/v2' into v2 2025-05-30 08:07:57 +08:00
jxxghp
714abaa25a fix rename 2025-05-30 08:07:53 +08:00
jxxghp
0017eb987b Merge pull request #4365 from Aqr-K/fix-modules/thetvdb 2025-05-29 21:17:38 +08:00
Aqr-K
e5a0894692 fix(tvdb): stop the tvdb module from entering an extremely long wait at initialization when there is no network
- Switch to lazy initialization: `auth` no longer runs at startup, only when a method is called (the re-`auth` on auth_token expiry is kept);
- Use double-checked locking to guarantee thread safety;
- Configure everything through a single `timeout` value, with the default lowered from 30 to 15 seconds to match tmdb.
2025-05-29 20:04:18 +08:00
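Lazy initialization with double-checked locking, as this commit describes, looks roughly like this in Python (a hypothetical client class, not the real tvdb module):

```python
import threading

class LazyClient:
    """Authenticate lazily on first use; double-checked locking for thread safety."""

    def __init__(self, auth_func, timeout=15):
        self._auth = auth_func
        self._timeout = timeout
        self._token = None
        self._lock = threading.Lock()

    def token(self):
        if self._token is None:          # first check: cheap, no lock taken
            with self._lock:
                if self._token is None:  # second check: another thread may have won
                    self._token = self._auth(self._timeout)
        return self._token

calls = []

def fake_auth(timeout):
    # Stand-in for the real auth call; records how often it actually runs.
    calls.append(timeout)
    return "token-123"

client = LazyClient(fake_auth)
threads = [threading.Thread(target=client.token) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
result = client.token()
```

Even with eight threads racing, the second check under the lock ensures the auth callback runs exactly once.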
jxxghp
a8e00e9f0f fix apis 2025-05-29 13:35:01 +08:00
jxxghp
77a4c271ae Merge pull request #4361 from madrays/v2
Add cache management page
2025-05-29 09:21:45 +08:00
jxxghp
014b77c3c7 v2.5.1-1 2025-05-29 08:30:31 +08:00
jxxghp
076e241056 fix tvdb 2025-05-29 08:30:14 +08:00
jxxghp
7ce57cc67a fix 2025-05-29 08:22:45 +08:00
jxxghp
da0343283a Support adding and removing clone plugins in the plugin folder 2025-05-29 08:16:54 +08:00
jxxghp
d5f7f1ba91 fix tvdb api 2025-05-29 08:03:12 +08:00
jxxghp
8761c82afe fix TVDB proxy and SSL verification #4356 2025-05-29 07:14:42 +08:00
madrays
13023141bc Add cache management page 2025-05-29 00:46:11 +08:00
jxxghp
4dd2038625 Merge pull request #4360 from cddjr/fix_TransHandler 2025-05-29 00:06:32 +08:00
景大侠
06a32b0e9d fix: TransHandler falsely reporting success 2025-05-28 23:52:23 +08:00
jxxghp
c91ab7a76b Add new settings 2025-05-28 21:05:29 +08:00
jxxghp
0344aa6a49 Update version.py 2025-05-28 20:34:59 +08:00
jxxghp
a748c9d750 fix: update the wallpaper helper to support more image formats 2025-05-28 08:26:44 +08:00
jxxghp
038dc372b7 Update config.py 2025-05-28 07:03:22 +08:00
jxxghp
bc8198fb8a Merge pull request #4356 from TimoYoung/v2 2025-05-27 21:06:54 +08:00
TimoYoung
f42275bd83 Merge remote-tracking branch 'origin/v2' into v2 2025-05-27 18:02:21 +08:00
TimoYoung
6bd86a724e fix: distinguish series and movie IDs 2025-05-27 17:58:37 +08:00
TimoYoung
fc96cfe8a0 feat: rewrite the tvdb module, switching to the tvdb v4 API and adding search capability
Rewrote the sonarr /series/lookup endpoint to query series on tvdb directly by title
2025-05-27 17:32:25 +08:00
jxxghp
a9f25fe7d6 fix bug 2025-05-27 12:31:43 +08:00
jxxghp
f740fed5f2 fix bug 2025-05-26 13:30:30 +08:00
jxxghp
a6d1bd12a2 fix: optimize plugin clone performance
feat: clean up files when a clone plugin is deleted
2025-05-26 13:21:47 +08:00
jxxghp
e8ab20acf2 Merge pull request #4351 from madrays/v2 2025-05-26 11:08:30 +08:00
madrays
ccfe193800 Add plugin clone feature 2025-05-26 10:55:40 +08:00
jxxghp
bdccedca59 Update system.py 2025-05-26 07:45:21 +08:00
DDSRem
9abb1488df Merge pull request #4348 from Aqr-K/fix-sh
fix(sh): quoting format issue
2025-05-25 23:48:45 +08:00
Aqr-K
195fc1bdc3 fix(sh): quoting format issue 2025-05-25 23:47:23 +08:00
jxxghp
2a9129f470 Update version.py 2025-05-25 20:15:44 +08:00
jxxghp
acbfc0cc6e Merge pull request #4343 from Aqr-K/fix-sh 2025-05-25 19:53:58 +08:00
Aqr-K
bfb0c75e95 fix(sh): add the missing call 2025-05-25 18:50:41 +08:00
jxxghp
161a2ddae8 Merge pull request #4344 from DDS-Derek/dev 2025-05-25 18:32:29 +08:00
Aqr-K
99621cfd66 fix(config): force-specify quote_mode, so that a future dependency upgrade changing the default away from always has no effect 2025-05-25 18:30:00 +08:00
DDSRem
e6e7234215 fix(u115): get information directly through id 2025-05-25 18:27:06 +08:00
DDSRem
5b7b329279 fix(docker): repair restart judgment
When DOCKER_CLIENT_API differs from the default value, the restart is triggered externally, so mapping `/var/run/docker.sock` is no longer required
2025-05-25 18:20:04 +08:00
Aqr-K
3abb2c8674 fix(sh): on restart, variables could not be read from system variables and the env file at the same time. 2025-05-25 18:15:35 +08:00
jxxghp
39de89254f add Docker Client API address 2025-05-25 14:55:51 +08:00
jxxghp
ac941968cb Update plugin.py 2025-05-25 11:22:08 +08:00
jxxghp
96f603bfd1 Merge pull request #4339 from jtcymc/v2 2025-05-25 08:01:00 +08:00
shaw
677e38c62d fix(SearchChain): close the thread pool with `with`
- Manage the ThreadPoolExecutor with a with statement to ensure the pool is shut down correctly
2025-05-25 00:44:19 +08:00
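The `with` fix relies on ThreadPoolExecutor's context manager calling shutdown(wait=True) when the block exits, so worker threads are reclaimed even if a search raises. A minimal sketch with illustrative names (not the real SearchChain code):

```python
from concurrent.futures import ThreadPoolExecutor

def search_all(sites, search_one):
    """Run one search task per site; the with-block guarantees pool shutdown."""
    results = {}
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = {executor.submit(search_one, site): site for site in sites}
        for future, site in futures.items():
            results[site] = future.result()
    # Leaving the with-block calls shutdown(wait=True), even on exceptions.
    return results

hits = search_all(["site-a", "site-b"], lambda site: f"{site}: ok")
```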
jxxghp
72fce20905 feat: record subtitle and audio files after organizing 2025-05-24 20:58:46 +08:00
jxxghp
1eb41c20d5 fix TransferInfo 2025-05-24 15:40:03 +08:00
DDSRem
dd0c1d331f Merge pull request #4334 from DDS-Derek/dev
fix(plugin): dependency dynamic refresh
2025-05-24 09:24:41 +08:00
DDSRem
12760a70a1 fix(plugin): dependency dynamic refresh 2025-05-24 09:23:47 +08:00
jxxghp
525d17270f fix #4332 2025-05-24 06:37:59 +08:00
jxxghp
bc9959f5ab Merge pull request #4333 from Aqr-K/fix-log 2025-05-24 06:31:41 +08:00
jxxghp
94a8cd5128 Merge pull request #4331 from madrays/v2 2025-05-24 06:30:59 +08:00
Aqr-K
5a1b2c4938 fix(log): separate main program logs from plugin logs 2025-05-24 06:20:41 +08:00
madrays
851a2ac03a Delete requirements.in 2025-05-24 04:12:53 +08:00
madrays
34d7707f53 Delete config/plugins/twofahelper/twofahelper_sites.json 2025-05-24 04:12:13 +08:00
madrays
0aac7f62a3 Delete config/app.env 2025-05-24 04:11:54 +08:00
madrays
34379b92d0 Refactor the plugin page and add a folder feature 2025-05-24 03:57:04 +08:00
DDSRem
250999f9f5 Merge pull request #4330 from Aqr-K/patch-1
fix(log): fix duplicate log output in docker environments
2025-05-24 01:18:59 +08:00
Aqr-K
2b3832222b fix(log): fix duplicate log output in docker environments 2025-05-24 01:16:14 +08:00
jxxghp
c5f6d0e721 Update config.py 2025-05-23 21:05:50 +08:00
jxxghp
dbb0cf15b8 fix latest library entries 2025-05-23 07:12:47 +08:00
jxxghp
ab202ba951 Merge pull request #4324 from wumode/fix_typo 2025-05-23 06:45:55 +08:00
wumode
e2c13aa7ed fix: ensure name recognition falls back correctly 2025-05-23 00:23:45 +08:00
jxxghp
c1ab19f3cf Update version.py 2025-05-21 21:42:42 +08:00
jxxghp
beebfb2e19 fix 2025-05-21 08:39:04 +08:00
jxxghp
cfca90aa7d fix delay get_item 2025-05-19 20:06:46 +08:00
jxxghp
19fe0a32c8 fix #4308 2025-05-19 12:53:55 +08:00
jxxghp
76659f8837 fix #4308 2025-05-19 12:51:34 +08:00
jxxghp
2254715190 Merge pull request #4308 from k1z/v2
Fix the bug of repeatedly recognizing cached torrents
2025-05-19 12:29:13 +08:00
jxxghp
ae1a5460d4 fix FetchMedias Action 2025-05-19 12:26:27 +08:00
k1z
27d9f910ff Fix the bug of repeatedly recognizing cached torrents 2025-05-19 10:35:09 +08:00
k1z
28db4881d7 Fix the bug of repeatedly recognizing cached torrents 2025-05-19 10:05:39 +08:00
jxxghp
7c76c3ccd6 rollback #4296 2025-05-18 21:40:06 +08:00
jxxghp
007bd24374 fix message link check 2025-05-18 15:25:45 +08:00
jxxghp
c8dc30287c fix #4294 change x26[45] to lowercase x 2025-05-18 15:15:01 +08:00
jxxghp
360184bbd1 fix 2025-05-18 13:50:43 +08:00
jxxghp
e8ed2454a1 feat: when a message is a link, hand it off to a third party for handling 2025-05-18 13:22:42 +08:00
jxxghp
923ecf29b8 fix #4294 2025-05-18 13:16:06 +08:00
jxxghp
a8f8bf5872 Enhance the MetaBase class to support assigning tmdbid and doubanid, and add test cases for Emby-format ID recognition. 2025-05-18 13:03:35 +08:00
jxxghp
bedcd94020 Improve the find_metainfo function with support for Emby-format ID tags, and add test cases verifying recognition of different ID formats. 2025-05-18 12:55:25 +08:00
jxxghp
959d4da1f8 Merge pull request #4300 from DDS-Derek/dev 2025-05-18 10:05:14 +08:00
DDSRem
861453c1a8 fix(u115): refresh delay 2025-05-18 10:03:36 +08:00
jxxghp
2f4072da0d Merge pull request #4297 from wikrin/v2 2025-05-17 20:20:30 +08:00
Attente
411b5e0ca6 fix(database): change the title variable in the download template to torrent_title 2025-05-17 19:45:49 +08:00
Attente
3f03963811 fix(themoviedb): handle episode-group episode numbers directly at the API level
- Remove the redundant episode-number handling in season_group_details
2025-05-17 19:45:49 +08:00
jxxghp
d43f81e118 Merge pull request #4296 from Pollo3470/fix-bluray-match 2025-05-17 18:11:27 +08:00
Pollo
b97dbd2515 fix: improve Blu-ray matching rules 2025-05-17 17:56:05 +08:00
jxxghp
c6a20a9ed3 Merge pull request #4294 from Miralia/v2 2025-05-16 21:57:19 +08:00
Miralia
27f0f29eef fix(meta): fix recognition issues with some formats 2025-05-16 20:49:23 +08:00
jxxghp
223508ae72 Merge pull request #4292 from Seed680/v2 2025-05-16 15:55:31 +08:00
qiaoyun680
bce0a4b8cd bugfix: if the custom wallpaper API is an image URL, the request URL should be returned 2025-05-16 15:48:37 +08:00
jxxghp
65412a4263 v2.4.8
- Fixed plugins not registering scheduled services in some cases
- Secondary categorization strategy supports release-year ranges
- Support custom background wallpapers
- Plugins can extend workflow actions and be orchestrated into workflows
2025-05-16 12:47:38 +08:00
jxxghp
0233b78c8e fix plugin actions api 2025-05-15 22:13:15 +08:00
jxxghp
b0b25e4cfa fix plugin actions api 2025-05-15 22:02:05 +08:00
jxxghp
806288d587 add: API for querying plugin actions 2025-05-15 20:54:39 +08:00
jxxghp
97265fc43b feat: secondary categorization release year supports ranges 2025-05-15 20:13:44 +08:00
jxxghp
41ca50d0d4 feat: workflows support invoking plugin actions 2025-05-15 19:55:14 +08:00
jxxghp
9d02206fd9 feat: secondary categorization supports release year 2025-05-15 15:52:42 +08:00
jxxghp
ba2293eb30 feat: more third-party plugin repositories in the default configuration 2025-05-15 12:50:18 +08:00
jxxghp
8b9e28975d Merge pull request #4280 from Miralia/v2 2025-05-15 12:09:18 +08:00
jxxghp
22ae8b8f87 fix saving of non-str settings 2025-05-15 12:00:09 +08:00
Miralia
187e352cbd feat(meta): adjust regular expressions 2025-05-15 11:50:31 +08:00
Miralia
23ef8ad28d feat(meta): extend audio/video format matching rules 2025-05-15 09:58:27 +08:00
jxxghp
1dadf56c42 fix #4276 2025-05-15 08:40:38 +08:00
jxxghp
52640b80c0 Merge pull request #4276 from Seed680/v2
Support a custom wallpaper API URL; any returned image whose file extension is allowed by the configuration is used as wallpaper
2025-05-15 08:24:01 +08:00
jxxghp
fe25f8f48f fix #4277 2025-05-15 07:12:52 +08:00
jxxghp
7f59572d8b Merge pull request #4279 from wumode/pip_invocation 2025-05-15 06:43:53 +08:00
wumode
90fc4c6bad Use sys.executable -m pip for env-safe package installation 2025-05-14 23:19:40 +08:00
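Invoking pip as `sys.executable -m pip`, as this commit does, targets the environment of the running interpreter (venv, Docker image, etc.) rather than whichever `pip` happens to be first on PATH. A minimal sketch with a hypothetical helper name:

```python
import sys

def pip_install_cmd(package):
    """Build the env-safe pip invocation; run it with subprocess in real code."""
    # sys.executable is the absolute path of the current interpreter, so the
    # package is guaranteed to land in this interpreter's environment.
    return [sys.executable, "-m", "pip", "install", package]

cmd = pip_install_cmd("requests")
```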
qiaoyun680
16b6c0da33 Support a custom wallpaper API URL; any returned image whose file extension is allowed by the configuration is used as wallpaper 2025-05-14 20:04:38 +08:00
qiaoyun680
488a691f29 Support a custom wallpaper API URL; any returned image whose file extension is allowed by the configuration is used as wallpaper 2025-05-14 16:50:17 +08:00
jxxghp
bcbfe2ccd5 feat: add default plugin repositories 2025-05-14 15:10:27 +08:00
jxxghp
bd9a1d7ec7 Merge pull request #4275 from akvsdk/fix_time_error 2025-05-14 13:10:41 +08:00
jiangyuqing
9331ba64d6 fix time parsing issue 2025-05-14 12:51:02 +08:00
jxxghp
21e5cb0a03 v2.4.7
- Fixed subscription file info display
- Fixed the season number display in the default notification template format
- Fixed original-language image scraping
- Fixed the new M-Team tag format not being recognized
- Improved federated plugin API registration
2025-05-14 09:16:12 +08:00
jxxghp
1a8e0c9ecb fix #4270 2025-05-14 08:41:06 +08:00
jxxghp
16fc0d31cd fix #4270 2025-05-14 08:11:50 +08:00
jxxghp
a622ada58b Update lifecycle.py 2025-05-13 23:58:08 +08:00
jxxghp
ee9c4948d3 refactor: optimize start/stop logic 2025-05-13 23:47:12 +08:00
jxxghp
cf28e1d963 refactor: optimize start/stop logic 2025-05-13 23:11:38 +08:00
jxxghp
089ec36160 Merge pull request #4269 from wikrin/v2 2025-05-13 21:44:22 +08:00
jxxghp
04ce774c22 fix plugin initializer 2025-05-13 21:37:10 +08:00
Attente
99c1422f37 feat(message): improve the season number display format in message templates
- Add a season_fmt field to TemplateContextBuilder to hold the season number in Sxx format
- Add a season_fmt field to meta_info to hold the season number in Sxx format
- Update message templates to reference season_fmt instead of season, for a unified season number display
- Add a database migration script that updates season references in message templates to season_fmt
2025-05-13 21:21:27 +08:00
Attente
b583a60f23 refactor(app): filter empty values in the message builder
- Filter out empty values in the TemplateContextBuilder class, fixing notification templates rendering the literal `'None'`
2025-05-13 21:21:27 +08:00
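Filtering empty values before template rendering, as this commit describes, can be as simple as a dict comprehension; this sketch uses illustrative names, not the actual TemplateContextBuilder:

```python
def build_context(**fields):
    """Keep only non-None fields so templates never render the literal 'None'."""
    return {key: value for key, value in fields.items() if value is not None}

ctx = build_context(title="Dune", season=None, year=2021)
```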
jxxghp
7be2910809 fix api register bug 2025-05-13 20:52:22 +08:00
jxxghp
30de524319 fix api register bug 2025-05-13 20:35:36 +08:00
jxxghp
c431d5e759 Merge pull request #4267 from k1z/v2 2025-05-13 18:45:01 +08:00
jxxghp
184b62b024 fix plugin apis 2025-05-13 16:36:50 +08:00
wangkai
2751770350 Fix the new M-Team tag format not being recognized 2025-05-13 12:23:51 +08:00
jxxghp
75d98aee8e Merge pull request #4262 from wumode/fix_4180 2025-05-12 21:16:48 +08:00
wumode
48120b9406 fix: get_torrent_tags fails to properly retrieve the existing tags 2025-05-12 21:05:30 +08:00
wumode
0e302d7959 fix: add '已整理' tag to non-default downloader 2025-05-12 21:04:03 +08:00
jxxghp
59cd176f44 Update build.yml, changing the tag_name format to v${{ env.app_version }} to ensure the version tag prefix is correct 2025-05-12 11:10:42 +08:00
jxxghp
619f728f09 Update build.yml, adding continue-on-error: true so that later steps still run even if deleting a release fails 2025-05-12 11:06:24 +08:00
jxxghp
6e8002acc4 fix blanks 2025-05-12 11:02:47 +08:00
jxxghp
8a4a6174f7 Merge pull request #4260 from zhuweitung/v2_fix_scrap
fix(scrap): fix primary posters of auto-organized movies and TV shows not being in the original language
2025-05-12 11:00:59 +08:00
jxxghp
ee6c4823d3 Improve build actions 2025-05-12 10:52:23 +08:00
zhuweitung
14dcb73d06 fix(scrap): fix primary posters of auto-organized movies and TV shows not being in the original language 2025-05-12 10:09:36 +08:00
jxxghp
e15107e5ec fix DownloadHistory.get_by_mediaid 2025-05-12 07:57:25 +08:00
jxxghp
0167a9462e Merge pull request #4258 from wumode/fix_4219 2025-05-11 21:18:53 +08:00
wumode
7fa1d342ab fix: blocking issue 2025-05-11 21:05:49 +08:00
jxxghp
05b9988e1d Merge pull request #4257 from cikichen/yemapt 2025-05-11 17:29:15 +08:00
Simon
1c09e61219 Add pt.gtk.pw to the _special_domains list 2025-05-11 17:16:25 +08:00
jxxghp
35f0ad7a83 Update version.py 2025-05-11 10:11:18 +08:00
jxxghp
7ae1d6763a fix #4245 2025-05-11 08:17:42 +08:00
jxxghp
460e859795 fix #4245 2025-05-10 21:53:03 +08:00
jxxghp
4b88ec6460 feat: separately configure the scraping image language #4245 2025-05-10 20:43:00 +08:00
jxxghp
27ee13bb7e Merge pull request #4251 from cikichen/yemapt
update yemapt downloadsize
2025-05-10 20:10:50 +08:00
jxxghp
e6cdd337c3 fix subscribe files 2025-05-10 20:10:13 +08:00
jxxghp
7d8dd12131 fix delete_media_file 2025-05-10 20:00:06 +08:00
Simon
0800e3a136 update yemapt downloadsize 2025-05-10 16:50:53 +08:00
jxxghp
9b0f1a2a04 Merge pull request #4247 from k1z/v2 2025-05-10 00:35:07 +08:00
jxxghp
9de3cb0f92 fix douban test 2025-05-09 20:14:33 +08:00
wangkai
c053a8291c Fix special WeChat IDs being unable to process messages 2025-05-09 16:43:13 +08:00
jxxghp
a0ddfe173b fix handle target_storage being None 2025-05-09 12:57:50 +08:00
jxxghp
17843a7c71 v2.4.5-1 2025-05-09 08:17:08 +08:00
jxxghp
324ae5c883 rollback upload api 2025-05-09 08:16:44 +08:00
jxxghp
ef03989c3f Update u115.py 2025-05-09 00:27:27 +08:00
jxxghp
63412ddd42 fix bug 2025-05-08 20:37:04 +08:00
jxxghp
30ce32608a fix typo 2025-05-08 19:49:52 +08:00
jxxghp
74799ad096 Update storage.py 2025-05-08 17:49:12 +08:00
jxxghp
31176f99c8 Merge pull request #4239 from Seed680/v2 2025-05-08 17:48:31 +08:00
Seed680
b9439c05ec Merge branch 'jxxghp:v2' into v2 2025-05-08 17:45:53 +08:00
qiaoyun680
435a04da0c feat(storage): add storage reset feature 2025-05-08 17:44:44 +08:00
jxxghp
0040b266a5 v2.4.5 2025-05-08 17:26:56 +08:00
jxxghp
645de137f2 fix plugin code validation 2025-05-08 14:26:47 +08:00
jxxghp
1883607118 fix upload api 2025-05-08 13:12:20 +08:00
jxxghp
4ccae1dac7 fix upload api 2025-05-08 12:55:40 +08:00
jxxghp
ff75db310f fix upload parts 2025-05-08 12:03:39 +08:00
jxxghp
5788520401 fix Aliyun Drive session prompt 2025-05-08 10:09:24 +08:00
jxxghp
570dddc120 fix 2025-05-08 09:56:43 +08:00
jxxghp
ea31072ae5 Improve file upload in the AliPan class: add multi-threaded chunked upload and dynamic chunk-size calculation to improve upload efficiency and progress monitoring. 2025-05-08 09:52:32 +08:00
jxxghp
5eca5a6011 Improve file upload in the U115Pan class: support multi-threaded concurrent upload and dynamic chunk-size calculation to improve upload efficiency and stability. 2025-05-08 09:47:43 +08:00
jxxghp
67d5357227 Merge pull request #4238 from cddjr/fix_4236 2025-05-07 19:00:14 +08:00
jxxghp
a0d04ff488 Merge pull request #4237 from wikrin/v2 2025-05-07 18:59:44 +08:00
景大侠
f83787508f fix #4236 2025-05-07 18:36:24 +08:00
Attente
20aba7eb17 fix: #4228 pass MetaBase when adding a subscription, add a username field to the context, enable raw object references by default 2025-05-07 18:19:11 +08:00
jxxghp
0cdea3318c feat: plugin API supports Bearer authentication 2025-05-07 13:26:42 +08:00
jxxghp
4dc2c18075 fix plugin dashboard exception 2025-05-07 10:57:02 +08:00
jxxghp
74e97abac4 fix: dashboard exception 2025-05-07 10:55:13 +08:00
jxxghp
b1db95a925 v2.4.4 2025-05-07 08:26:06 +08:00
jxxghp
9dac9850b6 fix plugin file api 2025-05-06 23:56:35 +08:00
jxxghp
abe091254a fix plugin file api 2025-05-06 23:30:26 +08:00
jxxghp
d2e5367dc6 fix plugins 2025-05-06 11:44:23 +08:00
jxxghp
8ccd1f5fe4 Merge pull request #4229 from wikrin/v2 2025-05-06 06:34:16 +08:00
Attente
50bc865dd2 fix(database): improve message template
- Fix syntax error in downloadAdded message template
2025-05-05 23:14:58 +08:00
jxxghp
74a6ee7066 fix 2025-05-05 19:50:15 +08:00
jxxghp
89e76bcb48 fix 2025-05-05 19:49:30 +08:00
jxxghp
c55f6baf67 Merge pull request #4228 from wikrin/format_notification
Format notification
2025-05-05 19:28:44 +08:00
Attente
ae154489e1 context building is not an expensive task; remove the cache 2025-05-05 14:08:41 +08:00
Attente
fdc79033ce Merge https://github.com/jxxghp/MoviePilot into format_notification 2025-05-05 13:21:58 +08:00
jxxghp
9a8aa5e632 update subscribe.py 2025-05-05 13:16:14 +08:00
Attente
6b81f3ce5f feat(template): add a caching mechanism to improve performance
- Integrate TTLCache (a cache with expiry) into `TemplateHelper` and `TemplateContextBuilder` to improve data reuse
- Introduce the `build_context_cache` decorator to centralize caching of context building
  Enable caching for media info, episode details, torrent info, transfer info, and raw objects to reduce repeated computation
- Add context-cache support, providing the context needed by the async broadcast event NoticeMessage (the context can be re-fetched from the message title and text)
- Allow plugins to flexibly rebuild message bodies via custom templates, improving extensibility and flexibility
2025-05-05 13:14:45 +08:00
Attente
aeaddfe36b feat(database): add notification templates for version 2.1.4
- Add new Alembic migration script for version 2.1.4
- Implement notification templates for various events:
  - Organize success
  - Download added
  - Subscribe added
  - Subscribe complete
- Store notification templates in system configuration
2025-05-05 05:27:59 +08:00
Attente
20c1f30877 feat(message): implement custom message templates
- Add a MessageTemplateHelper class for rendering message templates
- Integrate message template rendering into ChainBase
- Update DownloadChain, SubscribeChain, and TransferChain to use the new message templates
- Add a TemplateHelper class for handling template formats
- Add the NotificationTemplates entry to SystemConfigKey
- Update the Notification model to support a ctype field
2025-05-05 05:27:48 +08:00
jxxghp
52ce6ff38e fix plugin file api 2025-05-03 22:14:39 +08:00
jxxghp
c692a3c80e feat: support native Vue plugin pages 2025-05-03 10:03:44 +08:00
jxxghp
491009636a fix bug 2025-05-02 22:57:29 +08:00
jxxghp
ed16ee14ea fix bug 2025-05-02 21:57:19 +08:00
jxxghp
7f2ed09267 fix storage 2025-05-02 20:49:38 +08:00
jxxghp
c0976897ef fix bug 2025-05-02 13:30:39 +08:00
jxxghp
85b55aa924 fix bug 2025-05-02 08:31:38 +08:00
jxxghp
91d0f76783 feat: support adding new storage types 2025-05-02 08:11:48 +08:00
jxxghp
741badf9e6 feat: support storage operation events for file organizing 2025-05-01 21:16:21 +08:00
jxxghp
ca1f3ac377 feat: file organizing accepts an operation class as a parameter 2025-05-01 20:56:17 +08:00
jxxghp
e13e1c9ca3 fix run_module 2025-05-01 11:36:43 +08:00
jxxghp
06ad042443 fix typo 2025-05-01 11:20:56 +08:00
jxxghp
9d333b855c feat: support plugins assisting in system module implementations 2025-05-01 11:03:28 +08:00
jxxghp
f46e2acd56 v2.4.3
- The user interface now supports multiple languages
- Support setting the TheMovieDb metadata language
- The subscribe-success message now includes cast and overview
- Bug fixes

Note: if the page is blank after upgrading, force-refresh or clear your browser cache
2025-04-29 17:32:40 +08:00
jxxghp
5ac4d3f4ae fix wallpaper api 2025-04-29 15:26:10 +08:00
jxxghp
1614eebc47 fix 2025-04-29 14:53:04 +08:00
jxxghp
b50599b71f fix: improve security 2025-04-29 14:30:34 +08:00
jxxghp
0459025bf8 Merge pull request #4207 from monster-fire/v2 2025-04-28 19:37:52 +08:00
monster-fire
0bd37da8c7 Update __init__.py: add null checks 2025-04-28 18:46:48 +08:00
jxxghp
da969dde53 fix: TMDB language setting 2025-04-28 12:11:48 +08:00
jxxghp
33fdd6cafa feat: support setting the TMDB language 2025-04-28 09:10:38 +08:00
jxxghp
2fe68766eb Merge remote-tracking branch 'origin/v2' into v2 2025-04-28 09:07:42 +08:00
jxxghp
205348697c fix #4188 2025-04-27 12:26:49 +08:00
jxxghp
9b3533c1da Merge pull request #4199 from cddjr/fix_bing 2025-04-27 06:53:00 +08:00
景大侠
c3584e838e fix: Bing wallpapers not displayed when the global image cache is enabled 2025-04-27 00:17:29 +08:00
jxxghp
16d8b3fb58 Merge pull request #4187 from thsrite/v2 2025-04-23 11:53:29 +08:00
thsrite
686bbdc16b fix: add cast names and overview to the subscribe-success message 2025-04-23 11:44:44 +08:00
jxxghp
65b17e4f2b v2.4.2
- Fix an issue where normal users could not select sites when jumping to search from a media card; normal users cannot change search sites and will search the admin's preset sites directly
2025-04-22 17:35:30 +08:00
jxxghp
23c6898789 update nginx.template.conf 2025-04-21 21:42:12 +08:00
jxxghp
df2a1be2a2 update nginx.template.conf 2025-04-21 21:33:00 +08:00
jxxghp
2db628a2ba v2.4.1
This release mainly updates the user interface:
- New transparent theme style
- Redesigned bottom navigation bar in PWA mode
- Many UI detail improvements
2025-04-21 20:05:53 +08:00
jxxghp
b6c40436c9 Merge pull request #4165 from wikrin/v2 2025-04-19 22:36:48 +08:00
Attente
a8a70cac08 refactor(db): optimize download history query logic
- use the same logic as `TransferHistory.list_by`
2025-04-19 20:22:37 +08:00
jxxghp
3eefbf97b1 update plex.py 2025-04-19 15:14:47 +08:00
jxxghp
3c423e0838 update jellyfin.py 2025-04-19 15:14:14 +08:00
jxxghp
99cde43954 update emby.py 2025-04-19 15:13:33 +08:00
jxxghp
fa3a787bf7 update mediaserver.py 2025-04-19 15:12:42 +08:00
jxxghp
c776dc8036 feat: WebhookMessage.json 2025-04-19 07:59:59 +08:00
jxxghp
1ef068351d fix docker 2025-04-17 19:36:54 +08:00
jxxghp
6abe0a1862 fix version 2025-04-17 19:15:18 +08:00
jxxghp
ff13045f52 fix build 2025-04-17 12:44:22 +08:00
jxxghp
59c09681cb fix build 2025-04-17 11:49:07 +08:00
jxxghp
f664cf6fa5 remove built-lite 2025-04-17 11:47:24 +08:00
jxxghp
01a847a9c2 test beta 2025-04-17 11:43:42 +08:00
jxxghp
6da655f67f Merge pull request #4154 from TimoYoung/v2 2025-04-16 12:41:15 +08:00
TimoYoung
21df7dced1 fix: CookieCloud site sync task failing 2025-04-16 10:26:43 +08:00
jxxghp
7fc257ea79 v2.4.0 2025-04-16 08:11:31 +08:00
jxxghp
24f170ff72 fix search cache 2025-04-16 08:10:48 +08:00
jxxghp
39999c9ee4 update Dockerfile 2025-04-15 06:54:11 +08:00
jxxghp
27a5188e4e update Dockerfile.lite 2025-04-15 06:52:53 +08:00
jxxghp
a5af0786aa - fix UI errors 2025-04-13 16:03:40 +08:00
jxxghp
e9c9cfaa72 Merge pull request #4137 from lddsb/patch-1 2025-04-11 16:06:29 +08:00
Dee Luo
8ca4ea0f3f perf: optimize qBittorrent downloader port retrieval logic 2025-04-11 15:43:40 +08:00
jxxghp
86e1f9a9d6 Merge pull request #4136 from lddsb/patch-3 2025-04-11 11:43:26 +08:00
Dee Luo
b36ceda585 fix: Rename groups to groups.py 2025-04-11 11:22:29 +08:00
Dee Luo
27a3e6c6db feat: add unit tests for release groups 2025-04-11 11:21:39 +08:00
Dee Luo
a731327c00 feat: add unit test cases for release groups 2025-04-11 11:20:36 +08:00
Dee Luo
737c00978e perf: improve release-group matching, fixing some Web groups not being matched
Add matching rules for two sites' release groups
2025-04-11 11:18:15 +08:00
jxxghp
18bcb3a067 fix #4118 2025-04-10 19:40:22 +08:00
jxxghp
f49f55576f Merge pull request #4128 from lddsb/patch-2 2025-04-10 11:09:12 +08:00
Dee Luo
1bef4f9a4d perf: improve how custom release groups are read, so a list of empty strings no longer affects the final result 2025-04-10 11:00:46 +08:00
Dee Luo
ab1df59f7a fix: an empty list like [""] passed from the frontend broke the emptiness check 2025-04-10 10:51:40 +08:00
214 changed files with 14121 additions and 6620 deletions


@@ -10,7 +10,7 @@ body:
目的是让协作的开发者间清晰的知道「要做什么」和「具体会怎么做」,以及所有的开发者都能公开透明的参与讨论;
以便评估和讨论产生的影响 (遗漏的考虑、向后兼容性、与现有功能的冲突)
因此提案侧重在对解决问题的 **方案、设计、步骤** 的描述上。
如果仅希望讨论是否添加或改进某功能本身,请使用 -> [Issue: 功能改进](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
- type: textarea
id: background


@@ -25,7 +25,9 @@ jobs:
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
images: |
${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
ghcr.io/${{ github.repository }}
tags: |
type=raw,value=${{ env.app_version }}
type=raw,value=latest
@@ -42,11 +44,18 @@ jobs:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build Image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
file: docker/Dockerfile
platforms: |
linux/amd64
linux/arm64/v8
@@ -56,10 +65,22 @@ jobs:
cache-from: type=gha, scope=${{ github.workflow }}-docker
cache-to: type=gha, scope=${{ github.workflow }}-docker
- name: Get existing release body
id: get_release_body
continue-on-error: true
run: |
release_body=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/tags/v${{ env.app_version }}" | \
jq -r '.body // ""')
echo "RELEASE_BODY<<EOF" >> $GITHUB_ENV
echo "$release_body" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
- name: Delete Release
uses: dev-drprasad/delete-tag-and-release@v1.1
continue-on-error: true
with:
tag_name: ${{ env.app_version }}
tag_name: v${{ env.app_version }}
delete_release: true
github_token: ${{ secrets.GITHUB_TOKEN }}
@@ -68,6 +89,7 @@ jobs:
with:
tag_name: v${{ env.app_version }}
name: v${{ env.app_version }}
body: ${{ env.RELEASE_BODY }}
draft: false
prerelease: false
make_latest: false
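The `jq -r '.body // ""'` filter in the "Get existing release body" step falls back to an empty string when the release JSON has no `body` (jq's `//` alternative operator treats `null` and missing fields the same). The same null-safe fallback, sketched in Python with the field name mirroring the GitHub API response:

```python
import json

def release_body(payload: str) -> str:
    """Mimic jq's '.body // ""': return body, or "" when it is missing or null."""
    data = json.loads(payload)
    body = data.get("body")
    return body if body is not None else ""

print(release_body('{"tag_name": "v2.4.0", "body": "notes"}'))  # notes
print(release_body('{"tag_name": "v2.4.0", "body": null}'))     # (empty string)
```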

View File

@@ -1,55 +0,0 @@
name: MoviePilot Builder v2 Lite
on:
workflow_dispatch:
push:
branches:
- v2
paths:
- 'version.py'
jobs:
Docker-build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Release version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
tags: |
type=raw,value=lite-latest
- name: Set Up QEMU
uses: docker/setup-qemu-action@v3
- name: Set Up Buildx
uses: docker/setup-buildx-action@v3
- name: Login DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build Image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile.lite
platforms: |
linux/amd64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}-docker
cache-to: type=gha, scope=${{ github.workflow }}-docker
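The `sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp"` step above extracts the bare version number from `version.py`. A rough Python equivalent of that extraction (the sample line only mirrors the `APP_VERSION = 'vX.Y.Z'` format the sed pattern expects):

```python
import re
from typing import Optional

def extract_app_version(text: str) -> Optional[str]:
    """Match APP_VERSION = 'vX.Y.Z' and return X.Y.Z, like the sed substitution."""
    m = re.search(r"APP_VERSION\s=\s'v(.*)'", text)
    return m.group(1) if m else None

print(extract_app_version("APP_VERSION = 'v2.4.0'"))  # 2.4.0
```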

32
.github/workflows/issues.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: Close inactive issues
on:
workflow_dispatch:
schedule:
# Github Action 只支持 UTC 时间。
# '0 18 * * *' 对应 UTC 时间的 18:00也就是中国时区 (UTC+8) 的第二天凌晨 02:00。
- cron: "0 18 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
# 标记 stale 标签时间
days-before-issue-stale: 30
# 关闭 issues 标签时间
days-before-issue-close: 14
# 自定义标签名
stale-issue-label: "stale"
stale-issue-message: "此问题已过时,因为它已打开 30 天且没有任何活动。"
close-issue-message: "此问题已关闭,因为它在标记为 stale 后,已处于无更新状态 14 天。"
# 忽略所有的 Pull Request只处理 Issue
days-before-pr-stale: -1
days-before-pr-close: -1
# 排除带有RFC标签的issue
exempt-issue-labels: "RFC"
repo-token: ${{ secrets.GITHUB_TOKEN }}
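The comment in the workflow above notes that GitHub Actions cron schedules run in UTC, so `0 18 * * *` fires at 02:00 the next day in China (UTC+8). The conversion can be checked with the standard library:

```python
from datetime import datetime, timedelta, timezone

# cron "0 18 * * *" -> 18:00 UTC on some day (date here is arbitrary)
utc = datetime(2025, 4, 1, 18, 0, tzinfo=timezone.utc)
# China Standard Time is a fixed UTC+8 offset
cst = utc.astimezone(timezone(timedelta(hours=8)))
print(cst.strftime("%Y-%m-%d %H:%M"))  # 2025-04-02 02:00 -- early next morning
```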

.github/workflows/pylint.yml (new file, 91 lines)

@@ -0,0 +1,91 @@
name: Pylint Code Quality Check
on:
# 允许手动触发
workflow_dispatch:
jobs:
pylint:
runs-on: ubuntu-latest
name: Pylint Code Quality Check
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
cache: 'pip'
- name: Cache pip dependencies
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt', '**/requirements.in') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools wheel
pip install pylint
# 安装项目依赖
if [ -f requirements.txt ]; then
echo "📦 安装 requirements.txt 中的依赖..."
pip install -r requirements.txt
elif [ -f requirements.in ]; then
echo "📦 安装 requirements.in 中的依赖..."
pip install -r requirements.in
else
echo "⚠️ 未找到依赖文件,仅安装 pylint"
fi
- name: Verify pylint config
run: |
# 检查项目中的pylint配置文件是否存在
if [ -f .pylintrc ]; then
echo "✅ 找到项目配置文件: .pylintrc"
echo "配置文件内容预览:"
head -10 .pylintrc
else
echo "❌ 未找到 .pylintrc 配置文件"
exit 1
fi
- name: Run pylint
run: |
# 运行pylint检查主要的Python文件
echo "🚀 运行 Pylint 错误检查..."
# 检查主要目录 - 只关注错误,如果有错误则退出
echo "📂 检查 app/ 目录..."
pylint app/ --output-format=colorized --reports=yes --score=yes
# 检查根目录的Python文件
echo "📂 检查根目录 Python 文件..."
for file in $(find . -name "*.py" -not -path "./.*" -not -path "./.venv/*" -not -path "./build/*" -not -path "./dist/*" -not -path "./tests/*" -not -path "./docs/*" -not -path "./__pycache__/*" -maxdepth 1); do
echo "检查文件: $file"
pylint "$file" --output-format=colorized || exit 1
done
# 生成详细报告
echo "📊 生成 Pylint 详细报告..."
pylint app/ --output-format=json > pylint-report.json || true
# 显示评分(仅供参考)
echo "📈 Pylint 评分(仅供参考):"
pylint app/ --score=yes --reports=no | tail -2 || true
- name: Upload pylint report
uses: actions/upload-artifact@v4
if: always()
with:
name: pylint-report
path: pylint-report.json
- name: Summary
run: |
echo "🎉 Pylint 检查完成!"
echo "✅ 没有发现语法错误或严重问题"
echo "📊 详细报告已保存为构建工件"

.gitignore (6 changed lines)

@@ -23,4 +23,8 @@ config/cache/
*.pyc
*.log
.vscode
venv
venv
# Pylint
pylint-report.json
.pylint.d/

.pylintrc (new file, 83 lines)

@@ -0,0 +1,83 @@
[MASTER]
# 指定Python路径
init-hook='import sys; sys.path.append(".")'
# 忽略的文件和目录
ignore=.git,__pycache__,.venv,build,dist,tests,docs
# 并行作业数量
jobs=0
[MESSAGES CONTROL]
# 只关注错误级别的问题,禁用警告、约定和重构建议
# E = Error (错误) - 会导致构建失败
# W = Warning (警告) - 仅显示,不会失败
# R = Refactor (重构建议) - 仅显示,不会失败
# C = Convention (约定) - 仅显示,不会失败
# I = Information (信息) - 仅显示,不会失败
# 禁用大部分警告、约定和重构建议,只保留错误和重要警告
disable=all
enable=error,
syntax-error,
undefined-variable,
used-before-assignment,
unreachable,
return-outside-function,
yield-outside-function,
continue-in-finally,
nonlocal-without-binding,
undefined-loop-variable,
redefined-builtin,
not-callable,
assignment-from-no-return,
no-value-for-parameter,
too-many-function-args,
unexpected-keyword-arg,
redundant-keyword-arg,
import-error,
relative-beyond-top-level
[REPORTS]
# 设置报告格式
output-format=colorized
reports=yes
score=yes
[FORMAT]
# 最大行长度
max-line-length=120
# 缩进大小
indent-string=' '
[DESIGN]
# 最大参数数量
max-args=10
# 最大本地变量数量
max-locals=20
# 最大分支数量
max-branches=15
# 最大语句数量
max-statements=50
# 最大父类数量
max-parents=7
# 最大属性数量
max-attributes=10
# 最小公共方法数量
min-public-methods=1
# 最大公共方法数量
max-public-methods=25
[SIMILARITIES]
# 最小相似行数
min-similarity-lines=6
# 忽略注释
ignore-comments=yes
# 忽略文档字符串
ignore-docstrings=yes
# 忽略导入
ignore-imports=yes
[TYPECHECK]
# 生成缺失成员提示的类列表
generated-members=requests.packages.urllib3
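Several of the checks enabled in this `.pylintrc` (for example `used-before-assignment` and `undefined-variable`) catch statically what would otherwise only surface at runtime. A minimal illustration of the failure mode that `used-before-assignment` flags ahead of time:

```python
def total(items):
    for _ in items:
        count += 1  # pylint: used-before-assignment -- count is never initialized
    return count

try:
    total([1, 2, 3])
except UnboundLocalError as e:
    print(f"runtime failure pylint would have caught statically: {e}")
```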


@@ -1,93 +0,0 @@
FROM python:3.12.8-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
CONFIG_DIR="/config" \
TERM="xterm" \
DISPLAY=:987 \
PUID=0 \
PGID=0 \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
MOVIEPILOT_AUTO_UPDATE=release
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install \
musl-dev \
nginx \
gettext-base \
locales \
procps \
gosu \
bash \
wget \
curl \
busybox \
dumb-init \
jq \
fuse3 \
rsync \
ffmpeg \
nano \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
elif [ "$(uname -m)" = "aarch64" ]; \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& curl https://rclone.org/install.sh | bash \
&& curl --insecure -fsSL https://raw.githubusercontent.com/DDS-Derek/Aria2-Pro-Core/master/aria2-install.sh | bash \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.in requirements.in
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython pip-tools \
&& pip-compile requirements.in \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& apt-get remove -y build-essential \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& cp -f /app/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} \
&& groupadd -r moviepilot -g 918 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 918 \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& python3 /app/setup.py \
&& find /app/app -type f -name "*.py" ! -path "/app/app/main.py" -exec rm -f {} \; \
&& FRONTEND_VERSION=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /app/version.py) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Plugins-main/plugins.v2/* /app/app/plugins/ \
&& cat /tmp/MoviePilot-Plugins-main/package.json | jq -r 'to_entries[] | select(.value.v2 == true) | .key' | awk '{print tolower($0)}' | \
while read -r i; do if [ ! -d "/app/app/plugins/$i" ]; then mv "/tmp/MoviePilot-Plugins-main/plugins/$i" "/app/app/plugins/"; else echo "跳过 $i"; fi; done \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Resources-main/resources/* /app/app/helper/ \
&& rm -rf /tmp/* /app/build
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]


@@ -26,37 +26,31 @@ class AddDownloadAction(BaseAction):
添加下载资源
"""
# 已添加的下载
_added_downloads = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.downloadchain = DownloadChain()
self.mediachain = MediaChain()
self._added_downloads = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
def name(cls) -> str: # noqa
return "添加下载"
@classmethod
@property
def description(cls) -> str: # noqa
def description(cls) -> str: # noqa
return "根据资源列表添加下载任务"
@classmethod
@property
def data(cls) -> dict: # noqa
def data(cls) -> dict: # noqa
return AddDownloadParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
将上下文中的torrents添加到下载任务中
"""
@@ -73,13 +67,13 @@ class AddDownloadAction(BaseAction):
if not t.meta_info:
t.meta_info = MetaInfo(title=t.torrent_info.title, subtitle=t.torrent_info.description)
if not t.media_info:
t.media_info = self.mediachain.recognize_media(meta=t.meta_info)
t.media_info = MediaChain().recognize_media(meta=t.meta_info)
if not t.media_info:
self._has_error = True
logger.warning(f"{t.torrent_info.title} 未识别到媒体信息,无法下载")
continue
if params.only_lack:
exists_info = self.downloadchain.media_exists(t.media_info)
exists_info = DownloadChain().media_exists(t.media_info)
if exists_info:
if t.media_info.type == MediaType.MOVIE:
# 电影
@@ -96,14 +90,15 @@ class AddDownloadAction(BaseAction):
exists_episodes = exists_seasons.get(t.meta_info.begin_season)
if exists_episodes:
if set(t.meta_info.episode_list).issubset(exists_episodes):
logger.warning(f"{t.meta_info.title}{t.meta_info.begin_season} 季第 {t.meta_info.episode_list} 集已存在,跳过")
logger.warning(
f"{t.meta_info.title}{t.meta_info.begin_season} 季第 {t.meta_info.episode_list} 集已存在,跳过")
continue
_started = True
did = self.downloadchain.download_single(context=t,
downloader=params.downloader,
save_path=params.save_path,
label=params.labels)
did = DownloadChain().download_single(context=t,
downloader=params.downloader,
save_path=params.save_path,
label=params.labels)
if did:
self._added_downloads.append(did)
# 保存缓存
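The diff above moves list initializers such as `_added_downloads = []` from the class body into `__init__`. That matters because class-level mutable attributes are shared by every instance; a minimal sketch (hypothetical class names) of the pitfall the change avoids:

```python
class Shared:
    items = []           # class attribute: one list shared by every instance

class PerInstance:
    def __init__(self):
        self.items = []  # instance attribute: a fresh list per object

a, b = Shared(), Shared()
a.items.append("x")
print(b.items)           # ['x'] -- the append leaked into the other instance

c, d = PerInstance(), PerInstance()
c.items.append("x")
print(d.items)           # [] -- each instance keeps its own state
```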


@@ -19,29 +19,24 @@ class AddSubscribeAction(BaseAction):
添加订阅
"""
_added_subscribes = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.subscribechain = SubscribeChain()
self.subscribeoper = SubscribeOper()
self._added_subscribes = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
def name(cls) -> str: # noqa
return "添加订阅"
@classmethod
@property
def description(cls) -> str: # noqa
def description(cls) -> str: # noqa
return "根据媒体列表添加订阅"
@classmethod
@property
def data(cls) -> dict: # noqa
def data(cls) -> dict: # noqa
return AddSubscribeParams().dict()
@property
@@ -63,19 +58,20 @@ class AddSubscribeAction(BaseAction):
continue
mediainfo = MediaInfo()
mediainfo.from_dict(media.dict())
if self.subscribechain.exists(mediainfo):
subscribechain = SubscribeChain()
if subscribechain.exists(mediainfo):
logger.info(f"{media.title} 已存在订阅")
continue
# 添加订阅
_started = True
sid, message = self.subscribechain.add(mtype=mediainfo.type,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
username=settings.SUPERUSER)
sid, message = subscribechain.add(mtype=mediainfo.type,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
username=settings.SUPERUSER)
if sid:
self._added_subscribes.append(sid)
# 保存缓存
@@ -84,7 +80,7 @@ class AddSubscribeAction(BaseAction):
if self._added_subscribes:
logger.info(f"已添加 {len(self._added_subscribes)} 个订阅")
for sid in self._added_subscribes:
context.subscribes.append(self.subscribeoper.get(sid))
context.subscribes.append(SubscribeOper().get(sid))
elif _started:
self._has_error = True
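These diffs replace chain objects cached in `__init__` (e.g. `self.subscribechain`) with fresh constructions like `SubscribeChain()` at the point of use. Repeated construction is only cheap if the chain classes return a shared instance; that is an assumption here, since the singleton machinery is not shown in this diff. A minimal sketch of that pattern:

```python
class Singleton(type):
    """Metaclass: constructing the class repeatedly returns one shared instance."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class SubscribeChainLike(metaclass=Singleton):  # hypothetical stand-in, not MoviePilot's class
    pass

print(SubscribeChainLike() is SubscribeChainLike())  # True -- same object every call
```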


@@ -16,11 +16,8 @@ class FetchDownloadsAction(BaseAction):
获取下载任务
"""
_downloads = []
def __init__(self, action_id: str):
super().__init__(action_id)
self.chain = ActionChain()
self._downloads = []
@classmethod
@@ -51,7 +48,7 @@ class FetchDownloadsAction(BaseAction):
if global_vars.is_workflow_stopped(workflow_id):
break
logger.info(f"获取下载任务 {download.download_id} 状态 ...")
torrents = self.chain.list_torrents(hashs=[download.download_id])
torrents = ActionChain().list_torrents(hashs=[download.download_id])
if not torrents:
download.completed = True
continue


@@ -27,10 +27,6 @@ class FetchMediasAction(BaseAction):
获取媒体数据
"""
_inner_sources = []
_medias = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
@@ -40,54 +36,67 @@ class FetchMediasAction(BaseAction):
{
"func": RecommendChain().tmdb_trending,
"name": '流行趋势',
"api_path": "recommend/tmdb_trending"
},
{
"func": RecommendChain().douban_movie_showing,
"name": '正在热映',
"api_path": "recommend/douban_showing"
},
{
"func": RecommendChain().bangumi_calendar,
"name": 'Bangumi每日放送',
"api_path": "recommend/bangumi_calendar"
},
{
"func": RecommendChain().tmdb_movies,
"name": 'TMDB热门电影',
"api_path": "recommend/tmdb_movies"
},
{
"func": RecommendChain().tmdb_tvs,
"name": 'TMDB热门电视剧',
"api_path": "recommend/tmdb_tvs?with_original_language=zh|en|ja|ko"
},
{
"func": RecommendChain().douban_movie_hot,
"name": '豆瓣热门电影',
"api_path": "recommend/douban_movie_hot"
},
{
"func": RecommendChain().douban_tv_hot,
"name": '豆瓣热门电视剧',
"api_path": "recommend/douban_tv_hot"
},
{
"func": RecommendChain().douban_tv_animation,
"name": '豆瓣热门动漫',
"api_path": "recommend/douban_tv_animation"
},
{
"func": RecommendChain().douban_movies,
"name": '豆瓣最新电影',
"api_path": "recommend/douban_movies"
},
{
"func": RecommendChain().douban_tvs,
"name": '豆瓣最新电视剧',
"api_path": "recommend/douban_tvs"
},
{
"func": RecommendChain().douban_movie_top250,
"name": '豆瓣电影TOP250',
"api_path": "recommend/douban_movie_top250"
},
{
"func": RecommendChain().douban_tv_weekly_chinese,
"name": '豆瓣国产剧集榜',
"api_path": "recommend/douban_tv_weekly_chinese"
},
{
"func": RecommendChain().douban_tv_weekly_global,
"name": '豆瓣全球剧集榜',
"api_path": "recommend/douban_tv_weekly_global"
}
]
@@ -124,7 +133,7 @@ class FetchMediasAction(BaseAction):
获取数据源
"""
for s in self.__inner_sources:
if s['name'] == source:
if s['api_path'] == source:
return s
return None
@@ -135,13 +144,14 @@ class FetchMediasAction(BaseAction):
params = FetchMediasParams(**params)
try:
if params.source_type == "ranking":
for name in params.sources:
for api_path in params.sources:
if global_vars.is_workflow_stopped(workflow_id):
break
source = self.__get_source(name)
source = self.__get_source(api_path)
if not source:
continue
logger.info(f"获取媒体数据 {source} ...")
name = source.get("name")
results = []
if source.get("func"):
results = source['func']()


@@ -29,29 +29,24 @@ class FetchRssAction(BaseAction):
获取RSS资源列表
"""
_rss_torrents = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.rsshelper = RssHelper()
self.chain = ActionChain()
self._rss_torrents = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
def name(cls) -> str: # noqa
return "获取RSS资源"
@classmethod
@property
def description(cls) -> str: # noqa
def description(cls) -> str: # noqa
return "订阅RSS地址获取资源"
@classmethod
@property
def data(cls) -> dict: # noqa
def data(cls) -> dict: # noqa
return FetchRssParams().dict()
@property
@@ -74,10 +69,10 @@ class FetchRssAction(BaseAction):
if params.ua:
headers["User-Agent"] = params.ua
rss_items = self.rsshelper.parse(url=params.url,
proxy=settings.PROXY if params.proxy else None,
timeout=params.timeout,
headers=headers)
rss_items = RssHelper().parse(url=params.url,
proxy=settings.PROXY if params.proxy else None,
timeout=params.timeout,
headers=headers)
if rss_items is None or rss_items is False:
logger.error(f'RSS地址 {params.url} 请求失败!')
self._has_error = True
@@ -103,7 +98,7 @@ class FetchRssAction(BaseAction):
meta = MetaInfo(title=torrentinfo.title, subtitle=torrentinfo.description)
mediainfo = None
if params.match_media:
mediainfo = self.chain.recognize_media(meta)
mediainfo = ActionChain().recognize_media(meta)
if not mediainfo:
logger.warning(f"{torrentinfo.title} 未识别到媒体信息")
continue


@@ -29,26 +29,23 @@ class FetchTorrentsAction(BaseAction):
搜索站点资源
"""
_torrents = []
def __init__(self, action_id: str):
super().__init__(action_id)
self.searchchain = SearchChain()
self._torrents = []
@classmethod
@property
def name(cls) -> str: # noqa
def name(cls) -> str: # noqa
return "搜索站点资源"
@classmethod
@property
def description(cls) -> str: # noqa
def description(cls) -> str: # noqa
return "搜索站点种子资源列表"
@classmethod
@property
def data(cls) -> dict: # noqa
def data(cls) -> dict: # noqa
return FetchTorrentsParams().dict()
@property
@@ -60,9 +57,10 @@ class FetchTorrentsAction(BaseAction):
搜索站点,获取资源列表
"""
params = FetchTorrentsParams(**params)
searchchain = SearchChain()
if params.search_type == "keyword":
# 按关键字搜索
torrents = self.searchchain.search_by_title(title=params.name, sites=params.sites, cache_local=False)
torrents = searchchain.search_by_title(title=params.name, sites=params.sites)
for torrent in torrents:
if global_vars.is_workflow_stopped(workflow_id):
break
@@ -74,7 +72,7 @@ class FetchTorrentsAction(BaseAction):
continue
# 识别媒体信息
if params.match_media:
torrent.media_info = self.searchchain.recognize_media(torrent.meta_info)
torrent.media_info = searchchain.recognize_media(torrent.meta_info)
if not torrent.media_info:
logger.warning(f"{torrent.torrent_info.title} 未识别到媒体信息")
continue
@@ -84,10 +82,10 @@ class FetchTorrentsAction(BaseAction):
for media in context.medias:
if global_vars.is_workflow_stopped(workflow_id):
break
torrents = self.searchchain.search_by_id(tmdbid=media.tmdb_id,
doubanid=media.douban_id,
mtype=MediaType(media.type),
sites=params.sites)
torrents = searchchain.search_by_id(tmdbid=media.tmdb_id,
doubanid=media.douban_id,
mtype=MediaType(media.type),
sites=params.sites)
for torrent in torrents:
self._torrents.append(torrent)


@@ -22,8 +22,6 @@ class FilterMediasAction(BaseAction):
过滤媒体数据
"""
_medias = []
def __init__(self, action_id: str):
super().__init__(action_id)
self._medias = []


@@ -27,12 +27,8 @@ class FilterTorrentsAction(BaseAction):
过滤资源数据
"""
_torrents = []
def __init__(self, action_id: str):
super().__init__(action_id)
self.torrenthelper = TorrentHelper()
self.chain = ActionChain()
self._torrents = []
@classmethod
@@ -62,7 +58,7 @@ class FilterTorrentsAction(BaseAction):
for torrent in context.torrents:
if global_vars.is_workflow_stopped(workflow_id):
break
if self.torrenthelper.filter_torrent(
if TorrentHelper().filter_torrent(
torrent_info=torrent.torrent_info,
filter_params={
"quality": params.quality,
@@ -73,7 +69,7 @@ class FilterTorrentsAction(BaseAction):
"size": params.size
}
):
if self.chain.filter_torrents(
if ActionChain().filter_torrents(
rule_groups=params.rule_groups,
torrent_list=[torrent.torrent_info],
mediainfo=torrent.media_info


@@ -0,0 +1,70 @@
from pydantic import Field
from app.actions import BaseAction
from app.core.plugin import PluginManager
from app.log import logger
from app.schemas import ActionParams, ActionContext
class InvokePluginParams(ActionParams):
"""
调用插件动作参数
"""
plugin_id: str = Field(default=None, description="插件ID")
action_id: str = Field(default=None, description="动作ID")
action_params: dict = Field(default={}, description="动作参数")
class InvokePluginAction(BaseAction):
"""
调用插件
"""
def __init__(self, action_id: str):
super().__init__(action_id)
self._success = False
@classmethod
@property
def name(cls) -> str: # noqa
return "调用插件"
@classmethod
@property
def description(cls) -> str: # noqa
return "调用插件提供的动作"
@classmethod
@property
def data(cls) -> dict: # noqa
return InvokePluginParams().dict()
@property
def success(self) -> bool:
return self._success
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
执行插件定义的动作
"""
params = InvokePluginParams(**params)
if not params.plugin_id or not params.action_id:
return context
try:
plugin_actions = PluginManager().get_plugin_actions(params.plugin_id)
if not plugin_actions:
logger.error(f"插件不存在: {params.plugin_id}")
return context
actions = plugin_actions[0].get("actions", [])
action = next((action for action in actions if action.action_id == params.action_id), None)
if not action or not action.get("func"):
logger.error(f"插件动作不存在: {params.plugin_id} - {params.action_id}")
return context
# 执行插件动作
self._success, context = action["func"](context, **params.action_params)
except Exception as e:
self._success = False
logger.error(f"调用插件动作失败: {e}")
return context
self.job_done()
return context
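`InvokePluginAction` above looks up the requested action with `next()` and a `None` default, then calls the registered function. The lookup-and-dispatch pattern in isolation (names and the `(success, context)` return shape here are illustrative, not MoviePilot's actual API):

```python
actions = [
    {"action_id": "greet", "func": lambda ctx, name="world": (True, f"hello {name}")},
    {"action_id": "noop",  "func": None},  # registered but has no callable
]

def invoke(action_id, ctx, **params):
    # next() with a default avoids StopIteration when nothing matches
    action = next((a for a in actions if a["action_id"] == action_id), None)
    if not action or not action.get("func"):
        return False, ctx
    return action["func"](ctx, **params)

print(invoke("greet", {}, name="plugin"))  # (True, 'hello plugin')
print(invoke("missing", {}))               # (False, {})
```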


@@ -24,12 +24,8 @@ class ScanFileAction(BaseAction):
整理文件
"""
_fileitems = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.storagechain = StorageChain()
self._fileitems = []
self._has_error = False
@@ -59,12 +55,13 @@ class ScanFileAction(BaseAction):
params = ScanFileParams(**params)
if not params.storage or not params.directory:
return context
fileitem = self.storagechain.get_file_item(params.storage, Path(params.directory))
storagechain = StorageChain()
fileitem = storagechain.get_file_item(params.storage, Path(params.directory))
if not fileitem:
logger.error(f"目录不存在: 【{params.storage}{params.directory}")
self._has_error = True
return context
files = self.storagechain.list_files(fileitem, recursion=True)
files = storagechain.list_files(fileitem, recursion=True)
for file in files:
if global_vars.is_workflow_stopped(workflow_id):
break

View File

@@ -21,13 +21,8 @@ class ScrapeFileAction(BaseAction):
刮削文件
"""
_scraped_files = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.storagechain = StorageChain()
self.mediachain = MediaChain()
self._scraped_files = []
self._has_error = False
@@ -61,7 +56,7 @@ class ScrapeFileAction(BaseAction):
break
if fileitem in self._scraped_files:
continue
if not self.storagechain.exists(fileitem):
if not StorageChain().exists(fileitem):
continue
# 检查缓存
cache_key = f"{fileitem.path}"
@@ -69,12 +64,13 @@ class ScrapeFileAction(BaseAction):
logger.info(f"{fileitem.path} 已刮削过,跳过")
continue
meta = MetaInfoPath(Path(fileitem.path))
mediainfo = self.mediachain.recognize_media(meta)
mediachain = MediaChain()
mediainfo = mediachain.recognize_media(meta)
if not mediainfo:
_failed_count += 1
logger.info(f"{fileitem.path} 未识别到媒体信息,无法刮削")
continue
self.mediachain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
mediachain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
self._scraped_files.append(fileitem)
# 保存缓存
self.save_cache(workflow_id, cache_key)

View File

@@ -4,7 +4,7 @@ from pydantic import Field
from app.actions import BaseAction, ActionChain
from app.schemas import ActionParams, ActionContext, Notification
from core.config import settings
from app.core.config import settings
class SendMessageParams(ActionParams):
@@ -22,7 +22,6 @@ class SendMessageAction(BaseAction):
def __init__(self, action_id: str):
super().__init__(action_id)
self.chain = ActionChain()
@classmethod
@property
@@ -60,7 +59,7 @@ class SendMessageAction(BaseAction):
if not params.client:
params.client = [""]
for client in params.client:
self.chain.post_message(
ActionChain().post_message(
Notification(
source=client,
userid=params.userid,

View File

@@ -26,30 +26,24 @@ class TransferFileAction(BaseAction):
整理文件
"""
_fileitems = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.transferchain = TransferChain()
self.storagechain = StorageChain()
self.transferhis = TransferHistoryOper()
self._fileitems = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
def name(cls) -> str: # noqa
return "整理文件"
@classmethod
@property
def description(cls) -> str: # noqa
def description(cls) -> str: # noqa
return "整理队列中的文件"
@classmethod
@property
def data(cls) -> dict: # noqa
def data(cls) -> dict: # noqa
return TransferFileParams().dict()
@property
@@ -72,6 +66,9 @@ class TransferFileAction(BaseAction):
params = TransferFileParams(**params)
# 失败次数
_failed_count = 0
storagechain = StorageChain()
transferchain = TransferChain()
transferhis = TransferHistoryOper()
if params.source == "downloads":
# 从下载任务中整理文件
for download in context.downloads:
@@ -85,16 +82,16 @@ class TransferFileAction(BaseAction):
if self.check_cache(workflow_id, cache_key):
logger.info(f"{download.path} 已整理过,跳过")
continue
fileitem = self.storagechain.get_file_item(storage="local", path=Path(download.path))
fileitem = storagechain.get_file_item(storage="local", path=Path(download.path))
if not fileitem:
logger.info(f"文件 {download.path} 不存在")
continue
transferd = self.transferhis.get_by_src(fileitem.path, storage=fileitem.storage)
transferd = transferhis.get_by_src(fileitem.path, storage=fileitem.storage)
if transferd:
# 已经整理过的文件不再整理
continue
logger.info(f"开始整理文件 {download.path} ...")
state, errmsg = self.transferchain.do_transfer(fileitem, background=False)
state, errmsg = transferchain.do_transfer(fileitem, background=False)
if not state:
_failed_count += 1
logger.error(f"整理文件 {download.path} 失败: {errmsg}")
@@ -112,13 +109,13 @@ class TransferFileAction(BaseAction):
if self.check_cache(workflow_id, cache_key):
logger.info(f"{fileitem.path} 已整理过,跳过")
continue
transferd = self.transferhis.get_by_src(fileitem.path, storage=fileitem.storage)
transferd = transferhis.get_by_src(fileitem.path, storage=fileitem.storage)
if transferd:
# 已经整理过的文件不再整理
continue
logger.info(f"开始整理文件 {fileitem.path} ...")
state, errmsg = self.transferchain.do_transfer(fileitem, background=False,
continue_callback=check_continue)
state, errmsg = transferchain.do_transfer(fileitem, background=False,
continue_callback=check_continue)
if not state:
_failed_count += 1
logger.error(f"整理文件 {fileitem.path} 失败: {errmsg}")

View File

@@ -2,7 +2,7 @@ from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage, discover, recommend, workflow
transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -27,3 +27,4 @@ api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])
api_router.include_router(discover.router, prefix="/discover", tags=["discover"])
api_router.include_router(recommend.router, prefix="/recommend", tags=["recommend"])
api_router.include_router(workflow.router, prefix="/workflow", tags=["workflow"])
api_router.include_router(torrent.router, prefix="/torrent", tags=["torrent"])

View File

@@ -166,3 +166,19 @@ def memory2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
获取当前内存使用率,API_TOKEN认证(?token=xxx)
"""
return memory()
@router.get("/network", summary="获取当前网络流量", response_model=List[int])
def network(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前网络流量(上行和下行流量),单位:bytes/s
"""
return SystemUtils.network_usage()
@router.get("/network2", summary="获取当前网络流量(API_TOKEN)", response_model=List[int])
def network2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前网络流量,API_TOKEN认证(?token=xxx)
"""
return network()
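The new endpoint returns `[up, down]` in bytes/s; `SystemUtils.network_usage` presumably samples cumulative interface counters twice and divides the delta by the sampling interval. A pure-Python sketch of that rate calculation (counter values are illustrative, not real measurements):

```python
# 两次采样得到的累计网络计数器(字节数,数值仅为演示假设)
sample1 = {"bytes_sent": 1_000_000, "bytes_recv": 5_000_000}
sample2 = {"bytes_sent": 1_200_000, "bytes_recv": 5_800_000}
interval = 2  # 采样间隔(秒)

def network_rate(s1: dict, s2: dict, seconds: int) -> list:
    # 上行、下行速率,单位 bytes/s,对应 API 的 List[int] 返回值
    up = (s2["bytes_sent"] - s1["bytes_sent"]) // seconds
    down = (s2["bytes_recv"] - s1["bytes_recv"]) // seconds
    return [up, down]

rate = network_rate(sample1, sample2, interval)
```

In the real implementation the counters would come from the OS (e.g. a psutil-style per-NIC counter API); only the delta arithmetic is shown here.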

View File

@@ -7,9 +7,9 @@ from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas import DiscoverSourceEventData
from app.schemas.types import ChainEventType, MediaType
from chain.bangumi import BangumiChain
from chain.douban import DoubanChain
from chain.tmdb import TmdbChain
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
router = APIRouter()

View File

@@ -44,6 +44,8 @@ def download(
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
# 手动下载始终使用选择的下载器
torrentinfo.site_downloader = downloader
# 上下文
context = Context(
meta_info=metainfo,
@@ -51,7 +53,7 @@ def download(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path, source="Manual")
save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
@@ -94,22 +96,22 @@ def add(
@router.get("/start/{hashString}", summary="开始任务", response_model=schemas.Response)
def start(
hashString: str,
hashString: str, name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
开始下载任务
"""
ret = DownloadChain().set_downloading(hashString, "start")
ret = DownloadChain().set_downloading(hashString, "start", name=name)
return schemas.Response(success=True if ret else False)
@router.get("/stop/{hashString}", summary="暂停任务", response_model=schemas.Response)
def stop(hashString: str,
def stop(hashString: str, name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
暂停下载任务
"""
ret = DownloadChain().set_downloading(hashString, "stop")
ret = DownloadChain().set_downloading(hashString, "stop", name=name)
return schemas.Response(success=True if ret else False)
@@ -125,10 +127,10 @@ def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def delete(hashString: str,
def delete(hashString: str, name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除下载任务
"""
ret = DownloadChain().remove_downloading(hashString)
ret = DownloadChain().remove_downloading(hashString, name=name)
return schemas.Response(success=True if ret else False)

View File

@@ -6,7 +6,6 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.storage import StorageChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
@@ -59,9 +58,8 @@ def transfer_history(title: Optional[str] = None,
status = True
if title:
if settings.TOKENIZED_SEARCH:
words = jieba.cut(title, HMM=False)
title = "%".join(words)
words = jieba.cut(title, HMM=False)
title = "%".join(words)
total = TransferHistory.count_by_title(db, title=title, status=status)
result = TransferHistory.list_by_title(db, title=title, page=page,
count=count, status=status)
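The change above applies jieba tokenization whenever a title is given, joining the tokens with `%` so the title becomes a fuzzy SQL `LIKE` pattern. A sketch of the pattern-building step, with a pre-tokenized word list standing in for `jieba.cut` (and assuming the query wraps the pattern in surrounding wildcards):

```python
import fnmatch

# jieba.cut(title, HMM=False) 的分词结果(此处用演示词表代替 jieba)
words = ["复仇者", "联盟"]
# 用 % 连接分词,构造 SQL LIKE 的模糊匹配模式
pattern = "%".join(words)

def like_match(text: str, like_pattern: str) -> bool:
    # 用 fnmatch 模拟 SQL LIKE:% 等价于通配符 *,两端允许任意前后缀
    return fnmatch.fnmatchcase(text, "*" + like_pattern.replace("%", "*") + "*")
```

This is why tokenizing helps: `"复仇者%联盟"` matches titles where the tokens appear in order even with characters in between, which a plain substring match would miss.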

View File

@@ -5,13 +5,11 @@ from fastapi import APIRouter, Depends, Form, HTTPException
from fastapi.security import OAuth2PasswordRequestForm
from app import schemas
from app.chain.tmdb import TmdbChain
from app.chain.user import UserChain
from app.chain.mediaserver import MediaServerChain
from app.core import security
from app.core.config import settings
from app.helper.sites import SitesHelper
from app.utils.web import WebUtils
from app.helper.wallpaper import WallpaperHelper
router = APIRouter()
@@ -45,7 +43,8 @@ def login_access_token(
user_id=user_or_message.id,
user_name=user_or_message.name,
avatar=user_or_message.avatar,
level=level
level=level,
permissions=user_or_message.permissions or {},
)
@@ -54,12 +53,7 @@ def wallpaper() -> Any:
"""
获取登录页面电影海报
"""
if settings.WALLPAPER == "bing":
url = WebUtils.get_bing_wallpaper()
elif settings.WALLPAPER == "mediaserver":
url = MediaServerChain().get_latest_wallpaper()
else:
url = TmdbChain().get_random_wallpager()
url = WallpaperHelper().get_wallpaper()
if url:
return schemas.Response(
success=True,
@@ -73,9 +67,4 @@ def wallpapers() -> Any:
"""
获取登录页面电影海报
"""
if settings.WALLPAPER == "bing":
return WebUtils.get_bing_wallpapers()
elif settings.WALLPAPER == "mediaserver":
return MediaServerChain().get_latest_wallpapers()
else:
return TmdbChain().get_trending_wallpapers()
return WallpaperHelper().get_wallpapers()

View File

@@ -149,11 +149,12 @@ def seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any
"""
查询媒体剧集组列表(themoviedb)
"""
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
if not mediainfo:
return []
return mediainfo.episode_groups
@router.get("/seasons", summary="查询媒体季信息", response_model=List[schemas.MediaSeason])
def seasons(mediaid: Optional[str] = None,
title: Optional[str] = None,
@@ -198,7 +199,7 @@ def seasons(mediaid: Optional[str] = None,
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: int = None,
def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体ID查询themoviedb或豆瓣媒体信息,type_name: 电影/电视剧
@@ -219,14 +220,13 @@ def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: int
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
if event and event.event_data and event.event_data.media_dict:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = MediaChain().recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = MediaChain().recognize_media(doubanid=new_id, mtype=mtype)
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = MediaChain().recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = MediaChain().recognize_media(doubanid=new_id, mtype=mtype)
elif title:
# 使用名称识别兜底
meta = MetaInfo(title)

View File

@@ -121,7 +121,7 @@ def not_exists(media_in: schemas.MediaInfo,
@router.get("/latest", summary="最新入库条目", response_model=List[schemas.MediaServerPlayItem])
def latest(server: str, count: Optional[int] = 18,
def latest(server: str, count: Optional[int] = 20,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器最新入库条目

View File

@@ -1,6 +1,10 @@
import mimetypes
import shutil
from typing import Annotated, Any, List, Optional
from fastapi import APIRouter, Depends, Header
from fastapi import APIRouter, Depends, Header, HTTPException
from starlette import status
from starlette.responses import FileResponse
from app import schemas
from app.command import Command
@@ -16,7 +20,6 @@ from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
PROTECTED_ROUTES = {"/api/v1/openapi.json", "/docs", "/docs/oauth2-redirect", "/redoc"}
PLUGIN_PREFIX = f"{settings.API_V1_STR}/plugin"
router = APIRouter()
@@ -66,9 +69,13 @@ def _update_plugin_api_routes(plugin_id: Optional[str], action: str):
try:
api["path"] = api_path
allow_anonymous = api.pop("allow_anonymous", False)
auth_mode = api.pop("auth", "apikey")
dependencies = api.setdefault("dependencies", [])
if not allow_anonymous and Depends(verify_apikey) not in dependencies:
dependencies.append(Depends(verify_apikey))
if not allow_anonymous:
if auth_mode == "bear" and Depends(verify_token) not in dependencies:
dependencies.append(Depends(verify_token))
elif Depends(verify_apikey) not in dependencies:
dependencies.append(Depends(verify_apikey))
app.add_api_route(**api, tags=["plugin"])
is_modified = True
logger.debug(f"Added plugin route: {api_path}")
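The new branch picks each plugin route's auth dependency from its API declaration: `allow_anonymous` skips auth entirely, `auth: "bear"` selects token auth, and anything else falls back to the apikey check. The selection logic can be exercised standalone (the two verify dependencies are stubbed with string markers):

```python
# 占位的两种校验依赖(真实代码中为 Depends(verify_token) / Depends(verify_apikey))
VERIFY_TOKEN = "verify_token"
VERIFY_APIKEY = "verify_apikey"

def select_auth(api: dict) -> list:
    # 复刻上面注册插件路由时的依赖选择逻辑
    allow_anonymous = api.pop("allow_anonymous", False)
    auth_mode = api.pop("auth", "apikey")
    deps = api.setdefault("dependencies", [])
    if not allow_anonymous:
        if auth_mode == "bear" and VERIFY_TOKEN not in deps:
            deps.append(VERIFY_TOKEN)
        elif VERIFY_APIKEY not in deps:
            deps.append(VERIFY_APIKEY)
    return deps
```

Note the `pop`/`setdefault` calls mutate the declaration dict in place, mirroring how the route kwargs are prepared before `app.add_api_route(**api, ...)`.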
@@ -116,9 +123,21 @@ def _clean_protected_routes(existing_paths: dict):
logger.error(f"Error removing protected route {protected_route}: {str(e)}")
def register_plugin(plugin_id: str):
"""
注册一个插件相关的服务
"""
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
state: Optional[str] = "all") -> List[schemas.Plugin]:
state: Optional[str] = "all", force: bool = False) -> List[schemas.Plugin]:
"""
查询所有插件清单,包括本地插件和在线插件,插件状态:installed, market, all
"""
@@ -126,13 +145,13 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
local_plugins = PluginManager().get_local_plugins()
# 已安装插件
installed_plugins = [plugin for plugin in local_plugins if plugin.installed]
# 未安装的本地插件
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
if state == "installed":
return installed_plugins
# 未安装的本地插件
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
# 在线插件
online_plugins = PluginManager().get_online_plugins()
online_plugins = PluginManager().get_online_plugins(force)
if not online_plugins:
# 没有获取在线插件
if state == "market":
@@ -159,6 +178,7 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
if state == "market":
# 返回未安装的插件
return market_plugins
# 返回所有插件
return installed_plugins + market_plugins
@@ -179,6 +199,18 @@ def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return PluginHelper().get_statistic()
@router.get("/reload/{plugin_id}", summary="重新加载插件", response_model=schemas.Response)
def reload_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
重新加载插件
"""
# 重新加载插件
PluginManager().reload_plugin(plugin_id)
# 注册插件服务
register_plugin(plugin_id)
return schemas.Response(success=True)
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install(plugin_id: str,
repo_url: Optional[str] = "",
@@ -207,36 +239,65 @@ def install(plugin_id: str,
install_plugins.append(plugin_id)
# 保存设置
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 加载插件到内存
PluginManager().reload_plugin(plugin_id)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
# 重新加载插件
reload_plugin(plugin_id)
return schemas.Response(success=True)
@router.get("/remotes", summary="获取插件联邦组件列表", response_model=List[dict])
def remotes(token: str) -> Any:
"""
获取插件联邦组件列表
"""
if token != "moviepilot":
raise HTTPException(status_code=403, detail="Forbidden")
return PluginManager().get_plugin_remotes()
@router.get("/form/{plugin_id}", summary="获取插件表单页面")
def plugin_form(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
"""
根据插件ID获取插件配置表单
根据插件ID获取插件配置表单或Vue组件URL
"""
conf, model = PluginManager().get_plugin_form(plugin_id)
return {
"conf": conf,
"model": model
}
plugin_instance = PluginManager().running_plugins.get(plugin_id)
if not plugin_instance:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"插件 {plugin_id} 不存在或未加载")
# 渲染模式
render_mode, _ = plugin_instance.get_render_mode()
try:
conf, model = plugin_instance.get_form()
return {
"render_mode": render_mode,
"conf": conf,
"model": PluginManager().get_plugin_config(plugin_id) or model
}
except Exception as e:
logger.error(f"插件 {plugin_id} 调用方法 get_form 出错: {str(e)}")
return {}
@router.get("/page/{plugin_id}", summary="获取插件数据页面")
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> List[dict]:
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
"""
根据插件ID获取插件数据页面
"""
return PluginManager().get_plugin_page(plugin_id)
plugin_instance = PluginManager().running_plugins.get(plugin_id)
if not plugin_instance:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"插件 {plugin_id} 不存在或未加载")
# 渲染模式
render_mode, _ = plugin_instance.get_render_mode()
try:
page = plugin_instance.get_page()
return {
"render_mode": render_mode,
"page": page or []
}
except Exception as e:
logger.error(f"插件 {plugin_id} 调用方法 get_page 出错: {str(e)}")
return {}
@router.get("/dashboard/meta", summary="获取所有插件仪表板元信息")
@@ -247,22 +308,22 @@ def plugin_dashboard_meta(_: schemas.TokenPayload = Depends(verify_token)) -> Li
return PluginManager().get_plugin_dashboard_meta()
@router.get("/dashboard/{plugin_id}/{key}", summary="获取插件仪表板配置")
def plugin_dashboard_by_key(plugin_id: str, key: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Optional[schemas.PluginDashboard]:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, key, user_agent)
@router.get("/dashboard/{plugin_id}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, user_agent=user_agent)
@router.get("/dashboard/{plugin_id}/{key}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, key: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, key=key, user_agent=user_agent)
return plugin_dashboard_by_key(plugin_id, "", user_agent)
@router.get("/reset/{plugin_id}", summary="重置插件配置及数据", response_model=schemas.Response)
@@ -271,21 +332,116 @@ def reset_plugin(plugin_id: str,
"""
根据插件ID重置插件配置及数据
"""
plugin_manager = PluginManager()
# 删除配置
PluginManager().delete_plugin_config(plugin_id)
plugin_manager.delete_plugin_config(plugin_id)
# 删除插件所有数据
PluginManager().delete_plugin_data(plugin_id)
# 重新生效插件
PluginManager().reload_plugin(plugin_id)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
plugin_manager.delete_plugin_data(plugin_id)
# 重新加载插件
reload_plugin(plugin_id)
return schemas.Response(success=True)
@router.get("/file/{plugin_id}/{filepath:path}", summary="获取插件静态文件")
def plugin_static_file(plugin_id: str, filepath: str):
"""
获取插件静态文件
"""
# 基础安全检查
if ".." in filepath or ".." in plugin_id:
logger.warning(f"Static File API: Path traversal attempt detected: {plugin_id}/{filepath}")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Forbidden")
plugin_base_dir = settings.ROOT_PATH / "app" / "plugins" / plugin_id.lower()
plugin_file_path = plugin_base_dir / filepath
if not plugin_file_path.exists():
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"{plugin_file_path} 不存在")
if not plugin_file_path.is_file():
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail=f"{plugin_file_path} 不是文件")
# 判断 MIME 类型
response_type, _ = mimetypes.guess_type(str(plugin_file_path))
suffix = plugin_file_path.suffix.lower()
# 强制修正 .mjs 和 .js 的 MIME 类型
if suffix in ['.js', '.mjs']:
response_type = 'application/javascript'
elif suffix == '.css' and not response_type: # 如果 guess_type 没猜对 css,也修正
response_type = 'text/css'
elif not response_type: # 对于其他猜不出的类型
response_type = 'application/octet-stream'
try:
return FileResponse(plugin_file_path, media_type=response_type)
except Exception as e:
logger.error(f"Error creating/sending FileResponse for {plugin_file_path}: {e}", exc_info=True)
raise HTTPException(status_code=500, detail="Internal Server Error")
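The MIME overrides above exist because `mimetypes.guess_type` can return nothing (or a legacy type) for `.mjs` on some platforms, while ES modules must be served as JavaScript to load in the browser. The correction logic as a standalone function:

```python
import mimetypes
from pathlib import Path

def resolve_media_type(filename: str) -> str:
    # 与上面静态文件接口相同的 MIME 修正逻辑
    response_type, _ = mimetypes.guess_type(filename)
    suffix = Path(filename).suffix.lower()
    if suffix in ('.js', '.mjs'):
        # 强制修正为 JavaScript,保证浏览器按 ES 模块加载
        response_type = 'application/javascript'
    elif suffix == '.css' and not response_type:
        response_type = 'text/css'
    elif not response_type:
        # 其余猜不出的类型按二进制流处理
        response_type = 'application/octet-stream'
    return response_type
```

Only the type resolution is shown; the endpoint additionally does path-traversal checks before building the `FileResponse`.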
@router.get("/folders", summary="获取插件文件夹配置", response_model=dict)
def get_plugin_folders(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
"""
获取插件文件夹分组配置
"""
try:
result = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
return result
except Exception as e:
logger.error(f"[文件夹API] 获取文件夹配置失败: {str(e)}")
return {}
@router.post("/folders", summary="保存插件文件夹配置", response_model=schemas.Response)
def save_plugin_folders(folders: dict, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
保存插件文件夹分组配置
"""
try:
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True)
except Exception as e:
logger.error(f"[文件夹API] 保存文件夹配置失败: {str(e)}")
return schemas.Response(success=False, message=str(e))
@router.post("/folders/{folder_name}", summary="创建插件文件夹", response_model=schemas.Response)
def create_plugin_folder(folder_name: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
创建新的插件文件夹
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
if folder_name not in folders:
folders[folder_name] = []
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 创建成功")
else:
return schemas.Response(success=False, message=f"文件夹 '{folder_name}' 已存在")
@router.delete("/folders/{folder_name}", summary="删除插件文件夹", response_model=schemas.Response)
def delete_plugin_folder(folder_name: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
删除插件文件夹
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
if folder_name in folders:
del folders[folder_name]
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 删除成功")
else:
return schemas.Response(success=False, message=f"文件夹 '{folder_name}' 不存在")
@router.put("/folders/{folder_name}/plugins", summary="更新文件夹中的插件", response_model=schemas.Response)
def update_folder_plugins(folder_name: str, plugin_ids: List[str], _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
更新指定文件夹中的插件列表
"""
folders = SystemConfigOper().get(SystemConfigKey.PluginFolders) or {}
folders[folder_name] = plugin_ids
SystemConfigOper().set(SystemConfigKey.PluginFolders, folders)
return schemas.Response(success=True, message=f"文件夹 '{folder_name}' 中的插件已更新")
@router.get("/{plugin_id}", summary="获取插件配置")
def plugin_config(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
@@ -301,16 +457,13 @@ def set_plugin_config(plugin_id: str, conf: dict,
"""
更新插件配置
"""
plugin_manager = PluginManager()
# 保存配置
PluginManager().save_plugin_config(plugin_id, conf)
plugin_manager.save_plugin_config(plugin_id, conf)
# 重新生效插件
PluginManager().init_plugin(plugin_id, conf)
plugin_manager.init_plugin(plugin_id, conf)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
register_plugin(plugin_id)
return schemas.Response(success=True)
@@ -320,22 +473,153 @@ def uninstall_plugin(plugin_id: str,
"""
卸载插件
"""
config_oper = SystemConfigOper()
# 删除已安装信息
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = config_oper.get(SystemConfigKey.UserInstalledPlugins) or []
for plugin in install_plugins:
if plugin == plugin_id:
install_plugins.remove(plugin)
break
# 保存
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
config_oper.set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 移除插件API
remove_plugin_api(plugin_id)
# 移除插件服务
Scheduler().remove_plugin_job(plugin_id)
# 判断是否为分身
plugin_manager = PluginManager()
plugin_class = plugin_manager.plugins.get(plugin_id)
if getattr(plugin_class, "is_clone", False):
# 如果是分身插件,则删除分身数据和配置
plugin_manager.delete_plugin_config(plugin_id)
plugin_manager.delete_plugin_data(plugin_id)
# 删除分身文件
plugin_base_dir = settings.ROOT_PATH / "app" / "plugins" / plugin_id.lower()
if plugin_base_dir.exists():
try:
shutil.rmtree(plugin_base_dir)
plugin_manager.plugins.pop(plugin_id, None)
except Exception as e:
logger.error(f"删除插件分身目录 {plugin_base_dir} 失败: {str(e)}")
# 从插件文件夹中移除该插件
_remove_plugin_from_folders(plugin_id)
# 移除插件
PluginManager().remove_plugin(plugin_id)
plugin_manager.remove_plugin(plugin_id)
return schemas.Response(success=True)
# 注册全部插件API
register_plugin_api()
@router.post("/clone/{plugin_id}", summary="创建插件分身", response_model=schemas.Response)
def clone_plugin(plugin_id: str,
clone_data: dict,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
创建插件分身
"""
try:
success, message = PluginManager().clone_plugin(
plugin_id=plugin_id,
suffix=clone_data.get("suffix", ""),
name=clone_data.get("name", ""),
description=clone_data.get("description", ""),
version=clone_data.get("version", ""),
icon=clone_data.get("icon", "")
)
if success:
# 注册插件服务
reload_plugin(message)
# 将分身插件添加到原插件所在的文件夹中
_add_clone_to_plugin_folder(plugin_id, message)
return schemas.Response(success=True, message="插件分身创建成功")
else:
return schemas.Response(success=False, message=message)
except Exception as e:
logger.error(f"创建插件分身失败:{str(e)}")
return schemas.Response(success=False, message=f"创建插件分身失败:{str(e)}")
def _add_clone_to_plugin_folder(original_plugin_id: str, clone_plugin_id: str):
"""
将分身插件添加到原插件所在的文件夹中
:param original_plugin_id: 原插件ID
:param clone_plugin_id: 分身插件ID
"""
try:
config_oper = SystemConfigOper()
# 获取插件文件夹配置
folders = config_oper.get(SystemConfigKey.PluginFolders) or {}
# 查找原插件所在的文件夹
target_folder = None
for folder_name, folder_data in folders.items():
if isinstance(folder_data, dict) and 'plugins' in folder_data:
# 新格式:{"plugins": [...], "order": ..., "icon": ...}
if original_plugin_id in folder_data['plugins']:
target_folder = folder_name
break
elif isinstance(folder_data, list):
# 旧格式:直接是插件列表
if original_plugin_id in folder_data:
target_folder = folder_name
break
# 如果找到了原插件所在的文件夹,则将分身插件也添加到该文件夹中
if target_folder:
folder_data = folders[target_folder]
if isinstance(folder_data, dict) and 'plugins' in folder_data:
# 新格式
if clone_plugin_id not in folder_data['plugins']:
folder_data['plugins'].append(clone_plugin_id)
logger.info(f"已将分身插件 {clone_plugin_id} 添加到文件夹 '{target_folder}'")
elif isinstance(folder_data, list):
# 旧格式
if clone_plugin_id not in folder_data:
folder_data.append(clone_plugin_id)
logger.info(f"已将分身插件 {clone_plugin_id} 添加到文件夹 '{target_folder}'")
# 保存更新后的文件夹配置
config_oper.set(SystemConfigKey.PluginFolders, folders)
else:
logger.info(f"原插件 {original_plugin_id} 不在任何文件夹中,分身插件 {clone_plugin_id} 将保持独立")
except Exception as e:
logger.error(f"处理插件文件夹时出错:{str(e)}")
# 文件夹处理失败不影响插件分身创建的整体流程
def _remove_plugin_from_folders(plugin_id: str):
"""
从所有文件夹中移除指定的插件
:param plugin_id: 要移除的插件ID
"""
try:
config_oper = SystemConfigOper()
# 获取插件文件夹配置
folders = config_oper.get(SystemConfigKey.PluginFolders) or {}
# 标记是否有修改
modified = False
# 遍历所有文件夹,移除指定插件
for folder_name, folder_data in folders.items():
if isinstance(folder_data, dict) and 'plugins' in folder_data:
# 新格式:{"plugins": [...], "order": ..., "icon": ...}
if plugin_id in folder_data['plugins']:
folder_data['plugins'].remove(plugin_id)
logger.info(f"已从文件夹 '{folder_name}' 中移除插件 {plugin_id}")
modified = True
elif isinstance(folder_data, list):
# 旧格式:直接是插件列表
if plugin_id in folder_data:
folder_data.remove(plugin_id)
logger.info(f"已从文件夹 '{folder_name}' 中移除插件 {plugin_id}")
modified = True
# 如果有修改,保存更新后的文件夹配置
if modified:
config_oper.set(SystemConfigKey.PluginFolders, folders)
else:
logger.debug(f"插件 {plugin_id} 不在任何文件夹中,无需移除")
except Exception as e:
logger.error(f"从文件夹中移除插件时出错:{str(e)}")
# 文件夹处理失败不影响插件卸载的整体流程
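Both helpers above must cope with two persisted shapes for a folder entry: the new `{"plugins": [...], "order": ..., "icon": ...}` dict and the legacy plain list. A small accessor that normalizes either shape (shapes taken from the branches above; folder and plugin names are illustrative):

```python
def folder_plugin_list(folder_data) -> list:
    # 新格式:{"plugins": [...], "order": ..., "icon": ...}
    if isinstance(folder_data, dict) and 'plugins' in folder_data:
        return folder_data['plugins']
    # 旧格式:直接是插件列表
    if isinstance(folder_data, list):
        return folder_data
    return []

folders = {
    "下载": {"plugins": ["PluginA"], "order": 1},  # 新格式
    "工具": ["PluginB"],                           # 旧格式
}
# 查找某插件所在的文件夹,与 _add_clone_to_plugin_folder 的遍历一致
target = next((name for name, data in folders.items()
               if "PluginB" in folder_plugin_list(data)), None)
```

Folding the `isinstance` checks into one accessor like this would shrink both `_add_clone_to_plugin_folder` and `_remove_plugin_from_folders`, though the code above keeps the branches inline.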

View File

@@ -58,12 +58,12 @@ def search_by_id(mediaid: str,
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
if settings.RECOGNIZE_SOURCE == "themoviedb":
@@ -74,12 +74,12 @@ def search_by_id(mediaid: str,
media_season = tmdbinfo.get('season')
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
@@ -88,7 +88,7 @@ def search_by_id(mediaid: str,
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
@@ -97,7 +97,7 @@ def search_by_id(mediaid: str,
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
@@ -113,11 +113,11 @@ def search_by_id(mediaid: str,
if event_data.media_dict:
search_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=search_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(tmdbid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
elif event_data.convert_type == "douban":
torrents = SearchChain().search_by_id(doubanid=search_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(doubanid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
else:
if not title:
return schemas.Response(success=False, message="未知的媒体ID")
@@ -133,11 +133,11 @@ def search_by_id(mediaid: str,
mediainfo = MediaChain().recognize_media(meta=meta)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
else:
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
# 返回搜索结果
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
@@ -154,7 +154,8 @@ def search_by_title(keyword: Optional[str] = None,
根据名称模糊搜索站点资源,支持分页,关键词为空时返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None)
sites=[int(site) for site in sites.split(",") if site] if sites else None,
cache_local=True)
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
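The `sites` query string above is split into a list of integer site IDs before being passed to `search_by_title`. A minimal standalone sketch of that parsing (the helper name `parse_site_ids` is hypothetical, not part of the codebase):

```python
from typing import List, Optional

# Hypothetical helper mirroring how the endpoint turns the `sites`
# query string ("1,2,3") into a list of site IDs, skipping empty parts.
def parse_site_ids(sites: Optional[str]) -> Optional[List[int]]:
    if not sites:
        return None
    return [int(site) for site in sites.split(",") if site]

print(parse_site_ids("1,2,,3"))  # [1, 2, 3]
```

An empty or missing string maps to `None`, which the chain treats as "search all sites".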

View File

@@ -1,12 +1,15 @@
from typing import List, Any, Dict, Optional
from app.helper.sites import SitesHelper
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from starlette.background import BackgroundTasks
from app import schemas
from app.api.endpoints.plugin import register_plugin_api
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.command import Command
from app.core.event import EventManager
from app.core.plugin import PluginManager
from app.core.security import verify_token
@@ -16,9 +19,9 @@ from app.db.models.site import Site
from app.db.models.siteicon import SiteIcon
from app.db.models.sitestatistic import SiteStatistic
from app.db.models.siteuserdata import SiteUserData
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.sites import SitesHelper
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey, EventType
from app.utils.string import StringUtils
@@ -330,8 +333,8 @@ def read_site_by_domain(
return site
@router.get("/statistic/{site_url}", summary="站点统计信息", response_model=schemas.SiteStatistic)
def read_site_by_domain(
@router.get("/statistic/{site_url}", summary="特定站点统计信息", response_model=schemas.SiteStatistic)
def read_statistic_by_domain(
site_url: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
@@ -346,6 +349,17 @@ def read_site_by_domain(
return schemas.SiteStatistic(domain=domain)
@router.get("/statistic", summary="所有站点统计信息", response_model=List[schemas.SiteStatistic])
def read_statistics(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
获取所有站点统计信息
"""
return SiteStatistic.list(db)
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
@@ -385,11 +399,29 @@ def auth_site(
return schemas.Response(success=False, message="请输入认证站点和认证参数")
status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
# 认证成功后,重新初始化插件
PluginManager().init_config()
Scheduler().init_plugin_jobs()
Command().init_commands()
register_plugin_api()
return schemas.Response(success=status, message=msg)
@router.get("/mapping", summary="获取站点域名到名称的映射", response_model=schemas.Response)
def site_mapping(_: User = Depends(get_current_active_superuser)):
"""
获取站点域名到名称的映射关系
"""
try:
sites = SiteOper().list()
mapping = {}
for site in sites:
mapping[site.domain] = site.name
return schemas.Response(success=True, data=mapping)
except Exception as e:
return schemas.Response(success=False, message=f"获取映射失败:{str(e)}")
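The mapping loop above can equally be written as a dict comprehension; a self-contained sketch, with a stand-in dataclass for `app.db.models.site.Site` (only the two fields used here):

```python
from dataclasses import dataclass

# Stand-in for app.db.models.site.Site, reduced to the fields the
# /mapping endpoint reads.
@dataclass
class Site:
    domain: str
    name: str

sites = [Site("example.org", "Example"), Site("tracker.test", "Tracker")]
mapping = {site.domain: site.name for site in sites}
print(mapping)  # {'example.org': 'Example', 'tracker.test': 'Tracker'}
```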
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
site_id: int,

View File

@@ -31,7 +31,7 @@ def qrcode(name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/check/{name}", summary="二维码登录确认", response_model=schemas.Response)
def check(name: str, ck: Optional[str] = None, t: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
二维码登录确认
@@ -56,6 +56,16 @@ def save(name: str,
return schemas.Response(success=True)
@router.get("/reset/{name}", summary="重置存储配置", response_model=schemas.Response)
def reset(name: str,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
重置存储配置
"""
StorageChain().reset_config(name)
return schemas.Response(success=True)
@router.post("/list", summary="所有目录和文件", response_model=List[schemas.FileItem])
def list_files(fileitem: schemas.FileItem,
sort: Optional[str] = 'updated_at',
@@ -152,47 +162,50 @@ def rename(fileitem: schemas.FileItem,
"""
if not new_name:
return schemas.Response(success=False, message="新名称为空")
# 重命名目录内文件
if recursive:
transferchain = TransferChain()
media_exts = settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIO_TRACK_EXT
# 递归修改目录内文件(智能识别命名)
sub_files: List[schemas.FileItem] = StorageChain().list_files(fileitem)
if sub_files:
# 开始进度
progress = ProgressHelper()
progress.start(ProgressKey.BatchRename)
total = len(sub_files)
handled = 0
for sub_file in sub_files:
handled += 1
progress.update(value=handled / total * 100,
text=f"正在处理 {sub_file.name} ...",
key=ProgressKey.BatchRename)
if sub_file.type == "dir":
continue
if not sub_file.extension:
continue
if f".{sub_file.extension.lower()}" not in media_exts:
continue
sub_path = Path(f"{fileitem.path}{sub_file.name}")
meta = MetaInfoPath(sub_path)
mediainfo = transferchain.recognize_media(meta)
if not mediainfo:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到媒体信息")
new_path = transferchain.recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到新名称")
ret: schemas.Response = rename(fileitem=sub_file,
new_name=Path(new_path).name,
recursive=False)
if not ret.success:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 重命名失败!")
progress.end(ProgressKey.BatchRename)
# 重命名自己
result = StorageChain().rename_file(fileitem, new_name)
if result:
if recursive:
transferchain = TransferChain()
media_exts = settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIO_TRACK_EXT
# 递归修改目录内文件(智能识别命名)
sub_files: List[schemas.FileItem] = StorageChain().list_files(fileitem)
if sub_files:
# 开始进度
progress = ProgressHelper()
progress.start(ProgressKey.BatchRename)
total = len(sub_files)
handled = 0
for sub_file in sub_files:
handled += 1
progress.update(value=handled / total * 100,
text=f"正在处理 {sub_file.name} ...",
key=ProgressKey.BatchRename)
if sub_file.type == "dir":
continue
if not sub_file.extension:
continue
if f".{sub_file.extension.lower()}" not in media_exts:
continue
sub_path = Path(f"{fileitem.path}{sub_file.name}")
meta = MetaInfoPath(sub_path)
mediainfo = transferchain.recognize_media(meta)
if not mediainfo:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到媒体信息")
new_path = transferchain.recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到新名称")
ret: schemas.Response = rename(fileitem=sub_file,
new_name=Path(new_path).name,
recursive=False)
if not ret.success:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 重命名失败!")
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=True)
return schemas.Response(success=False)
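The batch-rename loop skips directories, extension-less entries, and anything outside the configured media extensions. A sketch of that filter, with a hard-coded list standing in for `settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIO_TRACK_EXT`:

```python
from typing import Optional

# Stand-in for the combined extension lists from settings.
MEDIA_EXTS = [".mkv", ".mp4", ".srt"]

def is_media_file(extension: Optional[str]) -> bool:
    """Mirror the loop's check: lowercase, dot-prefixed, in the allow-list."""
    if not extension:
        return False
    return f".{extension.lower()}" in MEDIA_EXTS

print(is_media_file("MKV"))  # True
print(is_media_file("txt"))  # False
```

Lower-casing before the lookup is what makes `MKV` and `mkv` match the same entry.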

View File

@@ -1,6 +1,7 @@
import asyncio
import io
import json
import re
import tempfile
from collections import deque
from datetime import datetime
@@ -10,7 +11,7 @@ from typing import Optional, Union, Annotated
import aiofiles
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from fastapi import APIRouter, Depends, HTTPException, Header, Request, Response
from fastapi import APIRouter, Body, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
from app import schemas
@@ -20,23 +21,24 @@ from app.core.config import global_vars, settings
from app.core.metainfo import MetaInfo
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.core.event import eventmanager
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper, MessageQueueManager
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.system import SystemHelper
from app.log import logger
from app.monitor import Monitor
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
from app.schemas import ConfigChangeEventData
from app.schemas.types import SystemConfigKey, EventType
from app.utils.crypto import HashUtils
from app.utils.http import RequestUtils
from app.utils.security import SecurityUtils
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
from version import APP_VERSION
@@ -142,6 +144,7 @@ def fetch_image(
def proxy_img(
imgurl: str,
proxy: bool = False,
cache: bool = False,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
@@ -152,7 +155,7 @@ def proxy_img(
hosts = [config.config.get("host") for config in MediaServerHelper().get_configs().values() if
config and config.config and config.config.get("host")]
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS) | set(hosts)
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=False,
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=cache,
if_none_match=if_none_match, allowed_domains=allowed_domains)
@@ -171,10 +174,13 @@ def cache_img(
@router.get("/global", summary="查询非敏感系统设置", response_model=schemas.Response)
def get_global_setting():
def get_global_setting(token: str):
"""
查询非敏感系统设置(无需鉴权)
查询非敏感系统设置(默认鉴权)
"""
if token != "moviepilot":
raise HTTPException(status_code=403, detail="Forbidden")
# FIXME: 新增敏感配置项时要在此处添加排除项
info = settings.dict(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "TMDB_API_KEY", "TVDB_API_KEY", "FANART_API_KEY",
@@ -216,18 +222,27 @@ def set_env_setting(env: dict,
result = settings.update_settings(env=env)
# 统计成功和失败的结果
success_updates = {k: v for k, v in result.items() if v[0]}
failed_updates = {k: v for k, v in result.items() if not v[0]}
failed_updates = {k: v for k, v in result.items() if v[0] is False}
if failed_updates:
return schemas.Response(
success=False,
message="部分配置项更新失败",
message=f"{', '.join([v[1] for v in failed_updates.values()])}",
data={
"success_updates": success_updates,
"failed_updates": failed_updates
}
)
if success_updates:
for key in success_updates.keys():
# 发送配置变更事件
eventmanager.send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=getattr(settings, key, None),
change_type="update"
))
return schemas.Response(
success=True,
message="所有配置项更新成功",
@@ -274,16 +289,38 @@ def get_setting(key: str,
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
_: User = Depends(get_current_active_superuser)):
def set_setting(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser),
):
"""
更新系统设置(仅管理员)
"""
if hasattr(settings, key):
success, message = settings.update_setting(key=key, value=value)
if success:
# 发送配置变更事件
eventmanager.send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=value,
change_type="update"
))
elif success is None:
success = True
return schemas.Response(success=success, message=message)
elif key in {item.value for item in SystemConfigKey}:
SystemConfigOper().set(key, value)
if isinstance(value, list):
value = list(filter(None, value))
value = value if value else None
success = SystemConfigOper().set(key, value)
if success:
# 发送配置变更事件
eventmanager.send_event(etype=EventType.ConfigChanged, data=ConfigChangeEventData(
key=key,
value=value,
change_type="update"
))
return schemas.Response(success=True)
else:
return schemas.Response(success=False, message=f"配置项 '{key}' 不存在")
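The list normalization added above drops falsy entries and stores an empty result as `None` rather than `[]`. A standalone sketch of that cleanup (the function name is hypothetical):

```python
# Sketch of the cleanup the endpoint now applies before calling
# SystemConfigOper().set(): falsy entries are dropped and an empty
# list is stored as None instead of [].
def normalize_list(value):
    if isinstance(value, list):
        value = list(filter(None, value))
        return value or None
    return value

print(normalize_list(["a", "", "b"]))  # ['a', 'b']
print(normalize_list([]))              # None
```

Non-list values pass through unchanged, so scalar settings are unaffected.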
@@ -414,30 +451,55 @@ def ruletest(title: str,
@router.get("/nettest", summary="测试网络连通性")
def nettest(url: str,
proxy: bool,
_: schemas.TokenPayload = Depends(verify_token)):
def nettest(
url: str,
proxy: bool,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
):
"""
测试网络连通性
"""
# 记录开始的毫秒数
start_time = datetime.now()
headers = None
if "github" in url or "{GITHUB_PROXY}" in url:
# 这是github的连通性测试
url = url.replace(
"{GITHUB_PROXY}", UrlUtils.standardize_base_url(settings.GITHUB_PROXY or "")
)
headers = settings.GITHUB_HEADERS
url = url.replace("{TMDBAPIKEY}", settings.TMDB_API_KEY)
result = RequestUtils(proxies=settings.PROXY if proxy else None,
ua=settings.USER_AGENT).get_res(url)
url = url.replace(
"{PIP_PROXY}",
UrlUtils.standardize_base_url(settings.PIP_PROXY or "https://pypi.org/simple/"),
)
result = RequestUtils(
proxies=settings.PROXY if proxy else None,
headers=headers,
timeout=10,
ua=settings.USER_AGENT,
).get_res(url)
# 计时结束的毫秒数
end_time = datetime.now()
time = round((end_time - start_time).total_seconds() * 1000)
# 计算相关秒数
if result and result.status_code == 200:
return schemas.Response(success=True, data={
"time": round((end_time - start_time).microseconds / 1000)
})
elif result:
return schemas.Response(success=False, message=f"错误码:{result.status_code}", data={
"time": round((end_time - start_time).microseconds / 1000)
})
if result is None:
return schemas.Response(success=False, message="无法连接", data={"time": time})
elif result.status_code == 200:
if include and not re.search(r"%s" % include, result.text, re.IGNORECASE):
# 通常是被加速代理跳转到其它页面了
logger.error(f"{url} 的响应内容不匹配包含规则 {include}")
return schemas.Response(
success=False,
message=f"无效响应,不匹配 {include}",
data={"time": time},
)
return schemas.Response(success=True, data={"time": time})
else:
return schemas.Response(success=False, message="网络连接失败!")
return schemas.Response(
success=False, message=f"错误码:{result.status_code}", data={"time": time}
)
@router.get("/modulelist", summary="查询已加载的模块ID列表", response_model=schemas.Response)
@@ -468,27 +530,15 @@ def restart_system(_: User = Depends(get_current_active_superuser)):
"""
重启系统(仅管理员)
"""
if not SystemUtils.can_restart():
if not SystemHelper.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# 标识停止事件
global_vars.stop_system()
# 执行重启
ret, msg = SystemUtils.restart()
ret, msg = SystemHelper.restart()
return schemas.Response(success=ret, message=msg)
@router.get("/reload", summary="重新加载模块", response_model=schemas.Response)
def reload_module(_: User = Depends(get_current_active_superuser)):
"""
重新加载模块(仅管理员)
"""
MessageQueueManager().init_config()
ModuleManager().reload()
Scheduler().init()
Monitor().init()
return schemas.Response(success=True)
@router.get("/runscheduler", summary="运行服务", response_model=schemas.Response)
def run_scheduler(jobid: str,
_: User = Depends(get_current_active_superuser)):

View File

@@ -0,0 +1,199 @@
from typing import Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.media import MediaChain
from app.chain.torrents import TorrentsChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.db.models import User
from app.db.user_oper import get_current_active_superuser
from app.utils.crypto import HashUtils
router = APIRouter()
@router.get("/cache", summary="获取种子缓存", response_model=schemas.Response)
def torrents_cache(_: User = Depends(get_current_active_superuser)):
"""
获取当前种子缓存数据
"""
torrents_chain = TorrentsChain()
# 获取spider和rss两种缓存
if settings.SUBSCRIBE_MODE == "rss":
cache_info = torrents_chain.get_torrents("rss")
else:
cache_info = torrents_chain.get_torrents("spider")
# 统计信息
torrent_count = sum(len(torrents) for torrents in cache_info.values())
# 转换为前端需要的格式
torrent_data = []
for domain, contexts in cache_info.items():
for context in contexts:
torrent_hash = HashUtils.md5(f"{context.torrent_info.title}{context.torrent_info.description}")
torrent_data.append({
"hash": torrent_hash,
"domain": domain,
"title": context.torrent_info.title,
"description": context.torrent_info.description,
"size": context.torrent_info.size,
"pubdate": context.torrent_info.pubdate,
"site_name": context.torrent_info.site_name,
"media_name": context.media_info.title if context.media_info else "",
"media_year": context.media_info.year if context.media_info else "",
"media_type": context.media_info.type if context.media_info else "",
"season_episode": context.meta_info.season_episode if context.meta_info else "",
"resource_term": context.meta_info.resource_term if context.meta_info else "",
"enclosure": context.torrent_info.enclosure,
"page_url": context.torrent_info.page_url,
"poster_path": context.media_info.get_poster_image() if context.media_info else "",
"backdrop_path": context.media_info.get_backdrop_image() if context.media_info else ""
})
return schemas.Response(success=True, data={
"count": torrent_count,
"sites": len(cache_info),
"data": torrent_data
})
@router.delete("/cache/{domain}/{torrent_hash}", summary="删除指定种子缓存",
response_model=schemas.Response)
def delete_cache(domain: str, torrent_hash: str, _: User = Depends(get_current_active_superuser)):
"""
删除指定的种子缓存
:param domain: 站点域名
:param torrent_hash: 种子hash,使用title+description的md5
:param _: 当前用户,必须是超级用户
"""
torrents_chain = TorrentsChain()
try:
# 获取当前缓存
cache_data = torrents_chain.get_torrents()
if domain not in cache_data:
return schemas.Response(success=False, message=f"站点 {domain} 缓存不存在")
# 查找并删除指定种子
original_count = len(cache_data[domain])
cache_data[domain] = [
context for context in cache_data[domain]
if HashUtils.md5(f"{context.torrent_info.title}{context.torrent_info.description}") != torrent_hash
]
if len(cache_data[domain]) == original_count:
return schemas.Response(success=False, message="未找到指定的种子")
# 保存更新后的缓存
torrents_chain.save_cache(cache_data, torrents_chain.cache_file)
return schemas.Response(success=True, message="种子删除成功")
except Exception as e:
return schemas.Response(success=False, message=f"删除失败:{str(e)}")
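The cache key used in these endpoints is `HashUtils.md5(title + description)`. Assuming `HashUtils.md5` is a plain UTF-8 MD5 hex digest, a stdlib equivalent looks like this (the helper name `torrent_hash` is illustrative):

```python
import hashlib

# Stdlib sketch of the cache key: MD5 hex digest of title+description,
# assuming HashUtils.md5 is a plain UTF-8 MD5 implementation.
def torrent_hash(title: str, description: str) -> str:
    return hashlib.md5(f"{title}{description}".encode("utf-8")).hexdigest()

h = torrent_hash("Some.Show.S01E01.1080p", "第1季 第1集")
print(len(h))  # 32-character hex digest
```

Because the key is derived from the torrent's own fields, the frontend can recompute it without the server having to persist a separate ID.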
@router.delete("/cache", summary="清理种子缓存", response_model=schemas.Response)
def clear_cache(_: User = Depends(get_current_active_superuser)):
"""
清理所有种子缓存
"""
torrents_chain = TorrentsChain()
try:
torrents_chain.clear_torrents()
return schemas.Response(success=True, message="种子缓存清理完成")
except Exception as e:
return schemas.Response(success=False, message=f"清理失败:{str(e)}")
@router.post("/cache/refresh", summary="刷新种子缓存", response_model=schemas.Response)
def refresh_cache(_: User = Depends(get_current_active_superuser)):
"""
刷新种子缓存
"""
from app.chain.torrents import TorrentsChain
torrents_chain = TorrentsChain()
try:
result = torrents_chain.refresh()
# 统计刷新结果
total_count = sum(len(torrents) for torrents in result.values())
sites_count = len(result)
return schemas.Response(success=True, message=f"缓存刷新完成,共刷新 {sites_count} 个站点,{total_count} 个种子")
except Exception as e:
return schemas.Response(success=False, message=f"刷新失败:{str(e)}")
@router.post("/cache/reidentify/{domain}/{torrent_hash}", summary="重新识别种子", response_model=schemas.Response)
def reidentify_cache(domain: str, torrent_hash: str,
tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
_: User = Depends(get_current_active_superuser)):
"""
重新识别指定的种子
:param domain: 站点域名
:param torrent_hash: 种子hash,使用title+description的md5
:param tmdbid: 手动指定的TMDB ID
:param doubanid: 手动指定的豆瓣ID
:param _: 当前用户,必须是超级用户
"""
torrents_chain = TorrentsChain()
media_chain = MediaChain()
try:
# 获取当前缓存
cache_data = torrents_chain.get_torrents()
if domain not in cache_data:
return schemas.Response(success=False, message=f"站点 {domain} 缓存不存在")
# 查找指定种子
target_context = None
for context in cache_data[domain]:
if HashUtils.md5(f"{context.torrent_info.title}{context.torrent_info.description}") == torrent_hash:
target_context = context
break
if not target_context:
return schemas.Response(success=False, message="未找到指定的种子")
# 重新识别
meta = MetaInfo(title=target_context.torrent_info.title,
subtitle=target_context.torrent_info.description)
if tmdbid or doubanid:
# 手动指定媒体信息
mediainfo = MediaChain().recognize_media(meta=meta, tmdbid=tmdbid, doubanid=doubanid)
else:
# 自动重新识别
mediainfo = media_chain.recognize_by_meta(meta)
if not mediainfo:
# 创建空的媒体信息
mediainfo = MediaInfo()
else:
# 清理多余数据
mediainfo.clear()
# 更新上下文中的媒体信息
target_context.media_info = mediainfo
# 保存更新后的缓存
torrents_chain.save_cache(cache_data, TorrentsChain().cache_file)
return schemas.Response(success=True, message="重新识别完成", data={
"media_name": mediainfo.title if mediainfo else "",
"media_year": mediainfo.year if mediainfo else "",
"media_type": mediainfo.type.value if mediainfo and mediainfo.type else ""
})
except Exception as e:
return schemas.Response(success=False, message=f"重新识别失败:{str(e)}")

View File

@@ -1,8 +1,8 @@
import base64
import re
from typing import Any, List, Union
from typing import Annotated, Any, List, Union
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
from fastapi import APIRouter, Body, Depends, HTTPException, UploadFile, File
from sqlalchemy.orm import Session
from app import schemas
@@ -164,8 +164,11 @@ def get_config(key: str,
@router.post("/config/{key}", summary="更新用户配置", response_model=schemas.Response)
def set_config(key: str, value: Union[list, dict, bool, int, str] = None,
current_user: User = Depends(get_current_active_user)):
def set_config(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
current_user: User = Depends(get_current_active_user),
):
"""
更新用户配置
"""

View File

@@ -6,6 +6,7 @@ from sqlalchemy.orm import Session
from app import schemas
from app.core.config import global_vars
from app.core.plugin import PluginManager
from app.core.workflow import WorkFlowManager
from app.db import get_db
from app.db.models.workflow import Workflow
@@ -43,6 +44,14 @@ def create_workflow(workflow: schemas.Workflow,
return schemas.Response(success=True, message="创建工作流成功")
@router.get("/plugin/actions", summary="查询插件动作", response_model=List[dict])
def list_plugin_actions(plugin_id: str = None, _: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
获取所有动作
"""
return PluginManager().get_plugin_actions(plugin_id)
@router.get("/actions", summary="所有动作", response_model=List[dict])
def list_actions(_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""

View File

@@ -5,6 +5,7 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.tvdb import TvdbChain
from app.chain.subscribe import SubscribeChain
from app.core.metainfo import MetaInfo
from app.core.security import verify_apikey
@@ -518,88 +519,89 @@ def arr_series_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db:
"""
查询Sonarr剧集 term: `tvdb:${id}` title
"""
# 获取TVDBID
if not term.startswith("tvdb:"):
mediainfo = MediaChain().recognize_media(meta=MetaInfo(term),
mtype=MediaType.TV)
if not mediainfo:
return [SonarrSeries()]
tvdbid = mediainfo.tvdb_id
if not tvdbid:
return [SonarrSeries()]
else:
mediainfo = None
tvdbid = int(term.replace("tvdb:", ""))
# 查询TVDB信息
tvdbinfo = MediaChain().tvdb_info(tvdbid=tvdbid)
if not tvdbinfo:
return [SonarrSeries()]
# 季信息
seas: List[int] = []
sea_num = tvdbinfo.get('season')
if sea_num:
seas = list(range(1, int(sea_num) + 1))
# 根据TVDB查询媒体信息
if not mediainfo:
mediainfo = MediaChain().recognize_media(meta=MetaInfo(tvdbinfo.get('seriesName')),
mtype=MediaType.TV)
# 查询是否存在
exists = MediaChain().media_exists(mediainfo)
if exists:
hasfile = True
# tvdbid 列表
tvdbids: List[int] = []
# 获取TVDBID
if not term.startswith("tvdb:"):
title = term.replace("+", " ")
tvdbids = TvdbChain().get_tvdbid_by_name(title=title)
else:
hasfile = False
tvdbid = int(term.replace("tvdb:", ""))
tvdbids.append(tvdbid)
# 查询订阅信息
seasons: List[dict] = []
subscribes = Subscribe.get_by_tmdbid(db, mediainfo.tmdb_id)
if subscribes:
# 已监控
monitored = True
# 已监控季
sub_seas = [sub.season for sub in subscribes]
for sea in seas:
if sea in sub_seas:
seasons.append({
"seasonNumber": sea,
"monitored": True,
})
else:
sonarr_series_list = []
for tvdbid in tvdbids:
# 查询TVDB信息
tvdbinfo = MediaChain().tvdb_info(tvdbid=tvdbid)
if not tvdbinfo:
continue
# 季信息(只取默认季类型,排除特别季)
sea_num = len([season for season in tvdbinfo.get('seasons') if
season['type']['id'] == tvdbinfo.get('defaultSeasonType') and season['number'] > 0])
if sea_num:
seas = list(range(1, int(sea_num) + 1))
# 根据TVDB查询媒体信息
mediainfo = MediaChain().recognize_media(meta=MetaInfo(tvdbinfo.get('name')),
mtype=MediaType.TV)
if not mediainfo:
continue
# 查询是否存在
exists = MediaChain().media_exists(mediainfo)
if exists:
hasfile = True
else:
hasfile = False
# 查询订阅信息
seasons: List[dict] = []
subscribes = Subscribe.get_by_tmdbid(db, mediainfo.tmdb_id)
if subscribes:
# 已监控
monitored = True
# 已监控季
sub_seas = [sub.season for sub in subscribes]
for sea in seas:
if sea in sub_seas:
seasons.append({
"seasonNumber": sea,
"monitored": True,
})
else:
seasons.append({
"seasonNumber": sea,
"monitored": False,
})
subid = subscribes[-1].id
else:
subid = None
monitored = False
for sea in seas:
seasons.append({
"seasonNumber": sea,
"monitored": False,
})
subid = subscribes[-1].id
else:
subid = None
monitored = False
for sea in seas:
seasons.append({
"seasonNumber": sea,
"monitored": False,
})
sonarr_series = SonarrSeries(
id=subid,
title=mediainfo.title,
seasonCount=len(seasons),
seasons=seasons,
remotePoster=mediainfo.get_poster_image(),
year=mediainfo.year,
tmdbId=mediainfo.tmdb_id,
tvdbId=tvdbid,
imdbId=mediainfo.imdb_id,
profileId=1,
languageProfileId=1,
monitored=monitored,
hasFile=hasfile,
)
sonarr_series_list.append(sonarr_series)
return [SonarrSeries(
id=subid,
title=mediainfo.title,
seasonCount=len(seasons),
seasons=seasons,
remotePoster=mediainfo.get_poster_image(),
year=mediainfo.year,
tmdbId=mediainfo.tmdb_id,
tvdbId=mediainfo.tvdb_id,
imdbId=mediainfo.imdb_id,
profileId=1,
languageProfileId=1,
qualityProfileId=1,
isAvailable=True,
monitored=monitored,
hasFile=hasfile
)]
return sonarr_series_list if sonarr_series_list else [SonarrSeries()]
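The rewritten lookup derives the season list by counting only seasons of the default season type and excluding specials (`number == 0`), then expanding to `1..N`. A sketch with a minimal payload that mimics the TVDB response fields used above:

```python
# Minimal TVDB-shaped payload (fields assumed from the code above).
tvdbinfo = {
    "defaultSeasonType": 1,
    "seasons": [
        {"type": {"id": 1}, "number": 0},  # specials, excluded
        {"type": {"id": 1}, "number": 1},
        {"type": {"id": 1}, "number": 2},
        {"type": {"id": 2}, "number": 1},  # alternate ordering, excluded
    ],
}
sea_num = len([s for s in tvdbinfo["seasons"]
               if s["type"]["id"] == tvdbinfo["defaultSeasonType"]
               and s["number"] > 0])
seas = list(range(1, sea_num + 1))
print(seas)  # [1, 2]
```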
@arr_router.get("/series/{tid}", summary="剧集详情")

View File

@@ -1,8 +1,8 @@
import copy
import gc
import pickle
import traceback
from abc import ABCMeta
from collections.abc import Callable
from pathlib import Path
from typing import Optional, Any, Tuple, List, Set, Union, Dict
@@ -14,14 +14,15 @@ from app.core.context import Context, MediaInfo, TorrentInfo
from app.core.event import EventManager
from app.core.meta import MetaBase
from app.core.module import ModuleManager
from app.core.plugin import PluginManager
from app.db.message_oper import MessageOper
from app.db.user_oper import UserOper
from app.helper.message import MessageHelper, MessageQueueManager
from app.helper.message import MessageHelper, MessageQueueManager, MessageTemplateHelper
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
WebhookEventInfo, TmdbEpisode, MediaPerson, FileItem, TransferDirectoryConf
from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType
from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType, MessageChannel
from app.utils.object import ObjectUtils
@@ -41,7 +42,7 @@ class ChainBase(metaclass=ABCMeta):
self.messagequeue = MessageQueueManager(
send_callback=self.run_module
)
self.useroper = UserOper()
self.pluginmanager = PluginManager()
@staticmethod
def load_cache(filename: str) -> Any:
@@ -64,13 +65,9 @@ class ChainBase(metaclass=ABCMeta):
"""
try:
with open(settings.TEMP_PATH / filename, 'wb') as f:
pickle.dump(cache, f) # noqa
pickle.dump(cache, f) # noqa
except Exception as err:
logger.error(f"保存缓存 {filename} 出错:{str(err)}")
finally:
# 主动资源回收
del cache
gc.collect()
@staticmethod
def remove_cache(filename: str) -> None:
@@ -97,11 +94,50 @@ class ChainBase(metaclass=ABCMeta):
return ret is None
result = None
logger.debug(f"请求模块执行:{method} ...")
modules = self.modulemanager.get_running_modules(method)
# 按优先级排序
modules = sorted(modules, key=lambda x: x.get_priority())
for module in modules:
# 插件模块
for plugin, module_dict in self.pluginmanager.get_plugin_modules().items():
plugin_id, plugin_name = plugin
if method in module_dict:
func = module_dict[method]
if func:
try:
logger.info(f"请求插件 {plugin_name} 执行:{method} ...")
if is_result_empty(result):
# 返回None第一次执行或者需继续执行下一模块
result = func(*args, **kwargs)
elif isinstance(result, list):
# 返回为列表,有多个模块运行结果时进行合并
temp = func(*args, **kwargs)
if isinstance(temp, list):
result.extend(temp)
else:
break
except Exception as err:
if kwargs.get("raise_exception"):
raise
logger.error(
f"运行插件 {plugin_id} 模块 {method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{plugin_name} 发生了错误",
message=str(err),
role="plugin")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "plugin",
"plugin_id": plugin_id,
"plugin_name": plugin_name,
"plugin_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
if not is_result_empty(result) and not isinstance(result, list):
# 插件模块返回结果不为空且不是列表,直接返回
return result
# 系统模块
logger.debug(f"请求系统模块执行:{method} ...")
for module in sorted(self.modulemanager.get_running_modules(method), key=lambda x: x.get_priority()):
module_id = module.__class__.__name__
try:
module_name = module.get_name()
@@ -114,10 +150,10 @@ class ChainBase(metaclass=ABCMeta):
# 返回None第一次执行或者需继续执行下一模块
result = func(*args, **kwargs)
elif ObjectUtils.check_signature(func, result):
# 返回结果与方法签名一致,将结果传入(不能多个模块同时运行的需要通过开关控制)
# 返回结果与方法签名一致,将结果传入
result = func(result)
elif isinstance(result, list):
# 返回为列表,有多个模块运行结果时进行合并(不能多个模块同时运行的需要通过开关控制)
# 返回为列表,有多个模块运行结果时进行合并
temp = func(*args, **kwargs)
if isinstance(temp, list):
result.extend(temp)
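The result-merging rule shared by the plugin loop and the system-module loop above boils down to: `None` means "keep going", a list accumulates across modules, and any other value short-circuits. A standalone sketch of that dispatch (the function name `run_chain` is illustrative):

```python
# Minimal sketch of the module-result merge rule in run_module:
# None -> try the next module; list -> extend with later list results;
# anything else -> stop and return it.
def run_chain(funcs, *args):
    result = None
    for func in funcs:
        if result is None:
            result = func(*args)
        elif isinstance(result, list):
            temp = func(*args)
            if isinstance(temp, list):
                result.extend(temp)
        else:
            break
    return result

print(run_chain([lambda: None, lambda: [1], lambda: [2, 3]]))  # [1, 2, 3]
```

This omits the signature-matching branch (`ObjectUtils.check_signature`) and all error handling; it only illustrates the merge semantics.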
@@ -328,7 +364,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_torrents", site=site, keywords=keywords,
mtype=mtype, page=page)
def refresh_torrents(self, site: dict, keyword: Optional[str] = None,
cat: Optional[str] = None, page: Optional[int] = 0) -> List[TorrentInfo]:
"""
获取站点最新一页的种子,多个站点需要多线程处理
@@ -401,7 +437,8 @@ class ChainBase(metaclass=ABCMeta):
target_storage: Optional[str] = None, target_path: Path = None,
transfer_type: Optional[str] = None, scrape: bool = None,
library_type_folder: bool = None, library_category_folder: bool = None,
episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
episodes_info: List[TmdbEpisode] = None,
source_oper: Callable = None, target_oper: Callable = None) -> Optional[TransferInfo]:
"""
文件转移
:param fileitem: 文件信息
@@ -415,6 +452,8 @@ class ChainBase(metaclass=ABCMeta):
:param library_type_folder: 是否按类型创建目录
:param library_category_folder: 是否按类别创建目录
:param episodes_info: 当前季的全部集信息
:param source_oper: 源存储操作类
:param target_oper: 目标存储操作类
:return: {path, target_path, message}
"""
return self.run_module("transfer",
@@ -424,7 +463,8 @@ class ChainBase(metaclass=ABCMeta):
transfer_type=transfer_type, scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
episodes_info=episodes_info)
episodes_info=episodes_info,
source_oper=source_oper, target_oper=target_oper)
def transfer_completed(self, hashs: str, downloader: Optional[str] = None) -> None:
"""
@@ -492,13 +532,27 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("media_files", mediainfo=mediainfo)
def post_message(self, message: Notification) -> None:
def post_message(self,
message: Optional[Notification] = None,
meta: Optional[MetaBase] = None,
mediainfo: Optional[MediaInfo] = None,
torrentinfo: Optional[TorrentInfo] = None,
transferinfo: Optional[TransferInfo] = None,
**kwargs) -> None:
"""
发送消息
:param message: 消息体
:param message: Notification实例
:param meta: 元数据
:param mediainfo: 媒体信息
:param torrentinfo: 种子信息
:param transferinfo: 文件整理信息
:param kwargs: 其他参数(覆盖业务对象属性值)
:return: 成功或失败
"""
# 保存原消息
# 渲染消息
message = MessageTemplateHelper.render(message=message, meta=meta, mediainfo=mediainfo,
torrentinfo=torrentinfo, transferinfo=transferinfo, **kwargs)
# 保存消息
self.messagehelper.put(message, role="user", title=message.title)
self.messageoper.add(**message.dict())
# 发送消息按设置隔离
@@ -511,26 +565,27 @@ class ChainBase(metaclass=ABCMeta):
# 是否已发送管理员标志
admin_sended = False
send_orignal = False
useroper = UserOper()
for action in actions:
send_message = copy.deepcopy(message)
if action == "admin" and not admin_sended:
# 仅发送管理员
logger.info(f"{send_message.mtype} 的消息已设置发送给管理员")
# 读取管理员消息IDS
send_message.targets = self.useroper.get_settings(settings.SUPERUSER)
send_message.targets = useroper.get_settings(settings.SUPERUSER)
admin_sended = True
elif action == "user" and send_message.username:
# 发送对应用户
logger.info(f"{send_message.mtype} 的消息已设置发送给用户 {send_message.username}")
# 读取用户消息IDS
send_message.targets = self.useroper.get_settings(send_message.username)
send_message.targets = useroper.get_settings(send_message.username)
if send_message.targets is None:
# 没有找到用户
if not admin_sended:
# 回滚发送管理员
logger.info(f"用户 {send_message.username} 不存在,消息将发送给管理员")
# 读取管理员消息IDS
send_message.targets = self.useroper.get_settings(settings.SUPERUSER)
send_message.targets = useroper.get_settings(settings.SUPERUSER)
admin_sended = True
else:
# 管理员发过了,此消息不发了
@@ -553,7 +608,8 @@ class ChainBase(metaclass=ABCMeta):
# 发送消息事件
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.dict(), "type": message.mtype})
# 按原消息发送
self.messagequeue.send_message("post_message", message=message)
self.messagequeue.send_message("post_message", message=message,
immediately=True if message.userid else False)
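The change above passes `immediately=True` whenever the message carries a `userid`, so replies to interactive users bypass queue batching. A minimal sketch of that dispatch behavior (the class and method names here are illustrative, not the project's actual message-queue API):

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    title: str
    userid: Optional[str] = None  # set when the message answers a specific user

class MessageQueue:
    """Batches broadcast messages, but delivers interactive replies at once."""

    def __init__(self):
        self.pending = deque()
        self.sent = []

    def send_message(self, method: str, message: Message, immediately: bool = False):
        if immediately:
            # a reply to a specific user should not wait for the batch flush
            self.sent.append((method, message))
        else:
            self.pending.append((method, message))

    def flush(self):
        while self.pending:
            self.sent.append(self.pending.popleft())

queue = MessageQueue()
broadcast = Message(title="新资源入库")
reply = Message(title="下载已开始", userid="u1")
queue.send_message("post_message", broadcast, immediately=bool(broadcast.userid))
queue.send_message("post_message", reply, immediately=bool(reply.userid))
# the interactive reply is delivered first; the broadcast waits for flush()
```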
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
"""
@@ -565,7 +621,8 @@ class ChainBase(metaclass=ABCMeta):
note_list = [media.to_dict() for media in medias]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
return self.messagequeue.send_message("post_medias_message", message=message, medias=medias)
return self.messagequeue.send_message("post_medias_message", message=message, medias=medias,
immediately=True if message.userid else False)
def post_torrents_message(self, message: Notification, torrents: List[Context]) -> None:
"""
@@ -577,9 +634,23 @@ class ChainBase(metaclass=ABCMeta):
note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
return self.messagequeue.send_message("post_torrents_message", message=message, torrents=torrents)
return self.messagequeue.send_message("post_torrents_message", message=message, torrents=torrents,
immediately=True if message.userid else False)
def metadata_img(self, mediainfo: MediaInfo,
def delete_message(self, channel: MessageChannel, source: str,
message_id: Union[str, int], chat_id: Optional[Union[str, int]] = None) -> bool:
"""
删除消息
:param channel: 消息渠道
:param source: 消息源(指定特定的消息模块)
:param message_id: 消息ID
:param chat_id: 聊天ID如群组ID
:return: 删除是否成功
"""
return self.run_module("delete_message", channel=channel, source=source,
message_id=message_id, chat_id=chat_id)
def metadata_img(self, mediainfo: MediaInfo,
season: Optional[int] = None, episode: Optional[int] = None) -> Optional[dict]:
"""
获取图片名称和url


@@ -3,12 +3,11 @@ from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.core.context import MediaInfo
from app.utils.singleton import Singleton
class BangumiChain(ChainBase, metaclass=Singleton):
class BangumiChain(ChainBase):
"""
Bangumi处理链,单例运行
Bangumi处理链
"""
def calendar(self) -> Optional[List[MediaInfo]]:


@@ -2,10 +2,9 @@ from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.utils.singleton import Singleton
class DashboardChain(ChainBase, metaclass=Singleton):
class DashboardChain(ChainBase):
"""
各类仪表板统计处理链
"""


@@ -4,12 +4,11 @@ from app import schemas
from app.chain import ChainBase
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
class DoubanChain(ChainBase, metaclass=Singleton):
class DoubanChain(ChainBase):
"""
豆瓣处理链,单例运行
豆瓣处理链
"""
def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:


@@ -16,11 +16,12 @@ from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.mediaserver_oper import MediaServerOper
from app.helper.directory import DirectoryHelper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification, ResourceSelectionEventData, ResourceDownloadEventData
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType, ChainEventType
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification, ResourceSelectionEventData, \
ResourceDownloadEventData
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType, ContentType, \
ChainEventType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -30,71 +31,6 @@ class DownloadChain(ChainBase):
下载处理链
"""
def __init__(self):
super().__init__()
self.torrent = TorrentHelper()
self.downloadhis = DownloadHistoryOper()
self.mediaserver = MediaServerOper()
self.directoryhelper = DirectoryHelper()
self.messagehelper = MessageHelper()
def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
channel: MessageChannel = None, username: Optional[str] = None,
download_episodes: Optional[str] = None):
"""
发送添加下载的消息,根据消息场景开关决定发给谁
:param meta: 元数据
:param mediainfo: 媒体信息
:param torrent: 种子信息
:param channel: 通知渠道
:param username: 通知显示的下载用户信息
:param download_episodes: 下载的集数
"""
# 拼装消息内容
msg_text = ""
if username:
msg_text = f"用户:{username}"
if torrent.site_name:
msg_text = f"{msg_text}\n站点:{torrent.site_name}"
if meta.resource_term:
msg_text = f"{msg_text}\n质量:{meta.resource_term}"
if torrent.size:
if str(torrent.size).replace(".", "").isdigit():
size = StringUtils.str_filesize(torrent.size)
else:
size = torrent.size
msg_text = f"{msg_text}\n大小:{size}"
if torrent.title:
msg_text = f"{msg_text}\n种子:{torrent.title}"
if torrent.pubdate:
msg_text = f"{msg_text}\n发布时间:{torrent.pubdate}"
if torrent.freedate:
msg_text = f"{msg_text}\n免费时间:{StringUtils.diff_time_str(torrent.freedate)}"
if torrent.seeders:
msg_text = f"{msg_text}\n做种数:{torrent.seeders}"
if torrent.uploadvolumefactor and torrent.downloadvolumefactor:
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.hit_and_run:
msg_text = f"{msg_text}\nHit&Run"
if torrent.labels:
msg_text = f"{msg_text}\n标签:{' '.join(torrent.labels)}"
if torrent.description:
html_re = re.compile(r'<[^>]+>', re.S)
description = html_re.sub('', torrent.description)
torrent.description = re.sub(r'<[^>]+>', '', description)
msg_text = f"{msg_text}\n描述:{torrent.description}"
# 下载成功按规则发送消息
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Download,
title=f"{mediainfo.title_year} "
f"{'%s %s' % (meta.season, download_episodes) if download_episodes else meta.season_episode} 开始下载",
text=msg_text,
image=mediainfo.get_message_image(),
link=settings.MP_DOMAIN('/#/downloading'),
username=username))
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
source: Optional[str] = None,
@@ -177,7 +113,7 @@ class DownloadChain(ChainBase):
logger.error(f"{torrent.title} 无法获取下载地址:{torrent.enclosure}")
return None, "", []
# 下载种子文件
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
torrent_file, content, download_folder, files, error_msg = TorrentHelper().download_torrent(
url=torrent_url,
cookie=site_cookie,
ua=torrent.site_ua or settings.USER_AGENT,
@@ -275,7 +211,7 @@ class DownloadChain(ChainBase):
else:
content = torrent_file
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
_folder_name, _file_list = TorrentHelper().get_torrent_info(torrent_file)
# 下载目录
if save_path:
@@ -283,7 +219,7 @@ class DownloadChain(ChainBase):
download_dir = Path(save_path)
else:
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_dir(_media, storage="local", include_unsorted=True)
dir_info = DirectoryHelper().get_dir(_media, storage="local", include_unsorted=True)
# 拼装子目录
if dir_info:
# 一级目录
@@ -333,7 +269,8 @@ class DownloadChain(ChainBase):
_save_path = download_dir if _layout == "NoSubfolder" or not _folder_name else download_path
# 登记下载记录
self.downloadhis.add(
downloadhis = DownloadHistoryOper()
downloadhis.add(
path=str(download_path),
type=_media.type.value,
title=_media.title,
@@ -381,11 +318,26 @@ class DownloadChain(ChainBase):
"torrentname": _meta.org_string,
})
if files_to_add:
self.downloadhis.add_files(files_to_add)
downloadhis.add_files(files_to_add)
# 下载成功发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent,
username=username, download_episodes=download_episodes)
self.post_message(
Notification(
channel=channel,
source=source if channel else None,
mtype=NotificationType.Download,
ctype=ContentType.DownloadAdded,
image=_media.get_message_image(),
link=settings.MP_DOMAIN('/#/downloading'),
userid=userid,
username=username
),
meta=_meta,
mediainfo=_media,
torrentinfo=_torrent,
download_episodes=download_episodes,
username=username,
)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# 广播事件
@@ -582,7 +534,7 @@ class DownloadChain(ChainBase):
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法确定种子文件集数")
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
torrent_episodes = TorrentHelper().get_torrent_episodes(torrent_files)
logger.info(f"{meta.org_string} 解析种子文件集数为 {torrent_episodes}")
if not torrent_episodes:
continue
@@ -756,7 +708,7 @@ class DownloadChain(ChainBase):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法解析种子文件集数")
continue
# 种子全部集
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
torrent_episodes = TorrentHelper().get_torrent_episodes(torrent_files)
logger.info(f"{torrent.site_name} - {meta.org_string} 解析种子文件集数:{torrent_episodes}")
# 选中的集
selected_episodes = set(torrent_episodes).intersection(set(need_episodes))
@@ -845,11 +797,12 @@ class DownloadChain(ChainBase):
if not totals:
totals = {}
mediaserver = MediaServerOper()
if mediainfo.type == MediaType.MOVIE:
# 电影
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id)
itemid = mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id)
exists_movies: Optional[ExistMediaInfo] = self.media_exists(mediainfo=mediainfo, itemid=itemid)
if exists_movies:
logger.info(f"媒体库中已存在电影:{mediainfo.title_year}")
@@ -869,10 +822,10 @@ class DownloadChain(ChainBase):
logger.error(f"媒体信息中没有季集信息:{mediainfo.title_year}")
return False, {}
# 电视剧
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season)
itemid = mediaserver.get_item_id(mtype=mediainfo.type.value,
title=mediainfo.title,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season)
# 媒体库已存在的剧集
exists_tvs: Optional[ExistMediaInfo] = self.media_exists(mediainfo=mediainfo, itemid=itemid)
if not exists_tvs:
@@ -971,7 +924,7 @@ class DownloadChain(ChainBase):
return []
ret_torrents = []
for torrent in torrents:
history = self.downloadhis.get_by_hash(torrent.hash)
history = DownloadHistoryOper().get_by_hash(torrent.hash)
if history:
# 媒体信息
torrent.media = {
@@ -988,21 +941,21 @@ class DownloadChain(ChainBase):
ret_torrents.append(torrent)
return ret_torrents
def set_downloading(self, hash_str, oper: str) -> bool:
def set_downloading(self, hash_str, oper: str, name: Optional[str] = None) -> bool:
"""
控制下载任务 start/stop
"""
if oper == "start":
return self.start_torrents(hashs=[hash_str])
return self.start_torrents(hashs=[hash_str], downloader=name)
elif oper == "stop":
return self.stop_torrents(hashs=[hash_str])
return self.stop_torrents(hashs=[hash_str], downloader=name)
return False
def remove_downloading(self, hash_str: str) -> bool:
def remove_downloading(self, hash_str: str, name: Optional[str] = None) -> bool:
"""
删除下载任务
"""
return self.remove_torrents(hashs=[hash_str])
return self.remove_torrents(hashs=[hash_str], downloader=name)
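`set_downloading` and `remove_downloading` now accept an optional downloader `name` and forward it, so an operation can target one specific client instead of the default. A minimal sketch of that dispatch under a hypothetical client registry (not the project's real downloader module):

```python
from typing import List, Optional

# hypothetical registry of configured download clients
CLIENTS = {"qbittorrent": set(), "transmission": set()}
DEFAULT = "qbittorrent"

def start_torrents(hashs: List[str], downloader: Optional[str] = None) -> bool:
    # fall back to the default client when no name is given
    client = CLIENTS[downloader or DEFAULT]
    client.update(hashs)
    return True

start_torrents(["abc"])                              # goes to the default client
start_torrents(["def"], downloader="transmission")   # targets a named client
assert "abc" in CLIENTS["qbittorrent"]
assert "def" in CLIENTS["transmission"]
```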
@eventmanager.register(EventType.DownloadFileDeleted)
def download_file_deleted(self, event: Event):


@@ -10,11 +10,11 @@ from app.core.context import Context, MediaInfo
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas import FileItem
from app.schemas.types import EventType, MediaType, ChainEventType
from app.schemas.types import EventType, MediaType, ChainEventType, SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
recognize_lock = Lock()
@@ -22,14 +22,53 @@ scraping_lock = Lock()
scraping_files = []
class MediaChain(ChainBase, metaclass=Singleton):
class MediaChain(ChainBase):
"""
媒体信息处理链,单例运行
"""
def __init__(self):
super().__init__()
self.storagechain = StorageChain()
@staticmethod
def _get_scraping_switchs() -> dict:
"""
获取刮削开关配置
"""
switchs = SystemConfigOper().get(SystemConfigKey.ScrapingSwitchs) or {}
# 默认配置
default_switchs = {
'movie_nfo': True, # 电影NFO
'movie_poster': True, # 电影海报
'movie_backdrop': True, # 电影背景图
'movie_logo': True, # 电影Logo
'movie_disc': True, # 电影光盘图
'movie_banner': True, # 电影横幅图
'movie_thumb': True, # 电影缩略图
'tv_nfo': True, # 电视剧NFO
'tv_poster': True, # 电视剧海报
'tv_backdrop': True, # 电视剧背景图
'tv_banner': True, # 电视剧横幅图
'tv_logo': True, # 电视剧Logo
'tv_thumb': True, # 电视剧缩略图
'season_nfo': True, # 季NFO
'season_poster': True, # 季海报
'season_banner': True, # 季横幅图
'season_thumb': True, # 季缩略图
'episode_nfo': True, # 集NFO
'episode_thumb': True # 集缩略图
}
# 合并用户配置和默认配置
for key, default_value in default_switchs.items():
if key not in switchs:
switchs[key] = default_value
return switchs
@staticmethod
def set_scraping_switchs(switchs: dict) -> bool:
"""
设置刮削开关配置
:param switchs: 开关配置字典
:return: 是否设置成功
"""
return SystemConfigOper().set(SystemConfigKey.ScrapingSwitchs, switchs)
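`_get_scraping_switchs` above overlays the stored settings on a full default map, so switch keys added in later releases default to enabled for existing users. The for-loop is equivalent to a dict merge where stored values win; a sketch:

```python
# trimmed default map; the real one lists every movie/tv/season/episode switch
defaults = {"movie_nfo": True, "movie_poster": True, "episode_thumb": True}
stored = {"movie_poster": False}  # the user disabled one switch

# same result as the loop in _get_scraping_switchs:
# stored values win, missing keys fall back to the defaults
switchs = {**defaults, **stored}

assert switchs["movie_poster"] is False   # user override kept
assert switchs["episode_thumb"] is True   # key absent from storage defaults on
```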
def metadata_nfo(self, meta: MetaBase, mediainfo: MediaInfo,
season: Optional[int] = None, episode: Optional[int] = None) -> Optional[str]:
@@ -337,6 +376,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
:param overwrite: 是否覆盖已有文件
"""
storagechain = StorageChain()
def is_bluray_folder(_fileitem: schemas.FileItem) -> bool:
"""
判断是否为原盘目录
@@ -346,7 +387,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 蓝光原盘目录必备的文件或文件夹
required_files = ['BDMV', 'CERTIFICATE']
# 检查目录下是否存在所需文件或文件夹
for item in self.storagechain.list_files(_fileitem):
for item in storagechain.list_files(_fileitem):
if item.name in required_files:
return True
return False
@@ -355,7 +396,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
"""
列出下级文件
"""
return self.storagechain.list_files(fileitem=_fileitem)
return storagechain.list_files(fileitem=_fileitem)
def __save_file(_fileitem: schemas.FileItem, _path: Path, _content: Union[bytes, str]):
"""
@@ -371,7 +412,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
tmp_file.write_bytes(_content)
# 获取文件的父目录
try:
item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file, new_name=_path.name)
item = storagechain.upload_file(fileitem=_fileitem, path=tmp_file, new_name=_path.name)
if item:
logger.info(f"已保存文件:{item.path}")
else:
@@ -386,7 +427,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
"""
try:
logger.info(f"正在下载图片:{_url} ...")
r = RequestUtils(proxies=settings.PROXY).get_res(url=_url)
r = RequestUtils(proxies=settings.PROXY, ua=settings.USER_AGENT).get_res(url=_url)
if r:
return r.content
else:
@@ -407,37 +448,47 @@ class MediaChain(ChainBase, metaclass=Singleton):
if not mediainfo:
logger.warn(f"{filepath} 无法识别文件媒体信息!")
return
# 获取刮削开关配置
scraping_switchs = self._get_scraping_switchs()
logger.info(f"开始刮削:{filepath} ...")
if mediainfo.type == MediaType.MOVIE:
# 电影
if fileitem.type == "file":
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 电影文件
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到上级目录
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
# 电影目录
if is_bluray_folder(fileitem):
# 原盘目录
nfo_path = filepath / (filepath.name + ".nfo")
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 生成原盘nfo
# 检查电影NFO开关
if scraping_switchs.get('movie_nfo', True):
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 电影文件
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到当前目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
# 保存或上传nfo文件到上级目录
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
logger.info("电影NFO刮削已关闭跳过")
else:
# 电影目录
if is_bluray_folder(fileitem):
# 原盘目录
if scraping_switchs.get('movie_nfo', True):
nfo_path = filepath / (filepath.name + ".nfo")
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 生成原盘nfo
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到当前目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
logger.info("电影NFO刮削已关闭跳过")
else:
# 处理目录内的文件
files = __list_files(_fileitem=fileitem)
@@ -449,23 +500,40 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 生成目录内图片文件
if init_folder:
# 图片
for attr_name, attr_value in vars(mediainfo).items():
if attr_value \
and attr_name.endswith("_path") \
and attr_value \
and isinstance(attr_value, str) \
and attr_value.startswith("http"):
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
image_path = filepath / image_name
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(_url=attr_value)
# 写入图片到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
# 根据图片类型检查开关
if 'poster' in image_name.lower():
should_scrape = scraping_switchs.get('movie_poster', True)
elif ('backdrop' in image_name.lower()
or 'fanart' in image_name.lower()
or 'background' in image_name.lower()):
should_scrape = scraping_switchs.get('movie_backdrop', True)
elif 'logo' in image_name.lower():
should_scrape = scraping_switchs.get('movie_logo', True)
elif 'disc' in image_name.lower() or 'cdart' in image_name.lower():
should_scrape = scraping_switchs.get('movie_disc', True)
elif 'banner' in image_name.lower():
should_scrape = scraping_switchs.get('movie_banner', True)
elif 'thumb' in image_name.lower():
should_scrape = scraping_switchs.get('movie_thumb', True)
else:
logger.info(f"已存在图片文件:{image_path}")
should_scrape = True # 未知类型默认刮削
if should_scrape:
image_path = filepath.with_name(image_name)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 写入图片到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info(f"电影图片刮削已关闭,跳过:{image_name}")
else:
# 电视剧
if fileitem.type == "file":
@@ -479,38 +547,45 @@ class MediaChain(ChainBase, metaclass=Singleton):
if not file_mediainfo:
logger.warn(f"{filepath.name} 无法识别文件媒体信息!")
return
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 获取集的nfo文件
episode_nfo = self.metadata_nfo(meta=file_meta, mediainfo=file_mediainfo,
season=file_meta.begin_season,
episode=file_meta.begin_episode)
if episode_nfo:
# 保存或上传nfo文件到上级目录
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
# 获取集的图片
image_dict = self.metadata_img(mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
if image_dict:
for episode, image_url in image_dict.items():
image_path = filepath.with_suffix(Path(image_url).suffix)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 检查集NFO开关
if scraping_switchs.get('episode_nfo', True):
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 获取集的nfo文件
episode_nfo = self.metadata_nfo(meta=file_meta, mediainfo=file_mediainfo,
season=file_meta.begin_season,
episode=file_meta.begin_episode)
if episode_nfo:
# 保存或上传nfo文件到上级目录
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
else:
logger.info(f"已存在图片文件:{image_path}")
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
logger.info("集NFO刮削已关闭跳过")
# 获取集的图片
if scraping_switchs.get('episode_thumb', True):
image_dict = self.metadata_img(mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
if image_dict:
for episode, image_url in image_dict.items():
image_path = filepath.with_suffix(Path(image_url).suffix)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info("集缩略图刮削已关闭,跳过")
else:
# 当前为目录,处理目录内的文件
files = __list_files(_fileitem=fileitem)
@@ -528,71 +603,95 @@ class MediaChain(ChainBase, metaclass=Singleton):
if filepath.name in settings.RENAME_FORMAT_S0_NAMES:
season_meta.begin_season = 0
if season_meta.begin_season is not None:
# 是否已存在
nfo_path = filepath / "season.nfo"
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo,
season=season_meta.begin_season)
if season_nfo:
# 写入nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
else:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
else:
logger.info(f"已存在nfo文件{nfo_path}")
# TMDB季poster图片
image_dict = self.metadata_img(mediainfo=mediainfo, season=season_meta.begin_season)
if image_dict:
for image_name, image_url in image_dict.items():
image_path = filepath.with_name(image_name)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到剧集目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 检查季NFO开关
if scraping_switchs.get('season_nfo', True):
# 是否已存在
nfo_path = filepath / "season.nfo"
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo,
season=season_meta.begin_season)
if season_nfo:
# 写入nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
else:
logger.info(f"已存在图片文件:{image_path}")
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
logger.info("季NFO刮削已关闭跳过")
# TMDB季poster图片
if scraping_switchs.get('season_poster', True):
image_dict = self.metadata_img(mediainfo=mediainfo, season=season_meta.begin_season)
if image_dict:
for image_name, image_url in image_dict.items():
image_path = filepath.with_name(image_name)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到剧集目录
if content:
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info("季海报刮削已关闭,跳过")
# 额外fanart季图片poster thumb banner
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
if image_name.startswith("season"):
image_path = filepath.with_name(image_name)
# 只下载当前刮削季的图片
image_season = "00" if "specials" in image_name else image_name[6:8]
if image_season != str(season_meta.begin_season).rjust(2, '0'):
logger.info(f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
continue
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 根据季图片类型检查开关
if 'poster' in image_name.lower():
should_scrape = scraping_switchs.get('season_poster', True)
elif 'banner' in image_name.lower():
should_scrape = scraping_switchs.get('season_banner', True)
elif 'thumb' in image_name.lower():
should_scrape = scraping_switchs.get('season_thumb', True)
else:
logger.info(f"已存在图片文件:{image_path}")
should_scrape = True # 未知类型默认刮削
if should_scrape:
image_path = filepath.with_name(image_name)
# 只下载当前刮削季的图片
image_season = "00" if "specials" in image_name else image_name[6:8]
if image_season != str(season_meta.begin_season).rjust(2, '0'):
logger.info(f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
continue
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info(f"季图片刮削已关闭,跳过:{image_name}")
# 判断当前目录是不是剧集根目录
if not season_meta.season:
# 是否已存在
nfo_path = filepath / "tvshow.nfo"
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 当前目录有名称,生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if tv_nfo:
# 写入tvshow nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
# 检查电视剧NFO开关
if scraping_switchs.get('tv_nfo', True):
# 是否已存在
nfo_path = filepath / "tvshow.nfo"
if overwrite or not storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 当前目录有名称生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if tv_nfo:
# 写入tvshow nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
else:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
else:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
logger.info(f"已存在nfo文件{nfo_path}")
else:
logger.info(f"已存在nfo文件{nfo_path}")
logger.info("电视剧NFO刮削已关闭跳过")
# 生成目录图片
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
@@ -600,14 +699,33 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 不下载季图片
if image_name.startswith("season"):
continue
image_path = filepath / image_name
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
# 根据电视剧图片类型检查开关
if 'poster' in image_name.lower():
should_scrape = scraping_switchs.get('tv_poster', True)
elif ('backdrop' in image_name.lower()
or 'fanart' in image_name.lower()
or 'background' in image_name.lower()):
should_scrape = scraping_switchs.get('tv_backdrop', True)
elif 'banner' in image_name.lower():
should_scrape = scraping_switchs.get('tv_banner', True)
elif 'logo' in image_name.lower():
should_scrape = scraping_switchs.get('tv_logo', True)
elif 'thumb' in image_name.lower():
should_scrape = scraping_switchs.get('tv_thumb', True)
else:
logger.info(f"已存在图片文件:{image_path}")
should_scrape = True # 未知类型默认刮削
if should_scrape:
image_path = filepath / image_name
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info(f"电视剧图片刮削已关闭,跳过:{image_name}")
logger.info(f"{filepath.name} 刮削完成")


@@ -2,7 +2,6 @@ import threading
from typing import List, Union, Optional, Generator, Any
from app.chain import ChainBase
from app.core.cache import cached
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
@@ -17,10 +16,6 @@ class MediaServerChain(ChainBase):
媒体服务器处理链
"""
def __init__(self):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str, username: Optional[str] = None,
hidden: bool = False) -> List[MediaServerLibrary]:
"""
@@ -96,7 +91,6 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
@cached(maxsize=1, ttl=3600)
def get_latest_wallpapers(self, server: Optional[str] = None, count: Optional[int] = 10,
remote: bool = True, username: Optional[str] = None) -> List[str]:
"""
@@ -131,7 +125,8 @@ class MediaServerChain(ChainBase):
# 汇总统计
total_count = 0
# 清空登记薄
self.dboper.empty()
dboper = MediaServerOper()
dboper.empty()
# 遍历媒体服务器
for mediaserver in mediaservers:
if not mediaserver:
@@ -175,7 +170,7 @@ class MediaServerChain(ChainBase):
item_dict = item.dict()
item_dict["seasoninfo"] = seasoninfo
item_dict["item_type"] = item_type
self.dboper.add(**item_dict)
dboper.add(**item_dict)
logger.info(f"{server_name} 媒体库 {library.name} 同步完成,共同步数量:{library_count}")
# 总数累加
total_count += library_count

File diff suppressed because it is too large


@@ -29,12 +29,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
推荐处理链,单例运行
"""
def __init__(self):
super().__init__()
self.tmdbchain = TmdbChain()
self.doubanchain = DoubanChain()
self.bangumichain = BangumiChain()
self.cache_max_pages = 5
# 推荐数据的缓存页数
cache_max_pages = 5
def refresh_recommend(self):
"""
@@ -174,16 +170,16 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
TMDB热门电影
"""
movies = self.tmdbchain.tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@@ -200,16 +196,16 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
TMDB热门电视剧
"""
tvs = self.tmdbchain.tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
@log_execution_time(logger=logger)
@@ -218,7 +214,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
TMDB流行趋势
"""
infos = self.tmdbchain.tmdb_trending(page=page)
infos = TmdbChain().tmdb_trending(page=page)
return [info.to_dict() for info in infos] if infos else []
@log_execution_time(logger=logger)
@@ -227,7 +223,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
Bangumi每日放送
"""
medias = self.bangumichain.calendar()
medias = BangumiChain().calendar()
return [media.to_dict() for media in medias[(page - 1) * count: page * count]] if medias else []
@log_execution_time(logger=logger)
@@ -236,7 +232,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣正在热映
"""
movies = self.doubanchain.movie_showing(page=page, count=count)
movies = DoubanChain().movie_showing(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@@ -246,8 +242,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣最新电影
"""
movies = self.doubanchain.douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@@ -257,8 +253,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣最新电视剧
"""
tvs = self.doubanchain.douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@@ -267,7 +263,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣电影TOP250
"""
movies = self.doubanchain.movie_top250(page=page, count=count)
movies = DoubanChain().movie_top250(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@@ -276,7 +272,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣国产剧集榜
"""
tvs = self.doubanchain.tv_weekly_chinese(page=page, count=count)
tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@@ -285,7 +281,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣全球剧集榜
"""
tvs = self.doubanchain.tv_weekly_global(page=page, count=count)
tvs = DoubanChain().tv_weekly_global(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@@ -294,7 +290,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣热门动漫
"""
tvs = self.doubanchain.tv_animation(page=page, count=count)
tvs = DoubanChain().tv_animation(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@@ -303,7 +299,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣热门电影
"""
movies = self.doubanchain.movie_hot(page=page, count=count)
movies = DoubanChain().movie_hot(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@@ -312,5 +308,5 @@ class RecommendChain(ChainBase, metaclass=Singleton):
"""
豆瓣热门电视剧
"""
tvs = self.doubanchain.tv_hot(page=page, count=count)
tvs = DoubanChain().tv_hot(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []


@@ -27,16 +27,9 @@ class SearchChain(ChainBase):
__result_temp_file = "__search_result__"
def __init__(self):
super().__init__()
self.siteshelper = SitesHelper()
self.progress = ProgressHelper()
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
def search_by_id(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
mtype: MediaType = None, area: Optional[str] = "title", season: Optional[int] = None,
sites: List[int] = None) -> List[Context]:
sites: List[int] = None, cache_local: bool = False) -> List[Context]:
"""
根据TMDBID/豆瓣ID搜索资源,精确匹配,不过滤本地存在的资源
:param tmdbid: TMDB ID
@@ -45,6 +38,7 @@ class SearchChain(ChainBase):
:param area: 搜索范围title or imdbid
:param season: 季数
:param sites: 站点ID列表
:param cache_local: 是否缓存到本地
"""
mediainfo = self.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
if not mediainfo:
@@ -59,12 +53,12 @@ class SearchChain(ChainBase):
}
results = self.process(mediainfo=mediainfo, sites=sites, area=area, no_exists=no_exists)
# 保存到本地文件
bytes_results = pickle.dumps(results)
self.save_cache(bytes_results, self.__result_temp_file)
if cache_local:
self.save_cache(pickle.dumps(results), self.__result_temp_file)
return results
def search_by_title(self, title: str, page: Optional[int] = 0,
sites: List[int] = None, cache_local: Optional[bool] = True) -> List[Context]:
sites: List[int] = None, cache_local: Optional[bool] = False) -> List[Context]:
"""
根据标题搜索资源,不识别不过滤,直接返回站点内容
:param title: 标题,为空时返回所有站点首页内容
@@ -86,8 +80,7 @@ class SearchChain(ChainBase):
torrent_info=torrent) for torrent in torrents]
# 保存到本地文件
if cache_local:
bytes_results = pickle.dumps(contexts)
self.save_cache(bytes_results, self.__result_temp_file)
self.save_cache(pickle.dumps(contexts), self.__result_temp_file)
return contexts
def last_search_results(self) -> List[Context]:
@@ -184,19 +177,20 @@ class SearchChain(ChainBase):
return []
# 开始新进度
self.progress.start(ProgressKey.Search)
progress = ProgressHelper()
progress.start(ProgressKey.Search)
# 开始过滤
self.progress.update(value=0, text=f'开始过滤,总 {len(torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
progress.update(value=0, text=f'开始过滤,总 {len(torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
# 匹配订阅附加参数
if filter_params:
logger.info(f'开始附加参数过滤,附加参数:{filter_params} ...')
torrents = [torrent for torrent in torrents if self.torrenthelper.filter_torrent(torrent, filter_params)]
torrents = [torrent for torrent in torrents if TorrentHelper().filter_torrent(torrent, filter_params)]
# 开始过滤规则过滤
if rule_groups is None:
# 取搜索过滤规则
rule_groups: List[str] = self.systemconfig.get(SystemConfigKey.SearchFilterRuleGroups)
rule_groups: List[str] = SystemConfigOper().get(SystemConfigKey.SearchFilterRuleGroups)
if rule_groups:
logger.info(f'开始过滤规则/剧集过滤,使用规则组:{rule_groups} ...')
torrents = __do_filter(torrents)
@@ -206,26 +200,27 @@ class SearchChain(ChainBase):
logger.info(f"过滤规则/剧集过滤完成,剩余 {len(torrents)} 个资源")
# 过滤完成
self.progress.update(value=50, text=f'过滤完成,剩余 {len(torrents)} 个资源', key=ProgressKey.Search)
progress.update(value=50, text=f'过滤完成,剩余 {len(torrents)} 个资源', key=ProgressKey.Search)
# 开始匹配
_match_torrents = []
# 总数
_total = len(torrents)
# 已处理数
_count = 0
if mediainfo:
# 开始匹配
_match_torrents = []
torrenthelper = TorrentHelper()
try:
# 英文标题应该在别名/原标题中,不需要再匹配
logger.info(f"开始匹配结果 标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
self.progress.update(value=51, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
progress.update(value=51, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
for torrent in torrents:
if global_vars.is_system_stopped:
break
_count += 1
self.progress.update(value=(_count / _total) * 96,
text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...',
key=ProgressKey.Search)
progress.update(value=(_count / _total) * 96,
text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...',
key=ProgressKey.Search)
if not torrent.title:
continue
@@ -236,10 +231,9 @@ class SearchChain(ChainBase):
logger.info(f"种子名称应用识别词后发生改变:{torrent.title} => {torrent_meta.org_string}")
# 季集数过滤
if season_episodes \
and not self.torrenthelper.match_season_episodes(
torrent=torrent,
meta=torrent_meta,
season_episodes=season_episodes):
and not torrenthelper.match_season_episodes(torrent=torrent,
meta=torrent_meta,
season_episodes=season_episodes):
continue
# 比对IMDBID
if torrent.imdbid \
@@ -250,40 +244,42 @@ class SearchChain(ChainBase):
continue
# 比对种子
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent):
if torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent):
# 匹配成功
_match_torrents.append((torrent, torrent_meta))
continue
# 匹配完成
logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
self.progress.update(value=97,
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
key=ProgressKey.Search)
else:
_match_torrents = [(t, MetaInfo(title=t.title, subtitle=t.description)) for t in torrents]
progress.update(value=97,
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
key=ProgressKey.Search)
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(torrent_info=t[0],
media_info=mediainfo,
meta_info=t[1]) for t in _match_torrents]
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(torrent_info=t[0],
media_info=mediainfo,
meta_info=t[1]) for t in _match_torrents]
finally:
torrents.clear()
del torrents
_match_torrents.clear()
del _match_torrents
# 排序
self.progress.update(value=99,
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
key=ProgressKey.Search)
contexts = self.torrenthelper.sort_torrents(contexts)
progress.update(value=99,
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
key=ProgressKey.Search)
contexts = torrenthelper.sort_torrents(contexts)
# 结束进度
logger.info(f'搜索完成,共 {len(contexts)} 个资源')
self.progress.update(value=100,
text=f'搜索完成,共 {len(contexts)} 个资源',
key=ProgressKey.Search)
self.progress.end(ProgressKey.Search)
progress.update(value=100,
text=f'搜索完成,共 {len(contexts)} 个资源',
key=ProgressKey.Search)
progress.end(ProgressKey.Search)
# 返回
return contexts
@@ -307,9 +303,9 @@ class SearchChain(ChainBase):
# 配置的索引站点
if not sites:
sites = self.systemconfig.get(SystemConfigKey.IndexerSites) or []
sites = SystemConfigOper().get(SystemConfigKey.IndexerSites) or []
for indexer in self.siteshelper.get_indexers():
for indexer in SitesHelper().get_indexers():
# 检查站点索引开关
if not sites or indexer.get("id") in sites:
indexer_sites.append(indexer)
@@ -318,7 +314,8 @@ class SearchChain(ChainBase):
return []
# 开始进度
self.progress.start(ProgressKey.Search)
progress = ProgressHelper()
progress.start(ProgressKey.Search)
# 开始计时
start_time = datetime.now()
# 总数
@@ -326,48 +323,49 @@ class SearchChain(ChainBase):
# 完成数
finish_count = 0
# 更新进度
self.progress.update(value=0,
text=f"开始搜索,共 {total_num} 个站点 ...",
key=ProgressKey.Search)
# 多线程
executor = ThreadPoolExecutor(max_workers=len(indexer_sites))
all_task = []
for site in indexer_sites:
if area == "imdbid":
# 搜索IMDBID
task = executor.submit(self.search_torrents, site=site,
keywords=[mediainfo.imdb_id] if mediainfo else None,
mtype=mediainfo.type if mediainfo else None,
page=page)
else:
# 搜索标题
task = executor.submit(self.search_torrents, site=site,
keywords=keywords,
mtype=mediainfo.type if mediainfo else None,
page=page)
all_task.append(task)
progress.update(value=0,
text=f"开始搜索,共 {total_num} 个站点 ...",
key=ProgressKey.Search)
# 结果集
results = []
for future in as_completed(all_task):
if global_vars.is_system_stopped:
break
finish_count += 1
result = future.result()
if result:
results.extend(result)
logger.info(f"站点搜索进度:{finish_count} / {total_num}")
self.progress.update(value=finish_count / total_num * 100,
text=f"正在搜索{keywords or ''},已完成 {finish_count} / {total_num} 个站点 ...",
key=ProgressKey.Search)
# 多线程
with ThreadPoolExecutor(max_workers=len(indexer_sites)) as executor:
all_task = []
for site in indexer_sites:
if area == "imdbid":
# 搜索IMDBID
task = executor.submit(self.search_torrents, site=site,
keywords=[mediainfo.imdb_id] if mediainfo else None,
mtype=mediainfo.type if mediainfo else None,
page=page)
else:
# 搜索标题
task = executor.submit(self.search_torrents, site=site,
keywords=keywords,
mtype=mediainfo.type if mediainfo else None,
page=page)
all_task.append(task)
for future in as_completed(all_task):
if global_vars.is_system_stopped:
break
finish_count += 1
result = future.result()
if result:
results.extend(result)
logger.info(f"站点搜索进度:{finish_count} / {total_num}")
progress.update(value=finish_count / total_num * 100,
text=f"正在搜索{keywords or ''},已完成 {finish_count} / {total_num} 个站点 ...",
key=ProgressKey.Search)
# 计算耗时
end_time = datetime.now()
# 更新进度
self.progress.update(value=100,
text=f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}",
key=ProgressKey.Search)
progress.update(value=100,
text=f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}",
key=ProgressKey.Search)
logger.info(f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}")
# 结束进度
self.progress.end(ProgressKey.Search)
progress.end(ProgressKey.Search)
# 返回
return results


@@ -1,4 +1,5 @@
import base64
import gc
import re
from datetime import datetime
from typing import Optional, Tuple, Union, Dict
@@ -16,7 +17,6 @@ from app.helper.browser import PlaywrightHelper
from app.helper.cloudflare import under_challenge
from app.helper.cookie import CookieHelper
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
@@ -34,13 +34,6 @@ class SiteChain(ChainBase):
def __init__(self):
super().__init__()
self.siteoper = SiteOper()
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.cookiehelper = CookieHelper()
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper()
self.systemconfig = SystemConfigOper()
# 特殊站点登录验证
self.special_site_test = {
@@ -62,9 +55,9 @@ class SiteChain(ChainBase):
"""
userdata: SiteUserData = self.run_module("refresh_userdata", site=site)
if userdata:
self.siteoper.update_userdata(domain=StringUtils.get_url_domain(site.get("domain")),
name=site.get("name"),
payload=userdata.dict())
SiteOper().update_userdata(domain=StringUtils.get_url_domain(site.get("domain")),
name=site.get("name"),
payload=userdata.dict())
# 发送事件
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": site.get("id")
@@ -100,10 +93,9 @@ class SiteChain(ChainBase):
"""
刷新所有站点的用户数据
"""
sites = self.siteshelper.get_indexers()
any_site_updated = False
result = {}
for site in sites:
for site in SitesHelper().get_indexers():
if global_vars.is_system_stopped:
return None
if site.get("is_active"):
@@ -115,6 +107,11 @@ class SiteChain(ChainBase):
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": "*"
})
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
return result
def is_special_site(self, domain: str) -> bool:
@@ -297,27 +294,30 @@ class SiteChain(ChainBase):
"""
if StringUtils.get_url_domain(inx.get("domain")) == sub_domain:
return inx.get("domain")
for ext_d in inx.get("ext_domains"):
for ext_d in inx.get("ext_domains", []):
if StringUtils.get_url_domain(ext_d) == sub_domain:
return ext_d
return sub_domain
logger.info("开始同步CookieCloud站点 ...")
cookies, msg = self.cookiecloud.download()
cookies, msg = CookieCloudHelper().download()
if not cookies:
logger.error(f"CookieCloud同步失败:{msg}")
if manual:
self.message.put(msg, title="CookieCloud同步失败", role="system")
self.messagehelper.put(msg, title="CookieCloud同步失败", role="system")
return False, msg
# 保存Cookie或新增站点
_update_count = 0
_add_count = 0
_fail_count = 0
siteshelper = SitesHelper()
siteoper = SiteOper()
rsshelper = RssHelper()
for domain, cookie in cookies.items():
# 索引器信息
indexer = self.siteshelper.get_indexer(domain)
indexer = siteshelper.get_indexer(domain)
# 数据库的站点信息
site_info = self.siteoper.get_by_domain(domain)
site_info = siteoper.get_by_domain(domain)
if site_info and site_info.is_active == 1:
# 站点已存在,检查站点连通性
status, msg = self.test(domain)
@@ -327,7 +327,7 @@ class SiteChain(ChainBase):
# 更新站点rss地址
if not site_info.public and not site_info.rss:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
rss_url, errmsg = rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=site_info.ua or settings.USER_AGENT,
@@ -335,13 +335,13 @@ class SiteChain(ChainBase):
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# 更新站点Cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
if settings.COOKIECLOUD_BLACKLIST and any(
@@ -356,9 +356,10 @@ class SiteChain(ChainBase):
ua=settings.USER_AGENT
).get_res(url=domain_url)
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
content = res.text
if not indexer.get("public") and not SiteUtils.is_logged_in(content):
_fail_count += 1
if under_challenge(res.text):
if under_challenge(content):
logger.warn(f"站点 {indexer.get('name')} 被Cloudflare防护,无法登录,无法添加站点")
continue
logger.warn(
@@ -396,21 +397,21 @@ class SiteChain(ChainBase):
rss_url = None
if not indexer.get("public") and domain_url:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=domain_url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=proxy)
rss_url, errmsg = rsshelper.get_rss_link(url=domain_url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=proxy)
if errmsg:
logger.warn(errmsg)
# 插入数据库
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=domain_url,
domain=domain,
cookie=cookie,
rss=rss_url,
proxy=1 if proxy else 0,
public=1 if indexer.get("public") else 0)
siteoper.add(name=indexer.get("name"),
url=domain_url,
domain=domain,
cookie=cookie,
rss=rss_url,
proxy=1 if proxy else 0,
public=1 if indexer.get("public") else 0)
_add_count += 1
# 通知站点更新
@@ -423,7 +424,7 @@ class SiteChain(ChainBase):
if _fail_count > 0:
ret_msg += f"{_fail_count}个站点添加失败,下次同步时将重试,也可以手动添加"
if manual:
self.message.put(ret_msg, title="CookieCloud同步成功", role="system")
self.messagehelper.put(ret_msg, title="CookieCloud同步成功", role="system")
logger.info(f"CookieCloud同步成功:{ret_msg}")
return True, ret_msg
@@ -442,29 +443,31 @@ class SiteChain(ChainBase):
if str(domain).startswith("http"):
domain = StringUtils.get_url_domain(domain)
# 站点信息
siteinfo = self.siteoper.get_by_domain(domain)
siteoper = SiteOper()
siteshelper = SitesHelper()
siteinfo = siteoper.get_by_domain(domain)
if not siteinfo:
logger.warn(f"未维护站点 {domain} 信息!")
return
# Cookie
cookie = siteinfo.cookie
# 索引器
indexer = self.siteshelper.get_indexer(domain)
indexer = siteshelper.get_indexer(domain)
if not indexer:
logger.warn(f"站点 {domain} 索引器不存在!")
return
# 查询站点图标
site_icon = self.siteoper.get_icon_by_domain(domain)
site_icon = siteoper.get_icon_by_domain(domain)
if not site_icon or not site_icon.base64:
logger.info(f"开始缓存站点 {indexer.get('name')} 图标 ...")
icon_url, icon_base64 = self.__parse_favicon(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if icon_url:
self.siteoper.update_icon(name=indexer.get("name"),
domain=domain,
icon_url=icon_url,
icon_base64=icon_base64)
siteoper.update_icon(name=indexer.get("name"),
domain=domain,
icon_url=icon_url,
icon_base64=icon_base64)
logger.info(f"缓存站点 {indexer.get('name')} 图标成功")
else:
logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")
@@ -484,11 +487,12 @@ class SiteChain(ChainBase):
# 获取主域名中间那段
domain_host = StringUtils.get_url_host(domain)
# 查询以"site.domain_host"开头的配置项,并清除
site_keys = self.systemconfig.all().keys()
systemconfig = SystemConfigOper()
site_keys = systemconfig.all().keys()
for key in site_keys:
if key.startswith(f"site.{domain_host}"):
logger.info(f"清理站点配置:{key}")
self.systemconfig.delete(key)
systemconfig.delete(key)
@eventmanager.register(EventType.SiteUpdated)
def cache_site_userdata(self, event: Event):
@@ -504,7 +508,7 @@ class SiteChain(ChainBase):
return
if str(domain).startswith("http"):
domain = StringUtils.get_url_domain(domain)
indexer = self.siteshelper.get_indexer(domain)
indexer = SitesHelper().get_indexer(domain)
if not indexer:
return
# 刷新站点用户数据
@@ -518,7 +522,8 @@ class SiteChain(ChainBase):
"""
# 检查域名是否可用
domain = StringUtils.get_url_domain(url)
site_info = self.siteoper.get_by_domain(domain)
siteoper = SiteOper()
site_info = siteoper.get_by_domain(domain)
if not site_info:
return False, f"站点【{url}】不存在"
@@ -535,9 +540,9 @@ class SiteChain(ChainBase):
# 统计
seconds = (datetime.now() - start_time).seconds
if state:
self.siteoper.success(domain=domain, seconds=seconds)
siteoper.success(domain=domain, seconds=seconds)
else:
self.siteoper.fail(domain)
siteoper.fail(domain)
return state, message
except Exception as e:
return False, f"{str(e)}"
@@ -572,8 +577,9 @@ class SiteChain(ChainBase):
).get_res(url=site_url)
# 判断登录状态
if res and res.status_code in [200, 500, 403]:
if not public and not SiteUtils.is_logged_in(res.text):
if under_challenge(res.text):
content = res.text
if not public and not SiteUtils.is_logged_in(content):
if under_challenge(content):
msg = "站点被Cloudflare防护,请打开站点浏览器仿真"
elif res.status_code == 200:
msg = "Cookie已失效"
@@ -593,7 +599,7 @@ class SiteChain(ChainBase):
"""
查询所有站点,发送消息
"""
site_list = self.siteoper.list()
site_list = SiteOper().list()
if not site_list:
self.post_message(Notification(
channel=channel,
@@ -633,7 +639,8 @@ class SiteChain(ChainBase):
if not arg_str.isdigit():
return
site_id = int(arg_str)
site = self.siteoper.get(site_id)
siteoper = SiteOper()
site = siteoper.get(site_id)
if not site:
self.post_message(Notification(
channel=channel,
@@ -641,7 +648,7 @@ class SiteChain(ChainBase):
userid=userid))
return
# 禁用站点
self.siteoper.update(site_id, {
siteoper.update(site_id, {
"is_active": False
})
# 重新发送消息
@@ -655,25 +662,27 @@ class SiteChain(ChainBase):
if not arg_str:
return
arg_strs = str(arg_str).split()
siteoper = SiteOper()
for arg_str in arg_strs:
arg_str = arg_str.strip()
if not arg_str.isdigit():
continue
site_id = int(arg_str)
site = self.siteoper.get(site_id)
site = siteoper.get(site_id)
if not site:
self.post_message(Notification(
channel=channel,
title=f"站点编号 {site_id} 不存在!", userid=userid))
return
# 禁用站点
self.siteoper.update(site_id, {
siteoper.update(site_id, {
"is_active": True
})
# 重新发送消息
self.remote_list(channel=channel, userid=userid, source=source)
def update_cookie(self, site_info: Site,
@staticmethod
def update_cookie(site_info: Site,
username: str, password: str, two_step_code: Optional[str] = None) -> Tuple[bool, str]:
"""
根据用户名密码更新站点Cookie
@@ -684,7 +693,7 @@ class SiteChain(ChainBase):
:return: (是否成功, 错误信息)
"""
# 更新站点Cookie
result = self.cookiehelper.get_site_cookie_ua(
result = CookieHelper().get_site_cookie_ua(
url=site_info.url,
username=username,
password=password,
@@ -695,7 +704,7 @@ class SiteChain(ChainBase):
cookie, ua, msg = result
if not cookie:
return False, msg
self.siteoper.update(site_info.id, {
SiteOper().update(site_info.id, {
"cookie": cookie,
"ua": ua
})
@@ -737,7 +746,7 @@ class SiteChain(ChainBase):
# 站点ID
site_id = int(site_id)
# 站点信息
site_info = self.siteoper.get(site_id)
site_info = SiteOper().get(site_id)
if not site_info:
self.post_message(Notification(
channel=channel,


@@ -14,16 +14,18 @@ class StorageChain(ChainBase):
存储处理链
"""
def __init__(self):
super().__init__()
self.directoryhelper = DirectoryHelper()
def save_config(self, storage: str, conf: dict) -> None:
"""
保存存储配置
"""
self.run_module("save_config", storage=storage, conf=conf)
def reset_config(self, storage: str) -> None:
"""
重置存储配置
"""
self.run_module("reset_config", storage=storage)
def generate_qrcode(self, storage: str) -> Optional[Tuple[dict, str]]:
"""
生成二维码
@@ -108,11 +110,17 @@ class StorageChain(ChainBase):
"""
return self.run_module("get_parent_item", fileitem=fileitem)
def snapshot_storage(self, storage: str, path: Path) -> Optional[Dict[str, float]]:
def snapshot_storage(self, storage: str, path: Path,
last_snapshot_time: float = None, max_depth: int = 5) -> Optional[Dict[str, Dict]]:
"""
快照存储
:param storage: 存储类型
:param path: 路径
:param last_snapshot_time: 上次快照时间,用于增量快照
:param max_depth: 最大递归深度,避免过深遍历
"""
return self.run_module("snapshot_storage", storage=storage, path=path)
return self.run_module("snapshot_storage", storage=storage, path=path,
last_snapshot_time=last_snapshot_time, max_depth=max_depth)
def storage_usage(self, storage: str) -> Optional[schemas.StorageUsage]:
"""
@@ -131,28 +139,43 @@ class StorageChain(ChainBase):
"""
删除媒体文件,以及不含媒体文件的目录
"""
def __is_bluray_dir(_fileitem: schemas.FileItem) -> bool:
"""
检查是否蓝光目录
"""
_dir_files = self.list_files(fileitem=_fileitem, recursion=False)
if _dir_files:
for _f in _dir_files:
if _f.type == "dir" and _f.name in ["BDMV", "CERTIFICATE"]:
return True
return False
media_exts = settings.RMT_MEDIAEXT + settings.DOWNLOAD_TMPEXT
if fileitem.path == "/" or len(Path(fileitem.path).parts) <= 2:
logger.warn(f"{fileitem.storage}{fileitem.path} 根目录或一级目录不允许删除")
return False
if fileitem.type == "dir":
# 本身是目录
if _blue_dir := self.list_files(fileitem=fileitem, recursion=False):
# 删除蓝光目录
for _f in _blue_dir:
if _f.type == "dir" and _f.name in ["BDMV", "CERTIFICATE"]:
logger.warn(f"{fileitem.storage}{_f.path} 删除蓝光目录")
self.delete_file(_f)
if self.any_files(fileitem, extensions=media_exts) is False:
logger.warn(f"{fileitem.storage}{fileitem.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(fileitem)
return False
if __is_bluray_dir(fileitem):
logger.warn(f"正在删除蓝光原盘目录:【{fileitem.storage}】{fileitem.path}")
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
elif self.any_files(fileitem, extensions=media_exts) is False:
logger.warn(f"{fileitem.storage}{fileitem.path} 不存在其它媒体文件,正在删除空目录")
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
# 不处理父目录
return True
elif delete_self:
# 本身是文件
logger.warn(f"正在删除【{fileitem.storage}】{fileitem.path}")
# 本身是文件,需要删除文件
logger.warn(f"正在删除文件{fileitem.storage}{fileitem.path}")
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
if mtype:
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
@@ -161,14 +184,17 @@ class StorageChain(ChainBase):
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
return True
# 处理上级目录
# 处理媒体文件根目录
dir_item = self.get_file_item(storage=fileitem.storage,
path=Path(fileitem.path).parents[rename_format_level - 1])
else:
# 处理上级目录
dir_item = self.get_parent_item(fileitem)
# 检查和删除上级目录
if dir_item and len(Path(dir_item.path).parts) > 2:
# 如何目录是所有下载目录、媒体库目录的上级,则不处理
for d in self.directoryhelper.get_dirs():
for d in DirectoryHelper().get_dirs():
if d.download_path and Path(d.download_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是下载目录本级或上级目录,不删除")
return True
@@ -177,7 +203,9 @@ class StorageChain(ChainBase):
return True
# 不存在其他媒体文件,删除空目录
if self.any_files(dir_item, extensions=media_exts) is False:
logger.warn(f"{dir_item.storage}{dir_item.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(dir_item)
logger.warn(f"{dir_item.storage}{dir_item.path} 不存在其它媒体文件,正在删除空目录")
if not self.delete_file(dir_item):
logger.warn(f"{dir_item.storage}{dir_item.path} 删除失败")
return False
return True
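The new `snapshot_storage` signature adds `last_snapshot_time` (incremental snapshots) and `max_depth` (bounded recursion). The real module-side implementation is storage-specific; a hedged sketch of what this might look like for a local filesystem walk (the function name `snapshot_local` and the returned fields are assumptions):

```python
from pathlib import Path
from typing import Dict, Optional


def snapshot_local(path: Path, last_snapshot_time: Optional[float] = None,
                   max_depth: int = 5, _depth: int = 0) -> Dict[str, Dict]:
    snapshot: Dict[str, Dict] = {}
    # 超过最大递归深度或不是目录时停止遍历
    if _depth >= max_depth or not path.is_dir():
        return snapshot
    for entry in path.iterdir():
        if entry.is_dir():
            snapshot.update(snapshot_local(entry, last_snapshot_time,
                                           max_depth, _depth + 1))
        elif entry.is_file():
            stat = entry.stat()
            # 增量快照:跳过上次快照时间之前未变化的文件
            if last_snapshot_time and stat.st_mtime <= last_snapshot_time:
                continue
            snapshot[str(entry)] = {"size": stat.st_size,
                                    "mtime": stat.st_mtime}
    return snapshot
```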

File diff suppressed because it is too large


@@ -1,5 +1,6 @@
import json
import re
import shutil
from pathlib import Path
from typing import Union, Optional
@@ -8,23 +9,19 @@ from app.core.config import settings
from app.log import logger
from app.schemas import Notification, MessageChannel
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
from app.helper.system import SystemHelper
from app.helper.plugin import PluginHelper
from version import FRONTEND_VERSION, APP_VERSION
class SystemChain(ChainBase, metaclass=Singleton):
class SystemChain(ChainBase):
"""
系统级处理链
"""
_restart_file = "__system_restart__"
def __init__(self):
super().__init__()
# 重启完成检测
self.restart_finish()
def remote_clear_cache(self, channel: MessageChannel, userid: Union[int, str], source: Optional[str] = None):
"""
清理系统缓存
@@ -37,6 +34,8 @@ class SystemChain(ChainBase, metaclass=Singleton):
"""
重启系统
"""
from app.core.config import global_vars
if channel and userid:
self.post_message(Notification(channel=channel, source=source,
title="系统正在重启,请耐心等候!", userid=userid))
@@ -45,7 +44,123 @@ class SystemChain(ChainBase, metaclass=Singleton):
"channel": channel.value,
"userid": userid
}, self._restart_file)
SystemUtils.restart()
# 主动备份一次插件
self.backup_plugins()
# 设置停止标志,通知所有模块准备停止
global_vars.stop_system()
# 重启
SystemHelper.restart()
@staticmethod
def backup_plugins():
"""
备份插件到用户配置目录(仅docker环境)
"""
# 非docker环境不处理
if not SystemUtils.is_docker():
return
try:
# 使用绝对路径确保准确性
plugins_dir = settings.ROOT_PATH / "app" / "plugins"
backup_dir = settings.CONFIG_PATH / "plugins_backup"
if not plugins_dir.exists():
logger.info("插件目录不存在,跳过备份")
return
# 确保备份目录存在
backup_dir.mkdir(parents=True, exist_ok=True)
# 需要排除的文件和目录
exclude_items = {"__init__.py", "__pycache__", ".DS_Store"}
# 遍历插件目录,备份除排除项外的所有内容
for item in plugins_dir.iterdir():
if item.name in exclude_items:
continue
target_path = backup_dir / item.name
# 如果是目录
if item.is_dir():
if target_path.exists():
continue
shutil.copytree(item, target_path)
logger.info(f"已备份插件目录: {item.name}")
# 如果是文件
elif item.is_file():
if target_path.exists():
continue
shutil.copy2(item, target_path)
logger.info(f"已备份插件文件: {item.name}")
logger.info(f"插件备份完成,备份位置: {backup_dir}")
except Exception as e:
logger.error(f"插件备份失败: {str(e)}")
@staticmethod
def restore_plugins():
"""
从备份恢复插件到app/plugins目录,恢复完成后删除备份(仅docker环境)
"""
# 非docker环境不处理
if not SystemUtils.is_docker():
return
# 使用绝对路径确保准确性
plugins_dir = settings.ROOT_PATH / "app" / "plugins"
backup_dir = settings.CONFIG_PATH / "plugins_backup"
if not backup_dir.exists():
logger.info("插件备份目录不存在,跳过恢复")
return
# 系统被重置才恢复插件
if SystemHelper().is_system_reset():
# 确保插件目录存在
plugins_dir.mkdir(parents=True, exist_ok=True)
# 遍历备份目录,恢复所有内容
restored_count = 0
for item in backup_dir.iterdir():
target_path = plugins_dir / item.name
try:
# 如果是目录,且目录内有内容
if item.is_dir() and any(item.iterdir()):
if target_path.exists():
shutil.rmtree(target_path)
shutil.copytree(item, target_path)
logger.info(f"已恢复插件目录: {item.name}")
# 安装依赖
requirements_file = target_path / "requirements.txt"
if requirements_file.exists():
logger.info(f"正在安装插件 {item.name} 的依赖...")
success, message = PluginHelper.pip_install_with_fallback(requirements_file)
if not success:
logger.warn(f"插件 {item.name} 依赖安装失败: {message}")
restored_count += 1
# 如果是文件
elif item.is_file():
shutil.copy2(item, target_path)
logger.info(f"已恢复插件文件: {item.name}")
restored_count += 1
except Exception as e:
logger.error(f"恢复插件 {item.name} 时发生错误: {str(e)}")
continue
logger.info(f"插件恢复完成,共恢复 {restored_count} 个项目")
# 删除备份目录
try:
shutil.rmtree(backup_dir)
logger.info(f"已删除插件备份目录: {backup_dir}")
except Exception as e:
logger.warning(f"删除备份目录失败: {str(e)}")
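The `backup_plugins`/`restore_plugins` pair above walks the plugins directory and copies everything except a small exclusion set, never overwriting entries that already exist in the backup. A minimal sketch of the same copy-with-exclusions logic, using illustrative paths rather than MoviePilot's real `ROOT_PATH`/`CONFIG_PATH` settings:

```python
import shutil
import tempfile
from pathlib import Path

# same exclusion set the chain uses
EXCLUDE_ITEMS = {"__init__.py", "__pycache__", ".DS_Store"}

def backup_dir(src: Path, dst: Path) -> int:
    """Copy every entry of src into dst, skipping excluded names and
    anything already present in dst. Returns the number of copies made."""
    if not src.exists():
        return 0
    dst.mkdir(parents=True, exist_ok=True)
    copied = 0
    for item in src.iterdir():
        if item.name in EXCLUDE_ITEMS:
            continue
        target = dst / item.name
        if target.exists():
            continue  # never overwrite an existing backup entry
        if item.is_dir():
            shutil.copytree(item, target)
        else:
            shutil.copy2(item, target)  # copy2 also preserves metadata
        copied += 1
    return copied

# demo in a temporary sandbox
root = Path(tempfile.mkdtemp())
src, dst = root / "plugins", root / "plugins_backup"
(src / "myplugin").mkdir(parents=True)
(src / "myplugin" / "plugin.py").write_text("# plugin code")
(src / "__pycache__").mkdir()
count = backup_dir(src, dst)
```

The skip-if-exists check is what makes repeated restarts idempotent: the first backup wins, later ones only add new plugins.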
def __get_version_message(self) -> str:
"""


@@ -3,13 +3,11 @@ from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.core.cache import cached
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
class TmdbChain(ChainBase, metaclass=Singleton):
class TmdbChain(ChainBase):
"""
TheMovieDB处理链单例运行
"""
@@ -145,7 +143,6 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_person_credits", person_id=person_id, page=page)
@cached(maxsize=1, ttl=3600)
def get_random_wallpager(self) -> Optional[str]:
"""
获取随机壁纸缓存1个小时
@@ -159,7 +156,6 @@ class TmdbChain(ChainBase, metaclass=Singleton):
return info.backdrop_path
return None
@cached(maxsize=1, ttl=3600)
def get_trending_wallpapers(self, num: Optional[int] = 10) -> List[str]:
"""
获取所有流行壁纸
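The `@cached(maxsize=1, ttl=3600)` decorator applied to the wallpaper lookups comes from the project's own `app.core.cache` helper: a single stored result, refreshed at most once per hour. The behaviour can be sketched with a hand-rolled single-slot TTL decorator (the function name and return values below are illustrative):

```python
import time
from functools import wraps

def ttl_cache(ttl: float):
    """Single-slot TTL cache, mimicking @cached(maxsize=1, ttl=3600):
    one stored result, recomputed only after ttl seconds elapse."""
    def deco(fn):
        slot = {"value": None, "expires": 0.0}
        @wraps(fn)
        def wrapper():
            now = time.monotonic()
            if now >= slot["expires"]:
                slot["value"] = fn()
                slot["expires"] = now + ttl
            return slot["value"]
        return wrapper
    return deco

calls = []

@ttl_cache(ttl=3600)
def get_random_wallpaper() -> str:
    # stands in for the real TMDB trending lookup; records actual fetches
    calls.append(1)
    return f"backdrop-{len(calls)}.jpg"

first = get_random_wallpaper()
second = get_random_wallpaper()  # within the TTL: served from the slot
```

With `maxsize=1` the cache cannot distinguish arguments, which is fine here because the wrapped lookups take none.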


@@ -2,8 +2,6 @@ import re
import traceback
from typing import Dict, List, Union, Optional
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.chain.media import MediaChain
from app.core.config import settings, global_vars
@@ -17,11 +15,10 @@ from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType, MediaType
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class TorrentsChain(ChainBase, metaclass=Singleton):
class TorrentsChain(ChainBase):
"""
站点首页或RSS种子处理链服务于订阅、刷流等
"""
@@ -29,14 +26,14 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
_spider_file = "__torrents_cache__"
_rss_file = "__rss_cache__"
def __init__(self):
super().__init__()
self.siteshelper = SitesHelper()
self.siteoper = SiteOper()
self.rsshelper = RssHelper()
self.systemconfig = SystemConfigOper()
self.mediachain = MediaChain()
self.torrenthelper = TorrentHelper()
@property
def cache_file(self) -> str:
"""
返回缓存文件列表
"""
if settings.SUBSCRIBE_MODE == 'spider':
return self._spider_file
return self._rss_file
def remote_refresh(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -72,39 +69,38 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
self.remove_cache(self._rss_file)
logger.info(f'种子缓存数据清理完成')
@cached(cache=TTLCache(maxsize=128, ttl=595))
def browse(self, domain: str, keyword: Optional[str] = None, cat: Optional[str] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
浏览站点首页内容返回种子清单TTL缓存10分钟
浏览站点首页内容返回种子清单TTL缓存5分钟
:param domain: 站点域名
:param keyword: 搜索标题
:param cat: 搜索分类
:param page: 页码
"""
logger.info(f'开始获取站点 {domain} 最新种子 ...')
site = self.siteshelper.get_indexer(domain)
site = SitesHelper().get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.refresh_torrents(site=site, keyword=keyword, cat=cat, page=page)
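Unlike the single-slot wallpaper cache, `@cached(cache=TTLCache(maxsize=128, ttl=595))` on `browse` caches per argument tuple, and the TTL sits just under the scheduled refresh period so a periodic refresh never lands on a stale entry. A sketch of per-argument TTL caching with `cachetools` (the library the file already imports; site names are made up):

```python
from cachetools import TTLCache, cached

fetches = []

@cached(cache=TTLCache(maxsize=128, ttl=595))
def browse(domain: str, page: int = 0) -> list:
    # each distinct (domain, page) tuple is cached independently
    fetches.append((domain, page))
    return [f"{domain}-torrent-{page}"]

browse("site-a.example", 0)
browse("site-a.example", 0)   # cache hit: identical key, no new fetch
browse("site-a.example", 1)   # different page -> new fetch
browse("site-b.example", 0)   # different domain -> new fetch
```

`maxsize=128` bounds memory across many sites; the least-recently-used key is evicted when the cache fills.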
@cached(cache=TTLCache(maxsize=128, ttl=295))
def rss(self, domain: str) -> List[TorrentInfo]:
"""
获取站点RSS内容返回种子清单TTL缓存5分钟
获取站点RSS内容返回种子清单TTL缓存3分钟
:param domain: 站点域名
"""
logger.info(f'开始获取站点 {domain} RSS ...')
site = self.siteshelper.get_indexer(domain)
site = SitesHelper().get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
if not site.get("rss"):
logger.error(f'站点 {domain} 未配置RSS地址')
return []
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False,
timeout=int(site.get("timeout") or 30))
# 解析RSS
rss_items = RssHelper().parse(site.get("rss"), True if site.get("proxy") else False,
timeout=int(site.get("timeout") or 30))
if rss_items is None:
# rss过期尝试保留原配置生成新的rss
self.__renew_rss_url(domain=domain, site=site)
@@ -114,25 +110,28 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
return []
# 组装种子
ret_torrents: List[TorrentInfo] = []
for item in rss_items:
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
ret_torrents.append(torrentinfo)
try:
for item in rss_items:
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
ret_torrents.append(torrentinfo)
finally:
rss_items.clear()
del rss_items
return ret_torrents
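The rewrite above wraps the assembly loop in `try/finally` so the parsed RSS list is cleared even when an item raises mid-loop, dropping references to a potentially large feed as soon as possible. A minimal sketch of that guaranteed-release pattern:

```python
def build_results(items: list) -> list:
    """Consume a parsed feed list, guaranteeing the source list is
    released whether the loop completes or an item raises."""
    results = []
    try:
        for item in items:
            if not item.get("title"):
                continue
            results.append(item["title"])
    finally:
        # drop references promptly so a large parsed feed can be
        # garbage-collected even on the exception path
        items.clear()
    return results

src = [{"title": "A"}, {}, {"title": "B"}]
out = build_results(src)
```

The `del` that follows `clear()` in the diff additionally removes the local name; `clear()` alone already empties the list the caller still holds.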
def refresh(self, stype: Optional[str] = None, sites: List[int] = None) -> Dict[str, List[Context]]:
@@ -147,7 +146,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 刷新站点
if not sites:
sites = self.systemconfig.get(SystemConfigKey.RssSites) or []
sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
# 读取缓存
torrents_cache = self.get_torrents()
@@ -155,14 +154,12 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 缓存过滤掉无效种子
for _domain, _torrents in torrents_cache.items():
torrents_cache[_domain] = [_torrent for _torrent in _torrents
if not self.torrenthelper.is_invalid(_torrent.torrent_info.enclosure)]
if not TorrentHelper().is_invalid(_torrent.torrent_info.enclosure)]
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 需要刷新的站点domain
domains = []
# 遍历站点缓存资源
for indexer in indexers:
for indexer in SitesHelper().get_indexers():
if global_vars.is_system_stopped:
break
# 未开启的站点不刷新
@@ -179,50 +176,52 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条
torrents = torrents[:settings.CACHE_CONF["refresh"]]
torrents = torrents[:settings.CONF.refresh]
if torrents:
# 过滤出没有处理过的种子
# 过滤出没有处理过的种子 - 优化:使用集合查找,避免重复创建字符串列表
cached_signatures = {f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []}
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}'
not in [f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []]]
if f'{torrent.title}{torrent.description}' not in cached_signatures]
if torrents:
logger.info(f'{indexer.get("name")}{len(torrents)} 个新种子')
else:
logger.info(f'{indexer.get("name")} 没有新种子')
continue
for torrent in torrents:
if global_vars.is_system_stopped:
break
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != meta.org_string:
logger.info(f'种子名称应用识别词后发生改变:{torrent.title} => {meta.org_string}')
# 使用站点种子分类,校正类型识别
if meta.type != MediaType.TV \
and torrent.category == MediaType.TV.value:
meta.type = MediaType.TV
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_by_meta(meta)
if not mediainfo:
logger.warn(f'{torrent.title} 未识别到媒体信息')
# 存储空的媒体信息
mediainfo = MediaInfo()
# 清理多余数据
mediainfo.clear()
# 上下文
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# 添加到缓存
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
else:
torrents_cache[domain].append(context)
# 如果超过了限制条数则移除掉前面的
if len(torrents_cache[domain]) > settings.CACHE_CONF["torrents"]:
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF["torrents"]:]
# 回收资源
del torrents
try:
for torrent in torrents:
if global_vars.is_system_stopped:
break
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != meta.org_string:
logger.info(f'种子名称应用识别词后发生改变:{torrent.title} => {meta.org_string}')
# 使用站点种子分类,校正类型识别
if meta.type != MediaType.TV \
and torrent.category == MediaType.TV.value:
meta.type = MediaType.TV
# 识别媒体信息
mediainfo: MediaInfo = MediaChain().recognize_by_meta(meta)
if not mediainfo:
logger.warn(f'{torrent.title} 未识别到媒体信息')
# 存储空的媒体信息
mediainfo = MediaInfo()
# 清理多余数据,减少内存占用
mediainfo.clear()
# 上下文
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# 添加到缓存
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
else:
torrents_cache[domain].append(context)
# 如果超过了限制条数则移除掉前面的
if len(torrents_cache[domain]) > settings.CONF.torrents:
torrents_cache[domain] = torrents_cache[domain][-settings.CONF.torrents:]
finally:
torrents.clear()
del torrents
else:
logger.info(f'{indexer.get("name")} 没有获取到种子')
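The dedup optimization commented above replaces an O(n·m) membership test, where the list of cached signature strings was rebuilt for every incoming torrent, with a set built once and probed in O(1). A before/after sketch with toy data:

```python
cached = [{"title": "Show.S01", "description": "d1"},
          {"title": "Show.S02", "description": "d2"}]
incoming = [{"title": "Show.S02", "description": "d2"},
            {"title": "Show.S03", "description": "d3"}]

# before: the signature list is recreated for every incoming torrent
slow = [t for t in incoming
        if f'{t["title"]}{t["description"]}'
        not in [f'{c["title"]}{c["description"]}' for c in cached]]

# after: build the signature set once, then constant-time lookups
cached_signatures = {f'{c["title"]}{c["description"]}' for c in cached}
fast = [t for t in incoming
        if f'{t["title"]}{t["description"]}' not in cached_signatures]
```

With hundreds of cached contexts per site and dozens of sites per refresh, moving the signature construction out of the inner comprehension is the whole win; the results are identical.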
@@ -235,6 +234,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 去除不在站点范围内的缓存种子
if sites and torrents_cache:
torrents_cache = {k: v for k, v in torrents_cache.items() if k in domains}
return torrents_cache
def __renew_rss_url(self, domain: str, site: dict):
@@ -245,7 +245,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# RSS链接过期
logger.error(f"站点 {domain} RSS链接已过期正在尝试自动获取")
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
rss_url, errmsg = RssHelper().get_rss_link(
url=site.get("url"),
cookie=site.get("cookie"),
ua=site.get("ua") or settings.USER_AGENT,
@@ -259,7 +259,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 获取过期rss除去passkey部分
new_rss = re.sub(r'&passkey=([a-zA-Z0-9]+)', f'&passkey={new_passkey}', site.get("rss"))
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=new_rss)
SiteOper().update_rss(domain=domain, rss=new_rss)
else:
# 发送消息
self.post_message(

app/chain/transfer.py Normal file → Executable file

@@ -1,3 +1,4 @@
import gc
import queue
import re
import threading
@@ -17,6 +18,7 @@ from app.core.config import settings, global_vars
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfoPath
from app.core.event import eventmanager
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
@@ -29,7 +31,8 @@ from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat, FileItem, TransferDirectoryConf, \
TransferTask, TransferQueue, TransferJob, TransferJobTask
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey
SystemConfigKey, ChainEventType, ContentType
from app.schemas import StorageOperSelectionEventData
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
@@ -326,7 +329,8 @@ class JobManager:
# 计算状态为完成的任务数
if __mediaid__ not in self._job_view:
return 0
return sum([task.fileitem.size for task in self._job_view[__mediaid__].tasks if task.state == "completed" and task.fileitem.size is not None])
return sum([task.fileitem.size for task in self._job_view[__mediaid__].tasks if
task.state == "completed" and task.fileitem.size is not None])
def total(self) -> int:
"""
@@ -369,14 +373,6 @@ class TransferChain(ChainBase, metaclass=Singleton):
def __init__(self):
super().__init__()
self.downloadhis = DownloadHistoryOper()
self.transferhis = TransferHistoryOper()
self.progress = ProgressHelper()
self.mediachain = MediaChain()
self.tmdbchain = TmdbChain()
self.storagechain = StorageChain()
self.systemconfig = SystemConfigOper()
self.directoryhelper = DirectoryHelper()
self.jobview = JobManager()
# 启动整理任务
@@ -395,11 +391,12 @@ class TransferChain(ChainBase, metaclass=Singleton):
"""
整理完成后处理
"""
transferhis = TransferHistoryOper()
if not transferinfo.success:
# 转移失败
logger.warn(f"{task.fileitem.name} 入库失败:{transferinfo.message}")
# 新增转移失败历史记录
self.transferhis.add_fail(
transferhis.add_fail(
fileitem=task.fileitem,
mode=transferinfo.transfer_type if transferinfo else '',
downloader=task.downloader,
@@ -426,7 +423,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(f"{task.fileitem.name} 入库成功:{transferinfo.target_diritem.path}")
# 新增转移成功历史记录
self.transferhis.add_success(
transferhis.add_success(
fileitem=task.fileitem,
mode=transferinfo.transfer_type if transferinfo else '',
downloader=task.downloader,
@@ -455,6 +452,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
tasks = self.jobview.success_tasks(task.mediainfo, task.meta.begin_season)
# 记录已处理的种子hash
processed_hashes = set()
storagechain = StorageChain()
for t in tasks:
# 下载器hash
if t.download_hash and t.download_hash not in processed_hashes:
@@ -463,7 +461,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(f"移动模式删除种子成功:{t.download_hash} ")
# 删除残留目录
if t.fileitem:
self.storagechain.delete_media_file(t.fileitem, delete_self=False)
storagechain.delete_media_file(t.fileitem, delete_self=False)
# 整理完成且有成功的任务时
if self.jobview.is_finished(task):
# 发送通知,实时手动整理时不发
@@ -541,6 +539,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 失败数量
fail_num = 0
progress = ProgressHelper()
while not global_vars.is_system_stopped:
try:
item: TransferQueue = self._queue.get(block=False)
@@ -554,24 +554,24 @@ class TransferChain(ChainBase, metaclass=Singleton):
if __queue_start:
logger.info("开始整理队列处理...")
# 启动进度
self.progress.start(ProgressKey.FileTransfer)
progress.start(ProgressKey.FileTransfer)
# 重置计数
processed_num = 0
fail_num = 0
total_num = self.jobview.total()
__process_msg = f"开始整理队列处理,当前共 {total_num} 个文件 ..."
logger.info(__process_msg)
self.progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
# 队列已开始
__queue_start = False
# 更新进度
__process_msg = f"正在整理 {fileitem.name} ..."
logger.info(__process_msg)
self.progress.update(value=processed_num / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
progress.update(value=processed_num / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
# 整理
state, err_msg = self.__handle_transfer(task=task, callback=item.callback)
if not state:
@@ -581,18 +581,18 @@ class TransferChain(ChainBase, metaclass=Singleton):
processed_num += 1
__process_msg = f"{fileitem.name} 整理完成"
logger.info(__process_msg)
self.progress.update(value=processed_num / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
progress.update(value=processed_num / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
except queue.Empty:
if not __queue_start:
# 结束进度
__end_msg = f"整理队列处理完成,共整理 {processed_num} 个文件,失败 {fail_num}"
logger.info(__end_msg)
self.progress.update(value=100,
text=__end_msg,
key=ProgressKey.FileTransfer)
self.progress.end(ProgressKey.FileTransfer)
progress.update(value=100,
text=__end_msg,
key=ProgressKey.FileTransfer)
progress.end(ProgressKey.FileTransfer)
# 重置计数
processed_num = 0
fail_num = 0
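The consumer loop above polls with `queue.get(block=False)` and uses the `queue.Empty` branch as the end-of-batch hook: progress is finalized and counters reset only once the queue drains. A condensed sketch of that shape (the real loop sleeps and keeps polling; this one exits so it can be run directly):

```python
import queue

def drain(q: "queue.Queue") -> list:
    """Consume everything currently queued without blocking; the
    queue.Empty branch is where end-of-batch bookkeeping happens."""
    processed = []
    batch_started = False
    while True:
        try:
            item = q.get(block=False)
        except queue.Empty:
            if batch_started:
                # a batch was in flight: close out progress reporting
                processed.append("<batch finished>")
                batch_started = False
            break  # the real worker keeps polling; the sketch returns
        batch_started = True
        processed.append(item)
    return processed

q = queue.Queue()
for name in ("a.mkv", "b.mkv"):
    q.put(name)
result = drain(q)
```

Tracking `batch_started` (the `__queue_start` flag in the diff) is what prevents the completion message from firing on every empty poll.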
@@ -612,6 +612,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
"""
try:
# 识别
transferhis = TransferHistoryOper()
if not task.mediainfo:
mediainfo = None
download_history = task.download_history
@@ -631,7 +632,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
mediainfo.category = download_history.media_category
else:
# 识别媒体信息
mediainfo = self.mediachain.recognize_by_meta(task.meta)
mediainfo = MediaChain().recognize_by_meta(task.meta)
# 更新媒体图片
if mediainfo:
@@ -639,7 +640,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
if not mediainfo:
# 新增整理失败历史记录
his = self.transferhis.add_fail(
his = transferhis.add_fail(
fileitem=task.fileitem,
mode=task.transfer_type,
meta=task.meta,
@@ -659,8 +660,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
transfer_history = transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_history:
mediainfo.title = transfer_history.title
@@ -680,7 +681,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 默认值1
if season_num is None:
season_num = 1
task.episodes_info = self.tmdbchain.tmdb_episodes(
task.episodes_info = TmdbChain().tmdb_episodes(
tmdbid=task.mediainfo.tmdb_id,
season=season_num,
episode_group=task.mediainfo.episode_group
@@ -690,19 +691,45 @@ class TransferChain(ChainBase, metaclass=Singleton):
if not task.target_directory:
if task.target_path:
# 指定目标路径,`手动整理`场景下使用,忽略源目录匹配,使用指定目录匹配
task.target_directory = self.directoryhelper.get_dir(media=task.mediainfo,
dest_path=task.target_path,
target_storage=task.target_storage)
task.target_directory = DirectoryHelper().get_dir(media=task.mediainfo,
dest_path=task.target_path,
target_storage=task.target_storage)
else:
# 启用源目录匹配时,根据源目录匹配下载目录,否则按源目录同盘优先原则,如无源目录,则根据媒体信息获取目标目录
task.target_directory = self.directoryhelper.get_dir(media=task.mediainfo,
storage=task.fileitem.storage,
src_path=Path(task.fileitem.path),
target_storage=task.target_storage)
task.target_directory = DirectoryHelper().get_dir(media=task.mediainfo,
storage=task.fileitem.storage,
src_path=Path(task.fileitem.path),
target_storage=task.target_storage)
if not task.target_storage and task.target_directory:
task.target_storage = task.target_directory.library_storage
# 正在处理
self.jobview.running_task(task)
# 广播事件,请示额外的源存储支持
source_oper = None
source_event_data = StorageOperSelectionEventData(
storage=task.fileitem.storage,
)
source_event = eventmanager.send_event(ChainEventType.StorageOperSelection, source_event_data)
# 使用事件返回的上下文数据
if source_event and source_event.event_data:
source_event_data: StorageOperSelectionEventData = source_event.event_data
if source_event_data.storage_oper:
source_oper = source_event_data.storage_oper
# 广播事件,请示额外的目标存储支持
target_oper = None
target_event_data = StorageOperSelectionEventData(
storage=task.target_storage,
)
target_event = eventmanager.send_event(ChainEventType.StorageOperSelection, target_event_data)
# 使用事件返回的上下文数据
if target_event and target_event.event_data:
target_event_data: StorageOperSelectionEventData = target_event.event_data
if target_event_data.storage_oper:
target_oper = target_event_data.storage_oper
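The two blocks above broadcast a `StorageOperSelection` event for the source and target storages, letting a plugin attach its own storage operator via the mutable event data; absent a responder, the operator stays `None` and the built-in handling applies. A self-contained sketch of that request/response-via-event pattern, using a hypothetical minimal event bus rather than MoviePilot's `eventmanager`:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StorageOperSelection:
    storage: str
    storage_oper: Optional[object] = None  # filled in by a listener

class EventBus:
    """Tiny synchronous bus: listeners may mutate the event data,
    and the sender reads the result back after dispatch."""
    def __init__(self):
        self._listeners = []
    def register(self, fn: Callable) -> None:
        self._listeners.append(fn)
    def send(self, data: StorageOperSelection) -> StorageOperSelection:
        for fn in self._listeners:
            fn(data)
        return data

class CloudOper:
    name = "cloud"

bus = EventBus()

def provide_cloud_oper(data: StorageOperSelection) -> None:
    # a plugin volunteering support for an extra storage backend
    if data.storage == "cloud":
        data.storage_oper = CloudOper()

bus.register(provide_cloud_oper)

ev = bus.send(StorageOperSelection(storage="cloud"))
oper = ev.storage_oper  # remains None for storages nobody claims
```

Passing the resolved `source_oper`/`target_oper` into `transfer(...)` keeps the transfer module storage-agnostic: it never needs to know which plugin supplied the operator.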
# 执行整理
transferinfo: TransferInfo = self.transfer(fileitem=task.fileitem,
meta=task.meta,
@@ -714,7 +741,9 @@ class TransferChain(ChainBase, metaclass=Singleton):
episodes_info=task.episodes_info,
scrape=task.scrape,
library_type_folder=task.library_type_folder,
library_category_folder=task.library_category_folder)
library_category_folder=task.library_category_folder,
source_oper=source_oper,
target_oper=target_oper)
if not transferinfo:
logger.error("文件整理模块运行失败")
return False, "文件整理模块运行失败"
@@ -754,12 +783,13 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 全局锁,避免重复处理
with downloader_lock:
# 获取下载器监控目录
download_dirs = self.directoryhelper.get_download_dirs()
download_dirs = DirectoryHelper().get_download_dirs()
# 如果没有下载器监控的目录则不处理
if not any(dir_info.monitor_type == "downloader" and dir_info.storage == "local"
for dir_info in download_dirs):
return True
logger.info("开始整理下载器中已经完成下载的文件 ...")
# 从下载器获取种子列表
torrents: Optional[List[TransferTorrent]] = self.list_torrents(status=TorrentStatus.TRANSFER)
if not torrents:
@@ -768,87 +798,100 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(f"获取到 {len(torrents)} 个已完成的下载任务")
for torrent in torrents:
if global_vars.is_system_stopped:
break
# 文件路径
file_path = torrent.path
if not file_path.exists():
logger.warn(f"文件不存在:{file_path}")
continue
# 检查是否为下载器监控目录中的文件
is_downloader_monitor = False
for dir_info in download_dirs:
if dir_info.monitor_type != "downloader":
continue
if not dir_info.download_path:
continue
if file_path.is_relative_to(Path(dir_info.download_path)):
is_downloader_monitor = True
try:
for torrent in torrents:
if global_vars.is_system_stopped:
break
if not is_downloader_monitor:
logger.debug(f"文件 {file_path} 不在下载器监控目录中,不通过下载器进行整理")
continue
# 查询下载记录识别情况
downloadhis: DownloadHistory = self.downloadhis.get_by_hash(torrent.hash)
if downloadhis:
# 类型
try:
mtype = MediaType(downloadhis.type)
except ValueError:
mtype = MediaType.TV
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid,
doubanid=downloadhis.doubanid,
episode_group=downloadhis.episode_group)
if mediainfo:
# 补充图片
self.obtain_images(mediainfo)
# 更新自定义媒体类别
if downloadhis.media_category:
mediainfo.category = downloadhis.media_category
else:
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 文件路径
file_path = torrent.path
if not file_path.exists():
logger.warn(f"文件不存在:{file_path}")
continue
# 检查是否为下载器监控目录中的文件
is_downloader_monitor = False
for dir_info in download_dirs:
if dir_info.monitor_type != "downloader":
continue
if not dir_info.download_path:
continue
if file_path.is_relative_to(Path(dir_info.download_path)):
is_downloader_monitor = True
break
if not is_downloader_monitor:
logger.debug(f"文件 {file_path} 不在下载器监控目录中,不通过下载器进行整理")
continue
# 查询下载记录识别情况
downloadhis: DownloadHistory = DownloadHistoryOper().get_by_hash(torrent.hash)
if downloadhis:
# 类型
try:
mtype = MediaType(downloadhis.type)
except ValueError:
mtype = MediaType.TV
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid,
doubanid=downloadhis.doubanid,
episode_group=downloadhis.episode_group)
if mediainfo:
# 补充图片
self.obtain_images(mediainfo)
# 更新自定义媒体类别
if downloadhis.media_category:
mediainfo.category = downloadhis.media_category
else:
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 执行实时整理,匹配源目录
state, errmsg = self.do_transfer(
fileitem=FileItem(
storage="local",
path=str(file_path).replace("\\", "/"),
type="dir" if not file_path.is_file() else "file",
name=file_path.name,
size=file_path.stat().st_size,
extension=file_path.suffix.lstrip('.'),
),
mediainfo=mediainfo,
downloader=torrent.downloader,
download_hash=torrent.hash,
background=False,
)
# 执行实时整理,匹配源目录
state, errmsg = self.do_transfer(
fileitem=FileItem(
storage="local",
path=str(file_path).replace("\\", "/"),
type="dir" if not file_path.is_file() else "file",
name=file_path.name,
size=file_path.stat().st_size,
extension=file_path.suffix.lstrip('.'),
),
mediainfo=mediainfo,
downloader=torrent.downloader,
download_hash=torrent.hash,
background=False,
)
# 设置下载任务状态
if state:
self.transfer_completed(hashs=torrent.hash)
# 设置下载任务状态
if not state:
logger.warn(f"整理下载器任务失败:{torrent.hash} - {errmsg}")
self.transfer_completed(hashs=torrent.hash, downloader=torrent.downloader)
finally:
torrents.clear()
del torrents
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
# 结束
logger.info("所有下载器中下载完成的文件已整理完成")
return True
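The new `gc.collect()` call above is gated on the big-memory flag: deployments with ample RAM skip the forced collection (a stop-the-world pause), while constrained boxes reclaim the just-cleared torrent list immediately. A sketch of the gate, with a module-level flag standing in for `settings.BIG_MEMORY_MODE`:

```python
import gc

BIG_MEMORY_MODE = False  # stands in for settings.BIG_MEMORY_MODE

def release(items: list) -> int:
    """Drop references to a large work list, then force a collection
    only in the low-memory configuration."""
    items.clear()
    collected = 0
    if not BIG_MEMORY_MODE:
        # full collection across all generations; returns the number
        # of unreachable objects found
        collected = gc.collect()
    return collected

work = [object() for _ in range(1000)]
freed = release(work)
```

Forcing collection after each batch trades a small CPU pause for a flatter memory profile, which matters on NAS-class hardware where this service typically runs.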
def __get_trans_fileitems(self, fileitem: FileItem) -> List[Tuple[FileItem, bool]]:
def __get_trans_fileitems(
self, fileitem: FileItem, depth: int = 1
) -> List[Tuple[FileItem, bool]]:
"""
获取整理目录或文件列表
:param fileitem: 文件项
"""
def __is_bluray_dir(_fileitem: FileItem) -> bool:
:param fileitem: 文件项
:param depth: 递归深度默认为1
"""
storagechain = StorageChain()
def __contains_bluray_sub(_fileitems: List[FileItem]) -> bool:
"""
判断是不是蓝光目录
判断是否包含蓝光目录
"""
subs = self.storagechain.list_files(_fileitem)
if subs:
for sub in subs:
if _fileitems:
for sub in _fileitems:
if sub.type == "dir" and sub.name in ["BDMV", "CERTIFICATE"]:
return True
return False
@@ -865,10 +908,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
"""
for p in _path.parents:
if p.name == "BDMV":
return self.storagechain.get_file_item(storage=_storage, path=p.parent)
return storagechain.get_file_item(storage=_storage, path=p.parent)
return None
if not self.storagechain.get_item(fileitem):
if not storagechain.get_item(fileitem):
logger.warn(f"目录或文件不存在:{fileitem.path}")
return []
@@ -883,25 +926,22 @@ class TransferChain(ChainBase, metaclass=Singleton):
return [(fileitem, False)]
# 蓝光原盘根目录
if __is_bluray_dir(fileitem):
sub_items = storagechain.list_files(fileitem) or []
if __contains_bluray_sub(sub_items):
return [(fileitem, True)]
# 需要整理的文件项列表
trans_items = []
# 先检查当前目录的下级目录,以支持合集的情况
for sub_dir in self.storagechain.list_files(fileitem):
for sub_dir in sub_items if depth >= 1 else []:
if sub_dir.type == "dir":
if __is_bluray_dir(sub_dir):
trans_items.append((sub_dir, True))
else:
trans_items.append((sub_dir, False))
trans_items.extend(self.__get_trans_fileitems(sub_dir, depth=depth - 1))
if not trans_items:
# 没有有效子目录,直接整理当前目录
trans_items.append((fileitem, False))
else:
# 有子目录时,把当前目录的文件添加到整理任务中
sub_items = self.storagechain.list_files(fileitem)
if sub_items:
trans_items.extend([(f, False) for f in sub_items if f.type == "file"])
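The reworked `__get_trans_fileitems` adds a `depth` parameter and recurses with `depth - 1`, expanding subdirectories only while `depth >= 1`; the caller passes `depth=2` so collection folders (collection → movie) are reached without walking arbitrarily deep. A toy sketch of that depth-limited expansion over a nested mapping (the directory names are invented):

```python
def collect_dirs(tree: dict, depth: int = 1) -> list:
    """Flatten a nested {name: subtree} directory mapping, descending
    at most `depth` levels -- mirroring the depth-limited expansion
    used for collection folders."""
    found = []
    # with depth exhausted, the level is not expanded at all
    for name, sub in (tree.items() if depth >= 1 else []):
        found.append(name)
        found.extend(collect_dirs(sub, depth=depth - 1))
    return found

tree = {"Collection": {"Movie A": {"disc": {}}, "Movie B": {}}}
shallow = collect_dirs(tree, depth=1)  # only the top level
deep = collect_dirs(tree, depth=2)     # reaches the movies, not "disc"
```

Capping the depth keeps the scan bounded on deeply nested shares while still covering the two-level collection layout the linked issue (#4371) describes.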
@@ -963,11 +1003,13 @@ class TransferChain(ChainBase, metaclass=Singleton):
offset=epformat.offset) if epformat else None
# 整理屏蔽词
transfer_exclude_words = self.systemconfig.get(SystemConfigKey.TransferExcludeWords)
transfer_exclude_words = SystemConfigOper().get(SystemConfigKey.TransferExcludeWords)
# 汇总错误信息
err_msgs: List[str] = []
# 待整理目录或文件项
trans_items = self.__get_trans_fileitems(fileitem)
trans_items = self.__get_trans_fileitems(
fileitem, depth=2 # 为解决 issue#4371 深度至少需要>=2
)
# 待整理的文件列表
file_items: List[Tuple[FileItem, bool]] = []
@@ -980,7 +1022,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 如果是目录且不是蓝光原盘,获取所有文件并整理
if trans_item.type == "dir" and not bluray_dir:
# 遍历获取下载目录所有文件(递归)
if files := self.storagechain.list_files(trans_item, recursion=True):
if files := StorageChain().list_files(trans_item, recursion=True):
file_items.extend([(file, False) for file in files])
else:
file_items.append((trans_item, bluray_dir))
@@ -1000,110 +1042,115 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 整理所有文件
transfer_tasks: List[TransferTask] = []
for file_item, bluray_dir in file_items:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
continue
# 整理成功的不再处理
if not force:
transferd = self.transferhis.get_by_src(file_item.path, storage=file_item.storage)
if transferd:
if not transferd.status:
all_success = False
logger.info(f"{file_item.path} 已整理过,如需重新处理,请删除整理记录。")
err_msgs.append(f"{file_item.name} 已整理过")
try:
for file_item, bluray_dir in file_items:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
continue
if not meta:
# 文件元数据
file_meta = MetaInfoPath(file_path)
else:
file_meta = meta
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
continue
# 合并季
if season is not None:
file_meta.begin_season = season
# 整理成功的不再处理
if not force:
transferd = TransferHistoryOper().get_by_src(file_item.path, storage=file_item.storage)
if transferd:
if not transferd.status:
all_success = False
logger.info(f"{file_item.path} 已整理过,如需重新处理,请删除整理记录。")
err_msgs.append(f"{file_item.name} 已整理过")
continue
if not file_meta:
all_success = False
logger.error(f"{file_path.name} 无法识别有效信息")
err_msgs.append(f"{file_path.name} 无法识别有效信息")
continue
if not meta:
# 文件元数据
file_meta = MetaInfoPath(file_path)
else:
file_meta = meta
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_name=file_path.name, file_meta=file_meta)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 合并季
if season is not None:
file_meta.begin_season = season
# 根据父路径获取下载历史
download_history = None
if bluray_dir:
# 蓝光原盘,按目录名查询
download_history = self.downloadhis.get_by_path(str(file_path))
else:
# 按文件全路径查询
download_file = self.downloadhis.get_file_by_fullpath(str(file_path))
if download_file:
download_history = self.downloadhis.get_by_hash(download_file.download_hash)
if not file_meta:
all_success = False
logger.error(f"{file_path.name} 无法识别有效信息")
err_msgs.append(f"{file_path.name} 无法识别有效信息")
continue
# 获取下载Hash
if download_history and (not downloader or not download_hash):
downloader = download_history.downloader
download_hash = download_history.download_hash
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_name=file_path.name, file_meta=file_meta)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 后台整理
transfer_task = TransferTask(
fileitem=file_item,
meta=file_meta,
mediainfo=mediainfo,
target_directory=target_directory,
target_storage=target_storage,
target_path=target_path,
transfer_type=transfer_type,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
downloader=downloader,
download_hash=download_hash,
download_history=download_history,
manual=manual,
background=background
)
if background:
self.put_to_queue(task=transfer_task)
logger.info(f"{file_path.name} 已添加到整理队列")
else:
# 加入列表
self.__put_to_jobview(transfer_task)
transfer_tasks.append(transfer_task)
# 根据父路径获取下载历史
download_history = None
downloadhis = DownloadHistoryOper()
if bluray_dir:
# 蓝光原盘,按目录名查询
download_history = downloadhis.get_by_path(str(file_path))
else:
# 按文件全路径查询
download_file = downloadhis.get_file_by_fullpath(str(file_path))
if download_file:
download_history = downloadhis.get_by_hash(download_file.download_hash)
# 获取下载Hash
if download_history and (not downloader or not download_hash):
downloader = download_history.downloader
download_hash = download_history.download_hash
# 后台整理
transfer_task = TransferTask(
fileitem=file_item,
meta=file_meta,
mediainfo=mediainfo,
target_directory=target_directory,
target_storage=target_storage,
target_path=target_path,
transfer_type=transfer_type,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
downloader=downloader,
download_hash=download_hash,
download_history=download_history,
manual=manual,
background=background
)
if background:
self.put_to_queue(task=transfer_task)
logger.info(f"{file_path.name} 已添加到整理队列")
else:
# 加入列表
self.__put_to_jobview(transfer_task)
transfer_tasks.append(transfer_task)
finally:
file_items.clear()
del file_items
# 实时整理
if transfer_tasks:
@@ -1115,45 +1162,50 @@ class TransferChain(ChainBase, metaclass=Singleton):
fail_num = 0
# 启动进度
self.progress.start(ProgressKey.FileTransfer)
progress = ProgressHelper()
progress.start(ProgressKey.FileTransfer)
__process_msg = f"开始整理,共 {total_num} 个文件 ..."
logger.info(__process_msg)
self.progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
for transfer_task in transfer_tasks:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
# 更新进度
__process_msg = f"正在整理 {processed_num + fail_num + 1}/{total_num}:{transfer_task.fileitem.name} ..."
logger.info(__process_msg)
self.progress.update(value=(processed_num + fail_num) / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
state, err_msg = self.__handle_transfer(
task=transfer_task,
callback=self.__default_callback
)
if not state:
all_success = False
logger.warn(f"{transfer_task.fileitem.name} {err_msg}")
err_msgs.append(f"{transfer_task.fileitem.name} {err_msg}")
fail_num += 1
else:
processed_num += 1
progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
try:
for transfer_task in transfer_tasks:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
# 更新进度
__process_msg = f"正在整理 {processed_num + fail_num + 1}/{total_num}:{transfer_task.fileitem.name} ..."
logger.info(__process_msg)
progress.update(value=(processed_num + fail_num) / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
state, err_msg = self.__handle_transfer(
task=transfer_task,
callback=self.__default_callback
)
if not state:
all_success = False
logger.warn(f"{transfer_task.fileitem.name} {err_msg}")
err_msgs.append(f"{transfer_task.fileitem.name} {err_msg}")
fail_num += 1
else:
processed_num += 1
finally:
transfer_tasks.clear()
del transfer_tasks
# 整理结束
__end_msg = f"整理队列处理完成,共整理 {total_num} 个文件,失败 {fail_num} 个"
logger.info(__end_msg)
self.progress.update(value=100,
text=__end_msg,
key=ProgressKey.FileTransfer)
self.progress.end(ProgressKey.FileTransfer)
progress.update(value=100,
text=__end_msg,
key=ProgressKey.FileTransfer)
progress.end(ProgressKey.FileTransfer)
return all_success, "".join(err_msgs)
error_msg = "".join(err_msgs[:2]) + (f",等{len(err_msgs)}个文件错误!" if len(err_msgs) > 2 else "")
return all_success, error_msg
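The loop above tallies processed and failed files, then truncates the error report to the first two messages before returning. That truncation rule can be sketched as a small helper (the function name is illustrative, not part of the codebase):

```python
def summarize_errors(err_msgs, limit=2):
    """Collapse per-file error messages into one short summary string.

    Mirrors the truncation pattern above: concatenate at most `limit`
    messages, then note the total failure count when more were dropped.
    """
    if not err_msgs:
        return ""
    summary = "".join(err_msgs[:limit])
    if len(err_msgs) > limit:
        # Append a tail like ",等N个文件错误!" instead of the full list
        summary += f",等{len(err_msgs)}个文件错误!"
    return summary
```

This keeps notification payloads bounded no matter how many files fail in one batch.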
def remote_transfer(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: Optional[str] = None):
@@ -1206,7 +1258,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
:param mediaid: TMDB ID/豆瓣ID
"""
# 查询历史记录
history: TransferHistory = self.transferhis.get(logid)
history: TransferHistory = TransferHistoryOper().get(logid)
if not history:
logger.error(f"整理记录不存在,ID:{logid}")
return False, "整理记录不存在"
@@ -1222,7 +1274,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
else:
mediainfo = self.mediachain.recognize_by_path(str(src_path), episode_group=history.episode_group)
mediainfo = MediaChain().recognize_by_path(str(src_path), episode_group=history.episode_group)
if not mediainfo:
return False, f"未识别到媒体信息,类型:{mtype.value},id:{mediaid}"
# 重新执行整理
@@ -1232,7 +1284,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
if history.dest_fileitem:
# 解析目标文件对象
dest_fileitem = FileItem(**history.dest_fileitem)
self.storagechain.delete_file(dest_fileitem)
StorageChain().delete_file(dest_fileitem)
# 强制整理
if history.src_fileitem:
@@ -1287,18 +1339,19 @@ class TransferChain(ChainBase, metaclass=Singleton):
if tmdbid or doubanid:
# 有输入TMDBID时单个识别
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, doubanid=doubanid,
mtype=mtype, episode_group=episode_group)
mediainfo: MediaInfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid,
mtype=mtype, episode_group=episode_group)
if not mediainfo:
return False, f"媒体信息识别失败,tmdbid:{tmdbid},doubanid:{doubanid},type:{mtype.value}"
else:
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
self.progress.update(value=0,
text=f"开始整理 {fileitem.path} ...",
key=ProgressKey.FileTransfer)
progress = ProgressHelper()
progress.start(ProgressKey.FileTransfer)
progress.update(value=0,
text=f"开始整理 {fileitem.path} ...",
key=ProgressKey.FileTransfer)
# 开始整理
state, errmsg = self.do_transfer(
fileitem=fileitem,
@@ -1319,7 +1372,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
if not state:
return False, errmsg
self.progress.end(ProgressKey.FileTransfer)
progress.end(ProgressKey.FileTransfer)
logger.info(f"{fileitem.path} 整理完成")
return True, ""
else:
@@ -1340,26 +1393,22 @@ class TransferChain(ChainBase, metaclass=Singleton):
return state, errmsg
def send_transfer_message(self, meta: MetaBase, mediainfo: MediaInfo,
transferinfo: TransferInfo, season_episode: Optional[str] = None, username: Optional[str] = None):
transferinfo: TransferInfo, season_episode: Optional[str] = None,
username: Optional[str] = None):
"""
发送入库成功的消息
"""
msg_title = f"{mediainfo.title_year} {meta.season_episode if not season_episode else season_episode} 已入库"
if mediainfo.vote_average:
msg_str = f"评分:{mediainfo.vote_average},类型:{mediainfo.type.value}"
else:
msg_str = f"类型:{mediainfo.type.value}"
if mediainfo.category:
msg_str = f"{msg_str},类别:{mediainfo.category}"
if meta.resource_term:
msg_str = f"{msg_str},质量:{meta.resource_term}"
msg_str = f"{msg_str},共{transferinfo.file_count}个文件," \
f"大小:{StringUtils.str_filesize(transferinfo.total_size)}"
if transferinfo.message:
msg_str = f"{msg_str},以下文件处理失败:\n{transferinfo.message}"
# 发送
self.post_message(Notification(
mtype=NotificationType.Organize,
title=msg_title, text=msg_str, image=mediainfo.get_message_image(),
username=username,
link=settings.MP_DOMAIN('#/history')))
self.post_message(
Notification(
mtype=NotificationType.Organize,
ctype=ContentType.OrganizeSuccess,
image=mediainfo.get_message_image(),
username=username,
link=settings.MP_DOMAIN('#/history')
),
meta=meta,
mediainfo=mediainfo,
transferinfo=transferinfo,
season_episode=season_episode,
username=username
)
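The removed branch above assembled the notification body field by field (score, type, category, quality, file count, size) before this refactor delegated composition to a `ContentType.OrganizeSuccess` template. The old composition pattern in isolation (argument names are illustrative):

```python
def build_transfer_summary(vote_average, media_type, category=None,
                           resource_term=None, file_count=0,
                           total_size_text="0"):
    """Compose the organize-success message body the way the removed
    branch did: optional fields are appended only when present."""
    parts = []
    if vote_average:
        parts.append(f"评分:{vote_average}")
    parts.append(f"类型:{media_type}")
    if category:
        parts.append(f"类别:{category}")
    if resource_term:
        parts.append(f"质量:{resource_term}")
    parts.append(f"共{file_count}个文件,大小:{total_size_text}")
    return ",".join(parts)
```

Moving this into a content-type template centralizes wording instead of hard-coding it at each call site.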

app/chain/tvdb.py Normal file

@@ -0,0 +1,13 @@
from typing import List
from app.chain import ChainBase
class TvdbChain(ChainBase):
"""
Tvdb处理链单例运行
"""
def get_tvdbid_by_name(self, title: str) -> List[int]:
tvdb_info_list = self.run_module("search_tvdb", title=title)
return [int(item["tvdb_id"]) for item in tvdb_info_list or []]
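`get_tvdbid_by_name` above maps raw search results to integer IDs with a list comprehension. A defensive version of that extraction might look like this (a hypothetical helper, since `run_module` can return `None` when no module answers and entries may lack the key):

```python
from typing import List, Optional


def extract_tvdb_ids(tvdb_info_list: Optional[list]) -> List[int]:
    """Pull integer tvdb_id values out of a search-result list.

    Guards against a None result set and against entries that
    do not carry a "tvdb_id" field.
    """
    return [
        int(item["tvdb_id"])
        for item in (tvdb_info_list or [])
        if item.get("tvdb_id") is not None
    ]
```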


@@ -10,20 +10,15 @@ from app.log import logger
from app.schemas import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import ChainEventType
from app.utils.otp import OtpUtils
from app.utils.singleton import Singleton
PASSWORD_INVALID_CREDENTIALS_MESSAGE = "用户名或密码或二次校验码不正确"
class UserChain(ChainBase, metaclass=Singleton):
class UserChain(ChainBase):
"""
用户链,处理多种认证协议
"""
def __init__(self):
super().__init__()
self.user_oper = UserOper()
def user_authenticate(
self,
username: Optional[str] = None,
@@ -90,7 +85,8 @@ class UserChain(ChainBase, metaclass=Singleton):
logger.debug(f"辅助认证未启用,认证类型 {grant_type} 未实现")
return False, "不支持的认证类型"
def password_authenticate(self, credentials: AuthCredentials) -> Tuple[bool, Union[User, str]]:
@staticmethod
def password_authenticate(credentials: AuthCredentials) -> Tuple[bool, Union[User, str]]:
"""
密码认证
@@ -103,7 +99,7 @@ class UserChain(ChainBase, metaclass=Singleton):
logger.info("密码认证失败,认证类型不匹配")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
user = self.user_oper.get_by_name(name=credentials.username)
user = UserOper().get_by_name(name=credentials.username)
if not user:
logger.info(f"密码认证失败,用户 {credentials.username} 不存在")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
@@ -131,8 +127,9 @@ class UserChain(ChainBase, metaclass=Singleton):
return False, "认证凭证无效"
# 检查是否因为用户被禁用
useroper = UserOper()
if credentials.username:
user = self.user_oper.get_by_name(name=credentials.username)
user = useroper.get_by_name(name=credentials.username)
if user and not user.is_active:
logger.info(f"用户 {user.name} 已被禁用,跳过后续身份校验")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
@@ -156,7 +153,7 @@ class UserChain(ChainBase, metaclass=Singleton):
success = self._process_auth_success(username=credentials.username, credentials=credentials)
if success:
logger.info(f"用户 {credentials.username} 辅助认证通过")
return True, self.user_oper.get_by_name(credentials.username)
return True, useroper.get_by_name(credentials.username)
else:
logger.warning(f"用户 {credentials.username} 辅助认证未通过")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
@@ -213,7 +210,8 @@ class UserChain(ChainBase, metaclass=Singleton):
return False
# 检查用户是否存在,如果不存在且当前为密码认证时则创建新用户
user = self.user_oper.get_by_name(name=username)
useroper = UserOper()
user = useroper.get_by_name(name=username)
if user:
# 如果用户存在,但是已经被禁用,则直接响应
if not user.is_active:
@@ -226,8 +224,8 @@ class UserChain(ChainBase, metaclass=Singleton):
return True
else:
if credentials.grant_type == "password":
self.user_oper.add(name=username, is_active=True, is_superuser=False,
hashed_password=get_password_hash(secrets.token_urlsafe(16)))
useroper.add(name=username, is_active=True, is_superuser=False,
hashed_password=get_password_hash(secrets.token_urlsafe(16)))
logger.info(f"用户 {username} 不存在,已通过 {credentials.grant_type} 认证并已创建普通用户")
return True
else:
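When a user authenticates for the first time through password-grant auxiliary auth, the code above creates the account with a hashed random password from `secrets.token_urlsafe`, so the new account has no guessable credential. The generation step in isolation (a sketch; the real code additionally runs the value through `get_password_hash` before storing it):

```python
import secrets


def make_placeholder_password(nbytes: int = 16) -> str:
    """Generate an unguessable throwaway password for auto-created users.

    token_urlsafe(16) yields ~22 URL-safe characters; nobody is expected
    to ever type this value, it only blocks password-based login.
    """
    return secrets.token_urlsafe(nbytes)
```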


@@ -2,10 +2,9 @@ from typing import Any
from app.chain import ChainBase
from app.schemas.types import EventType
from app.utils.singleton import Singleton
class WebhookChain(ChainBase, metaclass=Singleton):
class WebhookChain(ChainBase):
"""
Webhook处理链
"""


@@ -188,16 +188,14 @@ class WorkflowChain(ChainBase):
工作流链
"""
def __init__(self):
super().__init__()
self.workflowoper = WorkflowOper()
def process(self, workflow_id: int, from_begin: Optional[bool] = True) -> Tuple[bool, str]:
@staticmethod
def process(workflow_id: int, from_begin: Optional[bool] = True) -> Tuple[bool, str]:
"""
处理工作流
:param workflow_id: 工作流ID
:param from_begin: 是否从头开始,默认为True
"""
workflowoper = WorkflowOper()
def save_step(action: Action, context: ActionContext):
"""
@@ -207,16 +205,16 @@ class WorkflowChain(ChainBase):
serialized_data = pickle.dumps(context)
# 使用Base64编码字节流
encoded_data = base64.b64encode(serialized_data).decode('utf-8')
self.workflowoper.step(workflow_id, action_id=action.id, context={
workflowoper.step(workflow_id, action_id=action.id, context={
"content": encoded_data
})
# 重置工作流
if from_begin:
self.workflowoper.reset(workflow_id)
workflowoper.reset(workflow_id)
# 查询工作流数据
workflow = self.workflowoper.get(workflow_id)
workflow = workflowoper.get(workflow_id)
if not workflow:
logger.warn(f"工作流 {workflow_id} 不存在")
return False, "工作流不存在"
@@ -228,7 +226,7 @@ class WorkflowChain(ChainBase):
return False, "工作流无流程"
logger.info(f"开始处理 {workflow.name},共 {len(workflow.actions)} 个动作 ...")
self.workflowoper.start(workflow_id)
workflowoper.start(workflow_id)
# 执行工作流
executor = WorkflowExecutor(workflow, step_callback=save_step)
@@ -236,15 +234,16 @@ class WorkflowChain(ChainBase):
if not executor.success:
logger.info(f"工作流 {workflow.name} 执行失败:{executor.errmsg}")
self.workflowoper.fail(workflow_id, result=executor.errmsg)
workflowoper.fail(workflow_id, result=executor.errmsg)
return False, executor.errmsg
else:
logger.info(f"工作流 {workflow.name} 执行完成")
self.workflowoper.success(workflow_id)
workflowoper.success(workflow_id)
return True, ""
def get_workflows(self) -> List[Workflow]:
@staticmethod
def get_workflows() -> List[Workflow]:
"""
获取工作流列表
"""
return self.workflowoper.list_enabled()
return WorkflowOper().list_enabled()
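`save_step` above persists the executor context as pickle bytes wrapped in Base64, so an arbitrary Python object can be stored as plain text. A round-trip sketch of that encoding (helper names are illustrative; note that `pickle.loads` executes arbitrary constructors, so decode only data you wrote yourself):

```python
import base64
import pickle


def encode_context(context: object) -> str:
    """Serialize a workflow context the way save_step does:
    pickle to bytes, then Base64-encode to a UTF-8 storable string."""
    return base64.b64encode(pickle.dumps(context)).decode("utf-8")


def decode_context(encoded: str) -> object:
    """Inverse operation, for resuming a workflow from a saved step."""
    return pickle.loads(base64.b64decode(encoded))
```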


@@ -9,7 +9,6 @@ from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import Event as ManagerEvent, eventmanager, Event
from app.core.plugin import PluginManager
from app.helper.message import MessageHelper
@@ -162,10 +161,6 @@ class Command(metaclass=Singleton):
"""
初始化菜单命令
"""
if settings.DEV:
logger.debug("Development mode active. Skipping command initialization.")
return
# 使用线程池提交后台任务,避免引起阻塞
ThreadHelper().submit(self.__init_commands_background, pid)
@@ -230,6 +225,9 @@ class Command(metaclass=Singleton):
添加命令集合
"""
for cmd, command in source.items():
if not command.get("show", True):
continue
command_data = {
"type": command_type,
"description": command.get("description"),
@@ -266,6 +264,7 @@ class Command(metaclass=Singleton):
"func": self.send_plugin_event,
"description": command.get("desc"),
"category": command.get("category"),
"show": command.get("show", True),
"data": {
"etype": command.get("event"),
"data": command.get("data")
@@ -340,7 +339,8 @@ class Command(metaclass=Singleton):
return self._commands.get(cmd, {})
def register(self, cmd: str, func: Any, data: Optional[dict] = None,
desc: Optional[str] = None, category: Optional[str] = None) -> None:
desc: Optional[str] = None, category: Optional[str] = None,
show: bool = True) -> None:
"""
注册单个命令
"""
@@ -349,7 +349,8 @@ class Command(metaclass=Singleton):
"func": func,
"description": desc,
"category": category,
"data": data or {}
"data": data or {},
"show": show
}
def execute(self, cmd: str, data_str: Optional[str] = "",


@@ -131,7 +131,7 @@ class CacheToolsBackend(CacheBackend):
- 不支持按 `key` 独立隔离 TTL 和 Maxsize,仅支持作用于 region 级别
"""
def __init__(self, maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800):
def __init__(self, maxsize: Optional[int] = 512, ttl: Optional[int] = 1800):
"""
初始化缓存实例
@@ -150,7 +150,7 @@ class CacheToolsBackend(CacheBackend):
region = self.get_region(region)
return self._region_caches.get(region)
def set(self, key: str, value: Any, ttl: Optional[int] = None,
def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存值,支持每个 key 独立配置 TTL 和 Maxsize
@@ -196,7 +196,7 @@ class CacheToolsBackend(CacheBackend):
return None
return region_cache.get(key)
def delete(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> None:
def delete(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION):
"""
删除缓存
@@ -205,7 +205,7 @@ class CacheToolsBackend(CacheBackend):
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return None
return
with lock:
del region_cache[key]
@@ -357,7 +357,7 @@ class RedisBackend(CacheBackend):
region = self.get_region(quote(region))
return f"{region}:key:{quote(key)}"
def set(self, key: str, value: Any, ttl: Optional[int] = None,
def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存
@@ -454,7 +454,7 @@ class RedisBackend(CacheBackend):
self.client.close()
def get_cache_backend(maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800) -> CacheBackend:
def get_cache_backend(maxsize: Optional[int] = 512, ttl: Optional[int] = 1800) -> CacheBackend:
"""
根据配置获取缓存后端实例
@@ -482,13 +482,13 @@ def get_cache_backend(maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800)
return CacheToolsBackend(maxsize=maxsize, ttl=ttl)
def cached(region: Optional[str] = None, maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800,
def cached(region: Optional[str] = None, maxsize: Optional[int] = 512, ttl: Optional[int] = 1800,
skip_none: Optional[bool] = True, skip_empty: Optional[bool] = False):
"""
自定义缓存装饰器,支持为每个 key 动态传递 maxsize 和 ttl
:param region: 缓存的区
:param maxsize: 缓存的最大条目数,默认值为 1000
:param maxsize: 缓存的最大条目数,默认值为 512
:param ttl: 缓存的存活时间,单位秒,默认值为 1800
:param skip_none: 跳过 None 缓存,默认为 True
:param skip_empty: 跳过空值缓存(如 None, [], {}, "", set()),默认为 False


@@ -1,6 +1,6 @@
import copy
import json
import os
import re
import secrets
import sys
import threading
@@ -15,6 +15,34 @@ from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
class SystemConfModel(BaseModel):
"""
系统关键资源大小配置
"""
# 缓存种子数量
torrents: int = 0
# 订阅刷新处理数量
refresh: int = 0
# TMDB请求缓存数量
tmdb: int = 0
# 豆瓣请求缓存数量
douban: int = 0
# Bangumi请求缓存数量
bangumi: int = 0
# Fanart请求缓存数量
fanart: int = 0
# 元数据缓存过期时间(秒)
meta: int = 0
# 调度器数量
scheduler: int = 0
# 线程池大小
threadpool: int = 0
# 数据库连接池大小
dbpool: int = 0
# 数据库连接池溢出数量
dbpooloverflow: int = 0
class ConfigModel(BaseModel):
"""
Pydantic 配置模型,描述所有配置项及其类型和默认值
@@ -24,7 +52,7 @@ class ConfigModel(BaseModel):
extra = "ignore" # 忽略未定义的配置项
# 项目名称
PROJECT_NAME = "MoviePilot"
PROJECT_NAME: str = "MoviePilot"
# 域名,格式:https://movie-pilot.org
APP_DOMAIN: str = ""
# API路径
@@ -57,20 +85,16 @@ class ConfigModel(BaseModel):
DB_ECHO: bool = False
# 数据库连接池类型:QueuePool、NullPool
DB_POOL_TYPE: str = "QueuePool"
# 是否在获取连接时进行预先 ping 操作,默认关闭
DB_POOL_PRE_PING: bool = False
# 数据库连接池的大小,默认 100
DB_POOL_SIZE: int = 100
# 数据库连接的回收时间(秒),默认 1800 秒
DB_POOL_RECYCLE: int = 1800
# 数据库连接池获取连接的超时时间(秒),默认 60 秒
DB_POOL_TIMEOUT: int = 60
# 数据库连接池最大溢出连接数,默认 500
DB_MAX_OVERFLOW: int = 500
# 是否在获取连接时进行预先 ping 操作
DB_POOL_PRE_PING: bool = True
# 数据库连接的回收时间(秒)
DB_POOL_RECYCLE: int = 300
# 数据库连接池获取连接的超时时间(秒)
DB_POOL_TIMEOUT: int = 30
# SQLite 的 busy_timeout 参数,默认为 60 秒
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认关闭
DB_WAL_ENABLE: bool = False
# SQLite 是否启用 WAL 模式,默认开启
DB_WAL_ENABLE: bool = True
# 缓存类型,支持 cachetools 和 redis,默认使用 cachetools
CACHE_BACKEND_TYPE: str = "cachetools"
# 缓存连接字符串,仅外部缓存(如 Redis、Memcached)需要
@@ -85,10 +109,12 @@ class ConfigModel(BaseModel):
AUXILIARY_AUTH_ENABLE: bool = False
# API密钥,需要更换
API_TOKEN: Optional[str] = None
# 网络代理 IP:PORT
# 网络代理服务器地址
PROXY_HOST: Optional[str] = None
# 登录页面电影海报,tmdb/bing/mediaserver
WALLPAPER: str = "tmdb"
# 自定义壁纸api地址
CUSTOMIZE_WALLPAPER_API_URL: Optional[str] = None
# 媒体搜索来源 themoviedb/douban/bangumi多个用,分隔
SEARCH_SOURCE: str = "themoviedb,douban,bangumi"
# 媒体识别来源 themoviedb/douban
@@ -101,12 +127,19 @@ class ConfigModel(BaseModel):
TMDB_IMAGE_DOMAIN: str = "image.tmdb.org"
# TMDB API地址
TMDB_API_DOMAIN: str = "api.themoviedb.org"
# TMDB元数据语言
TMDB_LOCALE: str = "zh"
# 刮削使用TMDB原始语种图片
TMDB_SCRAP_ORIGINAL_IMAGE: bool = False
# TMDB API Key
TMDB_API_KEY: str = "db55323b8d3e4154498498a75642b381"
# TVDB API Key
TVDB_API_KEY: str = "6b481081-10aa-440c-99f2-21d17717ee02"
TVDB_V4_API_KEY: str = "ed2aa66b-7899-4677-92a7-67bc9ce3d93a"
TVDB_V4_API_PIN: str = ""
# Fanart开关
FANART_ENABLE: bool = True
# Fanart语言
FANART_LANG: str = "zh,en"
# Fanart API Key
FANART_API_KEY: str = "d2d31f9ecabea050fc7d68aa3146015f"
# 115 AppId
@@ -116,9 +149,11 @@ class ConfigModel(BaseModel):
# 元数据识别缓存过期时间(小时)
META_CACHE_EXPIRE: int = 0
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS = [16]
ANIME_GENREIDS: List[int] = Field(default=[16])
# 用户认证站点
AUTH_SITE: str = ""
# 重启自动升级
MOVIEPILOT_AUTO_UPDATE: str = 'release'
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 是否启用DOH解析域名
@@ -130,6 +165,7 @@ class ConfigModel(BaseModel):
"api.github.com,"
"github.com,"
"raw.githubusercontent.com,"
"codeload.github.com,"
"api.telegram.org")
# DOH 解析服务器列表
DOH_RESOLVERS: str = "1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112"
@@ -194,7 +230,7 @@ class ConfigModel(BaseModel):
# CookieCloud同步黑名单多个域名,分割
COOKIECLOUD_BLACKLIST: Optional[str] = None
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
USER_AGENT: str = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 电影重命名格式
MOVIE_RENAME_FORMAT: str = "{{title}}{% if year %} ({{year}}){% endif %}" \
"/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}" \
@@ -213,7 +249,17 @@ class ConfigModel(BaseModel):
"https://github.com/thsrite/MoviePilot-Plugins,"
"https://github.com/honue/MoviePilot-Plugins,"
"https://github.com/InfinityPacer/MoviePilot-Plugins,"
"https://github.com/DDS-Derek/MoviePilot-Plugins")
"https://github.com/DDS-Derek/MoviePilot-Plugins,"
"https://github.com/madrays/MoviePilot-Plugins,"
"https://github.com/justzerock/MoviePilot-Plugins,"
"https://github.com/KoWming/MoviePilot-Plugins,"
"https://github.com/wikrin/MoviePilot-Plugins,"
"https://github.com/HankunYu/MoviePilot-Plugins,"
"https://github.com/baozaodetudou/MoviePilot-Plugins,"
"https://github.com/Aqr-K/MoviePilot-Plugins,"
"https://github.com/hotlcc/MoviePilot-Plugins-Third,"
"https://github.com/gxterry/MoviePilot-Plugins,"
"https://github.com/DzAvril/MoviePilot-Plugins")
# 插件安装数据共享
PLUGIN_STATISTIC_SHARE: bool = True
# 是否开启插件热加载
@@ -222,12 +268,18 @@ class ConfigModel(BaseModel):
GITHUB_TOKEN: Optional[str] = None
# Github代理服务器,格式:https://mirror.ghproxy.com/
GITHUB_PROXY: Optional[str] = ''
# pip镜像站点,格式:https://pypi.tuna.tsinghua.edu.cn/simple
# pip镜像站点,格式:https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
PIP_PROXY: Optional[str] = ''
# 指定的仓库Github token,多个仓库使用,分隔,格式:{user1}/{repo1}:ghp_****,{user2}/{repo2}:github_pat_****
REPO_GITHUB_TOKEN: Optional[str] = None
# 大内存模式
BIG_MEMORY_MODE: bool = False
# 是否启用内存监控
MEMORY_ANALYSIS: bool = False
# 内存快照间隔(分钟)
MEMORY_SNAPSHOT_INTERVAL: int = 30
# 保留的内存快照文件数量
MEMORY_SNAPSHOT_KEEP_COUNT: int = 20
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 是否启用编码探测的性能模式
@@ -235,33 +287,30 @@ class ConfigModel(BaseModel):
# 编码探测的最低置信度阈值
ENCODING_DETECTION_MIN_CONFIDENCE: float = 0.8
# 允许的图片缓存域名
SECURITY_IMAGE_DOMAINS: List[str] = Field(
default_factory=lambda: ["image.tmdb.org",
"static-mdb.v.geilijiasu.com",
"doubanio.com",
"lain.bgm.tv",
"raw.githubusercontent.com",
"github.com",
"thetvdb.com",
"cctvpic.com",
"iqiyipic.com",
"hdslb.com",
"cmvideo.cn",
"ykimg.com",
"qpic.cn"]
)
SECURITY_IMAGE_DOMAINS: list = Field(default=[
"image.tmdb.org",
"static-mdb.v.geilijiasu.com",
"bing.com",
"doubanio.com",
"lain.bgm.tv",
"raw.githubusercontent.com",
"github.com",
"thetvdb.com",
"cctvpic.com",
"iqiyipic.com",
"hdslb.com",
"cmvideo.cn",
"ykimg.com",
"qpic.cn"
])
# 允许的图片文件后缀格式
SECURITY_IMAGE_SUFFIXES: List[str] = Field(
default_factory=lambda: [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg", ".avif"]
)
SECURITY_IMAGE_SUFFIXES: list = Field(default=[".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg", ".avif"])
# 重命名时支持的S0别名
RENAME_FORMAT_S0_NAMES: List[str] = Field(
default_factory=lambda: ["Specials", "SPs"]
)
# 启用分词搜索
TOKENIZED_SEARCH: bool = False
RENAME_FORMAT_S0_NAMES: list = Field(default=["Specials", "SPs"])
# 为指定默认字幕添加.default后缀
DEFAULT_SUB: Optional[str] = "zh-cn"
# Docker Client API地址
DOCKER_CLIENT_API: Optional[str] = "tcp://127.0.0.1:38379"
class Settings(BaseSettings, ConfigModel, LogConfigModel):
@@ -308,6 +357,7 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
raise_exception: bool = False) -> Tuple[Any, bool]:
"""
通用类型转换函数,根据预期类型转换值。如果转换失败,返回默认值
:return: 元组 (转换后的值, 是否需要更新)
"""
if isinstance(value, (list, dict, set)):
value = copy.deepcopy(value)
@@ -348,19 +398,17 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
converted = float(value)
return converted, str(converted) != str(original_value)
elif expected_type is str:
# 清理 value 中所有空白字符的字段
fields_not_keep_spaces = {"AUTO_DOWNLOAD_USER", "REPO_GITHUB_TOKEN", "PLUGIN_MARKET"}
if field_name in fields_not_keep_spaces:
value = re.sub(r"\s+", "", value)
return value, str(value) != str(original_value)
# # 后续考虑支持 list 类型的处理
# elif expected_type is list:
# if isinstance(value, list):
# return value, False
# if isinstance(value, str):
# items = [item.strip() for item in value.split(",") if item.strip()]
# return items, items != original_value.split(",")
# 可根据需要添加更多类型处理
converted = str(value).strip()
return converted, converted != str(original_value)
elif expected_type is list:
if isinstance(value, list):
return value, str(value) != str(original_value)
if isinstance(value, str):
items = json.loads(value)
if isinstance(original_value, list):
return items, items != original_value
else:
return items, str(items) != str(original_value)
else:
return value, str(value) != str(original_value)
except (ValueError, TypeError) as e:
@@ -400,14 +448,24 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
logger.warning(message)
return False, message
else:
set_key(SystemUtils.get_env_path(), field.name, str(converted_value) if converted_value is not None else "")
# 如果是列表、字典或集合类型,将其转换为JSON字符串
if isinstance(converted_value, (list, dict, set)):
value_to_write = json.dumps(converted_value)
else:
value_to_write = str(converted_value) if converted_value is not None else ""
set_key(dotenv_path=SystemUtils.get_env_path(), key_to_set=field.name, value_to_set=value_to_write,
quote_mode="always")
if is_converted:
logger.info(f"配置项 '{field.name}' 已自动修正并写入到 'app.env' 文件")
return True, message
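The branch above serializes list/dict/set values to JSON before `set_key` writes them into `app.env`, so structured settings survive the round-trip through a flat env file. The rendering rule in isolation (note an assumption not spelled out in the diff: `json.dumps` cannot serialize a `set` directly, so this sketch sorts sets into lists first):

```python
import json


def env_value_to_write(converted_value) -> str:
    """Render a converted config value as the string written to app.env.

    Lists/dicts become JSON, None becomes the empty string, and
    everything else falls back to str(). Sets are sorted into lists
    here because json.dumps rejects them as-is.
    """
    if isinstance(converted_value, (list, dict, set)):
        if isinstance(converted_value, set):
            converted_value = sorted(converted_value)
        return json.dumps(converted_value)
    return str(converted_value) if converted_value is not None else ""
```

On read-back, the list branch of `generic_type_converter` above parses these JSON strings with `json.loads`, closing the loop.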
def update_setting(self, key: str, value: Any) -> Tuple[bool, str]:
def update_setting(self, key: str, value: Any) -> Tuple[Optional[bool], str]:
"""
更新单个配置项
:param key: 配置项的名称
:param value: 配置项的新值
:return: (是否成功 True 成功/False 失败/None 无需更新, 错误信息)
"""
if not hasattr(self, key):
return False, f"配置项 '{key}' 不存在"
@@ -418,8 +476,11 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
if field.name == "API_TOKEN":
converted_value, needs_update = self.validate_api_token(value, original_value)
else:
converted_value, needs_update = self.generic_type_converter(value, original_value, field.type_,
field.default, key)
converted_value, needs_update = self.generic_type_converter(value,
original_value,
field.type_,
field.default,
key)
# 如果没有抛出异常,则统一使用 converted_value 进行更新
if needs_update or str(value) != str(converted_value):
success, message = self.update_env_config(field, value, converted_value)
@@ -429,30 +490,17 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
if hasattr(log_settings, key):
setattr(log_settings, key, converted_value)
return success, message
return True, ""
return None, ""
except Exception as e:
return False, str(e)
def update_settings(self, env: Dict[str, Any]) -> Dict[str, Tuple[bool, str]]:
def update_settings(self, env: Dict[str, Any]) -> Dict[str, Tuple[Optional[bool], str]]:
"""
更新多个配置项
"""
results = {}
log_updated, plugin_monitor_updated = False, False
for k, v in env.items():
results[k] = self.update_setting(k, v)
if hasattr(log_settings, k):
log_updated = True
if k in ["PLUGIN_AUTO_RELOAD", "DEV"]:
plugin_monitor_updated = True
# 本次更新存在日志配置项更新,需要重新加载日志配置
if log_updated:
logger.update_loggers()
# 本次更新存在插件监控配置项更新,需要重新加载插件监控
if plugin_monitor_updated:
# 解决顶层循环导入问题
from app.core.plugin import PluginManager
PluginManager().reload_monitor()
return results
@property
@@ -501,36 +549,37 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return self.CONFIG_PATH / "cookies"
@property
def CACHE_CONF(self):
def CONF(self) -> SystemConfModel:
"""
{
"torrents": "缓存种子数量",
"refresh": "订阅刷新处理数量",
"tmdb": "TMDB请求缓存数量",
"douban": "豆瓣请求缓存数量",
"fanart": "Fanart请求缓存数量",
"meta": "元数据缓存过期时间(秒)"
}
根据内存模式返回系统配置
"""
if self.BIG_MEMORY_MODE:
return {
"torrents": 200,
"refresh": 100,
"tmdb": 1024,
"douban": 512,
"bangumi": 512,
"fanart": 512,
"meta": (self.META_CACHE_EXPIRE or 24) * 3600
}
return {
"torrents": 100,
"refresh": 50,
"tmdb": 256,
"douban": 256,
"bangumi": 256,
"fanart": 128,
"meta": (self.META_CACHE_EXPIRE or 2) * 3600
}
return SystemConfModel(
torrents=200,
refresh=100,
tmdb=1024,
douban=512,
bangumi=512,
fanart=512,
meta=(self.META_CACHE_EXPIRE or 24) * 3600,
scheduler=100,
threadpool=100,
dbpool=100,
dbpooloverflow=50
)
return SystemConfModel(
torrents=100,
refresh=50,
tmdb=256,
douban=256,
bangumi=256,
fanart=128,
meta=(self.META_CACHE_EXPIRE or 2) * 3600,
scheduler=50,
threadpool=50,
dbpool=50,
dbpooloverflow=20
)
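The new `CONF` property above replaces the old `CACHE_CONF` dict with a typed model and switches between two fixed sizing profiles based on `BIG_MEMORY_MODE`. The selection logic can be sketched with a plain dataclass (class name and field subset are illustrative; the values are copied from the diff):

```python
from dataclasses import dataclass


@dataclass
class SizingConf:
    """Illustrative stand-in for SystemConfModel, with a subset of fields."""
    torrents: int
    tmdb: int
    dbpool: int


def sizing_for(big_memory_mode: bool) -> SizingConf:
    """Pick resource sizes by memory mode, as the CONF property does:
    a larger profile when BIG_MEMORY_MODE is on, a smaller default otherwise."""
    if big_memory_mode:
        return SizingConf(torrents=200, tmdb=1024, dbpool=100)
    return SizingConf(torrents=100, tmdb=256, dbpool=50)
```

A typed model gives callers attribute access (`settings.CONF.dbpool`) instead of string-keyed dict lookups, so typos fail loudly.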
@property
def PROXY(self):
@@ -547,6 +596,7 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return {
"server": self.PROXY_HOST
}
return None
@property
def GITHUB_HEADERS(self):
@@ -555,7 +605,8 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
"""
if self.GITHUB_TOKEN:
return {
"Authorization": f"Bearer {self.GITHUB_TOKEN}"
"Authorization": f"Bearer {self.GITHUB_TOKEN}",
"User-Agent": self.USER_AGENT,
}
return {}
@@ -583,7 +634,8 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
print(f"无效的令牌或仓库信息: {token_pair}")
continue
headers[repo_info] = {
"Authorization": f"Bearer {token}"
"Authorization": f"Bearer {token}",
"User-Agent": self.USER_AGENT,
}
except Exception as e:
print(f"处理令牌对 '{token_pair}' 时出错: {e}")
@@ -604,6 +656,10 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return UrlUtils.combine_url(host=self.APP_DOMAIN, path=url)
# 实例化配置
settings = Settings()
class GlobalVar(object):
"""
全局标识
@@ -661,8 +717,5 @@ class GlobalVar(object):
return self.is_system_stopped or workflow_id in self.EMERGENCY_STOP_WORKFLOWS
# 实例化配置
settings = Settings()
# 全局标识
global_vars = GlobalVar()


@@ -6,11 +6,9 @@ import threading
import time
import traceback
import uuid
from functools import lru_cache
from queue import Empty, PriorityQueue
from typing import Callable, Dict, List, Optional, Union
from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.schemas import ChainEventData
@@ -75,7 +73,6 @@ class EventManager(metaclass=Singleton):
__event = threading.Event()
def __init__(self):
self.__messagehelper = MessageHelper()
self.__executor = ThreadHelper() # 动态线程池,用于消费事件
self.__consumer_threads = [] # 用于保存启动的事件消费者线程
self.__event_queue = PriorityQueue() # 优先级队列
@@ -140,11 +137,12 @@ class EventManager(metaclass=Singleton):
"""
event = Event(etype, data, priority)
if isinstance(etype, EventType):
self.__trigger_broadcast_event(event)
return self.__trigger_broadcast_event(event)
elif isinstance(etype, ChainEventType):
return self.__trigger_chain_event(event)
else:
logger.error(f"Unknown event type: {etype}")
return None
def add_event_listener(self, event_type: Union[EventType, ChainEventType], handler: Callable,
priority: Optional[int] = DEFAULT_EVENT_PRIORITY):
@@ -264,7 +262,6 @@ class EventManager(metaclass=Singleton):
return handler_info
@classmethod
@lru_cache(maxsize=1000)
def __get_handler_identifier(cls, target: Union[Callable, type]) -> Optional[str]:
"""
获取处理器或处理器类的唯一标识符,包括模块名和类名/方法名
@@ -280,7 +277,6 @@ class EventManager(metaclass=Singleton):
return f"{module_name}.{qualname}"
@classmethod
@lru_cache(maxsize=1000)
def __get_class_from_callable(cls, handler: Callable) -> Optional[str]:
"""
获取可调用对象所属类的唯一标识符
@@ -293,7 +289,7 @@ class EventManager(metaclass=Singleton):
# 对于类实例(实现了 __call__ 方法)
if not inspect.isfunction(handler) and hasattr(handler, "__call__"):
handler_cls = handler.__class__ # noqa
handler_cls = handler.__class__ # noqa
return cls.__get_handler_identifier(handler_cls)
# 对于未绑定方法、静态方法、类方法,使用 __qualname__ 提取类信息
@@ -303,6 +299,7 @@ class EventManager(metaclass=Singleton):
module = inspect.getmodule(handler)
module_name = module.__name__ if module else "unknown_module"
return f"{module_name}.{class_name}"
return None
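The identifier helpers above key event handlers by a `"module.qualname"` string (the diff drops their `lru_cache` decoration, so the value is now rebuilt on each call). The construction can be sketched as follows, assuming only that the callable exposes `__qualname__` or `__name__`:

```python
import inspect


def handler_identifier(target) -> str:
    """Build the "module.QualName" identifier used for event handlers,
    from the callable's module and __qualname__ (falling back to
    __name__, then a placeholder, when those are missing)."""
    module = inspect.getmodule(target)
    module_name = module.__name__ if module else "unknown_module"
    qualname = getattr(target, "__qualname__",
                       getattr(target, "__name__", "unknown"))
    return f"{module_name}.{qualname}"
```

Because `__qualname__` includes the enclosing class (`"UserChain.password_authenticate"`), the identifier stays unique even across classes that reuse method names.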
def __is_handler_enabled(self, handler: Callable) -> bool:
"""
@@ -398,16 +395,28 @@ class EventManager(metaclass=Singleton):
try:
from app.core.plugin import PluginManager
from app.core.module import ModuleManager
if class_name in PluginManager().get_plugin_ids():
# 定义一个插件调用函数
def plugin_callable():
"""
插件调用函数
"""
PluginManager().run_plugin_method(class_name, method_name, event_to_process)
if is_broadcast_event:
self.__executor.submit(plugin_callable)
else:
plugin_callable()
elif class_name in ModuleManager().get_module_ids():
module = ModuleManager().get_running_module(class_name)
if module:
method = getattr(module, method_name, None)
if method:
if is_broadcast_event:
self.__executor.submit(method, event_to_process)
else:
method(event_to_process)
else:
# 获取全局对象或模块类的实例
class_obj = self.__get_class_instance(class_name)
@@ -438,22 +447,25 @@ class EventManager(metaclass=Singleton):
# 如果类不在全局变量中,尝试动态导入模块并创建实例
try:
if class_name == "Command":
module_name = "app.command"
if class_name.endswith("Manager"):
module_name = f"app.core.{class_name[:-7].lower()}"
module = importlib.import_module(module_name)
elif class_name.endswith("Chain"):
module_name = f"app.chain.{class_name[:-5].lower()}"
module = importlib.import_module(module_name)
elif class_name.endswith("Helper"):
module_name = f"app.helper.{class_name[:-6].lower()}"
module = importlib.import_module(module_name)
else:
logger.debug(f"事件处理出错:无效的 Chain 类名: {class_name},类名必须以 'Chain' 结尾")
return None
module_name = f"app.{class_name.lower()}"
module = importlib.import_module(module_name)
if hasattr(module, class_name):
class_obj = getattr(module, class_name)()
return class_obj
else:
logger.debug(f"事件处理出错:模块 {module_name} 中没有找到类 {class_name}")
except Exception as e:
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
logger.debug(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
return None
def __broadcast_consumer_loop(self):
@@ -491,9 +503,11 @@ class EventManager(metaclass=Singleton):
names = handler.__qualname__.split(".")
class_name, method_name = names[0], names[1]
self.__messagehelper.put(title=f"{event.event_type} 事件处理出错",
message=f"{class_name}.{method_name}{str(e)}",
role="system")
# 发送系统错误通知
from app.helper.message import MessageHelper
MessageHelper().put(title=f"{event.event_type} 事件处理出错",
message=f"{class_name}.{method_name}{str(e)}",
role="system")
self.send_event(
EventType.SystemError,
{

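The dispatch logic in the hunk above runs plugin handlers on a shared executor for broadcast events but calls them inline otherwise. A minimal sketch of that pattern with `concurrent.futures` (handler and event names here are hypothetical, not the project's):

```python
from concurrent.futures import ThreadPoolExecutor

# Broadcast events go to a shared pool; synchronous events run inline,
# so the caller of a synchronous dispatch sees exceptions directly.
executor = ThreadPoolExecutor(max_workers=4)
results = []

def plugin_callable(event):
    results.append(f"handled:{event}")

def dispatch(event, is_broadcast_event):
    if is_broadcast_event:
        # fire-and-forget on the pool; exceptions stay inside the Future
        return executor.submit(plugin_callable, event)
    plugin_callable(event)

dispatch("sync-event", is_broadcast_event=False)
future = dispatch("broadcast-event", is_broadcast_event=True)
future.result()  # wait here only so the example is deterministic
```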

@@ -55,6 +55,8 @@ class MetaBase(object):
resource_team: Optional[str] = None
# 识别的自定义占位符
customization: Optional[str] = None
# 识别的流媒体平台
web_source: Optional[str] = None
# 视频编码
video_encode: Optional[str] = None
# 音频编码
@@ -582,6 +584,12 @@ class MetaBase(object):
# Part
if not self.part:
self.part = meta.part
# tmdbid
if not self.tmdbid and meta.tmdbid:
self.tmdbid = meta.tmdbid
# doubanid
if not self.doubanid and meta.doubanid:
self.doubanid = meta.doubanid
def to_dict(self):
"""

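The merge hunk above only fills `tmdbid`/`doubanid` when they are still empty, so values already recognized on the current meta always win. The fill-if-empty pattern in isolation (a toy class, not the project's `MetaBase`):

```python
# Fields already set on `self` win; the other meta only supplies
# values that are still missing.
class Meta:
    def __init__(self, tmdbid=None, doubanid=None):
        self.tmdbid = tmdbid
        self.doubanid = doubanid

    def merge(self, other):
        if not self.tmdbid and other.tmdbid:
            self.tmdbid = other.tmdbid
        if not self.doubanid and other.doubanid:
            self.doubanid = other.doubanid

a = Meta(tmdbid="550")
a.merge(Meta(tmdbid="999", doubanid="1292064"))
# tmdbid keeps the existing "550"; doubanid is filled from the other meta
```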

@@ -10,6 +10,7 @@ from app.core.meta.releasegroup import ReleaseGroupsMatcher
from app.schemas.types import MediaType
from app.utils.string import StringUtils
from app.utils.tokens import Tokens
from app.core.meta.streamingplatform import StreamingPlatforms
class MetaVideo(MetaBase):
@@ -31,7 +32,7 @@ class MetaVideo(MetaBase):
_part_re = r"(^PART[0-9ABI]{0,2}$|^CD[0-9]{0,2}$|^DVD[0-9]{0,2}$|^DISK[0-9]{0,2}$|^DISC[0-9]{0,2}$)"
_roman_numerals = r"^(?=[MDCLXVI])M*(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$|^REMUX$|^UHD$"
_effect_re = r"^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$"
_effect_re = r"^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$|^HLG$|^HDR10(\+|Plus)$|^EDR$|^HQ$"
_resources_type_re = r"%s|%s" % (_source_re, _effect_re)
_name_no_begin_re = r"^[\[【].+?[\]】]"
_name_no_chinese_re = r".*版|.*字幕"
@@ -50,8 +51,8 @@ class MetaVideo(MetaBase):
r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]|\s+GB"
_resources_pix_re = r"^[SBUHD]*(\d{3,4}[PI]+)|\d{3,4}X(\d{3,4})"
_resources_pix_re2 = r"(^[248]+K)"
_video_encode_re = r"^[HX]26[45]$|^AVC$|^HEVC$|^VC\d?$|^MPEG\d?$|^Xvid$|^DivX$|^HDR\d*$"
_audio_encode_re = r"^DTS\d?$|^DTSHD$|^DTSHDMA$|^Atmos$|^TrueHD\d?$|^AC3$|^\dAudios?$|^DDP\d?$|^DD\d?$|^LPCM\d?$|^AAC\d?$|^FLAC\d?$|^HD\d?$|^MA\d?$"
_video_encode_re = r"^(H26[45])$|^(x26[45])$|^AVC$|^HEVC$|^VC\d?$|^MPEG\d?$|^Xvid$|^DivX$|^AV1$|^HDR\d*$|^AVS(\+|[23])$"
_audio_encode_re = r"^DTS\d?$|^DTSHD$|^DTSHDMA$|^Atmos$|^TrueHD\d?$|^AC3$|^\dAudios?$|^DDP\d?$|^DD\+\d?$|^DD\d?$|^LPCM\d?$|^AAC\d?$|^FLAC\d?$|^HD\d?$|^MA\d?$|^HR\d?$|^Opus\d?$|^Vorbis\d?$|^AV[3S]A$"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
"""
@@ -66,6 +67,7 @@ class MetaVideo(MetaBase):
original_title = title
self._source = ""
self._effect = []
self._index = 0
# 判断是否纯数字命名
if isfile \
and title.isdigit() \
@@ -93,9 +95,12 @@ class MetaVideo(MetaBase):
# 拆分tokens
tokens = Tokens(title)
self.tokens = tokens
# 实例化StreamingPlatforms对象
streaming_platforms = StreamingPlatforms()
# 解析名称、年份、季、集、资源类型、分辨率等
token = tokens.get_next()
while token:
self._index += 1 # 更新当前处理的token索引
# Part
self.__init_part(token)
# 标题
@@ -116,6 +121,9 @@ class MetaVideo(MetaBase):
# 资源类型
if self._continue_flag:
self.__init_resource_type(token)
# 流媒体平台
if self._continue_flag:
self.__init_web_source(token, streaming_platforms)
# 视频编码
if self._continue_flag:
self.__init_video_encode(token)
@@ -574,6 +582,57 @@ class MetaVideo(MetaBase):
self._effect.append(effect)
self._last_token = effect.upper()
def __init_web_source(self, token: str, streaming_platforms: StreamingPlatforms):
"""
识别流媒体平台
"""
if not self.name:
return
platform_name = None
query_range = 1
prev_token = None
prev_idx = self._index - 2
if 0 <= prev_idx < len(self.tokens.tokens):
prev_token = self.tokens.tokens[prev_idx]
next_token = self.tokens.peek()
if streaming_platforms.is_streaming_platform(token):
platform_name = streaming_platforms.get_streaming_platform_name(token)
else:
for adjacent_token, is_next in [(prev_token, False), (next_token, True)]:
if not adjacent_token or platform_name:
continue
for separator in [" ", "-"]:
if is_next:
combined_token = f"{token}{separator}{adjacent_token}"
else:
combined_token = f"{adjacent_token}{separator}{token}"
if streaming_platforms.is_streaming_platform(combined_token):
platform_name = streaming_platforms.get_streaming_platform_name(combined_token)
query_range = 2
if is_next:
self.tokens.get_next()
break
if not platform_name:
return
web_tokens = ["WEB", "DL", "WEBDL", "WEBRIP"]
match_start_idx = self._index - query_range
match_end_idx = self._index - 1
start_index = max(0, match_start_idx - query_range)
end_index = min(len(self.tokens.tokens), match_end_idx + 1 + query_range)
tokens_to_check = self.tokens.tokens[start_index:end_index]
if any(tok and tok.upper() in web_tokens for tok in tokens_to_check):
self.web_source = platform_name
self._continue_flag = False
def __init_video_encode(self, token: str):
"""
识别视频编码
@@ -592,7 +651,12 @@ class MetaVideo(MetaBase):
self._stop_name_flag = True
self._last_token_type = "videoencode"
if not self.video_encode:
self.video_encode = re_res.group(1).upper()
if re_res.group(2):
self.video_encode = re_res.group(2).upper()
elif re_res.group(3):
self.video_encode = re_res.group(3).lower()
else:
self.video_encode = re_res.group(1).upper()
self._last_token = self.video_encode
elif self.video_encode == "10bit":
self.video_encode = f"{re_res.group(1).upper()} 10bit"

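The updated `_video_encode_re` splits H26x and x26x into separate capture groups so their casing can be normalized independently (H264/H265 upper-cased, x264/x265 lower-cased). The `group(2)`/`group(3)` indices in the hunk suggest the pattern is searched inside one extra outer group; that wrapping is an assumption in this sketch:

```python
import re

# Assumed wrapping: group(1) is the whole match, group(2) the H26x
# branch, group(3) the x26x branch (a trimmed-down alternation).
_video_encode_re = r"^(H26[45])$|^(x26[45])$|^AVC$|^HEVC$"
pattern = re.compile(r"(" + _video_encode_re + r")", re.IGNORECASE)

def normalize_encode(token):
    m = pattern.search(token)
    if not m:
        return None
    if m.group(2):             # H264 / H265 family: force upper case
        return m.group(2).upper()
    if m.group(3):             # x264 / x265 family: force lower case
        return m.group(3).lower()
    return m.group(1).upper()  # everything else: plain upper case
```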

@@ -15,32 +15,32 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"0ff": ['FF(?:(?:A|WE)B|CD|E(?:DU|B)|TV)'],
"1pt": [],
"52pt": [],
"audiences": ['Audies', 'AD(?:Audio|E(?:|book)|Music|Web)'],
"audiences": ['Audies', 'AD(?:Audio|E(?:book|)|Music|Web)'],
"azusa": [],
"beitai": ['BeiTai'],
"btschool": ['Bts(?:CHOOL|HD|PAD|TV)', 'Zone'],
"carpt": ['CarPT'],
"chdbits": ['CHD(?:|Bits|PAD|(?:|HK)TV|WEB)', 'StBOX', 'OneHD', 'Lee', 'xiaopie'],
"chdbits": ['CHD(?:Bits|PAD|(?:|HK)TV|WEB|)', 'StBOX', 'OneHD', 'Lee', 'xiaopie'],
"discfan": [],
"dragonhd": [],
"eastgame": ['(?:(?:iNT|(?:HALFC|Mini(?:S|H|FH)D))-|)TLF'],
"filelist": [],
"gainbound": ['(?:DG|GBWE)B'],
"hares": ['Hares(?:|(?:M|T)V|Web)'],
"hares": ['Hares(?:(?:M|T)V|Web|)'],
"hd4fans": [],
"hdarea": ['HDA(?:pad|rea|TV)', 'EPiC'],
"hdatmos": [],
"hdbd": [],
"hdchina": ['HDC(?:|hina|TV)', 'k9611', 'tudou', 'iHD'],
"hdchina": ['HDC(?:hina|TV|)', 'k9611', 'tudou', 'iHD'],
"hddolby": ['D(?:ream|BTV)', '(?:HD|QHstudI)o'],
"hdfans": ['beAst(?:|TV)'],
"hdhome": ['HDH(?:|ome|Pad|TV|WEB)'],
"hdpt": ['HDPT(?:|Web)'],
"hdsky": ['HDS(?:|ky|TV|Pad|WEB)', 'AQLJ'],
"hdfans": ['beAst(?:TV|)'],
"hdhome": ['HDH(?:ome|Pad|TV|WEB|)'],
"hdpt": ['HDPT(?:Web|)'],
"hdsky": ['HDS(?:ky|TV|Pad|WEB|)', 'AQLJ'],
"hdtime": [],
"HDU": [],
"hdvideo": [],
"hdzone": ['HDZ(?:|one)'],
"hdzone": ['HDZ(?:one|)'],
"hhanclub": ['HHWEB'],
"hitpt": [],
"htpt": ['HTPT'],
@@ -48,38 +48,39 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"joyhd": [],
"keepfrds": ['FRDS', 'Yumi', 'cXcY'],
"lemonhd": ['L(?:eague(?:(?:C|H)D|(?:M|T)V|NF|WEB)|HD)', 'i18n', 'CiNT'],
"mteam": ['MTeam(?:|TV)', 'MPAD'],
"mteam": ['MTeam(?:TV|)', 'MPAD'],
"nanyangpt": [],
"nicept": [],
"oshen": [],
"ourbits": ['Our(?:Bits|TV)', 'FLTTH', 'Ao', 'PbK', 'MGs', 'iLove(?:HD|TV)'],
"piggo": ['PiGo(?:NF|(?:H|WE)B)'],
"ptchina": [],
"pterclub": ['PTer(?:|DIY|Game|(?:M|T)V|WEB)'],
"pthome": ['PTH(?:|Audio|eBook|music|ome|tv|WEB)'],
"pterclub": ['PTer(?:DIY|Game|(?:M|T)V|WEB|)'],
"pthome": ['PTH(?:Audio|eBook|music|ome|tv|WEB|)'],
"ptmsg": [],
"ptsbao": ['PTsbao', 'OPS', 'F(?:Fans(?:AIeNcE|BD|D(?:VD|IY)|TV|WEB)|HDMv)', 'SGXT'],
"pttime": [],
"putao": ['PuTao'],
"soulvoice": [],
"springsunday": ['CMCT(?:|V)'],
"sharkpt": ['Shark(?:|WEB|DIY|TV|MV)'],
"springsunday": ['CMCT(?:V|)'],
"sharkpt": ['Shark(?:WEB|DIY|TV|MV|)'],
"tccf": [],
"tjupt": ['TJUPT'],
"totheglory": ['TTG', 'WiKi', 'NGB', 'DoA', '(?:ARi|ExRE)N'],
"U2": [],
"ultrahd": [],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:|yG)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )', 'UBWEB'],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:yG|)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )',],
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', 'SweetSub', 'MingY',
'(?:Lilith|NC)-Raws', '织梦字幕组', '枫叶字幕组', '猎户手抄部', '喵萌奶茶屋', '漫猫字幕社',
'霜庭云花Sub', '北宇治字幕组', '氢气烤肉架', '云歌字幕组', '萌樱字幕组', '极影字幕社',
'悠哈璃羽字幕社',
'❀拨雪寻春❀', '沸羊羊(?:制作|字幕组)', '(?:桜|樱)都字幕组']
'❀拨雪寻春❀', '沸羊羊(?:制作|字幕组)', '(?:桜|樱)都字幕组'],
"forge": ['FROG(?:E|Web|)'],
"ubits": ['UB(?:its|WEB|TV)'],
}
def __init__(self):
self.systemconfig = SystemConfigOper()
release_groups = []
for site_groups in self.RELEASE_GROUPS.values():
for release_group in site_groups:
@@ -96,7 +97,9 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
return ""
if not groups:
# 自定义组
custom_release_groups = self.systemconfig.get(SystemConfigKey.CustomReleaseGroups)
custom_release_groups = SystemConfigOper().get(SystemConfigKey.CustomReleaseGroups)
if isinstance(custom_release_groups, list):
custom_release_groups = list(filter(None, custom_release_groups))
if custom_release_groups:
custom_release_groups_str = '|'.join(custom_release_groups)
groups = f"{self.__release_groups}|{custom_release_groups_str}"

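The `ReleaseGroupsMatcher` changes above consistently move the empty alternative from first to last, e.g. `CHD(?:|Bits|...)` becomes `CHD(?:Bits|...|)`. Python's `re` tries alternatives left to right, so an empty branch listed first wins immediately and can cut an unanchored match short:

```python
import re

# Alternatives are tried left to right: with the empty branch first,
# the engine accepts the zero-length option before ever trying "Bits".
empty_first = re.match(r"CHD(?:|Bits)", "CHDBits")
empty_last  = re.match(r"CHD(?:Bits|)", "CHDBits")

print(empty_first.group())  # "CHD"
print(empty_last.group())   # "CHDBits"
```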

@@ -0,0 +1,315 @@
from typing import Optional, List, Tuple
from app.utils.singleton import Singleton
class StreamingPlatforms(metaclass=Singleton):
"""
流媒体平台简称与全称。
"""
STREAMING_PLATFORMS: List[Tuple[str, str]] = [
("AMZN", "Amazon"),
("NF", "Netflix"),
("ATVP", "Apple TV+"),
("iT", "iTunes"),
("DSNP", "Disney+"),
("HS", "Hotstar"),
("APPS", "Disney+ MENA"),
("PMTP", "Paramount+"),
("HMAX", "Max"),
("", "Max"),
("HULU", "Hulu Networks"),
("MA", "Movies Anywhere"),
("BCORE", "Bravia Core"),
("MS", "Microsoft Store"),
("SHO", "Showtime"),
("STAN", "Stan"),
("PCOK", "Peacock"),
("SKST", "SkyShowtime"),
("NOW", "Now"),
("FXTL", "Foxtel Now"),
("BNGE", "Binge"),
("CRKL", "Crackle"),
("RKTN", "Rakuten TV"),
("ALL4", "Channel 4"),
("AS", "Adult Swim"),
("BRTB", "Brtb TV"),
("CNLP", "Canal+"),
("CRIT", "Criterion Channel"),
("DSCP", "Discovery+"),
("FOOD", "Food Network"),
("MUBI", "Mubi"),
("PLAY", "Google Play"),
("YT", "YouTube"),
("", "friDay"),
("", "KKTV"),
("", "ofiii"),
("", "LiTV"),
("", "MyVideo"),
("Hami", "Hami Video"),
("HamiVideo", "Hami Video"),
("MW", "meWATCH"),
("CATCHPLAY", "CATCHPLAY+"),
("CPP", "CATCHPLAY+"),
("LINETV", "LINE TV"),
("VIU", "Viu"),
("IQ", ""),
("", "WeTV"),
("ABMA", "Abema"),
("ADN", ""),
("AT-X", ""),
("Baha", ""),
("BG", "B-Global"),
("CR", "Crunchyroll"),
("", "DMM"),
("FOD", ""),
("FUNi", "Funimation"),
("HIDI", "HIDIVE"),
("UNXT", "U-NEXT"),
("FAA", "Filmarchiv Austria"),
("CC", "Comedy Central"),
("iP", "BBC iPlayer"),
("9NOW", "9Now"),
("ABC", ""),
("", "AMC"),
("", "ZEE5"),
("", "WAVO"),
("SHAHID", "Shahid"),
("Flixole", "FlixOlé"),
("TOU", "Ici TOU.TV"),
("ROKU", "Roku"),
("KNPY", "Kanopy"),
("SNXT", "Sun NXT"),
("CUR", "Curiosity Stream"),
("MY5", "Channel 5"),
("AHA", "aha"),
("WOWP", "WOW Presents Plus"),
("JC", "JioCinema"),
("", "Dekkoo"),
("FILMZIE", "Filmzie"),
("HoiChoi", "Hoichoi"),
("VIKI", "Rakuten Viki"),
("SF", "SF Anytime"),
("PLEX", "Plex"),
("SHDR", "Shudder"),
("CRAV", "Crave"),
("CPE", "Cineplex Entertainment"),
("JF HC", ""),
("JF", ""),
("JFFP", ""),
("VIAP", "Viaplay"),
("TUBI", "TubiTV"),
("", "PBS"),
("PBSK", "PBS KIDS"),
("LGP", "Lionsgate Play"),
("", "CTV"),
("", "Cineverse"),
("LN", "Love Nature"),
("MP", "Movistar Plus+"),
("RUNTIME", "Runtime"),
("STZ", "STARZ"),
("FUBO", "fuboTV"),
("TENK", "Tënk"),
("KNOW", "Knowledge Network"),
("TVO", "tvo"),
("", "OVID"),
("CBC", "CBC Gem"),
("FANDOR", "fandor"),
("CW", "The CW"),
("KNPY", "Kanopy"),
("FREE", "Freeform"),
("AE", "A&E"),
("LIFE", "Lifetime"),
("WWEN", "WWE Network"),
("CMAX", "Cinemax"),
("HLMK", "Hallmark"),
("BYU", "BYUtv"),
("", "ViX"),
("VICE", "Viceland"),
("", "TVING"),
("USAN", "USA Network"),
("FOX", ""),
("", "TCM"),
("BRAV", "BravoTV"),
("", "TNT"),
("", "ZDF"),
("", "IndieFlix"),
("", "TLC"),
("", "HGTV"),
("ANPL", "Animal Planet"),
("TRVL", "Travel Channel"),
("", "VH1"),
("SAINA", "Saina Play"),
("SP", "Saina Play"),
("OXGN", "Oxygen"),
("PSN", "PlayStation Network"),
("PMNT", "Paramount Network"),
("FAWESOME", "Fawesome"),
("KLASSIKI", "Klassiki"),
("STRP", "Star+"),
("NATG", "National Geographic"),
("REVEEL", "Reveel"),
("FYI", "FYI Network"),
("WatchiT", "WATCH IT"),
("ITVX", "ITV"),
("GAIA", "Gaia"),
("", "FlixLatino"),
("CNNP", "CNN+"),
("TROMA", "Troma"),
("IVI", "Ivi"),
("9NOW", "9Now"),
("A3P", "Atresplayer"),
("7PLUS", "7plus"),
("", "SBS"),
("TEN", "10Play"),
("AUBC", ""),
("DSNY", "Disney Networks"),
("OSN", "OSN+"),
("SVT", "Sveriges Television"),
("LACINETEK", "LaCinetek"),
("", "Maxdome"),
("RTL", "RTL+"),
("ARTE", "Arte"),
("JOYN", "Joyn"),
("TV2", "TV 2"),
("3SAT", "3sat"),
("FILMINGO", "filmingo"),
("", "WOW"),
("OKKO", "Okko"),
("", "Go3"),
("ARGP", "Argo"),
("VOYO", "Voyo"),
("VMAX", "vivamax"),
("FILMIN", "Filmin"),
("", "Mitele"),
("MY5", "Channel 5"),
("", "ARD"),
("BK", "Bentkey"),
("BOOM", "Boomerang"),
("", "CBS"),
("CLBI", "Club illico"),
("CMOR", "C More"),
("CMT", ""),
("", "CNBC"),
("COOK", "Cooking Channel"),
("CWS", "CW Seed"),
("DCU", "DC Universe"),
("DDY", "Digiturk Dilediğin Yerde"),
("DEST", "Destination America"),
("DISC", "Discovery Channel"),
("DW", "DailyWire+"),
("DLWP", "DailyWire+"),
("DPLY", "dplay"),
("DRPO", "Dropout"),
("EPIX", "EPIX MGM+"),
("ESQ", "Esquire"),
("ETV", "E!"),
("FBWatch", "Facebook Watch"),
("FPT", "FPT Play"),
("FTV", "France.tv"),
("GLOB", "GloboSat Play"),
("GLBO", "Globoplay"),
("GO90", "go90"),
("HIST", "History Channel"),
("HPLAY", "Hungama Play"),
("KS", "Kaleidescape"),
("", "MBC"),
("MMAX", "ManoramaMAX"),
("MNBC", "MSNBC"),
("MTOD", "Motor Trend OnDemand"),
("NBC", ""),
("NBLA", "Nebula"),
("NICK", "Nickelodeon"),
("ODK", "OnDemandKorea"),
("POGO", "PokerGO"),
("PUHU", "puhutv"),
("QIBI", "Quibi"),
("RTE", "RTÉ"),
("SESO", "Seeso"),
("SPIK", "Spike"),
("SS", "Simply South"),
("SYFY", "SyFy"),
("TIMV", "TIMvision"),
("TK", "Tentkotta"),
("", "TV4"),
("TVL", "TV Land"),
("", "TVNZ"),
("", "UKTV"),
("VLCT", "Discovery Velocity"),
("VMEO", "Vimeo"),
("VRV", "VRV Defunct"),
("WTCH", "Watcha"),
("", "NowPlayer"),
("HuluJP", "Hulu Networks"),
("Gaga", "GagaOOLala"),
("MyTVS", "MyTVSuper"),
("", "BBC"),
("CC", "Comedy Central"),
("NowE", "Now E"),
("WAVVE", "Wavve"),
("SE", ""),
("", "BritBox"),
("AOD", "Anime on Demand"),
("AF", ""),
("BCH", "Bandai Channel"),
("VMJ", "VideoMarket"),
("LFTL", "Laftel"),
("WAKA", "Wakanim"),
("WAKANIM", "Wakanim"),
("AO", "AnimeOnegai"),
("", "Lemino"),
("VIDIO", "Vidio"),
("TVER", "TVer"),
("", "MBS"),
("LFTLNET", "Laftel"),
("JONU", "Jonu Play"),
("PlutoTV", "Pluto TV"),
("AbemaTV", "Abema"),
("", "dTV"),
("NYMEY", "Nymey"),
("SMNS", "SAMANSA"),
("CTHP", "CATCHPLAY+"),
("HBOGO", "HBO GO"),
("HBO", "HBO"),
("FPTP", "FPT Play"),
("", "LOCIPO"),
("DANT", "DANET"),
("OV", "OceanVeil"),
]
def __init__(self):
"""初始化流媒体平台匹配器"""
self._lookup_cache = {}
self._build_cache()
def _build_cache(self) -> None:
"""
构建查询缓存。
"""
self._lookup_cache.clear()
for short_name, full_name in self.STREAMING_PLATFORMS:
canonical_name = full_name or short_name
if not canonical_name:
continue
aliases = {short_name, full_name}
for alias in aliases:
if alias:
self._lookup_cache[alias.upper()] = canonical_name
def get_streaming_platform_name(self, platform_code: str) -> Optional[str]:
"""
根据流媒体平台简称或全称获取标准名称。
"""
if platform_code is None:
return None
return self._lookup_cache.get(platform_code.upper())
def is_streaming_platform(self, name: str) -> bool:
"""
判断给定的字符串是否为已知的流媒体平台代码或名称。
"""
if name is None:
return False
return name.upper() in self._lookup_cache

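`_build_cache` above maps both the short code and the full name (upper-cased) to one canonical name, so lookups are case-insensitive and either alias resolves. A condensed version with just a few entries:

```python
# Condensed version of the cache built in _build_cache: both alias
# forms map to the canonical (full) name.
PLATFORMS = [("AMZN", "Amazon"), ("NF", "Netflix"), ("ATVP", "Apple TV+")]

lookup = {}
for short_name, full_name in PLATFORMS:
    canonical = full_name or short_name
    for alias in {short_name, full_name}:
        if alias:
            lookup[alias.upper()] = canonical

def get_platform(code):
    return lookup.get(code.upper()) if code else None
```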

@@ -120,41 +120,69 @@ def find_metainfo(title: str) -> Tuple[str, dict]:
return title, metainfo
# 从标题中提取媒体信息 格式为{[tmdbid=xxx;type=xxx;s=xxx;e=xxx]}
results = re.findall(r'(?<={\[)[\W\w]+(?=]})', title)
if not results:
return title, metainfo
for result in results:
# 查找tmdbid信息
tmdbid = re.findall(r'(?<=tmdbid=)\d+', result)
if tmdbid and tmdbid[0].isdigit():
metainfo['tmdbid'] = tmdbid[0]
# 查找豆瓣id信息
doubanid = re.findall(r'(?<=doubanid=)\d+', result)
if doubanid and doubanid[0].isdigit():
metainfo['doubanid'] = doubanid[0]
# 查找媒体类型
mtype = re.findall(r'(?<=type=)\w+', result)
if mtype:
if mtype[0] == "movies":
metainfo['type'] = MediaType.MOVIE
elif mtype[0] == "tv":
metainfo['type'] = MediaType.TV
# 查找季信息
begin_season = re.findall(r'(?<=s=)\d+', result)
if begin_season and begin_season[0].isdigit():
metainfo['begin_season'] = int(begin_season[0])
end_season = re.findall(r'(?<=s=\d+-)\d+', result)
if end_season and end_season[0].isdigit():
metainfo['end_season'] = int(end_season[0])
# 查找集信息
begin_episode = re.findall(r'(?<=e=)\d+', result)
if begin_episode and begin_episode[0].isdigit():
metainfo['begin_episode'] = int(begin_episode[0])
end_episode = re.findall(r'(?<=e=\d+-)\d+', result)
if end_episode and end_episode[0].isdigit():
metainfo['end_episode'] = int(end_episode[0])
# 去除title中该部分
if tmdbid or mtype or begin_season or end_season or begin_episode or end_episode:
title = title.replace(f"{{[{result}]}}", '')
if results:
for result in results:
# 查找tmdbid信息
tmdbid = re.findall(r'(?<=tmdbid=)\d+', result)
if tmdbid and tmdbid[0].isdigit():
metainfo['tmdbid'] = tmdbid[0]
# 查找豆瓣id信息
doubanid = re.findall(r'(?<=doubanid=)\d+', result)
if doubanid and doubanid[0].isdigit():
metainfo['doubanid'] = doubanid[0]
# 查找媒体类型
mtype = re.findall(r'(?<=type=)\w+', result)
if mtype:
if mtype[0] == "movies":
metainfo['type'] = MediaType.MOVIE
elif mtype[0] == "tv":
metainfo['type'] = MediaType.TV
# 查找季信息
begin_season = re.findall(r'(?<=s=)\d+', result)
if begin_season and begin_season[0].isdigit():
metainfo['begin_season'] = int(begin_season[0])
end_season = re.findall(r'(?<=s=\d+-)\d+', result)
if end_season and end_season[0].isdigit():
metainfo['end_season'] = int(end_season[0])
# 查找集信息
begin_episode = re.findall(r'(?<=e=)\d+', result)
if begin_episode and begin_episode[0].isdigit():
metainfo['begin_episode'] = int(begin_episode[0])
end_episode = re.findall(r'(?<=e=\d+-)\d+', result)
if end_episode and end_episode[0].isdigit():
metainfo['end_episode'] = int(end_episode[0])
# 去除title中该部分
if tmdbid or mtype or begin_season or end_season or begin_episode or end_episode:
title = title.replace(f"{{[{result}]}}", '')
# 支持Emby格式的ID标签
# 1. [tmdbid=xxxx] 或 [tmdbid-xxxx] 格式
tmdb_match = re.search(r'\[tmdbid[=\-](\d+)\]', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\[tmdbid[=\-](\d+)\]', '', title).strip()
# 2. [tmdb=xxxx] 或 [tmdb-xxxx] 格式
if not metainfo['tmdbid']:
tmdb_match = re.search(r'\[tmdb[=\-](\d+)\]', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\[tmdb[=\-](\d+)\]', '', title).strip()
# 3. {tmdbid=xxxx} 或 {tmdbid-xxxx} 格式
if not metainfo['tmdbid']:
tmdb_match = re.search(r'\{tmdbid[=\-](\d+)\}', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\{tmdbid[=\-](\d+)\}', '', title).strip()
# 4. {tmdb=xxxx} 或 {tmdb-xxxx} 格式
if not metainfo['tmdbid']:
tmdb_match = re.search(r'\{tmdb[=\-](\d+)\}', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\{tmdb[=\-](\d+)\}', '', title).strip()
# 计算季集总数
if metainfo.get('begin_season') and metainfo.get('end_season'):
if metainfo['begin_season'] > metainfo['end_season']:

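The new branches above accept Emby-style ID tags in four spellings (`[tmdbid=...]`, `[tmdb-...]`, and the `{...}` variants). A compact sketch folding them into one pass; the single combined regex is this sketch's own simplification, the original checks each form separately:

```python
import re

# One combined pattern for the four spellings handled above:
# [tmdbid=123], [tmdbid-123], [tmdb=123], [tmdb-123] and the {...} forms.
_TMDB_TAG = re.compile(r"[\[{]tmdb(?:id)?[=\-](\d+)[]}]")

def extract_tmdbid(title):
    """Return (title without the tag, tmdbid or None)."""
    m = _TMDB_TAG.search(title)
    if not m:
        return title, None
    return _TMDB_TAG.sub("", title).strip(), m.group(1)
```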

@@ -1,5 +1,5 @@
import traceback
from typing import Generator, Optional, Tuple, Any, Union
from typing import Generator, Optional, Tuple, Any, Union, List
from app.core.config import settings
from app.core.event import eventmanager
@@ -164,3 +164,9 @@ class ModuleManager(metaclass=Singleton):
获取模块列表
"""
return self._modules
def get_module_ids(self) -> List[str]:
"""
获取模块id列表
"""
return list(self._modules.keys())

File diff suppressed because it is too large


@@ -24,9 +24,9 @@ db_kwargs = {
# 当使用 QueuePool 时,添加 QueuePool 特有的参数
if pool_class == QueuePool:
db_kwargs.update({
"pool_size": settings.DB_POOL_SIZE,
"pool_size": settings.CONF.dbpool,
"pool_timeout": settings.DB_POOL_TIMEOUT,
"max_overflow": settings.DB_MAX_OVERFLOW
"max_overflow": settings.CONF.dbpooloverflow
})
# 创建数据库引擎
Engine = create_engine(**db_kwargs)
@@ -221,8 +221,7 @@ class Base:
@classmethod
@db_query
def list(cls, db: Session) -> List[Self]:
result = db.query(cls).all()
return list(result)
return db.query(cls).all()
def to_dict(self):
return {c.name: getattr(self, c.name, None) for c in self.__table__.columns} # noqa
@@ -236,7 +235,6 @@ class DbOper:
"""
数据库操作基类
"""
_db: Session = None
def __init__(self, db: Session = None):
self._db = db

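The QueuePool settings above cap total connections at `pool_size` plus `max_overflow`: up to `pool_size` connections are kept idle for reuse, and at most `max_overflow` extras may exist on top. A toy stdlib-only model of that sizing rule (SQLAlchemy's real pool blocks up to `pool_timeout` instead of raising):

```python
import queue

# Toy model of QueuePool sizing: pool_size idle connections are reused,
# max_overflow extras may be opened on top, and beyond that acquisition
# fails (the real pool would block up to pool_timeout).
class ToyPool:
    def __init__(self, pool_size, max_overflow):
        self._idle = queue.Queue()
        self._capacity = pool_size + max_overflow
        self._checked_out = 0

    def acquire(self):
        if self._checked_out >= self._capacity:
            raise RuntimeError("pool exhausted")
        self._checked_out += 1
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            return object()                  # "open" a new connection

    def release(self, conn):
        self._checked_out -= 1
        self._idle.put(conn)
```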

@@ -113,6 +113,7 @@ class DownloadHistoryOper(DbOper):
season: Optional[str] = None, episode: Optional[str] = None, tmdbid=None) -> List[DownloadHistory]:
"""
按类型、标题、年份、季集查询下载记录
tmdbid + mtype 或 title + year
"""
return DownloadHistory.get_last_by(db=self._db,
mtype=mtype,


@@ -9,3 +9,4 @@ from .transferhistory import TransferHistory
from .user import User
from .userconfig import UserConfig
from .workflow import Workflow
from .userrequest import UserRequest


@@ -65,14 +65,16 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def get_by_mediaid(db: Session, tmdbid: int, doubanid: str):
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.doubanid == doubanid).all()
if tmdbid:
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid).all()
elif doubanid:
return db.query(DownloadHistory).filter(DownloadHistory.doubanid == doubanid).all()
return []
@staticmethod
@db_query
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(DownloadHistory).offset((page - 1) * count).limit(count).all()
return list(result)
return db.query(DownloadHistory).offset((page - 1) * count).limit(count).all()
@staticmethod
@db_query
@@ -81,49 +83,55 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def get_last_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None,
year: Optional[str] = None, season: Optional[str] = None,
episode: Optional[str] = None, tmdbid: Optional[int] = None):
"""
根据tmdbid、season、season_episode查询转移记录
根据tmdbid、season、season_episode查询下载记录
tmdbid + mtype 或 title + year
"""
result = None
if tmdbid and not season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid).order_by(
DownloadHistory.id.desc()).all()
if tmdbid and season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
if tmdbid and season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧所有季集|电影
if not season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
if season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季某集
if season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# TMDBID + 类型
if tmdbid and mtype:
# 电视剧某季某集
if season and episode:
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
elif season:
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
else:
# 电视剧所有季集/电影
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype).order_by(
DownloadHistory.id.desc()).all()
# 标题 + 年份
elif title and year:
# 电视剧某季某集
if season and episode:
return db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
elif season:
return db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
else:
# 电视剧所有季集/电影
return db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
DownloadHistory.id.desc()).all()
if result:
return list(result)
return []
@staticmethod
@db_query
@@ -132,13 +140,12 @@ class DownloadHistory(Base):
查询某用户某时间之后的下载历史
"""
if username:
result = db.query(DownloadHistory).filter(DownloadHistory.date < date,
DownloadHistory.username == username).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.date < date,
DownloadHistory.username == username).order_by(
DownloadHistory.id.desc()).all()
else:
result = db.query(DownloadHistory).filter(DownloadHistory.date < date).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.date < date).order_by(
DownloadHistory.id.desc()).all()
return list(result)
@staticmethod
@db_query
@@ -161,12 +168,11 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(DownloadHistory) \
return db.query(DownloadHistory) \
.filter(DownloadHistory.type == mtype,
DownloadHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
class DownloadFiles(Base):
@@ -193,12 +199,10 @@ class DownloadFiles(Base):
@db_query
def get_by_hash(db: Session, download_hash: str, state: Optional[int] = None):
if state:
result = db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()
else:
result = db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash).all()
return list(result)
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash).all()
@staticmethod
@db_query
@@ -213,8 +217,7 @@ class DownloadFiles(Base):
@staticmethod
@db_query
def get_by_savepath(db: Session, savepath: str):
result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
return list(result)
return db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
@staticmethod
@db_update

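The rewritten `get_last_by` above picks one of two lookup keys — `tmdbid` + `mtype` preferred, `title` + `year` as the fallback — then narrows by season and episode. A pure-Python rendering of that branch order over plain dicts standing in for ORM rows (the record shape is hypothetical):

```python
# tmdbid + mtype is preferred, title + year is the fallback, and
# season / episode narrow the result; newest id comes first.
def last_downloads(records, mtype=None, title=None, year=None,
                   season=None, episode=None, tmdbid=None):
    def matches(r):
        if tmdbid and mtype:
            key_ok = r["tmdbid"] == tmdbid and r["type"] == mtype
        elif title and year:
            key_ok = r["title"] == title and r["year"] == year
        else:
            return False
        if season and r["seasons"] != season:
            return False
        if episode and r["episodes"] != episode:
            return False
        return key_ok
    return sorted((r for r in records if matches(r)),
                  key=lambda r: r["id"], reverse=True)
```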

@@ -37,7 +37,4 @@ class Message(Base):
@staticmethod
@db_query
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
count).all()
result.sort(key=lambda x: x.reg_time, reverse=False)
return list(result)
return db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(count).all()

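The simplified `list_by_page` above relies on `ORDER BY ... DESC` with `LIMIT`/`OFFSET`, which already returns rows newest-first, so the removed Python-side re-sort was redundant. The same pattern on a throwaway `sqlite3` table:

```python
import sqlite3

# DESC + LIMIT/OFFSET pagination: rows come back newest-first,
# one page at a time, with no re-sorting needed in Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, reg_time TEXT)")
conn.executemany("INSERT INTO message (reg_time) VALUES (?)",
                 [(f"2025-07-0{i}",) for i in range(1, 8)])

def list_by_page(page=1, count=3):
    return conn.execute(
        "SELECT reg_time FROM message ORDER BY reg_time DESC LIMIT ? OFFSET ?",
        (count, (page - 1) * count),
    ).fetchall()

page1 = list_by_page(page=1)  # the three newest rows
page2 = list_by_page(page=2)
```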

@@ -16,8 +16,7 @@ class PluginData(Base):
@staticmethod
@db_query
def get_plugin_data(db: Session, plugin_id: str):
result = db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()
return list(result)
return db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()
@staticmethod
@db_query
@@ -37,5 +36,4 @@ class PluginData(Base):
@staticmethod
@db_query
def get_plugin_data_by_plugin_id(db: Session, plugin_id: str):
result = db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()
return list(result)
return db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()


@@ -62,20 +62,17 @@ class Site(Base):
@staticmethod
@db_query
def get_actives(db: Session):
result = db.query(Site).filter(Site.is_active == 1).all()
return list(result)
return db.query(Site).filter(Site.is_active == 1).all()
@staticmethod
@db_query
def list_order_by_pri(db: Session):
result = db.query(Site).order_by(Site.pri).all()
return list(result)
return db.query(Site).order_by(Site.pri).all()
@staticmethod
@db_query
def get_domains_by_ids(db: Session, ids: list):
result = db.query(Site.domain).filter(Site.id.in_(ids)).all()
return [r[0] for r in result]
return [r[0] for r in db.query(Site.domain).filter(Site.id.in_(ids)).all()]
@staticmethod
@db_update


@@ -104,12 +104,10 @@ class Subscribe(Base):
def get_by_state(db: Session, state: str):
# 如果 state 为空或 None,返回所有订阅
if not state:
result = db.query(Subscribe).all()
return db.query(Subscribe).all()
else:
# 如果传入的状态不为空,拆分成多个状态
states = state.split(',')
result = db.query(Subscribe).filter(Subscribe.state.in_(states)).all()
return list(result)
return db.query(Subscribe).filter(Subscribe.state.in_(state.split(','))).all()
@staticmethod
@db_query
@@ -123,11 +121,10 @@ class Subscribe(Base):
@db_query
def get_by_tmdbid(db: Session, tmdbid: int, season: Optional[int] = None):
if season:
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid,
Subscribe.season == season).all()
return db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid,
Subscribe.season == season).all()
else:
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid).all()
return list(result)
return db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid).all()
@staticmethod
@db_query
@@ -170,26 +167,24 @@ class Subscribe(Base):
def list_by_username(db: Session, username: str, state: Optional[str] = None, mtype: Optional[str] = None):
if mtype:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username,
Subscribe.type == mtype).all()
return db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username,
Subscribe.type == mtype).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username,
Subscribe.type == mtype).all()
return db.query(Subscribe).filter(Subscribe.username == username,
Subscribe.type == mtype).all()
else:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username).all()
return db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username).all()
return list(result)
return db.query(Subscribe).filter(Subscribe.username == username).all()
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(Subscribe) \
return db.query(Subscribe) \
.filter(Subscribe.type == mtype,
Subscribe.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)


@@ -75,12 +75,11 @@ class SubscribeHistory(Base):
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, page: Optional[int] = 1, count: Optional[int] = 30):
-        result = db.query(SubscribeHistory).filter(
+        return db.query(SubscribeHistory).filter(
             SubscribeHistory.type == mtype
         ).order_by(
             SubscribeHistory.date.desc()
         ).offset((page - 1) * count).limit(count).all()
-        return list(result)
@staticmethod
@db_query

View File

@@ -63,35 +63,33 @@ class TransferHistory(Base):
@db_query
def list_by_title(db: Session, title: str, page: Optional[int] = 1, count: Optional[int] = 30, status: bool = None):
if status is not None:
-            result = db.query(TransferHistory).filter(
+            return db.query(TransferHistory).filter(
                 TransferHistory.status == status
             ).order_by(
                 TransferHistory.date.desc()
             ).offset((page - 1) * count).limit(count).all()
         else:
-            result = db.query(TransferHistory).filter(or_(
+            return db.query(TransferHistory).filter(or_(
                 TransferHistory.title.like(f'%{title}%'),
                 TransferHistory.src.like(f'%{title}%'),
                 TransferHistory.dest.like(f'%{title}%'),
             )).order_by(
                 TransferHistory.date.desc()
             ).offset((page - 1) * count).limit(count).all()
-        return list(result)

     @staticmethod
     @db_query
     def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30, status: bool = None):
         if status is not None:
-            result = db.query(TransferHistory).filter(
+            return db.query(TransferHistory).filter(
                 TransferHistory.status == status
             ).order_by(
                 TransferHistory.date.desc()
             ).offset((page - 1) * count).limit(count).all()
         else:
-            result = db.query(TransferHistory).order_by(
+            return db.query(TransferHistory).order_by(
                 TransferHistory.date.desc()
             ).offset((page - 1) * count).limit(count).all()
-        return list(result)
@staticmethod
@db_query
@@ -115,8 +113,7 @@ class TransferHistory(Base):
@staticmethod
@db_query
def list_by_hash(db: Session, download_hash: str):
-        result = db.query(TransferHistory).filter(TransferHistory.download_hash == download_hash).all()
-        return list(result)
+        return db.query(TransferHistory).filter(TransferHistory.download_hash == download_hash).all()
@staticmethod
@db_query
@@ -128,8 +125,7 @@ class TransferHistory(Base):
TransferHistory.id.label('id')).filter(
TransferHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * days))).subquery()
-        result = db.query(sub_query.c.date, func.count(sub_query.c.id)).group_by(sub_query.c.date).all()
-        return list(result)
+        return db.query(sub_query.c.date, func.count(sub_query.c.id)).group_by(sub_query.c.date).all()
@staticmethod
@db_query
@@ -153,70 +149,67 @@ class TransferHistory(Base):
@staticmethod
@db_query
-    def list_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None, year: Optional[str] = None, season: Optional[str] = None,
+    def list_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None, year: Optional[str] = None,
+                season: Optional[str] = None,
                 episode: Optional[str] = None, tmdbid: Optional[int] = None, dest: Optional[str] = None):
         """
         Query transfer records by tmdbid, season and season_episode.
         Either tmdbid + mtype or title + year is required.
         """
-        result = None
         # TMDBID + type
         if tmdbid and mtype:
             # Specific season and episode of a TV show
             if season and episode:
-                result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
-                                                          TransferHistory.type == mtype,
-                                                          TransferHistory.seasons == season,
-                                                          TransferHistory.episodes == episode,
-                                                          TransferHistory.dest == dest).all()
+                return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
+                                                        TransferHistory.type == mtype,
+                                                        TransferHistory.seasons == season,
+                                                        TransferHistory.episodes == episode,
+                                                        TransferHistory.dest == dest).all()
             # Specific season of a TV show
             elif season:
-                result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
-                                                          TransferHistory.type == mtype,
-                                                          TransferHistory.seasons == season).all()
+                return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
+                                                        TransferHistory.type == mtype,
+                                                        TransferHistory.seasons == season).all()
             else:
                 if dest:
                     # Movie
-                    result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
-                                                              TransferHistory.type == mtype,
-                                                              TransferHistory.dest == dest).all()
+                    return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
+                                                            TransferHistory.type == mtype,
+                                                            TransferHistory.dest == dest).all()
                 else:
                     # All seasons and episodes of a TV show
-                    result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
-                                                              TransferHistory.type == mtype).all()
+                    return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
+                                                            TransferHistory.type == mtype).all()
         # Title + year
         elif title and year:
             # Specific season and episode of a TV show
             if season and episode:
-                result = db.query(TransferHistory).filter(TransferHistory.title == title,
-                                                          TransferHistory.year == year,
-                                                          TransferHistory.seasons == season,
-                                                          TransferHistory.episodes == episode,
-                                                          TransferHistory.dest == dest).all()
+                return db.query(TransferHistory).filter(TransferHistory.title == title,
+                                                        TransferHistory.year == year,
+                                                        TransferHistory.seasons == season,
+                                                        TransferHistory.episodes == episode,
+                                                        TransferHistory.dest == dest).all()
             # Specific season of a TV show
             elif season:
-                result = db.query(TransferHistory).filter(TransferHistory.title == title,
-                                                          TransferHistory.year == year,
-                                                          TransferHistory.seasons == season).all()
+                return db.query(TransferHistory).filter(TransferHistory.title == title,
+                                                        TransferHistory.year == year,
+                                                        TransferHistory.seasons == season).all()
             else:
                 if dest:
                     # Movie
-                    result = db.query(TransferHistory).filter(TransferHistory.title == title,
-                                                              TransferHistory.year == year,
-                                                              TransferHistory.dest == dest).all()
+                    return db.query(TransferHistory).filter(TransferHistory.title == title,
+                                                            TransferHistory.year == year,
+                                                            TransferHistory.dest == dest).all()
                 else:
                     # All seasons and episodes of a TV show
-                    result = db.query(TransferHistory).filter(TransferHistory.title == title,
-                                                              TransferHistory.year == year).all()
+                    return db.query(TransferHistory).filter(TransferHistory.title == title,
+                                                            TransferHistory.year == year).all()
         # Type + transfer path (Emby webhook season without a tmdbid)
         elif mtype and season and dest:
             # Specific season of a TV show
-            result = db.query(TransferHistory).filter(TransferHistory.type == mtype,
-                                                      TransferHistory.seasons == season,
-                                                      TransferHistory.dest.like(f"{dest}%")).all()
-            if result:
-                return list(result)
+            return db.query(TransferHistory).filter(TransferHistory.type == mtype,
+                                                    TransferHistory.seasons == season,
+                                                    TransferHistory.dest.like(f"{dest}%")).all()
         return []
@staticmethod

View File

@@ -1,4 +1,5 @@
-from typing import Any, Union
+import copy
+from typing import Any, Optional, Union
from app.db import DbOper
from app.db.models.systemconfig import SystemConfig
@@ -7,34 +8,44 @@ from app.utils.singleton import Singleton
class SystemConfigOper(DbOper, metaclass=Singleton):
-    # Configuration store
-    __SYSTEMCONF: dict = {}
+    """
+    System configuration management
+    """

     def __init__(self):
+        """
+        Load the configuration into memory
+        """
         super().__init__()
+        self.__SYSTEMCONF = {}
         for item in SystemConfig.list(self._db):
             self.__SYSTEMCONF[item.key] = item.value

-    def set(self, key: Union[str, SystemConfigKey], value: Any):
+    def set(self, key: Union[str, SystemConfigKey], value: Any) -> Optional[bool]:
         """
         Set a system configuration value
+        :param key: config key
+        :param value: config value
+        :return: whether it succeeded (True success / False failure / None nothing to update)
         """
         if isinstance(key, SystemConfigKey):
             key = key.value
-        # Update the in-memory copy
-        self.__SYSTEMCONF[key] = value
+        # Previous value
+        old_value = self.__SYSTEMCONF.get(key)
+        # Update the in-memory copy (deepcopy to avoid shared references)
+        self.__SYSTEMCONF[key] = copy.deepcopy(value)
         conf = SystemConfig.get_by_key(self._db, key)
         if conf:
-            if value:
-                conf.update(self._db, {"value": value})
-            else:
-                conf.delete(self._db, conf.id)
+            if old_value != value:
+                if value:
+                    conf.update(self._db, {"value": value})
+                else:
+                    conf.delete(self._db, conf.id)
+                return True
+            return None
         else:
             conf = SystemConfig(key=key, value=value)
             conf.create(self._db)
+            return True
def get(self, key: Union[str, SystemConfigKey] = None) -> Any:
"""
@@ -43,16 +54,18 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
if isinstance(key, SystemConfigKey):
key = key.value
if not key:
-            return self.__SYSTEMCONF
-        return self.__SYSTEMCONF.get(key)
+            return self.all()
+        # Deep-copy so callers cannot mutate values inside __SYSTEMCONF, which would make set() misjudge changes
+        return copy.deepcopy(self.__SYSTEMCONF.get(key))
def all(self):
"""
获取所有系统设置
"""
-        return self.__SYSTEMCONF or {}
+        # Deep-copy so callers cannot mutate values inside __SYSTEMCONF, which would make set() misjudge changes
+        return copy.deepcopy(self.__SYSTEMCONF)
-    def delete(self, key: Union[str, SystemConfigKey]):
+    def delete(self, key: Union[str, SystemConfigKey]) -> bool:
"""
删除系统设置
"""
@@ -65,7 +78,3 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
if conf:
conf.delete(self._db, conf.id)
return True
-    def __del__(self):
-        if self._db:
-            self._db.close()
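The deepcopy changes above protect the change detection in `set()`: if `get()` hands out a reference into `__SYSTEMCONF`, a caller can mutate the cached value in place, and a later `set()` then compares against the already-mutated object and wrongly skips the database write. A self-contained sketch of the failure mode and the fix, using a simplified stand-in for `SystemConfigOper`:

```python
import copy

class ConfigCache:
    """Minimal sketch of the deepcopy fix: copy on both get() and set()
    so the cache stays authoritative and old/new comparison works."""

    def __init__(self):
        self._conf = {"Directories": ["/media"]}
        self.writes = 0  # stands in for database updates

    def get(self, key):
        # hand out a copy, never a reference into the cache
        return copy.deepcopy(self._conf.get(key))

    def set(self, key, value):
        old = self._conf.get(key)
        self._conf[key] = copy.deepcopy(value)
        if old != value:
            self.writes += 1  # would hit the database here
            return True
        return None           # no change, skip the write


cache = ConfigCache()
dirs = cache.get("Directories")
dirs.append("/downloads")                       # mutates only the caller's copy
assert cache.set("Directories", dirs) is True   # change is detected and persisted
assert cache.set("Directories", dirs) is None   # identical value: no second write
```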

View File

@@ -7,14 +7,15 @@ from app.utils.singleton import Singleton
class UserConfigOper(DbOper, metaclass=Singleton):
-    # Configuration cache
-    __USERCONF: Dict[str, Dict[str, Any]] = {}
+    """
+    User configuration management
+    """

     def __init__(self):
+        """
+        Load the configuration into memory
+        """
         super().__init__()
+        self.__USERCONF = {}
for item in UserConfig.list(self._db):
self.__set_config_cache(username=item.username, key=item.key, value=item.value)
@@ -49,10 +50,6 @@ class UserConfigOper(DbOper, metaclass=Singleton):
return self.__get_config_caches(username=username)
return self.__get_config_cache(username=username, key=key)
-    def __del__(self):
-        if self._db:
-            self._db.close()
def __set_config_cache(self, username: str, key: str, value: Any):
"""
设置配置缓存

View File

@@ -1,2 +1 @@
-from .doh import doh_query_json
from .cloudflare import under_challenge

View File

@@ -1,7 +1,8 @@
from typing import Callable, Any, Optional
-from playwright.sync_api import sync_playwright, Page
 from cf_clearance import sync_cf_retry, sync_stealth
+from playwright.sync_api import sync_playwright, Page
from app.log import logger
@@ -35,26 +36,41 @@ class PlaywrightHelper:
:param headless: 是否无头模式
:param timeout: 超时时间
"""
+        result = None
         try:
             with sync_playwright() as playwright:
-                browser = playwright[self.browser_type].launch(headless=headless)
-                context = browser.new_context(user_agent=ua, proxy=proxies)
-                page = context.new_page()
-                if cookies:
-                    page.set_extra_http_headers({"cookie": cookies})
+                browser = None
+                context = None
+                page = None
+                try:
+                    browser = playwright[self.browser_type].launch(headless=headless)
+                    context = browser.new_context(user_agent=ua, proxy=proxies)
+                    page = context.new_page()
+                    if cookies:
+                        page.set_extra_http_headers({"cookie": cookies})
                     if not self.__pass_cloudflare(url, page):
                         logger.warn("cloudflare challenge fail")
                     page.wait_for_load_state("networkidle", timeout=timeout * 1000)
                     # Callback
-                return callback(page)
+                    result = callback(page)
+                except Exception as e:
+                    logger.error(f"网页操作失败: {str(e)}")
                 finally:
-                    browser.close()
+                    # Make sure resources are cleaned up properly
+                    if page:
+                        page.close()
+                    if context:
+                        context.close()
+                    if browser:
+                        browser.close()
         except Exception as e:
-            logger.error(f"网页操作失败: {str(e)}")
-            return None
+            logger.error(f"Playwright初始化失败: {str(e)}")
+        return result
def get_page_source(self, url: str,
cookies: Optional[str] = None,
@@ -71,26 +87,40 @@ class PlaywrightHelper:
:param headless: 是否无头模式
:param timeout: 超时时间
"""
-        source = ""
+        source = None
         try:
             with sync_playwright() as playwright:
-                browser = playwright[self.browser_type].launch(headless=headless)
-                context = browser.new_context(user_agent=ua, proxy=proxies)
-                page = context.new_page()
-                if cookies:
-                    page.set_extra_http_headers({"cookie": cookies})
+                browser = None
+                context = None
+                page = None
+                try:
+                    browser = playwright[self.browser_type].launch(headless=headless)
+                    context = browser.new_context(user_agent=ua, proxy=proxies)
+                    page = context.new_page()
+                    if cookies:
+                        page.set_extra_http_headers({"cookie": cookies})
                     if not self.__pass_cloudflare(url, page):
                         logger.warn("cloudflare challenge fail")
                     page.wait_for_load_state("networkidle", timeout=timeout * 1000)
                     source = page.content()
                 except Exception as e:
                     logger.error(f"获取网页源码失败: {str(e)}")
-                    source = None
                 finally:
-                    browser.close()
+                    # Make sure resources are cleaned up properly
+                    if page:
+                        page.close()
+                    if context:
+                        context.close()
+                    if browser:
+                        browser.close()
         except Exception as e:
-            logger.error(f"获取网页源码失败: {str(e)}")
+            logger.error(f"Playwright初始化失败: {str(e)}")
         return source
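The restructured cleanup in these two hunks initializes `browser`/`context`/`page` to `None` before the inner `try`, so the `finally` block can close only the handles that were actually created; the old `finally: browser.close()` raised a secondary error whenever the launch itself failed. A Playwright-free sketch of the same shape, with dict-based stand-ins for the real handles:

```python
def run_with_resources(make_browser):
    """Sketch of the new cleanup shape: None-initialized handles, an inner
    try/except for the page work, finally-guarded close calls, and an outer
    except reserved for initialization failures."""
    result = None
    browser = None
    page = None
    try:
        try:
            browser = make_browser()
            page = browser["page"]
            result = page["content"]
        except Exception as e:
            # page-level failure: log it and fall through to cleanup
            print(f"operation failed: {e}")
        finally:
            # close in reverse order of creation, guarding each handle
            if page:
                page["closed"] = True
            if browser:
                browser["closed"] = True
    except Exception as e:
        print(f"init failed: {e}")
    return result


def broken():
    raise RuntimeError("launch failed")


# The old shape would have crashed here calling browser.close() on None.
assert run_with_resources(broken) is None
```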

View File

@@ -14,7 +14,6 @@ class CookieCloudHelper:
def __init__(self):
self.__sync_setting()
-        self._req = RequestUtils(content_type="application/json")
def __sync_setting(self):
"""
@@ -46,7 +45,7 @@ class CookieCloudHelper:
return {}, "未从本地CookieCloud服务加载到cookie数据请检查服务器设置、用户KEY及加密密码是否正确"
else:
req_url = UrlUtils.combine_url(host=self._server, path=f"get/{self._key}")
-            ret = self._req.get_res(url=req_url)
+            ret = RequestUtils(content_type="application/json").get_res(url=req_url)
if ret and ret.status_code == 200:
try:
result = ret.json()

View File

@@ -13,14 +13,12 @@ class DirectoryHelper:
下载目录/媒体库目录帮助类
"""
-    def __init__(self):
-        self.systemconfig = SystemConfigOper()
-
-    def get_dirs(self) -> List[schemas.TransferDirectoryConf]:
+    @staticmethod
+    def get_dirs() -> List[schemas.TransferDirectoryConf]:
"""
获取所有下载目录
"""
-        dir_confs: List[dict] = self.systemconfig.get(SystemConfigKey.Directories)
+        dir_confs: List[dict] = SystemConfigOper().get(SystemConfigKey.Directories)
if not dir_confs:
return []
return [schemas.TransferDirectoryConf(**d) for d in dir_confs]

View File

@@ -10,10 +10,15 @@ import socket
import struct
import urllib
import urllib.request
+from threading import Lock
 from typing import Dict, Optional

 from app.core.config import settings
+from app.core.event import Event, eventmanager
 from app.log import logger
+from app.schemas import ConfigChangeEventData
+from app.schemas.types import EventType
+from app.utils.singleton import Singleton
# 定义一个全局线程池执行器
_executor = concurrent.futures.ThreadPoolExecutor()
@@ -21,41 +26,64 @@ _executor = concurrent.futures.ThreadPoolExecutor()
# 定义默认的DoH配置
_doh_timeout = 5
_doh_cache: Dict[str, str] = {}
+_doh_lock = Lock()
# 保存原始的 socket.getaddrinfo 方法
_orig_getaddrinfo = socket.getaddrinfo
-def _patched_getaddrinfo(host, *args, **kwargs):
-    """
-    Patched version of socket.getaddrinfo.
-    """
-    if host not in settings.DOH_DOMAINS.split(","):
-        return _orig_getaddrinfo(host, *args, **kwargs)
-    # Check whether the host has already been resolved
-    if host in _doh_cache:
-        ip = _doh_cache[host]
-        logger.info("已解析 [%s] 为 [%s] (缓存)", host, ip)
-        return _orig_getaddrinfo(ip, *args, **kwargs)
-    # Resolve the host via DoH
-    futures = []
-    for resolver in settings.DOH_RESOLVERS.split(","):
-        futures.append(_executor.submit(_doh_query, resolver, host))
-    for future in concurrent.futures.as_completed(futures):
-        ip = future.result()
-        if ip is not None:
-            logger.info("已解析 [%s] 为 [%s]", host, ip)
-            _doh_cache[host] = ip
-            host = ip
-            break
-    return _orig_getaddrinfo(host, *args, **kwargs)
+def enable_doh(enable: bool):
+    """
+    Patch or restore socket.getaddrinfo
+    """
+    def _patched_getaddrinfo(host, *args, **kwargs):
+        """
+        Patched version of socket.getaddrinfo.
+        """
+        if host not in settings.DOH_DOMAINS.split(","):
+            return _orig_getaddrinfo(host, *args, **kwargs)
+        # Check whether the host has already been resolved
+        with _doh_lock:
+            ip = _doh_cache.get("host", None)
+        if ip is not None:
+            logger.info("已解析 [%s] 为 [%s] (缓存)", host, ip)
+            return _orig_getaddrinfo(ip, *args, **kwargs)
+        # Resolve the host via DoH
+        futures = []
+        for resolver in settings.DOH_RESOLVERS.split(","):
+            futures.append(_executor.submit(_doh_query, resolver, host))
+        for future in concurrent.futures.as_completed(futures):
+            ip = future.result()
+            if ip is not None:
+                logger.info("已解析 [%s] 为 [%s]", host, ip)
+                with _doh_lock:
+                    _doh_cache[host] = ip
+                host = ip
+                break
+        return _orig_getaddrinfo(host, *args, **kwargs)
+
+    if enable:
+        # Replace socket.getaddrinfo
+        socket.getaddrinfo = _patched_getaddrinfo
+    else:
+        socket.getaddrinfo = _orig_getaddrinfo


-# Patch socket.getaddrinfo
-if settings.DOH_ENABLE:
-    _orig_getaddrinfo = socket.getaddrinfo
-    socket.getaddrinfo = _patched_getaddrinfo
+class DohHelper(metaclass=Singleton):
+
+    def __init__(self):
+        enable_doh(settings.DOH_ENABLE)
+
+    @eventmanager.register(EventType.ConfigChanged)
+    def handle_config_changed(self, event: Event):
+        if not event:
+            return
+        event_data: ConfigChangeEventData = event.event_data
+        if event_data.key not in ["DOH_ENABLE", "DOH_DOMAINS", "DOH_RESOLVERS"]:
+            return
+        with _doh_lock:
+            # Clear the cache whenever the DoH configuration changes
+            _doh_cache.clear()
+        enable_doh(settings.DOH_ENABLE)
def _doh_query(resolver: str, host: str) -> Optional[str]:
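The refactor above moves the monkey-patch into `enable_doh()` so the new `DohHelper` singleton can apply or undo it whenever the configuration changes, and guards the shared cache with a lock. A condensed, self-contained sketch of the pattern; `DOH_DOMAINS` and the placeholder IP stand in for the real settings and DoH lookup:

```python
import socket
import threading

# Stand-ins for settings.DOH_DOMAINS and the real DoH query
DOH_DOMAINS = {"api.themoviedb.org"}

_orig_getaddrinfo = socket.getaddrinfo   # save the original once, at import time
_doh_cache = {}
_doh_lock = threading.Lock()


def _patched(host, *args, **kwargs):
    if host not in DOH_DOMAINS:
        return _orig_getaddrinfo(host, *args, **kwargs)
    with _doh_lock:
        ip = _doh_cache.get(host)
    if ip is None:
        ip = "1.2.3.4"  # placeholder for the DoH lookup
        with _doh_lock:
            _doh_cache[host] = ip
    return _orig_getaddrinfo(ip, *args, **kwargs)


def enable_doh(enable: bool) -> None:
    # swap the patched resolver in, or restore the saved original
    socket.getaddrinfo = _patched if enable else _orig_getaddrinfo


enable_doh(True)
assert socket.getaddrinfo is _patched
enable_doh(False)
assert socket.getaddrinfo is _orig_getaddrinfo
```

Keeping `_orig_getaddrinfo` at module level is what makes the toggle reversible: every call to `enable_doh(False)` restores the same saved function instead of stacking wrappers.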

app/helper/memory.py (new file, 457 lines)
View File

@@ -0,0 +1,457 @@
import gc
import sys
import threading
import time
from datetime import datetime
from typing import Optional
import psutil
from pympler import muppy, summary, asizeof
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.log import logger
from app.schemas import ConfigChangeEventData
from app.schemas.types import EventType
from app.utils.singleton import Singleton
class MemoryHelper(metaclass=Singleton):
"""
内存管理工具类,用于监控和优化内存使用
"""
def __init__(self):
# 检查间隔(秒) - 从配置获取默认5分钟
self._check_interval = settings.MEMORY_SNAPSHOT_INTERVAL * 60
self._monitoring = False
self._monitor_thread: Optional[threading.Thread] = None
# 内存快照保存目录
self._memory_snapshot_dir = settings.LOG_PATH / "memory_snapshots"
# 保留的快照文件数量
self._keep_count = settings.MEMORY_SNAPSHOT_KEEP_COUNT
@eventmanager.register(EventType.ConfigChanged)
def handle_config_changed(self, event: Event):
"""
处理配置变更事件,更新内存监控设置
:param event: 事件对象
"""
if not event:
return
event_data: ConfigChangeEventData = event.event_data
if event_data.key not in ['MEMORY_ANALYSIS', 'MEMORY_SNAPSHOT_INTERVAL', 'MEMORY_SNAPSHOT_KEEP_COUNT']:
return
# 更新配置
if event_data.key == 'MEMORY_SNAPSHOT_INTERVAL':
self._check_interval = settings.MEMORY_SNAPSHOT_INTERVAL * 60
elif event_data.key == 'MEMORY_SNAPSHOT_KEEP_COUNT':
self._keep_count = settings.MEMORY_SNAPSHOT_KEEP_COUNT
self.stop_monitoring()
self.start_monitoring()
def start_monitoring(self):
"""
开始内存监控
"""
if not settings.MEMORY_ANALYSIS:
return
if self._monitoring:
return
# 创建内存快照目录
self._memory_snapshot_dir.mkdir(parents=True, exist_ok=True)
# 初始化内存分析器
self._monitoring = True
self._monitor_thread = threading.Thread(target=self._monitor_loop, daemon=True)
self._monitor_thread.start()
logger.info("内存监控已启动")
def stop_monitoring(self):
"""
停止内存监控
"""
self._monitoring = False
if self._monitor_thread:
self._monitor_thread.join(timeout=5)
logger.info("内存监控已停止")
def _monitor_loop(self):
"""
内存监控循环
"""
logger.info("内存监控循环开始")
while self._monitoring:
try:
# 生成内存快照
self._create_memory_snapshot()
time.sleep(self._check_interval)
except Exception as e:
logger.error(f"内存监控出错: {e}")
# 出错后等待1分钟再继续
time.sleep(60)
logger.info("内存监控循环结束")
def _create_memory_snapshot(self):
"""
创建内存快照并保存到文件
"""
try:
# 获取当前时间戳
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
snapshot_file = self._memory_snapshot_dir / f"memory_snapshot_{timestamp}.txt"
# 获取系统内存使用情况
memory_usage = psutil.Process().memory_info().rss
logger.info(f"开始创建内存快照: {snapshot_file}")
# 第一步:写入基本信息和对象类型统计
self._write_basic_info(snapshot_file, memory_usage)
# 第二步:分析并写入类实例内存使用情况
self._append_class_analysis(snapshot_file)
# 第三步:分析并写入大内存变量详情
self._append_variable_analysis(snapshot_file)
logger.info(f"内存快照已保存: {snapshot_file}, 当前内存使用: {memory_usage / 1024 / 1024:.2f} MB")
# 清理过期的快照文件保留最近30个
self._cleanup_old_snapshots()
except Exception as e:
logger.error(f"创建内存快照失败: {e}")
@staticmethod
def _write_basic_info(snapshot_file, memory_usage):
"""
写入基本信息和对象类型统计
"""
# 获取当前进程的内存使用情况
all_objects = muppy.get_objects()
sum1 = summary.summarize(all_objects)
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(f"内存快照时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write(f"当前进程内存使用: {memory_usage / 1024 / 1024:.2f} MB\n")
f.write("=" * 80 + "\n")
f.write("对象类型统计:\n")
f.write("-" * 80 + "\n")
# 写入对象统计信息
for line in summary.format_(sum1):
f.write(line + "\n")
# 立即刷新到磁盘
f.flush()
logger.debug("基本信息已写入快照文件")
def _append_class_analysis(self, snapshot_file):
"""
分析并追加类实例内存使用情况
"""
with open(snapshot_file, 'a', encoding='utf-8') as f:
f.write("\n" + "=" * 80 + "\n")
f.write("类实例内存使用情况 (按内存大小排序):\n")
f.write("-" * 80 + "\n")
f.write("正在分析中...\n")
# 立即刷新,让用户知道这部分开始了
f.flush()
try:
logger.debug("开始分析类实例内存使用情况")
class_objects = self._get_class_memory_usage()
# 重新打开文件,移除"正在分析中..."并写入实际结果
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
# 替换"正在分析中..."
content = content.replace("正在分析中...\n", "")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
if class_objects:
# 只显示前100个类
for i, class_info in enumerate(class_objects[:100], 1):
f.write(f"{i:3d}. {class_info['name']:<50} "
f"{class_info['size_mb']:>8.2f} MB ({class_info['count']} 个实例)\n")
else:
f.write("未找到有效的类实例信息\n")
f.flush()
except Exception as e:
logger.error(f"获取类实例信息失败: {e}")
# 即使出错也要更新文件
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
content = content.replace("正在分析中...\n", f"获取类实例信息失败: {e}\n")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
f.flush()
logger.debug("类实例分析已完成并写入")
def _append_variable_analysis(self, snapshot_file):
"""
分析并追加大内存变量详情
"""
with open(snapshot_file, 'a', encoding='utf-8') as f:
f.write("\n" + "=" * 80 + "\n")
f.write("大内存变量详情 (前100个):\n")
f.write("-" * 80 + "\n")
f.write("正在分析中...\n")
# 立即刷新,让用户知道这部分开始了
f.flush()
try:
logger.debug("开始分析大内存变量")
large_variables = self._get_large_variables(100)
# 重新打开文件,移除"正在分析中..."并写入实际结果
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
# 替换最后的"正在分析中..."
content = content.replace("正在分析中...\n", "")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
if large_variables:
for i, var_info in enumerate(large_variables, 1):
f.write(
f"{i:3d}. {var_info['name']:<30} {var_info['type']:<15} {var_info['size_mb']:>8.2f} MB\n")
else:
f.write("未找到大内存变量\n")
f.flush()
except Exception as e:
logger.error(f"获取大内存变量信息失败: {e}")
# 即使出错也要更新文件
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
content = content.replace("正在分析中...\n", f"获取变量信息失败: {e}\n")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
f.flush()
logger.debug("大内存变量分析已完成并写入")
def _cleanup_old_snapshots(self):
"""
清理过期的内存快照文件,只保留最近的指定数量文件
"""
try:
snapshot_files = list(self._memory_snapshot_dir.glob("memory_snapshot_*.txt"))
if len(snapshot_files) > self._keep_count:
# 按修改时间排序,删除最旧的文件
snapshot_files.sort(key=lambda x: x.stat().st_mtime)
for old_file in snapshot_files[:-self._keep_count]:
old_file.unlink()
logger.debug(f"已删除过期内存快照: {old_file}")
except Exception as e:
logger.error(f"清理过期快照失败: {e}")
@staticmethod
def _get_class_memory_usage():
"""
获取所有类实例的内存使用情况,按内存大小排序
"""
class_info = {}
processed_count = 0
error_count = 0
# 获取所有对象
all_objects = muppy.get_objects()
logger.debug(f"开始分析 {len(all_objects)} 个对象的类实例内存使用情况")
for obj in all_objects:
try:
# 跳过类对象本身,统计类的实例
if isinstance(obj, type):
continue
# 获取对象的类名 - 这里可能会出错
obj_class = type(obj)
# 安全地获取类名
try:
if hasattr(obj_class, '__module__') and hasattr(obj_class, '__name__'):
class_name = f"{obj_class.__module__}.{obj_class.__name__}"
else:
class_name = str(obj_class)
except Exception as e:
# 如果获取类名失败,使用简单的类型描述
class_name = f"<unknown_class_{id(obj_class)}>"
logger.debug(f"获取类名失败: {e}")
# 计算对象本身的内存使用(不包括引用对象,避免重复计算)
size_bytes = sys.getsizeof(obj)
if size_bytes < 100: # 跳过太小的对象
continue
size_mb = size_bytes / 1024 / 1024
processed_count += 1
if class_name in class_info:
class_info[class_name]['size_mb'] += size_mb
class_info[class_name]['count'] += 1
else:
class_info[class_name] = {
'name': class_name,
'size_mb': size_mb,
'count': 1
}
except Exception as e:
# 捕获所有可能的异常包括SQLAlchemy、ORM等框架的异常
error_count += 1
if error_count <= 5: # 只记录前5个错误避免日志过多
logger.debug(f"分析对象时出错: {e}")
continue
logger.debug(f"类实例分析完成: 处理了 {processed_count} 个对象, 遇到 {error_count} 个错误")
# 按内存大小排序
sorted_classes = sorted(class_info.values(), key=lambda x: x['size_mb'], reverse=True)
return sorted_classes
def _get_large_variables(self, limit=100):
"""
获取大内存变量信息,按内存大小排序
使用已计算对象集合避免重复计算
"""
large_vars = []
processed_count = 0
calculated_objects = set() # 避免重复计算
# 获取所有对象
all_objects = muppy.get_objects()
logger.debug(f"开始分析 {len(all_objects)} 个对象的内存使用情况")
for obj in all_objects:
# 跳过类对象
if isinstance(obj, type):
continue
# 跳过已经计算过的对象
obj_id = id(obj)
if obj_id in calculated_objects:
continue
try:
# 首先使用 sys.getsizeof 快速筛选
shallow_size = sys.getsizeof(obj)
if shallow_size < 1024: # 只处理大于1KB的对象
continue
# 对于较大的对象,使用 asizeof 进行深度计算
size_bytes = asizeof.asizeof(obj)
# 只处理大于10KB的对象提高分析效率
if size_bytes < 10240:
continue
size_mb = size_bytes / 1024 / 1024
processed_count += 1
calculated_objects.add(obj_id)
# 获取对象信息
var_info = self._get_variable_info(obj, size_mb)
if var_info:
large_vars.append(var_info)
# 如果已经找到足够多的大对象,可以提前结束
if len(large_vars) >= limit * 2: # 多收集一些,后面排序筛选
break
except Exception as e:
# 更广泛的异常捕获
logger.debug(f"分析对象失败: {e}")
continue
logger.debug(f"处理了 {processed_count} 个大对象,找到 {len(large_vars)} 个有效变量")
# 按内存大小排序并返回前N个
large_vars.sort(key=lambda x: x['size_mb'], reverse=True)
return large_vars[:limit]
def _get_variable_info(self, obj, size_mb):
"""
获取变量的描述信息
"""
try:
obj_type = type(obj).__name__
# 尝试获取变量名
var_name = self._get_variable_name(obj)
# 生成描述性信息
if isinstance(obj, dict):
key_count = len(obj)
if key_count > 0:
sample_keys = list(obj.keys())[:3]
var_name += f" ({key_count}项, 键: {sample_keys})"
elif isinstance(obj, (list, tuple, set)):
var_name += f" ({len(obj)}个元素)"
elif isinstance(obj, str):
if len(obj) > 50:
var_name += f" (长度: {len(obj)}, 内容: '{obj[:50]}...')"
else:
var_name += f" ('{obj}')"
elif hasattr(obj, '__class__') and hasattr(obj.__class__, '__name__'):
if hasattr(obj, '__dict__'):
attr_count = len(obj.__dict__)
var_name += f" ({attr_count}个属性)"
return {
'name': var_name,
'type': obj_type,
'size_mb': size_mb
}
except Exception as e:
logger.debug(f"获取变量信息失败: {e}")
return None
@staticmethod
def _get_variable_name(obj):
"""
尝试获取变量名
"""
try:
# 尝试通过gc获取引用该对象的变量名
referrers = gc.get_referrers(obj)
for referrer in referrers:
if isinstance(referrer, dict):
# 检查是否在某个模块的全局变量中
for name, value in referrer.items():
if value is obj and isinstance(name, str):
return name
elif hasattr(referrer, '__dict__'):
# 检查是否在某个实例的属性中
for name, value in referrer.__dict__.items():
if value is obj and isinstance(name, str):
return f"{type(referrer).__name__}.{name}"
# 如果找不到变量名返回对象类型和id
return f"{type(obj).__name__}_{id(obj)}"
except Exception as e:
logger.debug(f"获取变量名失败: {e}")
return f"{type(obj).__name__}_{id(obj)}"
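The new helper leans on `pympler` for its object accounting; the core of the snapshot's "object type statistics" section can be approximated with the standard library alone, which may help readers without that dependency. A stdlib-only sketch (shallow sizes via `sys.getsizeof`, so the numbers will be smaller than pympler's deep `asizeof` figures):

```python
import gc
import sys
from collections import Counter

def summarize_memory(top_n=5):
    """Group live objects by type and rank by shallow size, roughly what the
    snapshot file records in its object-type statistics section."""
    sizes = Counter()
    counts = Counter()
    for obj in gc.get_objects():
        if isinstance(obj, type):
            continue                 # skip class objects, count instances
        try:
            size = sys.getsizeof(obj)
        except Exception:
            continue                 # some extension objects refuse getsizeof
        name = type(obj).__name__
        sizes[name] += size
        counts[name] += 1
    return [(name, size, counts[name]) for name, size in sizes.most_common(top_n)]


for name, size, count in summarize_memory():
    print(f"{name:<20} {size / 1024:8.1f} KB ({count} instances)")
```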

View File

@@ -1,18 +1,539 @@
from __future__ import annotations
import ast
import json
import queue
import re
import threading
import time
from datetime import datetime
-from typing import Any, Union
-from typing import List, Optional, Callable
+from typing import Any, Literal, Optional, List, Dict, Union
+from typing import Callable
from cachetools import TTLCache
from jinja2 import Template
from app.core.config import global_vars
from app.core.context import MediaInfo, TorrentInfo
from app.core.meta import MetaBase
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.message import Notification
from app.schemas.tmdb import TmdbEpisode
from app.schemas.transfer import TransferInfo
from app.schemas.types import SystemConfigKey
from app.utils.singleton import Singleton, SingletonClass
-from app.log import logger
from app.utils.string import StringUtils
class TemplateContextBuilder:
"""
模板上下文构建器
"""
def __init__(self):
self._context = {}
def build(
self,
meta: Optional[MetaBase] = None,
mediainfo: Optional[MediaInfo] = None,
torrentinfo: Optional[TorrentInfo] = None,
transferinfo: Optional[TransferInfo] = None,
file_extension: Optional[str] = None,
episodes_info: Optional[List[TmdbEpisode]] = None,
include_raw_objects: bool = True,
**kwargs
) -> Dict[str, Any]:
"""
:param meta: 媒体信息
:param mediainfo: 媒体信息
:param torrentinfo: 种子信息
:param transferinfo: 传输信息
:param file_extension: 文件扩展名
:param episodes_info: 剧集信息
:param include_raw_objects: 是否包含原始对象
:return: 渲染上下文字典
"""
self._context.clear()
self._add_episode_details(meta, episodes_info)
self._add_media_info(mediainfo)
self._add_transfer_info(transferinfo)
self._add_torrent_info(torrentinfo)
self._add_file_info(file_extension)
if kwargs:
self._context.update(kwargs)
if include_raw_objects:
self._add_raw_objects(meta, mediainfo, torrentinfo, transferinfo, episodes_info)
# 移除空值
return {k: v for k, v in self._context.items() if v is not None}
def _add_media_info(self, mediainfo: MediaInfo):
"""
增加媒体信息
"""
if not mediainfo:
return
season_fmt = f"S{mediainfo.season:02d}" if mediainfo.season is not None else None
base_info = {
# 标题
"title": self.__convert_invalid_characters(mediainfo.title),
# 英文标题
"en_title": self.__convert_invalid_characters(mediainfo.en_title),
# 原语种标题
"original_title": self.__convert_invalid_characters(mediainfo.original_title),
# 季号
"season": self._context.get("season") or mediainfo.season,
# Sxx
"season_fmt": self._context.get("season_fmt") or season_fmt,
# 年份
"year": mediainfo.year or self._context.get("year"),
# 媒体标题 + 年份
"title_year": mediainfo.title_year or self._context.get("title_year"),
}
_meta_season = self._context.get("season")
media_info = {
# 类型
"type": mediainfo.type.value,
# 类别
"category": mediainfo.category,
# 评分
"vote_average": mediainfo.vote_average,
# 海报
"poster": mediainfo.get_poster_image(),
# 背景图
"backdrop": mediainfo.get_backdrop_image(),
# 季年份根据season值获取
"season_year": mediainfo.season_years.get(
int(_meta_season),
None) if (mediainfo.season_years and _meta_season) else None,
# 演员
"actors": ''.join([actor['name'] for actor in mediainfo.actors[:5]]),
# 简介
"overview": mediainfo.overview,
# TMDBID
"tmdbid": mediainfo.tmdb_id,
# IMDBID
"imdbid": mediainfo.imdb_id,
# 豆瓣ID
"doubanid": mediainfo.douban_id,
}
self._context.update({**base_info, **media_info})
def _add_episode_details(self, meta: Optional[MetaBase], episodes: Optional[List[TmdbEpisode]]):
"""
添加剧集详细信息
"""
if not meta:
return
episode_data = {"episode_title": None, "episode_date": None}
if meta.begin_episode and episodes:
for episode in episodes:
if episode.episode_number == meta.begin_episode:
episode_data.update({
"episode_title": self.__convert_invalid_characters(episode.name),
"episode_date": episode.air_date if episode.air_date else None
})
break
meta_info = {
# 原文件名
"original_name": meta.title,
# 识别名称(优先使用中文)
"name": meta.name,
# 识别的英文名称(可能为空)
"en_name": meta.en_name,
# 年份
"year": meta.year,
# 名字 + 年份
"title_year": self._context.get("title_year") or "%s (%s)" % (
meta.name, meta.year) if meta.year else meta.name,
# 季号
"season": meta.season_seq,
# Sxx
"season_fmt": meta.season,
# 集号
"episode": meta.episode_seqs,
# 季集 SxxExx
"season_episode": "%s%s" % (meta.season, meta.episode),
# 段/节
"part": meta.part,
# 自定义占位符
"customization": meta.customization,
}
tech_metadata = {
# 资源类型
"resourceType": meta.resource_type,
# 特效
"effect": meta.resource_effect,
# 版本
"edition": meta.edition,
# 分辨率
"videoFormat": meta.resource_pix,
# 质量
"resource_term": meta.resource_term,
# 制作组/字幕组
"releaseGroup": meta.resource_team,
# 视频编码
"videoCodec": meta.video_encode,
# 音频编码
"audioCodec": meta.audio_encode,
# 流媒体平台
"webSource": meta.web_source,
}
self._context.update({**meta_info, **tech_metadata, **episode_data})
def _add_torrent_info(self, torrentinfo: Optional[TorrentInfo]):
"""
添加种子信息
"""
if not torrentinfo:
return
if torrentinfo.size:
if str(torrentinfo.size).replace(".", "").isdigit():
size = StringUtils.str_filesize(torrentinfo.size)
else:
size = torrentinfo.size
else:
size = 0
if torrentinfo.description:
html_re = re.compile(r'<[^>]+>', re.S)
description = html_re.sub('', torrentinfo.description)
torrentinfo.description = re.sub(r'<[^>]+>', '', description)
torrent_info = {
# 种子标题
"torrent_title": torrentinfo.title,
# 发布时间
"pubdate": torrentinfo.pubdate,
# 免费剩余时间
"freedate": torrentinfo.freedate_diff,
# 做种数
"seeders": torrentinfo.seeders,
# 促销信息
"volume_factor": torrentinfo.volume_factor,
# Hit&Run
"hit_and_run": "" if torrentinfo.hit_and_run else "",
# 种子标签
"labels": ' '.join(torrentinfo.labels),
# 描述
"description": torrentinfo.description,
# 站点名称
"site_name": torrentinfo.site_name,
# 种子大小
"size": size,
}
self._context.update(torrent_info)
def _add_transfer_info(self, transferinfo: Optional[TransferInfo]) -> Optional[Dict]:
"""
添加文件转移上下文
"""
if not transferinfo:
return None
ctx = {
"transfer_type": transferinfo.transfer_type,
"file_count": transferinfo.file_count,
"total_size": StringUtils.str_filesize(transferinfo.total_size),
"err_msg": transferinfo.message,
}
return self._context.update(ctx)
def _add_file_info(self, file_extension: Optional[str]):
"""
添加文件信息
"""
if not file_extension:
return
file_info = {
# 文件后缀
"fileExt": file_extension,
}
self._context.update(file_info)
def _add_raw_objects(
self,
meta: Optional[MetaBase],
mediainfo: Optional[MediaInfo],
torrentinfo: Optional[TorrentInfo],
transferinfo: Optional[TransferInfo],
episodes_info: Optional[List[TmdbEpisode]],
):
"""
添加原始对象引用
"""
raw_objects = {
# 文件元数据
"__meta__": meta,
# 识别的媒体信息
"__mediainfo__": mediainfo,
# 种子信息
"__torrentinfo__": torrentinfo,
# 文件转移信息
"__transferinfo__": transferinfo,
# 当前季的全部集信息
"__episodes_info__": episodes_info,
}
self._context.update(raw_objects)
@staticmethod
def __convert_invalid_characters(filename: str):
"""
将不支持的字符转换为全角字符
"""
if not filename:
return filename
invalid_characters = r'\/:*?"<>|'
# 创建半角到全角字符的转换表
halfwidth_chars = "".join([chr(i) for i in range(33, 127)])
fullwidth_chars = "".join([chr(i + 0xFEE0) for i in range(33, 127)])
translation_table = str.maketrans(halfwidth_chars, fullwidth_chars)
# 将不支持的字符替换为对应的全角字符
for char in invalid_characters:
filename = filename.replace(char, char.translate(translation_table))
return filename
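The helper above shifts each Windows-forbidden half-width character by the fixed full-width offset `0xFEE0` (U+0021..U+007E maps to U+FF01..U+FF5E). The same conversion can be sketched more directly with a single per-character translation table; this is a standalone illustration, not the project's API:

```python
def to_fullwidth_invalid_chars(filename: str) -> str:
    """Replace Windows-forbidden filename characters with full-width equivalents."""
    invalid_characters = r'\/:*?"<>|'
    # Half-width printable ASCII and the full-width block differ by 0xFEE0
    table = {ord(c): ord(c) + 0xFEE0 for c in invalid_characters}
    return filename.translate(table)

assert to_fullwidth_invalid_chars('a:b') == 'a\uff1ab'   # ':' becomes ':'
assert to_fullwidth_invalid_chars('plain') == 'plain'    # safe names pass through
```

Building one `dict` table and calling `str.translate` once avoids the per-character `replace` loop of the original.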
class TemplateHelper(metaclass=SingletonClass):
"""
模板格式渲染帮助类
"""
def __init__(self):
self.builder = TemplateContextBuilder()
self.cache = TTLCache(maxsize=100, ttl=600)
@staticmethod
def _generate_cache_key(content: Union[str, dict]) -> str:
"""
生成缓存键
"""
if isinstance(content, dict):
base_str = content.get("title", '') + content.get("text", '')
return StringUtils.md5_hash(json.dumps(base_str, sort_keys=True, ensure_ascii=False))
return StringUtils.md5_hash(content)
def get_cache_context(self, content: Union[str, dict]) -> Optional[dict]:
"""
获取缓存上下文
"""
cache_key = self._generate_cache_key(content)
return self.cache.get(cache_key)
def set_cache_context(self, content: Union[str, dict], context: dict) -> None:
"""
设置缓存上下文
"""
cache_key = self._generate_cache_key(content)
self.cache[cache_key] = context
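The cache key above is an MD5 over the notification's `title` plus `text` for dicts, or over the raw string; `StringUtils.md5_hash` is assumed to be a plain MD5 hexdigest. A standalone sketch of the keying scheme:

```python
import hashlib
import json
from typing import Union


def generate_cache_key(content: Union[str, dict]) -> str:
    """Derive a stable cache key from a message's title+text or a raw string."""
    if isinstance(content, dict):
        base_str = content.get("title", "") + content.get("text", "")
        return hashlib.md5(
            json.dumps(base_str, ensure_ascii=False).encode("utf-8")
        ).hexdigest()
    return hashlib.md5(content.encode("utf-8")).hexdigest()


# Fields other than title/text do not affect the key
k1 = generate_cache_key({"title": "A", "text": "B"})
k2 = generate_cache_key({"title": "A", "text": "B", "extra": 1})
assert k1 == k2
```

Keying only on `title`/`text` means two notifications that differ in other fields share one cached context, which matches how the renderer reuses contexts.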
def render(self,
template_content: str,
template_type: Literal['string', 'dict', 'literal'] = "literal",
**kwargs) -> Optional[Union[str, dict]]:
"""
根据模板格式渲染内容
:param template_content: 模板字符串
:param template_type: 模板字符串类型(消息通知`literal`, 路径`string`)
:param kwargs: 补传业务对象
:raises ValueError: 当模板处理过程中出现错误
:return: 渲染后的结果
"""
try:
# 解析模板字符
parsed = self.parse_template_content(template_content, template_type)
if not parsed:
raise ValueError("模板解析失败")
context = self.builder.build(**kwargs)
if not context:
raise ValueError("上下文构建失败")
rendered = self.render_with_context(parsed, context)
if not rendered:
raise ValueError("模板渲染失败")
if rendered := rendered if template_type == 'string' else self.__process_formatted_string(rendered):
# 缓存上下文
self.set_cache_context(rendered, context)
# 返回渲染结果
return rendered
return None
except Exception as e:
logger.error(f"模板处理失败: {str(e)}")
raise ValueError(f"模板处理失败: {str(e)}") from e
@staticmethod
def render_with_context(template_content: str, context: dict) -> str:
"""
使用指定上下文渲染 Jinja2 模板字符串
template_content: Jinja2 模板字符串
context: 渲染用的上下文数据
"""
# 渲染模板
template = Template(template_content)
return template.render(context)
@staticmethod
def parse_template_content(template_content: Union[str, dict],
template_type: Literal['string', 'dict', 'literal'] = None) -> Optional[str]:
"""
解析模板字符
:param template_content: 模板格式字符
:param template_type: 模板字符类型
"""
def parse_literal(_template_content: str) -> str:
"""
解析Python字面量
"""
try:
template_dict = ast.literal_eval(_template_content) if isinstance(_template_content,
str) else _template_content
if not isinstance(template_dict, dict):
raise ValueError("解析结果必须是一个字典")
return json.dumps(template_dict, ensure_ascii=False)
except (ValueError, SyntaxError) as err:
raise ValueError(f"无效的Python字面量格式: {str(err)}")
try:
if template_type:
parse_map = {
'string': lambda x: str(x),
'dict': lambda x: json.dumps(x, ensure_ascii=False),
'literal': parse_literal
}
return parse_map[template_type](template_content)
# 自动判断模板类型
if isinstance(template_content, dict):
return json.dumps(template_content, ensure_ascii=False)
elif isinstance(template_content, str):
try:
json.loads(template_content)
return template_content
except json.JSONDecodeError:
try:
return parse_literal(template_content)
except (ValueError, SyntaxError):
return template_content
else:
raise ValueError(f"不支持的模板类型: {type(template_content)}")
except Exception as e:
logger.error(f"模板解析失败: {str(e)}")
return None
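`parse_template_content` normalizes dicts, JSON strings, and Python-literal strings into a single JSON string, falling back to the raw string. A condensed standalone version of the auto-detection path (names are illustrative):

```python
import ast
import json
from typing import Union


def parse_template_content(template_content: Union[str, dict]) -> str:
    """Normalize dict / JSON string / Python literal / plain string to one string form."""
    if isinstance(template_content, dict):
        return json.dumps(template_content, ensure_ascii=False)
    try:
        json.loads(template_content)          # already valid JSON
        return template_content
    except json.JSONDecodeError:
        try:
            # e.g. "{'title': '{{title}}'}" written with Python quoting
            literal = ast.literal_eval(template_content)
            if isinstance(literal, dict):
                return json.dumps(literal, ensure_ascii=False)
        except (ValueError, SyntaxError):
            pass
        return template_content               # treat as a plain string template


assert parse_template_content("{'title': 'hi'}") == '{"title": "hi"}'
```

`ast.literal_eval` only accepts literals, so arbitrary expressions in a template string cannot execute; invalid input simply falls through to the plain-string branch.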
@staticmethod
def __process_formatted_string(rendered: str) -> Optional[Union[dict, str]]:
"""
处理格式化字符串
保留转义字符
"""
def restore_chars(obj: Any) -> Any:
"""恢复特殊字符"""
if isinstance(obj, str):
return obj.replace('\\n', '\n').replace('\\r', '\r').replace('\\t', '\t').replace('\\b', '\b').replace(
'\\f', '\f')
elif isinstance(obj, dict):
return {k: restore_chars(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [restore_chars(item) for item in obj]
return obj
# 定义特殊字符映射
special_chars = {
'\n': '\\n', # 换行符
'\r': '\\r', # 回车符
'\t': '\\t', # 制表符
'\b': '\\b', # 退格符
'\f': '\\f', # 换页符
}
# 处理特殊字符
processed = rendered
for char, escape in special_chars.items():
processed = processed.replace(char, escape)
# 尝试解析为JSON
try:
rendered_dict = json.loads(processed)
return restore_chars(rendered_dict)
except json.JSONDecodeError:
return rendered
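`__process_formatted_string` escapes control characters so a rendered payload can survive `json.loads`, then restores them inside the decoded values. A standalone sketch of that round trip:

```python
import json
from typing import Any, Union


def process_formatted_string(rendered: str) -> Union[dict, str]:
    """Escape control chars, try JSON decoding, then restore the chars in values."""
    special_chars = {'\n': '\\n', '\r': '\\r', '\t': '\\t', '\b': '\\b', '\f': '\\f'}

    def restore(obj: Any) -> Any:
        if isinstance(obj, str):
            for raw, esc in special_chars.items():
                obj = obj.replace(esc, raw)
            return obj
        if isinstance(obj, dict):
            return {k: restore(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [restore(i) for i in obj]
        return obj

    processed = rendered
    for raw, esc in special_chars.items():
        processed = processed.replace(raw, esc)
    try:
        return restore(json.loads(processed))
    except json.JSONDecodeError:
        return rendered  # not JSON: hand back the rendered string untouched


# A raw newline inside a JSON string would make json.loads fail;
# escaping first lets the payload parse, and the value keeps its newline.
assert process_formatted_string('{"text": "line1\nline2"}') == {"text": "line1\nline2"}
```

Note the same limitation as the original: escaping also touches newlines *between* JSON tokens, so pretty-printed JSON falls back to the plain-string branch.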
class MessageTemplateHelper:
"""
消息模板渲染器
"""
@staticmethod
def render(message: Notification, *args, **kwargs) -> Optional[Notification]:
"""
渲染消息模板
"""
if not MessageTemplateHelper.is_instance_valid(message):
if MessageTemplateHelper.meets_update_conditions(message, *args, **kwargs):
logger.info("将使用模板渲染消息内容")
return MessageTemplateHelper._apply_template_data(message, *args, **kwargs)
return message
@staticmethod
def is_instance_valid(message: Notification) -> bool:
"""
检查消息是否有效
"""
if isinstance(message, Notification):
return bool(message.title or message.text)
return False
@staticmethod
def meets_update_conditions(message: Notification, *args, **kwargs) -> bool:
"""
判断是否满足消息实例更新条件
满足条件需同时具备:
1. 消息为有效Notification实例
2. 消息指定了模板类型(ctype)
3. 存在待渲染的模板变量数据
"""
if isinstance(message, Notification):
return True if message.ctype and (args or kwargs) else False
return False
@staticmethod
def _apply_template_data(message: Notification, *args, **kwargs) -> Optional[Notification]:
"""
更新消息实例
"""
try:
if template := MessageTemplateHelper._get_template(message):
rendered = TemplateHelper().render(template_content=template, *args, **kwargs)
for key, value in rendered.items():
if hasattr(message, key):
setattr(message, key, value)
return message
except Exception as e:
logger.error(f"更新Notification时出现错误{str(e)}")
return message
@staticmethod
def _get_template(message: Notification) -> Optional[str]:
"""
获取消息模板
"""
template_dict: dict[str, str] = SystemConfigOper().get(SystemConfigKey.NotificationTemplates)
return template_dict.get(f"{message.ctype.value}")
class MessageQueueManager(metaclass=SingletonClass):
@@ -55,6 +576,7 @@ class MessageQueueManager(metaclass=SingletonClass):
def _parse_schedule(periods: Union[list, dict]) -> List[tuple[int, int, int, int]]:
"""
将字符串时间格式转换为时分元组
支持 'HH:MM' 或 'HH:MM:SS' 格式的时间字符串
"""
parsed = []
if not periods:
@@ -66,9 +588,31 @@ class MessageQueueManager(metaclass=SingletonClass):
continue
if not period.get('start') or not period.get('end'):
continue
start_h, start_m = map(int, period['start'].split(':'))
end_h, end_m = map(int, period['end'].split(':'))
parsed.append((start_h, start_m, end_h, end_m))
try:
# 处理 start 时间
start_parts = period['start'].split(':')
if len(start_parts) == 2:
start_h, start_m = map(int, start_parts)
elif len(start_parts) >= 3:
start_h, start_m = map(int, start_parts[:2]) # 只取前两个部分 (HH:MM)
else:
continue
# 处理 end 时间
end_parts = period['end'].split(':')
if len(end_parts) == 2:
end_h, end_m = map(int, end_parts)
elif len(end_parts) >= 3:
end_h, end_m = map(int, end_parts[:2]) # 只取前两个部分 (HH:MM)
else:
continue
parsed.append((start_h, start_m, end_h, end_m))
except ValueError as e:
logger.error(f"解析时间周期时出现错误:{period}. 错误:{str(e)}. 跳过此周期。")
continue
except Exception as e:
logger.error(f"解析时间周期时出现意外错误:{period}. 错误:{str(e)}. 跳过此周期。")
continue
return parsed
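The hardened `_parse_schedule` above tolerates `'HH:MM:SS'` by keeping only the first two segments and skips malformed entries instead of raising. A condensed sketch of the same tolerant parsing:

```python
from typing import List, Tuple, Union


def parse_schedule(periods: Union[list, None]) -> List[Tuple[int, int, int, int]]:
    """Parse 'HH:MM' or 'HH:MM:SS' period configs, skipping invalid entries."""
    parsed = []
    for period in periods or []:
        if not isinstance(period, dict):
            continue
        if not period.get('start') or not period.get('end'):
            continue
        try:
            # Keep only the first two segments (HH, MM) to accept 'HH:MM:SS'
            start_h, start_m = map(int, period['start'].split(':')[:2])
            end_h, end_m = map(int, period['end'].split(':')[:2])
        except ValueError:
            continue  # non-numeric or single-segment values are skipped
        parsed.append((start_h, start_m, end_h, end_m))
    return parsed


assert parse_schedule([{'start': '08:00', 'end': '22:30:15'}]) == [(8, 0, 22, 30)]
```

Slicing with `[:2]` before `map(int, ...)` covers both accepted formats in one expression; a single-segment string like `'8'` fails the two-value unpack and is skipped like any other bad entry.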
@staticmethod
@@ -103,7 +647,8 @@ class MessageQueueManager(metaclass=SingletonClass):
"""
发送消息(立即发送或加入队列)
"""
if self._is_in_scheduled_time(datetime.now()):
immediately = kwargs.pop("immediately", False)
if immediately or self._is_in_scheduled_time(datetime.now()):
self._send(*args, **kwargs)
else:
self.queue.put({
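`send_message` now bypasses the schedule check when `immediately=True`. `_is_in_scheduled_time` itself is not shown in this hunk; a plausible minute-based window check, including cross-midnight ranges, might look like the following (the window semantics here are an assumption, not the project's verified logic):

```python
from datetime import datetime
from typing import List, Tuple


def is_in_scheduled_time(now: datetime,
                         periods: List[Tuple[int, int, int, int]]) -> bool:
    """Return True if `now` falls in any (start_h, start_m, end_h, end_m) window."""
    minutes = now.hour * 60 + now.minute
    for start_h, start_m, end_h, end_m in periods:
        start, end = start_h * 60 + start_m, end_h * 60 + end_m
        if start <= end:
            if start <= minutes <= end:
                return True
        elif minutes >= start or minutes <= end:  # wraps past midnight, e.g. 22:00-06:00
            return True
    return False


now = datetime(2025, 7, 6, 23, 30)
assert is_in_scheduled_time(now, [(22, 0, 6, 0)])       # inside a wrapping window
assert not is_in_scheduled_time(now, [(8, 0, 22, 0)])   # outside the daytime window
```

Comparing in whole minutes matches the `(start_h, start_m, end_h, end_m)` tuples produced by `_parse_schedule` above.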

View File

@@ -7,14 +7,15 @@ from typing import List, Any, Callable
from app.log import logger
FilterFuncType = Callable[[str, Any], bool]
def _default_filter(name: str, obj: Any) -> bool:
"""
默认过滤器
"""
return True
return True if name and obj else False
class ModuleHelper:
"""
@@ -76,7 +77,8 @@ class ModuleHelper:
def reload_sub_modules(parent_module, parent_module_name):
"""重新加载一级子模块"""
for sub_importer, sub_module_name, sub_is_pkg in pkgutil.walk_packages(parent_module.__path__, parent_module_name+'.'):
for sub_importer, sub_module_name, sub_is_pkg in pkgutil.walk_packages(parent_module.__path__,
parent_module_name + '.'):
try:
full_sub_module = importlib.import_module(sub_module_name)
importlib.reload(full_sub_module)

View File

@@ -9,7 +9,7 @@ class OcrHelper:
_ocr_b64_url = f"{settings.OCR_HOST}/captcha/base64"
def get_captcha_text(self, image_url: Optional[str] = None, image_b64: Optional[str] = None,
cookie: Optional[str] = None, ua: Optional[str] = None):
"""
根据图片地址,获取验证码图片,并识别内容

View File

@@ -1,12 +1,16 @@
import importlib
import json
import shutil
import site
import sys
import traceback
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Set
from typing import Dict, List, Optional, Tuple, Set
from packaging.specifiers import SpecifierSet, InvalidSpecifier
from packaging.version import Version, InvalidVersion
from pkg_resources import Requirement, working_set
from requests import Response
from app.core.cache import cached
from app.core.config import settings
@@ -38,12 +42,35 @@ class PluginHelper(metaclass=Singleton):
if self.install_report():
self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")
@cached(maxsize=1000, ttl=1800)
def get_plugins(self, repo_url: str, package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
def get_plugins(self, repo_url: str, package_version: Optional[str] = None,
force: bool = False) -> Optional[Dict[str, dict]]:
"""
获取Github所有最新插件列表
:param repo_url: Github仓库地址
:param package_version: 首选插件版本 (如 "v2", "v3"),如果不指定则获取 v1 版本
:param force: 是否强制刷新,忽略缓存
"""
# 如果强制刷新,直接调用不带缓存的版本
if force:
return self._get_plugins_uncached(repo_url, package_version)
# 正常情况下调用带缓存的版本
return self._get_plugins_cached(repo_url, package_version)
@cached(maxsize=64, ttl=1800)
def _get_plugins_cached(self, repo_url: str, package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
"""
获取Github所有最新插件列表(使用缓存)
:param repo_url: Github仓库地址
:param package_version: 首选插件版本 (如 "v2", "v3"),如果不指定则获取 v1 版本
"""
return self._get_plugins_uncached(repo_url, package_version)
def _get_plugins_uncached(self, repo_url: str, package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
"""
获取Github所有最新插件列表(不使用缓存)
:param repo_url: Github仓库地址
:param package_version: 首选插件版本 (如 "v2", "v3"),如果不指定则获取 v1 版本
"""
if not repo_url:
return None
@@ -59,11 +86,13 @@ class PluginHelper(metaclass=Singleton):
if res is None:
return None
if res:
content = res.text
try:
return json.loads(res.text)
return json.loads(content)
except json.JSONDecodeError:
logger.error(f"插件包数据解析失败:{res.text}")
return None
if "404: Not Found" not in content:
logger.warn(f"插件包数据解析失败:{content}")
return None
return {}
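The `force` parameter above works by routing around the `@cached` wrapper and calling the uncached inner method directly. The same shape, with stdlib `functools.lru_cache` standing in for the project's TTL `cached` decorator (class and counter are illustrative):

```python
from functools import lru_cache
from typing import Dict, Optional


class PluginRepo:
    """Cached fetch with a force-refresh escape hatch."""

    def get_plugins(self, repo_url: str, force: bool = False) -> Optional[Dict[str, dict]]:
        if force:
            return self._get_plugins_uncached(repo_url)  # bypass the cache entirely
        return self._get_plugins_cached(repo_url)

    @lru_cache(maxsize=64)
    def _get_plugins_cached(self, repo_url: str) -> Optional[Dict[str, dict]]:
        return self._get_plugins_uncached(repo_url)

    def _get_plugins_uncached(self, repo_url: str) -> Optional[Dict[str, dict]]:
        # The real implementation fetches from Github; a counter shows cache hits here
        self.fetch_count = getattr(self, 'fetch_count', 0) + 1
        return {"ExamplePlugin": {"repo": repo_url}}


repo = PluginRepo()
repo.get_plugins("https://github.com/example/repo")
repo.get_plugins("https://github.com/example/repo")              # cache hit, no refetch
assert repo.fetch_count == 1
repo.get_plugins("https://github.com/example/repo", force=True)  # forced refetch
assert repo.fetch_count == 2
```

One caveat of `lru_cache` on instance methods is that it keeps `self` alive in the cache key; the project's `cached` decorator presumably handles TTL expiry, which `lru_cache` does not.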
def get_plugin_package_version(self, pid: str, repo_url: str, package_version: Optional[str] = None) -> Optional[str]:
@@ -83,14 +112,11 @@ class PluginHelper(metaclass=Singleton):
package_version = settings.VERSION_FLAG
# 优先检查指定版本的插件,即 package.v(x).json 文件中是否存在该插件,如果存在,返回该版本号
plugins = self.get_plugins(repo_url, package_version)
if pid in plugins:
if pid in (self.get_plugins(repo_url, package_version) or []):
return package_version
# 如果指定版本的插件不存在,检查全局 package.json 文件,查看插件是否兼容指定的版本
global_plugins = self.get_plugins(repo_url)
plugin = global_plugins.get(pid, None)
plugin = (self.get_plugins(repo_url) or {}).get(pid, None)
# 检查插件是否明确支持当前指定的版本(如 v2 或 v3),如果支持则返回空字符串,表示使用 package.json(v1)
if plugin and plugin.get(package_version) is True:
return ""
@@ -282,7 +308,7 @@ class PluginHelper(metaclass=Singleton):
return None, "连接仓库失败"
elif res.status_code != 200:
return None, f"连接仓库失败:{res.status_code} - " \
f"{'超出速率限制,请设置GITHUB_TOKEN环境变量或稍后重试' if res.status_code == 403 else res.reason}"
f"{'超出速率限制,请设置Github Token或稍后重试' if res.status_code == 403 else res.reason}"
try:
ret = res.json()
@@ -291,7 +317,7 @@ class PluginHelper(metaclass=Singleton):
else:
return None, "插件在仓库中不存在或返回数据格式不正确"
except Exception as e:
logger.error(f"插件数据解析失败:{res.text}{e}")
logger.error(f"插件数据解析失败:{e}")
return None, "插件数据解析失败"
def __download_files(self, pid: str, file_list: List[dict], user_repo: str,
@@ -373,8 +399,7 @@ class PluginHelper(metaclass=Singleton):
with open(requirements_file_path, "w", encoding="utf-8") as f:
f.write(requirements_txt)
success, message = self.__pip_install_with_fallback(requirements_file_path)
return success, message
return self.pip_install_with_fallback(requirements_file_path)
return True, "" # 如果 requirements.txt 为空,视作成功
@@ -391,7 +416,7 @@ class PluginHelper(metaclass=Singleton):
# 检查是否存在 requirements.txt 文件
if requirements_file.exists():
logger.info(f"{pid} 存在依赖,开始尝试安装依赖")
success, error_message = self.__pip_install_with_fallback(requirements_file)
success, error_message = self.pip_install_with_fallback(requirements_file)
if success:
return True, True, ""
else:
@@ -449,21 +474,24 @@ class PluginHelper(metaclass=Singleton):
shutil.rmtree(plugin_dir, ignore_errors=True)
@staticmethod
def __pip_install_with_fallback(requirements_file: Path) -> Tuple[bool, str]:
def pip_install_with_fallback(requirements_file: Path) -> Tuple[bool, str]:
"""
使用自动降级策略PIP 安装依赖,优先级依次为镜像站、代理、直连
使用自动降级策略安装依赖,并确保新安装的包可被动态导入
:param requirements_file: 依赖的 requirements.txt 文件路径
:return: (是否成功, 错误信息)
"""
base_cmd = [sys.executable, "-m", "pip", "install", "-r", str(requirements_file)]
strategies = []
# 添加策略到列表中
if settings.PIP_PROXY:
strategies.append(("镜像站", ["pip", "install", "-r", str(requirements_file), "-i", settings.PIP_PROXY]))
strategies.append(("镜像站", base_cmd + ["-i", settings.PIP_PROXY]))
if settings.PROXY_HOST:
strategies.append(
("代理", ["pip", "install", "-r", str(requirements_file), "--proxy", settings.PROXY_HOST]))
strategies.append(("直连", ["pip", "install", "-r", str(requirements_file)]))
strategies.append(("代理", base_cmd + ["--proxy", settings.PROXY_HOST]))
strategies.append(("直连", base_cmd))
# 记录当前已安装的包,以便后续刷新
before_installation = set(sys.modules.keys())
# 遍历策略进行安装
for strategy_name, pip_command in strategies:
@@ -471,6 +499,16 @@ class PluginHelper(metaclass=Singleton):
success, message = SystemUtils.execute_with_subprocess(pip_command)
if success:
logger.debug(f"[PIP] 策略:{strategy_name} 安装依赖成功,输出:{message}")
# 安装成功后刷新Python的模块系统
importlib.reload(site)
# 获取新安装的模块
current_modules = set(sys.modules.keys())
new_modules = current_modules - before_installation
# 重新加载新安装的模块
for module in new_modules:
if module in sys.modules:
del sys.modules[module]
logger.debug(f"[PIP] 已刷新导入系统,新加载的模块: {new_modules}")
return True, message
else:
logger.error(f"[PIP] 策略:{strategy_name} 安装依赖失败,错误信息:{message}")
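The pip strategies above now share a `base_cmd` built on `sys.executable -m pip` (so the correct interpreter's pip runs) and are tried in priority order until one succeeds. A generic sketch of that try-in-order pattern, with `subprocess.run` standing in for `SystemUtils.execute_with_subprocess`:

```python
import subprocess
import sys
from typing import List, Tuple


def run_with_fallback(strategies: List[Tuple[str, List[str]]]) -> Tuple[bool, str]:
    """Try named commands in priority order; return on the first success."""
    last_error = ""
    for name, cmd in strategies:
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
            if result.returncode == 0:
                return True, f"strategy {name} succeeded"
            last_error = f"strategy {name} failed: {result.stderr.strip()}"
        except (OSError, subprocess.TimeoutExpired) as e:
            last_error = f"strategy {name} raised: {e}"
    return False, last_error


# First strategy exits non-zero, so the loop falls back to the second
ok, msg = run_with_fallback([
    ("mirror", [sys.executable, "-c", "import sys; sys.exit(1)"]),
    ("direct", [sys.executable, "-c", "print('ok')"]),
])
assert ok and "direct" in msg
```

Recording `last_error` means a total failure reports the final strategy's message, mirroring how the original logs each failed strategy before returning.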
@@ -481,7 +519,7 @@ class PluginHelper(metaclass=Singleton):
def __request_with_fallback(url: str,
headers: Optional[dict] = None,
timeout: Optional[int] = 60,
is_api: bool = False) -> Optional[Any]:
is_api: bool = False) -> Optional[Response]:
"""
使用自动降级策略,请求资源,优先级依次为镜像站、代理、直连
:param url: 目标URL
@@ -557,7 +595,6 @@ class PluginHelper(metaclass=Singleton):
def install_dependencies(self, dependencies: List[str]) -> Tuple[bool, str]:
"""
安装指定的依赖项列表
:param dependencies: 需要安装或更新的依赖项列表
:return: (success, message)
"""
@@ -572,12 +609,12 @@ class PluginHelper(metaclass=Singleton):
with open(requirements_temp_file, "w", encoding="utf-8") as f:
for dep in dependencies:
f.write(dep + "\n")
# 使用自动降级策略安装依赖
success, message = self.__pip_install_with_fallback(requirements_temp_file)
# 删除临时文件
requirements_temp_file.unlink()
return success, message
try:
# 使用自动降级策略安装依赖
return self.pip_install_with_fallback(requirements_temp_file)
finally:
# 删除临时文件
requirements_temp_file.unlink()
except Exception as e:
logger.error(f"安装依赖项时发生错误:{e}")
return False, f"安装依赖项时发生错误:{e}"

View File

@@ -3,24 +3,23 @@ from pathlib import Path
from app.core.config import settings
from app.helper.sites import SitesHelper
from app.helper.system import SystemHelper
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class ResourceHelper(metaclass=Singleton):
class ResourceHelper:
"""
检测和更新资源包
"""
# 资源包的git仓库地址
_repo = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.json"
_files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources"
_repo = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.v2.json"
_files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources.v2"
_base_dir: Path = settings.ROOT_PATH
def __init__(self):
self.siteshelper = SitesHelper()
self.check()
@property
@@ -32,80 +31,86 @@ class ResourceHelper(metaclass=Singleton):
检测是否有更新,如有则下载安装
"""
if not settings.AUTO_UPDATE_RESOURCE:
return
return None
if SystemUtils.is_frozen():
return
return None
logger.info("开始检测资源包版本...")
res = RequestUtils(proxies=self.proxies, headers=settings.GITHUB_HEADERS, timeout=10).get_res(self._repo)
if res:
try:
resource_info = json.loads(res.text)
online_version = resource_info.get("version")
if online_version:
logger.info(f"最新资源包版本v{online_version}")
# 需要更新的资源包
need_updates = {}
# 资源明细
resources: dict = resource_info.get("resources") or {}
for rname, resource in resources.items():
rtype = resource.get("type")
platform = resource.get("platform")
target = resource.get("target")
version = resource.get("version")
# 判断平台
if platform and platform != SystemUtils.platform():
continue
# 判断版本号
if rtype == "auth":
# 站点认证资源
local_version = SitesHelper().auth_version
# 阻断v2.3.0以下的版本直接更新,避免无限重启
if StringUtils.compare_version(local_version, "<", "2.3.0"):
continue
elif rtype == "sites":
# 站点索引资源
local_version = SitesHelper().indexer_version
# 阻断v2.0.0以下的版本直接更新,避免无限重启
if StringUtils.compare_version(local_version, "<", "2.0.0"):
continue
else:
continue
if StringUtils.compare_version(version, ">", local_version):
logger.info(f"{rname} 资源包有更新最新版本v{version}")
else:
continue
# 需要安装
need_updates[rname] = target
if need_updates:
# 下载文件信息列表
r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
timeout=30).get_res(self._files_api)
if r and not r.ok:
return None, f"连接仓库失败:{r.status_code} - {r.reason}"
elif not r:
return None, "连接仓库失败"
files_info = r.json()
for item in files_info:
save_path = need_updates.get(item.get("name"))
if not save_path:
continue
if item.get("download_url"):
logger.info(f"开始更新资源文件:{item.get('name')} ...")
download_url = f"{settings.GITHUB_PROXY}{item.get('download_url')}"
# 下载资源文件
res = RequestUtils(proxies=self.proxies, headers=settings.GITHUB_HEADERS,
timeout=180).get_res(download_url)
if not res:
logger.error(f"文件 {item.get('name')} 下载失败!")
elif res.status_code != 200:
logger.error(f"下载文件 {item.get('name')} 失败:{res.status_code} - {res.reason}")
# 创建插件文件夹
file_path = self._base_dir / save_path / item.get("name")
if not file_path.parent.exists():
file_path.parent.mkdir(parents=True, exist_ok=True)
# 写入文件
file_path.write_bytes(res.content)
logger.info("资源包更新完成,开始重启服务...")
SystemHelper.restart()
else:
logger.info("所有资源已最新,无需更新")
except json.JSONDecodeError:
logger.error("资源包仓库数据解析失败!")
return
return None
else:
logger.warn("无法连接资源包仓库!")
return
online_version = resource_info.get("version")
if online_version:
logger.info(f"最新资源包版本v{online_version}")
# 需要更新的资源包
need_updates = {}
# 资源明细
resources: dict = resource_info.get("resources") or {}
for rname, resource in resources.items():
rtype = resource.get("type")
platform = resource.get("platform")
target = resource.get("target")
version = resource.get("version")
# 判断平台
if platform and platform != SystemUtils.platform():
continue
# 判断版本号
if rtype == "auth":
# 站点认证资源
local_version = self.siteshelper.auth_version
elif rtype == "sites":
# 站点索引资源
local_version = self.siteshelper.indexer_version
else:
continue
if StringUtils.compare_version(version, ">", local_version):
logger.info(f"{rname} 资源包有更新最新版本v{version}")
else:
continue
# 需要安装
need_updates[rname] = target
if need_updates:
# 下载文件信息列表
r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
timeout=30).get_res(self._files_api)
if r and not r.ok:
return None, f"连接仓库失败:{r.status_code} - {r.reason}"
elif not r:
return None, "连接仓库失败"
files_info = r.json()
for item in files_info:
save_path = need_updates.get(item.get("name"))
if not save_path:
continue
if item.get("download_url"):
logger.info(f"开始更新资源文件:{item.get('name')} ...")
download_url = f"{settings.GITHUB_PROXY}{item.get('download_url')}"
# 下载资源文件
res = RequestUtils(proxies=self.proxies, headers=settings.GITHUB_HEADERS,
timeout=180).get_res(download_url)
if not res:
logger.error(f"文件 {item.get('name')} 下载失败!")
elif res.status_code != 200:
logger.error(f"下载文件 {item.get('name')} 失败:{res.status_code} - {res.reason}")
# 创建插件文件夹
file_path = self._base_dir / save_path / item.get("name")
if not file_path.parent.exists():
file_path.parent.mkdir(parents=True, exist_ok=True)
# 写入文件
file_path.write_bytes(res.content)
logger.info("资源包更新完成,开始重启服务...")
SystemUtils.restart()
else:
logger.info("所有资源已最新,无需更新")
return None

View File

@@ -1,6 +1,5 @@
import re
import traceback
import xml.dom.minidom
from typing import List, Tuple, Union, Optional
from urllib.parse import urljoin
@@ -10,7 +9,6 @@ from lxml import etree
from app.core.config import settings
from app.helper.browser import PlaywrightHelper
from app.log import logger
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -19,6 +17,11 @@ class RssHelper:
"""
RSS帮助类解析RSS报文、获取RSS地址等
"""
# RSS解析限制配置
MAX_RSS_SIZE = 50 * 1024 * 1024 # 50MB最大RSS文件大小
MAX_RSS_ITEMS = 1000 # 最大解析条目数
# 各站点RSS链接获取配置
rss_link_conf = {
"default": {
@@ -224,8 +227,8 @@ class RssHelper:
},
}
@staticmethod
def parse(url, proxy: bool = False, timeout: Optional[int] = 15, headers: dict = None) -> Union[List[dict], None, bool]:
def parse(self, url, proxy: bool = False,
timeout: Optional[int] = 15, headers: dict = None) -> Union[List[dict], None, bool]:
"""
解析RSS订阅URL获取RSS中的种子信息
:param url: RSS地址
@@ -238,19 +241,31 @@ class RssHelper:
ret_array: list = []
if not url:
return False
try:
ret = RequestUtils(proxies=settings.PROXY if proxy else None,
timeout=timeout, headers=headers).get_res(url)
if not ret:
logger.error(f"获取RSS失败请求返回空值URL: {url}")
return False
except Exception as err:
logger.error(f"获取RSS失败{str(err)} - {traceback.format_exc()}")
return False
if ret:
ret_xml = ""
# 检查HTTP状态码
if ret.status_code != 200:
logger.error(f"RSS请求失败状态码: {ret.status_code}, URL: {url}")
return False
ret_xml = None
root = None
try:
# 使用chardet检测字符编码
# 检查响应大小避免处理过大的RSS文件
raw_data = ret.content
if raw_data and len(raw_data) > self.MAX_RSS_SIZE:
logger.warning(f"RSS文件过大: {len(raw_data) / 1024 / 1024:.1f}MB跳过解析")
return False
if raw_data:
try:
result = chardet.detect(raw_data)
@@ -269,57 +284,135 @@ class RssHelper:
ret.encoding = ret.apparent_encoding
if not ret_xml:
ret_xml = ret.text
# 解析XML
dom_tree = xml.dom.minidom.parseString(ret_xml)
rootNode = dom_tree.documentElement
items = rootNode.getElementsByTagName("item")
for item in items:
try:
# 标题
title = DomUtils.tag_value(item, "title", default="")
if not title:
continue
# 描述
description = DomUtils.tag_value(item, "description", default="")
# 种子页面
link = DomUtils.tag_value(item, "link", default="")
# 种子链接
enclosure = DomUtils.tag_value(item, "enclosure", "url", default="")
if not enclosure and not link:
continue
# 部分RSS只有link没有enclosure
if not enclosure and link:
enclosure = link
# 大小
size = DomUtils.tag_value(item, "enclosure", "length", default=0)
if size and str(size).isdigit():
size = int(size)
else:
size = 0
# 验证RSS内容是否有效
if not ret_xml or not ret_xml.strip():
logger.error("RSS内容为空")
return False
# 检查是否包含基本的RSS/XML结构
ret_xml_stripped = ret_xml.strip()
if not ret_xml_stripped.startswith('<'):
logger.error("RSS内容不是有效的XML格式")
return False
# 使用lxml.etree解析XML
parser = None
try:
# 创建解析器,禁用网络访问以提高安全性和性能
parser = etree.XMLParser(
recover=True,  # 容错模式
strip_cdata=False,  # 保留CDATA
resolve_entities=False,  # 禁用外部实体解析
no_network=True,  # 禁用网络访问
huge_tree=False  # 禁用大文档解析,避免内存问题
)
root = etree.fromstring(ret_xml.encode('utf-8'), parser=parser)
except etree.XMLSyntaxError as xml_error:
logger.debug(f"XML解析失败:{str(xml_error)},尝试HTML解析")
# 如果XML解析失败,尝试作为HTML解析
try:
root = etree.HTML(ret_xml)
if root is not None:
# 查找RSS根节点
rss_root = root.xpath('//rss | //feed')
if rss_root:
root = rss_root[0]
except Exception as e:
logger.error(f"HTML解析也失败{str(e)}")
return False
except Exception as general_error:
logger.error(f"解析RSS时发生未预期错误{str(general_error)}")
return False
finally:
if parser is not None:
try:
parser.close()
except Exception as close_error:
logger.debug(f"关闭解析器时出错:{str(close_error)}")
del parser
if root is None:
logger.error("无法解析RSS内容")
return False
# 查找所有item或entry节点
items = root.xpath('.//item | .//entry')
# 限制处理的条目数量
items_count = min(len(items), self.MAX_RSS_ITEMS)
if len(items) > self.MAX_RSS_ITEMS:
logger.warning(f"RSS条目过多: {len(items)},仅处理前{self.MAX_RSS_ITEMS}")
try:
for item in items[:items_count]:
try:
# 使用xpath提取信息更高效
title_nodes = item.xpath('.//title')
title = title_nodes[0].text if title_nodes and title_nodes[0].text else ""
if not title:
continue
# 描述
desc_nodes = item.xpath('.//description | .//summary')
description = desc_nodes[0].text if desc_nodes and desc_nodes[0].text else ""
# 种子页面
link_nodes = item.xpath('.//link')
if link_nodes:
link = link_nodes[0].text if hasattr(link_nodes[0], 'text') and link_nodes[0].text else link_nodes[0].get('href', '')
else:
link = ""
# 种子链接
enclosure_nodes = item.xpath('.//enclosure')
enclosure = enclosure_nodes[0].get('url', '') if enclosure_nodes else ""
if not enclosure and not link:
continue
# 部分RSS只有link没有enclosure
if not enclosure and link:
enclosure = link
# 发布日期
pubdate = DomUtils.tag_value(item, "pubDate", default="")
if pubdate:
# 转换为时间
pubdate = StringUtils.get_time(pubdate)
# 获取豆瓣昵称
nickname = DomUtils.tag_value(item, "dc:createor", default="")
# 返回对象
tmp_dict = {'title': title,
'enclosure': enclosure,
'size': size,
'description': description,
'link': link,
'pubdate': pubdate}
# 如果豆瓣昵称不为空返回数据增加豆瓣昵称供doubansync插件获取
if nickname:
tmp_dict['nickname'] = nickname
ret_array.append(tmp_dict)
except Exception as e1:
logger.debug(f"解析RSS失败:{str(e1)} - {traceback.format_exc()}")
continue
# 大小
size = 0
if enclosure_nodes:
size_attr = enclosure_nodes[0].get('length', '0')
if size_attr and str(size_attr).isdigit():
size = int(size_attr)
# 发布日期
pubdate_nodes = item.xpath('.//pubDate | .//published | .//updated')
pubdate = ""
if pubdate_nodes and pubdate_nodes[0].text:
pubdate = StringUtils.get_time(pubdate_nodes[0].text)
# 获取豆瓣昵称
nickname_nodes = item.xpath('.//*[local-name()="creator"]')
nickname = nickname_nodes[0].text if nickname_nodes and nickname_nodes[0].text else ""
# 返回对象
tmp_dict = {
'title': title,
'enclosure': enclosure,
'size': size,
'description': description,
'link': link,
'pubdate': pubdate
}
# 如果豆瓣昵称不为空返回数据增加豆瓣昵称供doubansync插件获取
if nickname:
tmp_dict['nickname'] = nickname
ret_array.append(tmp_dict)
except Exception as e1:
logger.debug(f"解析RSS条目失败{str(e1)} - {traceback.format_exc()}")
continue
finally:
items.clear()
del items
except Exception as e2:
logger.error(f"解析RSS失败{str(e2)} - {traceback.format_exc()}")
# RSS过期 观众:RSS 链接已过期,您需要获得一个新的! pthome:RSS Link has expired, You need to get a new one!
# RSS过期检查
_rss_expired_msg = [
"RSS 链接已过期, 您需要获得一个新的!",
"RSS Link has expired, You need to get a new one!",
@@ -328,6 +421,12 @@ class RssHelper:
if ret_xml in _rss_expired_msg:
return None
return False
finally:
if root is not None:
del root
if ret_xml is not None:
del ret_xml
return ret_array
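The refactor above swaps `xml.dom.minidom` for lxml with a recovering, network-disabled parser and xpath extraction. The item extraction can be illustrated with stdlib `ElementTree` (strict XML only — `recover=True` and `no_network` are lxml features with no stdlib equivalent; the sample feed is fabricated):

```python
import xml.etree.ElementTree as ET
from typing import List


def parse_rss_items(xml_text: str, max_items: int = 1000) -> List[dict]:
    """Extract title/link/enclosure/size from an RSS payload, capped at max_items."""
    root = ET.fromstring(xml_text)
    results = []
    for item in root.iter('item'):
        if len(results) >= max_items:
            break  # same idea as MAX_RSS_ITEMS above
        title = item.findtext('title') or ''
        if not title:
            continue
        link = item.findtext('link') or ''
        enclosure_node = item.find('enclosure')
        # Some feeds only carry <link>; fall back to it as the download URL
        enclosure = enclosure_node.get('url', '') if enclosure_node is not None else link
        size_attr = enclosure_node.get('length', '0') if enclosure_node is not None else '0'
        size = int(size_attr) if size_attr.isdigit() else 0
        results.append({'title': title, 'link': link,
                        'enclosure': enclosure, 'size': size})
    return results


rss = """<rss><channel><item>
<title>Example.Torrent</title><link>https://example.org/detail/1</link>
<enclosure url="https://example.org/dl/1.torrent" length="1024"/>
</item></channel></rss>"""
items = parse_rss_items(rss)
assert items[0]['size'] == 1024 and items[0]['title'] == 'Example.Torrent'
```

For untrusted feeds the lxml route in the diff is the safer choice: disabling entity resolution and network access guards against XXE, which plain `ElementTree` mitigates differently (it rejects entity expansion outright).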
def get_rss_link(self, url: str, cookie: str, ua: str, proxy: bool = False) -> Tuple[str, str]:
@@ -369,12 +468,20 @@ class RssHelper:
return "", f"获取 {url} RSS链接失败错误码{res.status_code},错误原因:{res.reason}"
else:
return "", f"获取RSS链接失败无法连接 {url} "
# 解析HTML
html = etree.HTML(html_text)
if StringUtils.is_valid_html_element(html):
rss_link = html.xpath(site_conf.get("xpath"))
if rss_link:
return str(rss_link[-1]), ""
if html_text:
html = None
try:
html = etree.HTML(html_text)
if StringUtils.is_valid_html_element(html):
rss_link = html.xpath(site_conf.get("xpath"))
if rss_link:
return str(rss_link[-1]), ""
finally:
if html is not None:
del html
return "", f"获取RSS链接失败{url}"
except Exception as e:
return "", f"获取 {url} RSS链接失败{str(e)}"

View File

@@ -11,14 +11,12 @@ class RuleHelper:
规划帮助类
"""
def __init__(self):
self.systemconfig = SystemConfigOper()
def get_rule_groups(self) -> List[FilterRuleGroup]:
@staticmethod
def get_rule_groups() -> List[FilterRuleGroup]:
"""
获取用户所有规则组
"""
rule_groups: List[dict] = self.systemconfig.get(SystemConfigKey.UserFilterRuleGroups)
rule_groups: List[dict] = SystemConfigOper().get(SystemConfigKey.UserFilterRuleGroups)
if not rule_groups:
return []
return [FilterRuleGroup(**group) for group in rule_groups]
@@ -50,11 +48,12 @@ class RuleHelper:
ret_groups.append(group)
return ret_groups
def get_custom_rules(self) -> List[CustomRule]:
@staticmethod
def get_custom_rules() -> List[CustomRule]:
"""
获取用户所有自定义规则
"""
rules: List[dict] = self.systemconfig.get(SystemConfigKey.CustomFilterRules)
rules: List[dict] = SystemConfigOper().get(SystemConfigKey.CustomFilterRules)
if not rules:
return []
return [CustomRule(**rule) for rule in rules]

View File

@@ -107,8 +107,7 @@ class ServiceBaseHelper(Generic[TConf]):
迭代所有模块的实例及其对应的配置,返回 ServiceInfo 实例
"""
configs = self.get_configs()
modules = self.modulemanager.get_running_type_modules(self.module_type)
for module in modules:
for module in self.modulemanager.get_running_type_modules(self.module_type):
if not module:
continue
module_instances = module.get_instances()

View File

@@ -10,14 +10,12 @@ class StorageHelper:
存储帮助类
"""
def __init__(self):
self.systemconfig = SystemConfigOper()
def get_storagies(self) -> List[schemas.StorageConf]:
@staticmethod
def get_storagies() -> List[schemas.StorageConf]:
"""
获取所有存储设置
"""
storage_confs: List[dict] = self.systemconfig.get(SystemConfigKey.Storages)
storage_confs: List[dict] = SystemConfigOper().get(SystemConfigKey.Storages)
if not storage_confs:
return []
return [schemas.StorageConf(**s) for s in storage_confs]
@@ -49,4 +47,36 @@ class StorageHelper:
if s.type == storage:
s.config = conf
break
self.systemconfig.set(SystemConfigKey.Storages, [s.dict() for s in storagies])
SystemConfigOper().set(SystemConfigKey.Storages, [s.dict() for s in storagies])
def add_storage(self, storage: str, name: str, conf: dict):
"""
添加存储配置
"""
storagies = self.get_storagies()
if not storagies:
storagies = [
schemas.StorageConf(
type=storage,
name=name,
config=conf
)
]
else:
storagies.append(schemas.StorageConf(
type=storage,
name=name,
config=conf
))
SystemConfigOper().set(SystemConfigKey.Storages, [s.dict() for s in storagies])
def reset_storage(self, storage: str):
"""
重置存储配置
"""
storagies = self.get_storagies()
for s in storagies:
if s.type == storage:
s.config = {}
break
SystemConfigOper().set(SystemConfigKey.Storages, [s.dict() for s in storagies])

View File

@@ -50,15 +50,15 @@ class SubscribeHelper(metaclass=Singleton):
]
def __init__(self):
self.systemconfig = SystemConfigOper()
systemconfig = SystemConfigOper()
if settings.SUBSCRIBE_STATISTIC_SHARE:
if not self.systemconfig.get(SystemConfigKey.SubscribeReport):
if not systemconfig.get(SystemConfigKey.SubscribeReport):
if self.sub_report():
self.systemconfig.set(SystemConfigKey.SubscribeReport, "1")
systemconfig.set(SystemConfigKey.SubscribeReport, "1")
self.get_user_uuid()
self.get_github_user()
@cached(maxsize=20, ttl=1800)
@cached(maxsize=5, ttl=1800)
def get_statistic(self, stype: str, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
获取订阅统计数据

163
app/helper/system.py Normal file
View File

@@ -0,0 +1,163 @@
+import os
+import signal
+from pathlib import Path
+from typing import Optional, Tuple
+
+import docker
+
+from app.core.config import settings
+from app.core.event import eventmanager, Event
+from app.log import logger
+from app.schemas import ConfigChangeEventData
+from app.schemas.types import EventType
+from app.utils.system import SystemUtils
+
+
+class SystemHelper:
+    """
+    System utility class providing system-related operations and checks
+    """
+
+    __system_flag_file = "/var/log/nginx/__moviepilot__"
+
+    @eventmanager.register(EventType.ConfigChanged)
+    def handle_config_changed(self, event: Event):
+        """
+        Handle configuration change events and update logging settings
+        :param event: the event object
+        """
+        if not event:
+            return
+        event_data: ConfigChangeEventData = event.event_data
+        if event_data.key not in ['DEBUG', 'LOG_LEVEL', 'LOG_MAX_FILE_SIZE', 'LOG_BACKUP_COUNT',
+                                  'LOG_FILE_FORMAT', 'LOG_CONSOLE_FORMAT']:
+            return
+        logger.info("Configuration changed, updating logging settings...")
+        logger.update_loggers()
+
+    @staticmethod
+    def can_restart() -> bool:
+        """
+        Check whether an in-place restart is possible
+        """
+        return (
+                Path("/var/run/docker.sock").exists()
+                or settings.DOCKER_CLIENT_API != "tcp://127.0.0.1:38379"
+        )
+
+    @staticmethod
+    def _get_container_id() -> Optional[str]:
+        """
+        Get the ID of the current container
+        """
+        container_id = None
+        try:
+            with open("/proc/self/mountinfo", "r") as f:
+                data = f.read()
+                index_resolv_conf = data.find("resolv.conf")
+                if index_resolv_conf != -1:
+                    index_second_slash = data.rfind("/", 0, index_resolv_conf)
+                    index_first_slash = data.rfind("/", 0, index_second_slash) + 1
+                    container_id = data[index_first_slash:index_second_slash]
+                    if len(container_id) < 20:
+                        index_resolv_conf = data.find("/sys/fs/cgroup/devices")
+                        if index_resolv_conf != -1:
+                            index_second_slash = data.rfind(" ", 0, index_resolv_conf)
+                            index_first_slash = (
+                                    data.rfind("/", 0, index_second_slash) + 1
+                            )
+                            container_id = data[index_first_slash:index_second_slash]
+        except Exception as e:
+            logger.debug(f"Failed to get container ID: {str(e)}")
+        return container_id.strip() if container_id else None
+
+    @staticmethod
+    def _check_restart_policy() -> bool:
+        """
+        Check whether the current container has an automatic restart policy configured
+        """
+        try:
+            # Get the current container ID
+            container_id = SystemHelper._get_container_id()
+            if not container_id:
+                return False
+            # Create a Docker client
+            client = docker.DockerClient(base_url=settings.DOCKER_CLIENT_API)
+            # Fetch container information
+            container = client.containers.get(container_id)
+            restart_policy = container.attrs.get('HostConfig', {}).get('RestartPolicy', {})
+            policy_name = restart_policy.get('Name', 'no')
+            # Check for an effective restart policy
+            auto_restart_policies = ['always', 'unless-stopped', 'on-failure']
+            has_restart_policy = policy_name in auto_restart_policies
+            logger.info(f"Container restart policy: {policy_name}, supports auto-restart: {has_restart_policy}")
+            return has_restart_policy
+        except Exception as e:
+            logger.warning(f"Failed to check restart policy: {str(e)}")
+            return False
+
+    @staticmethod
+    def restart() -> Tuple[bool, str]:
+        """
+        Perform a Docker restart
+        """
+        if not SystemUtils.is_docker():
+            return False, "Cannot restart outside of a Docker environment"
+        try:
+            # Check whether the container has an automatic restart policy
+            has_restart_policy = SystemHelper._check_restart_policy()
+            if has_restart_policy:
+                # A restart policy exists, so use a graceful exit
+                logger.info("Container has an automatic restart policy, using graceful restart...")
+                # Send SIGTERM to the current process to trigger a graceful stop
+                os.kill(os.getpid(), signal.SIGTERM)
+                return True, ""
+            else:
+                # No restart policy, force a restart via the Docker API
+                logger.info("Container has no automatic restart policy, restarting via the Docker API...")
+                return SystemHelper._docker_api_restart()
+        except Exception as err:
+            logger.error(f"Restart failed: {str(err)}")
+            # Fall back to a Docker API restart
+            logger.warning("Falling back to a Docker API restart...")
+            return SystemHelper._docker_api_restart()
+
+    @staticmethod
+    def _docker_api_restart() -> Tuple[bool, str]:
+        """
+        Restart the container via the Docker API, attempting a graceful stop
+        """
+        try:
+            # Create a Docker client
+            client = docker.DockerClient(base_url=settings.DOCKER_CLIENT_API)
+            container_id = SystemHelper._get_container_id()
+            if not container_id:
+                return False, "Failed to get container ID"
+            # Restart the container
+            client.containers.get(container_id).restart()
+            return True, ""
+        except Exception as docker_err:
+            return False, f"Error while restarting: {str(docker_err)}"
+
+    def set_system_modified(self):
+        """
+        Set the system-modified flag
+        """
+        try:
+            if SystemUtils.is_docker():
+                Path(self.__system_flag_file).touch(exist_ok=True)
+        except Exception as e:
+            print(f"Failed to set system-modified flag: {str(e)}")
+
+    def is_system_reset(self) -> bool:
+        """
+        Check whether the system has been reset
+        :return: True if the system has been reset, otherwise False
+        """
+        if SystemUtils.is_docker():
+            return not Path(self.__system_flag_file).exists()
+        return False
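The `_get_container_id` heuristic in the new file above can be exercised in isolation. This sketch reduces it to a pure function over the mountinfo text, using the same two-step search: take the path segment before `resolv.conf`, and if that looks too short, fall back to the path mounted at `/sys/fs/cgroup/devices`. The sample mountinfo line used below is fabricated for illustration; real lines vary by Docker version and storage driver:

```python
from typing import Optional


def extract_container_id(mountinfo: str) -> Optional[str]:
    """Locate the container ID in /proc/self/mountinfo text.

    Mirrors SystemHelper._get_container_id as a testable pure function.
    """
    container_id = None
    idx = mountinfo.find("resolv.conf")
    if idx != -1:
        # The ID is the path segment just before ".../<id>/resolv.conf"
        second = mountinfo.rfind("/", 0, idx)
        first = mountinfo.rfind("/", 0, second) + 1
        container_id = mountinfo[first:second]
        if len(container_id) < 20:
            # Too short to be a container ID; try the cgroup mount instead
            idx = mountinfo.find("/sys/fs/cgroup/devices")
            if idx != -1:
                second = mountinfo.rfind(" ", 0, idx)
                first = mountinfo.rfind("/", 0, second) + 1
                container_id = mountinfo[first:second]
    return container_id.strip() if container_id else None
```

For example, a line such as `... /var/lib/docker/containers/<64-hex-id>/resolv.conf /etc/resolv.conf ...` yields the 64-character ID, while text containing no `resolv.conf` mount yields `None`.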


@@ -1,6 +1,6 @@
 from concurrent.futures import ThreadPoolExecutor
-from typing import Optional
 
+from app.core.config import settings
 from app.utils.singleton import Singleton
@@ -8,8 +8,8 @@ class ThreadHelper(metaclass=Singleton):
     """
     Thread pool management
     """
 
-    def __init__(self, max_workers: Optional[int] = 50):
-        self.pool = ThreadPoolExecutor(max_workers=max_workers)
+    def __init__(self):
+        self.pool = ThreadPoolExecutor(max_workers=settings.CONF.threadpool)
 
     def submit(self, func, *args, **kwargs):
         """
@@ -27,6 +27,3 @@ class ThreadHelper(metaclass=Singleton):
         :return:
         """
         self.pool.shutdown()
-
-    def __del__(self):
-        self.shutdown()
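The change above moves the pool size from a constructor default (`max_workers=50`) into `settings.CONF.threadpool`. Because `ThreadHelper` is a Singleton, the first construction fixes the pool size for the life of the process, so a constructor parameter was misleading anyway. A self-contained sketch of that shape; the `Singleton` metaclass body and the stand-in size constant are assumptions, not the project's exact code:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for settings.CONF.threadpool; the real value comes from MoviePilot's config.
THREADPOOL_SIZE = 4


class Singleton(type):
    """Metaclass-based singleton (assumed shape of app.utils.singleton.Singleton)."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class ThreadHelper(metaclass=Singleton):
    """Thread pool manager: one shared executor, sized once at first construction."""

    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=THREADPOOL_SIZE)

    def submit(self, func, *args, **kwargs):
        # Returns a concurrent.futures.Future for the queued call
        return self.pool.submit(func, *args, **kwargs)

    def shutdown(self):
        self.pool.shutdown()
```

Dropping `__del__` is also sensible here: a process-wide singleton lives until interpreter shutdown, and calling `ThreadPoolExecutor.shutdown` from a finalizer during interpreter teardown can raise.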
