Compare commits

...

176 Commits

Author SHA1 Message Date
jxxghp
8902fb50d6 Update context.py 2025-07-16 22:22:45 +08:00
jxxghp
b6aa013eb3 v2.6.6 2025-07-16 20:25:43 +08:00
jxxghp
034b43bf70 fix context 2025-07-16 19:59:06 +08:00
jxxghp
59e9032286 add subscribe share statistic api 2025-07-16 08:47:54 +08:00
jxxghp
52a98efd0a add subscribe share statistic api 2025-07-16 08:31:28 +08:00
jxxghp
90cc91aa7f Merge pull request #4614 from Aqr-K/feature-ua 2025-07-15 06:47:34 +08:00
Aqr-K
1973a26e83 fix: remove redundant code and simplify the implementation 2025-07-14 22:19:48 +08:00
Aqr-K
6519ad25ca fix is_aarch 2025-07-14 22:17:04 +08:00
Aqr-K
cacfde8166 fix 2025-07-14 22:14:52 +08:00
Aqr-K
df85873726 feat(ua): add cup_arch; append cup_arch to the USER_AGENT value 2025-07-14 22:04:09 +08:00
jxxghp
dfea294cc9 fix ua 2025-07-14 13:42:49 +08:00
jxxghp
d35b855404 fix ua 2025-07-14 13:30:18 +08:00
jxxghp
7a1cbf70e3 feat: specific default UA 2025-07-14 12:35:08 +08:00
jxxghp
f260990b86 Update version.py 2025-07-13 15:14:10 +08:00
jxxghp
6affbe9b55 fix #4558 2025-07-13 15:04:41 +08:00
jxxghp
dbe3a10697 fix 2025-07-13 14:53:39 +08:00
jxxghp
3c25306a5d fix #4590 2025-07-13 14:43:48 +08:00
jxxghp
17f4d49731 fix #4594 2025-07-13 14:24:41 +08:00
jxxghp
e213b5cc64 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into v2 2025-07-13 14:14:26 +08:00
jxxghp
65e5dad44b Improve deletion of torrents and leftover directories in move mode 2025-07-13 14:14:24 +08:00
jxxghp
62ad38ea5d Merge pull request #4605 from wikrin/torrent_optimize 2025-07-13 13:25:35 +08:00
Attente
f98f4c1f77 refactor(helper): improve the TorrentHelper class
- Add a check for torrent files already present in the temporary directory
- Change the parameter types of the match_torrent method
- Improve the torrent file download and handling logic
2025-07-13 13:16:36 +08:00
jxxghp
e9f02b58b7 Merge pull request #4604 from cddjr/fix_4602 2025-07-13 06:51:36 +08:00
景大侠
05495e481d fix #4602 2025-07-13 01:10:07 +08:00
jxxghp
5bb2167b78 Merge pull request #4603 from cddjr/fix_nettest 2025-07-12 18:34:54 +08:00
景大侠
b4e0ed66cf Improve error descriptions in the network connectivity test 2025-07-12 18:15:19 +08:00
jxxghp
70a0563435 add server_type return 2025-07-12 14:52:18 +08:00
jxxghp
955912b832 fix plex 2025-07-12 14:44:45 +08:00
jxxghp
b65ee75b3d Merge pull request #4601 from cddjr/minimal_deps 2025-07-11 21:46:13 +08:00
景大侠
f642493a38 fix 2025-07-11 21:25:10 +08:00
jxxghp
7f1bfb1e07 Merge pull request #4599 from jtcymc/v2 2025-07-11 21:12:16 +08:00
景大侠
8931e2e016 fix: install only the plugin dependencies the user actually needs 2025-07-11 21:04:33 +08:00
shaw
0465fa77c2 fix(filemanager): check whether the target media library directory is set
- During file organization, verify that the target media library directory is configured; if it is not, return an error and abort the organization
- Improve error handling for better stability and reliability
2025-07-11 20:02:12 +08:00
jxxghp
575d503cb9 Merge pull request #4598 from cddjr/fix_4586 2025-07-11 18:12:57 +08:00
景大侠
a4fdbdb9ad fix: 极空间 and Unraid falsely detected as network file systems 2025-07-11 18:03:19 +08:00
jxxghp
b9cb781a4e rollback size 2025-07-11 08:34:02 +08:00
jxxghp
a3adf867b7 fix 2025-07-10 22:48:08 +08:00
jxxghp
d52cbd2f74 feat: save path in the resource download event 2025-07-10 22:16:19 +08:00
jxxghp
8d0003db94 Update version.py 2025-07-10 11:57:54 +08:00
jxxghp
b775e89e77 fix #4581 2025-07-10 10:44:04 +08:00
jxxghp
0e14b097ba fix #4581 2025-07-10 10:39:22 +08:00
jxxghp
51848b8d8d fix #4581 2025-07-10 10:20:00 +08:00
jxxghp
72658c3e60 Merge pull request #4582 from cddjr/fix_rename_related 2025-07-09 20:42:54 +08:00
jxxghp
036cb6f3b0 remove memory helper 2025-07-09 19:11:37 +08:00
jxxghp
1a86d96bfa Merge pull request #4579 from jxxghp/cursor/bc-f8a13fbf-5ca0-4b0b-ae8d-59c208732d44-b74e 2025-07-09 17:43:46 +08:00
Cursor Agent
f67db38a25 Fix memory analysis performance and timeout issues across platforms
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 09:43:34 +00:00
Cursor Agent
028d18826a Refactor memory analysis with ThreadPoolExecutor for cross-platform timeout
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 09:38:06 +00:00
Cursor Agent
29a605f265 Optimize memory analysis with timeout, sampling, and performance improvements
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:57:22 +00:00
jxxghp
4b6959470d Merge pull request #4577 from jxxghp/cursor/analyze-memory-usage-discrepancies-6709 2025-07-09 16:08:00 +08:00
Cursor Agent
600767d2bf Remove memory analysis guide and test script
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:07:30 +00:00
Cursor Agent
3efbd47ffd Add comprehensive memory analysis tool with guide and test script
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 08:04:10 +00:00
Cursor Agent
d17e85217b Enhance memory analysis with detailed tracking, leak detection, and system insights
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-09 07:47:23 +00:00
jxxghp
e608089805 add Note Action 2025-07-09 12:22:22 +08:00
jxxghp
b852acec28 fix workflow 2025-07-09 09:34:53 +08:00
jxxghp
2a3ea8315d fix workflow 2025-07-09 00:19:47 +08:00
jxxghp
9271ee833c Merge pull request #4566 from jxxghp/cursor/helper-91dc
Add workflow sharing APIs and helper
2025-07-09 00:12:56 +08:00
Cursor Agent
570d4ad1a3 Fix workflow API by passing database session to WorkflowOper methods
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:44:55 +00:00
Cursor Agent
dccdf3231a Checkpoint before follow-up message 2025-07-08 15:42:31 +00:00
Cursor Agent
b8ee777fd2 Refactor workflow sharing with independent config and improved data access
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:33:43 +00:00
Cursor Agent
a2fd3a8d90 Implement workflow sharing feature with new API endpoints and helper
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:26:16 +00:00
Cursor Agent
bbffb1420b Add workflow sharing, forking, and related API endpoints
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 15:18:01 +00:00
景大侠
8ea0a32879 fix: improve media file root path retrieval after renaming 2025-07-08 22:37:32 +08:00
景大侠
8c27b8c33e fix: automatic renaming in file management is missing episode info 2025-07-08 22:37:09 +08:00
景大侠
5c61b22c2f fix: incorrect transfer path when organizing files with renaming disabled 2025-07-08 21:49:31 +08:00
jxxghp
9da9d765a0 fix: static class references 2025-07-08 21:40:04 +08:00
jxxghp
f64363728e fix: static class references 2025-07-08 21:38:34 +08:00
jxxghp
378777dc7c feat: weak-reference singleton 2025-07-08 21:29:01 +08:00
jxxghp
6156b9a481 Merge pull request #4561 from jxxghp/cursor/move-media-files-to-season-directory-6ee0 2025-07-08 18:00:50 +08:00
Cursor Agent
8c516c5691 Fix: Ensure parent item exists before saving NFO file
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 09:51:43 +00:00
Cursor Agent
bf9a149898 Fix TV show metadata scraping to use correct parent directory
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-08 09:31:35 +00:00
jxxghp
277cde8db2 Update version.py 2025-07-08 12:17:57 +08:00
jxxghp
e06bdaf53e fix: endless restarts after a resource package upgrade failure 2025-07-08 12:06:30 +08:00
jxxghp
da367bd138 fix spider 2025-07-08 11:25:36 +08:00
jxxghp
d336bcbf1f fix etree 2025-07-08 11:00:38 +08:00
jxxghp
a8aedba6ff fix https://github.com/jxxghp/MoviePilot/issues/4552 2025-07-08 09:34:24 +08:00
jxxghp
9ede86c6a3 Merge pull request #4555 from cddjr/fix_local_exists 2025-07-07 23:30:51 +08:00
景大侠
1468f2b082 fix: prefer directories containing the movie/TV title when checking local media files
Avoids misjudging paths where the first-level rename directory is a year, resolution, etc.
2025-07-07 23:24:04 +08:00
jxxghp
e04ae70f89 Merge pull request #4553 from cddjr/fix_trim_task 2025-07-07 22:15:12 +08:00
景大侠
7f7d2c9ba8 fix: 飞牛 media library refresh reports "Task duplicate" 2025-07-07 21:46:17 +08:00
jxxghp
d73deef8dc Merge pull request #4549 from cddjr/fix_tr 2025-07-07 17:28:28 +08:00
景大侠
f93a1540af fix: TR module error "_protocol attribute not found"
(bug introduced in v2.5.9)
2025-07-07 17:05:28 +08:00
jxxghp
c8bd9cb716 Merge pull request #4548 from cddjr/set_lock_timeout 2025-07-07 12:04:46 +08:00
景大侠
2ed13c7e5b fix: add a timeout to the subscription matching lock to avoid rare long task hangs 2025-07-07 11:51:58 +08:00
jxxghp
647c0929c5 v2.6.2 2025-07-06 08:28:33 +08:00
jxxghp
a61533a131 Merge pull request #4536 from cddjr/fix_local_exists 2025-07-05 22:02:16 +08:00
景大侠
bc5e682308 fix: potential extra disk scans during local media checks 2025-07-05 21:46:21 +08:00
jxxghp
25a481df12 Merge pull request #4534 from jxxghp/cursor/bc-55af1137-dea1-4191-9033-64ea5fcaa43a-d338
Fix snapshot handling in file organization
2025-07-05 15:44:51 +08:00
Cursor Agent
764c10fae4 Fix snapshot handling logic to correctly process files during monitoring
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-05 07:22:44 +00:00
Cursor Agent
d8249d4e38 Fix snapshot handling logic to correctly process files during monitoring
Co-authored-by: jxxghp <jxxghp@163.com>
2025-07-05 07:19:53 +00:00
jxxghp
0e3e42b398 Merge pull request #4531 from Aqr-K/feat-process 2025-07-05 06:33:57 +08:00
Aqr-K
7d3b64dcf9 Update requirements.in 2025-07-05 03:16:49 +08:00
Aqr-K
2c8d525796 feat: add a process name setting 2025-07-05 03:14:54 +08:00
jxxghp
4869f071ab fix error message 2025-07-04 21:34:31 +08:00
jxxghp
3029eeaf6f fix error message 2025-07-04 21:33:32 +08:00
jxxghp
33fb692aee Update plugin.py 2025-07-03 22:20:04 +08:00
jxxghp
6a075d144f Update version.py 2025-07-03 20:19:36 +08:00
jxxghp
aa23315599 rollback transmission-rpc 2025-07-03 19:16:36 +08:00
jxxghp
8d0bb35505 add network traffic API 2025-07-03 19:05:43 +08:00
jxxghp
32e76bc6ce Merge pull request #4529 from cddjr/add_ctx_mgr_proto 2025-07-03 18:47:08 +08:00
景大侠
6c02766000 AutoCloseResponse now supports the context manager protocol, avoiding errors in some plugins 2025-07-03 18:38:48 +08:00
jxxghp
52ef390464 Add a cache parameter to the image proxy API 2025-07-03 17:07:54 +08:00
jxxghp
43a557601e fix local usage 2025-07-03 16:48:35 +08:00
jxxghp
82ff7fc090 fix SMB Usage 2025-07-03 15:21:41 +08:00
jxxghp
db40b5105b Fix pattern matching in directory monitoring 2025-07-03 13:55:54 +08:00
jxxghp
b2a379b84b fix SMB Storage 2025-07-03 12:41:44 +08:00
jxxghp
97cbd816fe add SMB Storage 2025-07-03 12:31:59 +08:00
jxxghp
7de3bb2a91 v2.6.0 2025-07-02 21:36:02 +08:00
jxxghp
3a8a2bcab4 Merge pull request #4519 from Aqr-K/patch-2 2025-07-01 19:46:12 +08:00
Aqr-K
eb1adbe992 fix: correct error messages and unify their wording 2025-07-01 19:26:11 +08:00
jxxghp
b55966d42b Merge pull request #4516 from Aqr-K/feat-command
feat(command): add `show` to control whether a command is registered and shown in the menu
2025-07-01 17:20:59 +08:00
Aqr-K
451ca9cb5a feat(command): add show to control whether a command is registered and shown in the menu 2025-07-01 17:19:01 +08:00
jxxghp
1e2c607ced fix #4515: streaming platforms are no longer merged into existing tags; configure via the naming module if needed 2025-07-01 17:02:29 +08:00
jxxghp
5ff7da0d19 fix #4515: streaming platforms are no longer merged into existing tags; configure via the naming module if needed 2025-07-01 16:57:45 +08:00
jxxghp
8e06c6f8e6 remove openai 2025-07-01 14:48:16 +08:00
jxxghp
4497cd3904 add site stat api 2025-07-01 11:23:20 +08:00
jxxghp
2945679a94 - Fix the Redis cache issue and the site message reading issue 2025-07-01 09:20:08 +08:00
jxxghp
1eaf7e3c85 Merge pull request #4513 from cddjr/fix_4511 2025-07-01 06:56:11 +08:00
景大侠
8146b680c6 fix: infinite recursion when deserializing the AutoCloseResponse class 2025-07-01 01:29:01 +08:00
jxxghp
99e667382f fix #4509 2025-06-30 19:17:36 +08:00
jxxghp
4c03759d3f refactor: improve directory monitoring 2025-06-30 13:16:05 +08:00
jxxghp
8593a6cdd0 refactor: improve directory monitoring snapshots 2025-06-30 12:40:37 +08:00
jxxghp
cd18c31618 fix subscription matching 2025-06-30 10:55:10 +08:00
jxxghp
f29c918700 Merge pull request #4505 from wikrin/v2 2025-06-29 23:12:08 +08:00
Attente
0f0c3e660b style: clean up whitespace
Remove trailing whitespace and indentation on blank lines to keep the code tidy
2025-06-29 22:49:58 +08:00
Attente
1cf4639db3 fix(download): fix downloader selection for manual downloads
- In manual download mode, always use the downloader selected by the user
2025-06-29 22:24:53 +08:00
jxxghp
f5da9b5780 fix log 2025-06-29 22:10:47 +08:00
jxxghp
e4c87c8a96 Update version.py 2025-06-29 21:56:37 +08:00
jxxghp
4b4bf153f0 fix plugin reload 2025-06-29 21:26:06 +08:00
jxxghp
ec227d0d56 Merge pull request #4500 from Miralia/v2
refactor(meta): unify web_source handling in MetaBase and add it to message templates
2025-06-29 11:11:35 +08:00
Miralia
53c8c50779 refactor(meta): unify web_source handling in MetaBase and add it to message templates 2025-06-29 11:08:34 +08:00
jxxghp
07b4c8b462 fix #4489 2025-06-29 11:06:36 +08:00
jxxghp
f3cfc5b9f0 fix plex 2025-06-29 08:27:48 +08:00
jxxghp
634e5a4c55 Merge pull request #4496 from wikrin/v2 2025-06-29 07:51:24 +08:00
Attente
332b154f15 fix(api): adapt to FastAPI request parameter compatibility changes
Fixes the system config and user config endpoints not working.
2025-06-29 05:31:25 +08:00
jxxghp
b446d4db28 Update the GitHub workflow config to exclude issues labeled RFC 2025-06-28 22:24:51 +08:00
jxxghp
ce0397a140 fix update.sh 2025-06-28 22:03:18 +08:00
jxxghp
f278cccef3 for test 2025-06-28 21:42:28 +08:00
jxxghp
cbf1dbcd2e fix: install dependencies after restoring a plugin 2025-06-28 21:42:03 +08:00
jxxghp
037c6b02fa Merge pull request #4493 from Miralia/v2 2025-06-28 20:07:12 +08:00
Miralia
5f44e4322d Fix and add more 2025-06-28 19:47:33 +08:00
Miralia
6cebe97d6d add FPT Play 2025-06-28 19:12:00 +08:00
jxxghp
82ec146446 Update plugin.py 2025-06-28 16:49:09 +08:00
jxxghp
3928c352c6 fix update 2025-06-28 15:01:25 +08:00
jxxghp
0ba36d21a9 Revert "fix security"
This reverts commit c7800df801.
2025-06-28 14:37:22 +08:00
jxxghp
6152727e9b fix Dockerfile 2025-06-28 14:33:33 +08:00
jxxghp
53c02fa706 resource v2 2025-06-28 14:26:14 +08:00
jxxghp
c7800df801 fix security 2025-06-28 14:12:24 +08:00
jxxghp
562c1de0c9 aList => OpenList 2025-06-28 08:43:09 +08:00
jxxghp
e2c90639f3 Update message.py 2025-06-27 19:54:13 +08:00
jxxghp
92e175a8d1 Merge pull request #4488 from Miralia/v2 2025-06-27 17:29:10 +08:00
jxxghp
cf7bca75f6 fix res.text 2025-06-27 17:23:32 +08:00
Miralia
24a173f075 Update streamingplatform.py 2025-06-27 17:21:27 +08:00
jxxghp
8d695dda55 fix log 2025-06-27 17:16:08 +08:00
jxxghp
93eec6c4b8 fix cache 2025-06-27 15:24:57 +08:00
jxxghp
a2cc1a2926 upgrade packages 2025-06-27 14:34:35 +08:00
jxxghp
11729d0eca fix 2025-06-27 13:34:27 +08:00
jxxghp
978819be38 fix db pool size 2025-06-27 12:41:03 +08:00
jxxghp
23c9862eb3 fix site parser 2025-06-27 12:26:17 +08:00
jxxghp
a9f18ea3ef fix #4475 2025-06-27 10:05:19 +08:00
jxxghp
574257edf8 add SystemConfModel 2025-06-27 09:54:15 +08:00
jxxghp
bb4438ac42 feat: proactive gc when not in big-memory mode 2025-06-27 09:44:47 +08:00
jxxghp
0baf6e5fe7 fix SiteParser close session 2025-06-27 08:38:02 +08:00
jxxghp
d8a53da8ee auto close RequestUtils 2025-06-27 08:30:57 +08:00
jxxghp
9555ac6305 fix RequestUtils 2025-06-27 08:09:38 +08:00
jxxghp
4dd5ea8e2f add del 2025-06-27 07:53:10 +08:00
jxxghp
8068523d88 fix downloader 2025-06-26 20:52:17 +08:00
jxxghp
27dd681d9f fix RequestUtils 2025-06-26 17:36:22 +08:00
jxxghp
152f814fb6 fix base chain 2025-06-26 13:28:11 +08:00
jxxghp
2700e639f1 fix chain 2025-06-26 13:16:10 +08:00
jxxghp
c440ce3045 fix oper 2025-06-26 08:33:43 +08:00
jxxghp
2829a3cb4e fix 2025-06-26 08:18:37 +08:00
jxxghp
a487091be8 Revert "fix resource helper"
This reverts commit e7524774da.
2025-06-25 13:32:28 +08:00
jxxghp
e7524774da fix resource helper 2025-06-25 12:50:00 +08:00
jxxghp
3918c876c5 Merge pull request #4478 from Miralia/v2 2025-06-24 21:07:55 +08:00
Miralia
f07f87735c fix 2025-06-24 19:52:14 +08:00
Miralia
b7566e8fe8 feat(meta): extend the streaming platform list with support for more platforms. 2025-06-24 19:46:01 +08:00
146 changed files with 6502 additions and 3632 deletions


@@ -10,7 +10,7 @@ body:
目的是让协作的开发者间清晰的知道「要做什么」和「具体会怎么做」,以及所有的开发者都能公开透明的参与讨论;
以便评估和讨论产生的影响 (遗漏的考虑、向后兼容性、与现有功能的冲突)
因此提案侧重在对解决问题的 **方案、设计、步骤** 的描述上。
如果仅希望讨论是否添加或改进某功能本身,请使用 -> [Issue: 功能改进](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
- type: textarea
id: background


@@ -27,4 +27,6 @@ jobs:
# 忽略所有的 Pull Request只处理 Issue
days-before-pr-stale: -1
days-before-pr-close: -1
# 排除带有RFC标签的issue
exempt-issue-labels: "RFC"
repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -8,17 +8,17 @@ jobs:
pylint:
runs-on: ubuntu-latest
name: Pylint Code Quality Check
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
cache: 'pip'
- name: Cache pip dependencies
uses: actions/cache@v4
with:
@@ -26,7 +26,7 @@ jobs:
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt', '**/requirements.in') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools wheel
@@ -41,7 +41,7 @@ jobs:
else
echo "⚠️ 未找到依赖文件,仅安装 pylint"
fi
- name: Verify pylint config
run: |
# 检查项目中的pylint配置文件是否存在
@@ -57,35 +57,35 @@ jobs:
run: |
# 运行pylint检查主要的Python文件
echo "🚀 运行 Pylint 错误检查..."
# 检查主要目录 - 只关注错误,如果有错误则退出
echo "📂 检查 app/ 目录..."
pylint app/ --output-format=colorized --reports=yes --score=yes
# 检查根目录的Python文件
echo "📂 检查根目录 Python 文件..."
for file in $(find . -name "*.py" -not -path "./.*" -not -path "./.venv/*" -not -path "./build/*" -not -path "./dist/*" -not -path "./tests/*" -not -path "./docs/*" -not -path "./__pycache__/*" -maxdepth 1); do
echo "检查文件: $file"
pylint "$file" --output-format=colorized || exit 1
done
# 生成详细报告
echo "📊 生成 Pylint 详细报告..."
pylint app/ --output-format=json > pylint-report.json || true
# 显示评分(仅供参考)
echo "📈 Pylint 评分(仅供参考):"
pylint app/ --score=yes --reports=no | tail -2 || true
- name: Upload pylint report
uses: actions/upload-artifact@v4
if: always()
with:
name: pylint-report
path: pylint-report.json
- name: Summary
run: |
echo "🎉 Pylint 检查完成!"
echo "✅ 没有发现语法错误或严重问题"
echo "📊 详细报告已保存为构建工件"
echo "📊 详细报告已保存为构建工件"


@@ -12,7 +12,7 @@ jobs=0
# 只关注错误级别的问题,禁用警告、约定和重构建议
# E = Error (错误) - 会导致构建失败
# W = Warning (警告) - 仅显示,不会失败
# R = Refactor (重构建议) - 仅显示,不会失败
# R = Refactor (重构建议) - 仅显示,不会失败
# C = Convention (约定) - 仅显示,不会失败
# I = Information (信息) - 仅显示,不会失败
@@ -80,4 +80,4 @@ ignore-imports=yes
[TYPECHECK]
# 生成缺失成员提示的类列表
generated-members=requests.packages.urllib3
generated-members=requests.packages.urllib3

app/actions/note.py (new file, 30 lines)

@@ -0,0 +1,30 @@
from app.actions import BaseAction
from app.schemas import ActionContext
class NoteAction(BaseAction):
"""
备注
"""
@classmethod
@property
def name(cls) -> str: # noqa
return "备注"
@classmethod
@property
def description(cls) -> str: # noqa
return "给工作流添加备注"
@classmethod
@property
def data(cls) -> dict: # noqa
return {}
@property
def success(self) -> bool:
return True
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
return context
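For orientation, a hypothetical driver for the new action (hedged: the real call path goes through WorkFlowManager, and ActionContext() being constructible with no arguments is an assumption):

# Hypothetical driver -- real execution goes through WorkFlowManager.
from app.actions.note import NoteAction
from app.schemas import ActionContext

action = NoteAction()
context = ActionContext()  # assumption: ActionContext can be built empty
result = action.execute(workflow_id=1, params={"note": "demo"}, context=context)
assert result is context and action.success  # NoteAction is a pass-through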


@@ -166,3 +166,19 @@ def memory2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
获取当前内存使用率 API_TOKEN认证?token=xxx
"""
return memory()
@router.get("/network", summary="获取当前网络流量", response_model=List[int])
def network(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前网络流量上行和下行流量单位bytes/s
"""
return SystemUtils.network_usage()
@router.get("/network2", summary="获取当前网络流量API_TOKEN", response_model=List[int])
def network2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前网络流量 API_TOKEN认证?token=xxx
"""
return network()
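A quick smoke test for the new endpoints (hedged example: host, port and the /api/v1/system prefix are assumptions about the deployment):

# Hedged example -- base URL and router prefix are assumptions.
import requests

resp = requests.get(
    "http://localhost:3001/api/v1/system/network2",
    params={"token": "YOUR_API_TOKEN"},  # the API_TOKEN-authenticated variant
    timeout=10,
)
resp.raise_for_status()
up_bps, down_bps = resp.json()  # upload/download in bytes per second, per the endpoint summary
print(f"up={up_bps} B/s down={down_bps} B/s")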


@@ -44,6 +44,8 @@ def download(
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
# 手动下载始终使用选择的下载器
torrentinfo.site_downloader = downloader
# 上下文
context = Context(
meta_info=metainfo,
@@ -51,7 +53,7 @@ def download(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path, source="Manual")
save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={


@@ -1,5 +1,6 @@
from typing import List, Any, Optional
import jieba
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
@@ -57,6 +58,8 @@ def transfer_history(title: Optional[str] = None,
status = True
if title:
words = jieba.cut(title, HMM=False)
title = "%".join(words)
total = TransferHistory.count_by_title(db, title=title, status=status)
result = TransferHistory.list_by_title(db, title=title, page=page,
count=count, status=status)
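The jieba tokenization above turns a search title into a SQL LIKE pattern; a standalone sketch (the exact segmentation depends on jieba's dictionary):

# Standalone sketch of the LIKE-pattern construction.
import jieba

title = "流浪地球2"
pattern = "%".join(jieba.cut(title, HMM=False))
print(pattern)  # e.g. 流浪%地球%2 -- each token becomes a wildcard segment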


@@ -147,7 +147,7 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
installed_plugins = [plugin for plugin in local_plugins if plugin.installed]
if state == "installed":
return installed_plugins
# 未安装的本地插件
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
# 在线插件
@@ -178,7 +178,7 @@ def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
if state == "market":
# 返回未安装的插件
return market_plugins
# 返回所有插件
return installed_plugins + market_plugins
@@ -523,7 +523,7 @@ def clone_plugin(plugin_id: str,
version=clone_data.get("version", ""),
icon=clone_data.get("icon", "")
)
if success:
# 注册插件服务
reload_plugin(message)
@@ -547,7 +547,7 @@ def _add_clone_to_plugin_folder(original_plugin_id: str, clone_plugin_id: str):
config_oper = SystemConfigOper()
# 获取插件文件夹配置
folders = config_oper.get(SystemConfigKey.PluginFolders) or {}
# 查找原插件所在的文件夹
target_folder = None
for folder_name, folder_data in folders.items():
@@ -561,7 +561,7 @@ def _add_clone_to_plugin_folder(original_plugin_id: str, clone_plugin_id: str):
if original_plugin_id in folder_data:
target_folder = folder_name
break
# 如果找到了原插件所在的文件夹,则将分身插件也添加到该文件夹中
if target_folder:
folder_data = folders[target_folder]
@@ -575,12 +575,12 @@ def _add_clone_to_plugin_folder(original_plugin_id: str, clone_plugin_id: str):
if clone_plugin_id not in folder_data:
folder_data.append(clone_plugin_id)
logger.info(f"已将分身插件 {clone_plugin_id} 添加到文件夹 '{target_folder}'")
# 保存更新后的文件夹配置
config_oper.set(SystemConfigKey.PluginFolders, folders)
else:
logger.info(f"原插件 {original_plugin_id} 不在任何文件夹中,分身插件 {clone_plugin_id} 将保持独立")
except Exception as e:
logger.error(f"处理插件文件夹时出错:{str(e)}")
# 文件夹处理失败不影响插件分身创建的整体流程
@@ -595,10 +595,10 @@ def _remove_plugin_from_folders(plugin_id: str):
config_oper = SystemConfigOper()
# 获取插件文件夹配置
folders = config_oper.get(SystemConfigKey.PluginFolders) or {}
# 标记是否有修改
modified = False
# 遍历所有文件夹,移除指定插件
for folder_name, folder_data in folders.items():
if isinstance(folder_data, dict) and 'plugins' in folder_data:
@@ -613,13 +613,13 @@ def _remove_plugin_from_folders(plugin_id: str):
folder_data.remove(plugin_id)
logger.info(f"已从文件夹 '{folder_name}' 中移除插件 {plugin_id}")
modified = True
# 如果有修改,保存更新后的文件夹配置
if modified:
config_oper.set(SystemConfigKey.PluginFolders, folders)
else:
logger.debug(f"插件 {plugin_id} 不在任何文件夹中,无需移除")
except Exception as e:
logger.error(f"从文件夹中移除插件时出错:{str(e)}")
# 文件夹处理失败不影响插件卸载的整体流程


@@ -1,5 +1,6 @@
from typing import List, Any, Dict, Optional
from app.helper.sites import SitesHelper
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from starlette.background import BackgroundTasks
@@ -21,7 +22,6 @@ from app.db.models.siteuserdata import SiteUserData
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.sites import SitesHelper
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey, EventType
from app.utils.string import StringUtils
@@ -333,8 +333,8 @@ def read_site_by_domain(
return site
@router.get("/statistic/{site_url}", summary="站点统计信息", response_model=schemas.SiteStatistic)
def read_site_by_domain(
@router.get("/statistic/{site_url}", summary="特定站点统计信息", response_model=schemas.SiteStatistic)
def read_statistic_by_domain(
site_url: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
@@ -349,6 +349,17 @@ def read_site_by_domain(
return schemas.SiteStatistic(domain=domain)
@router.get("/statistic", summary="所有站点统计信息", response_model=List[schemas.SiteStatistic])
def read_statistics(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
获取所有站点统计信息
"""
return SiteStatistic.list(db)
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:


@@ -560,6 +560,15 @@ def popular_subscribes(
return SubscribeHelper().get_shares(name=name, page=page, count=count)
@router.get("/share/statistics", summary="查询订阅分享统计", response_model=List[schemas.SubscribeShareStatistics])
def subscribe_share_statistics(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询订阅分享统计
返回每个分享人分享的媒体数量以及总的复用人次
"""
return SubscribeHelper().get_share_statistics()
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
subscribe_id: int,


@@ -11,17 +11,18 @@ from typing import Optional, Union, Annotated
import aiofiles
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from fastapi import APIRouter, Depends, HTTPException, Header, Request, Response
from app.helper.sites import SitesHelper
from fastapi import APIRouter, Body, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
from app import schemas
from app.chain.search import SearchChain
from app.chain.system import SystemChain
from app.core.config import global_vars, settings
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.core.event import eventmanager
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
@@ -29,7 +30,6 @@ from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.system import SystemHelper
from app.log import logger
@@ -144,6 +144,7 @@ def fetch_image(
def proxy_img(
imgurl: str,
proxy: bool = False,
cache: bool = False,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
@@ -154,7 +155,7 @@ def proxy_img(
hosts = [config.config.get("host") for config in MediaServerHelper().get_configs().values() if
config and config.config and config.config.get("host")]
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS) | set(hosts)
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=False,
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=cache,
if_none_match=if_none_match, allowed_domains=allowed_domains)
@@ -186,9 +187,11 @@ def get_global_setting(token: str):
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
)
# 追加用户唯一ID和订阅分享管理权限
share_admin = SubscribeHelper().is_admin_user()
info.update({
"USER_UNIQUE_ID": SubscribeHelper().get_user_uuid(),
"SUBSCRIBE_SHARE_MANAGE": SubscribeHelper().is_admin_user(),
"SUBSCRIBE_SHARE_MANAGE": share_admin,
"WORKFLOW_SHARE_MANAGE": share_admin
})
return schemas.Response(success=True,
data=info)
@@ -288,8 +291,11 @@ def get_setting(key: str,
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
_: User = Depends(get_current_active_superuser)):
def set_setting(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser),
):
"""
更新系统设置(仅管理员)
"""
@@ -448,10 +454,10 @@ def ruletest(title: str,
@router.get("/nettest", summary="测试网络连通性")
def nettest(
url: str,
proxy: bool,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
url: str,
proxy: bool,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
):
"""
测试网络连通性
@@ -459,17 +465,27 @@ def nettest(
# 记录开始的毫秒数
start_time = datetime.now()
headers = None
if "github" in url or "{GITHUB_PROXY}" in url:
# 当前使用的加速代理
proxy_name = ""
if "github" in url:
# 这是github的连通性测试
headers = settings.GITHUB_HEADERS
if "{GITHUB_PROXY}" in url:
url = url.replace(
"{GITHUB_PROXY}", UrlUtils.standardize_base_url(settings.GITHUB_PROXY or "")
)
headers = settings.GITHUB_HEADERS
if settings.GITHUB_PROXY:
proxy_name = "Github加速代理"
if "{PIP_PROXY}" in url:
url = url.replace(
"{PIP_PROXY}",
UrlUtils.standardize_base_url(
settings.PIP_PROXY or "https://pypi.org/simple/"
),
)
if settings.PIP_PROXY:
proxy_name = "PIP加速代理"
url = url.replace("{TMDBAPIKEY}", settings.TMDB_API_KEY)
url = url.replace(
"{PIP_PROXY}",
UrlUtils.standardize_base_url(settings.PIP_PROXY or "https://pypi.org/simple/"),
)
result = RequestUtils(
proxies=settings.PROXY if proxy else None,
headers=headers,
@@ -481,21 +497,36 @@ def nettest(
time = round((end_time - start_time).total_seconds() * 1000)
# 计算相关秒数
if result is None:
return schemas.Response(success=False, message="无法连接", data={"time": time})
return schemas.Response(
success=False, message=f"{proxy_name}无法连接", data={"time": time}
)
elif result.status_code == 200:
if include and not re.search(r"%s" % include, result.text, re.IGNORECASE):
# 通常是被加速代理跳转到其它页面了
logger.error(f"{url} 的响应内容不匹配包含规则 {include}")
if proxy_name:
message = f"{proxy_name}已失效,请检查配置"
else:
message = f"无效响应,不匹配 {include}"
return schemas.Response(
success=False,
message=f"无效响应,不匹配 {include}",
message=message,
data={"time": time},
)
return schemas.Response(success=True, data={"time": time})
else:
return schemas.Response(
success=False, message=f"错误码:{result.status_code}", data={"time": time}
)
if proxy_name:
# 加速代理失败
message = f"{proxy_name}已失效,错误码:{result.status_code}"
else:
message = f"错误码:{result.status_code}"
if "github" in url:
# 非加速代理访问github
if result.status_code == 401:
message = "Github Token已失效请检查配置"
elif result.status_code in {403, 429}:
message = "触发限流请配置Github Token"
return schemas.Response(success=False, message=message, data={"time": time})
@router.get("/modulelist", summary="查询已加载的模块ID列表", response_model=schemas.Response)

View File

@@ -8,11 +8,13 @@ from app import schemas
from app.chain.media import MediaChain
from app.chain.storage import StorageChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser
from app.helper.directory import DirectoryHelper
from app.schemas import MediaType, FileItem, ManualTransferItem
router = APIRouter()
@@ -35,11 +37,19 @@ def query_name(path: str, filetype: str,
if not new_path:
return schemas.Response(success=False, message="未识别到新名称")
if filetype == "dir":
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
media_path = DirectoryHelper.get_media_root_path(
rename_format=settings.RENAME_FORMAT(mediainfo.type),
rename_path=Path(new_path),
)
if media_path:
new_name = media_path.name
else:
new_name = parents[0].name
# fallback
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
else:
new_name = parents[0].name
else:
new_name = Path(new_path).name
return schemas.Response(success=True, data={
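The fallback branch above reproduces the old Path.parents indexing; for reference:

# How the fallback indexes Path.parents:
from pathlib import Path

p = Path("/library/Movies/Inception (2010)/Inception (2010).mkv")
parents = p.parents
# parents[0] -> /library/Movies/Inception (2010)   (immediate parent)
# parents[1] -> /library/Movies
new_name = parents[1].name if len(parents) > 2 else parents[0].name
print(new_name)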


@@ -1,8 +1,8 @@
import base64
import re
from typing import Any, List, Union
from typing import Annotated, Any, List, Union
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
from fastapi import APIRouter, Body, Depends, HTTPException, UploadFile, File
from sqlalchemy.orm import Session
from app import schemas
@@ -164,8 +164,11 @@ def get_config(key: str,
@router.post("/config/{key}", summary="更新用户配置", response_model=schemas.Response)
def set_config(key: str, value: Union[list, dict, bool, int, str] = None,
current_user: User = Depends(get_current_active_user)):
def set_config(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
current_user: User = Depends(get_current_active_user),
):
"""
更新用户配置
"""


@@ -1,3 +1,4 @@
import json
from datetime import datetime
from typing import List, Any, Optional
@@ -5,14 +6,15 @@ from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.workflow import WorkflowChain
from app.core.config import global_vars
from app.core.plugin import PluginManager
from app.core.workflow import WorkFlowManager
from app.db import get_db
from app.db.models.workflow import Workflow
from app.db.models import Workflow
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.chain.workflow import WorkflowChain
from app.helper.workflow import WorkflowHelper
from app.scheduler import Scheduler
router = APIRouter()
@@ -24,7 +26,8 @@ def list_workflows(db: Session = Depends(get_db),
"""
获取工作流列表
"""
return Workflow.list(db)
from app.db.workflow_oper import WorkflowOper
return WorkflowOper(db).list()
@router.post("/", summary="创建工作流", response_model=schemas.Response)
@@ -34,13 +37,15 @@ def create_workflow(workflow: schemas.Workflow,
"""
创建工作流
"""
if Workflow.get_by_name(db, workflow.name):
from app.db.workflow_oper import WorkflowOper
if workflow.name and WorkflowOper(db).get_by_name(workflow.name):
return schemas.Response(success=False, message="已存在相同名称的工作流")
if not workflow.add_time:
workflow.add_time = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
if not workflow.state:
workflow.state = "P"
Workflow(**workflow.dict()).create(db)
from app.db.models.workflow import Workflow as WorkflowModel
WorkflowModel(**workflow.dict()).create(db)
return schemas.Response(success=True, message="创建工作流成功")
@@ -60,47 +65,97 @@ def list_actions(_: schemas.TokenPayload = Depends(get_current_active_user)) ->
return WorkFlowManager().list_actions()
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
def get_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.post("/share", summary="分享工作流", response_model=schemas.Response)
def workflow_share(
workflow: schemas.WorkflowShare,
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
获取工作流详情
分享工作流
"""
return Workflow.get(db, workflow_id)
if not workflow.id or not workflow.share_title or not workflow.share_user:
return schemas.Response(success=False, message="请填写工作流ID、分享标题和分享人")
state, errmsg = WorkflowHelper().workflow_share(workflow_id=workflow.id,
share_title=workflow.share_title or "",
share_comment=workflow.share_comment or "",
share_user=workflow.share_user or "")
return schemas.Response(success=state, message=errmsg)
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.delete("/share/{share_id}", summary="删除分享", response_model=schemas.Response)
def workflow_share_delete(
share_id: int,
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
更新工作流
删除分享
"""
wf = Workflow.get(db, workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
wf.update(db, workflow.dict())
return schemas.Response(success=True, message="更新成功")
state, errmsg = WorkflowHelper().share_delete(share_id=share_id)
return schemas.Response(success=state, message=errmsg)
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
@router.post("/fork", summary="复用工作流", response_model=schemas.Response)
def workflow_fork(
workflow: schemas.WorkflowShare,
db: Session = Depends(get_db),
_: schemas.User = Depends(get_current_active_user)) -> Any:
"""
删除工作流
复用工作流
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
Scheduler().remove_workflow_job(workflow)
# 删除工作流
Workflow.delete(db, workflow_id)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True, message="删除成功")
if not workflow.name:
return schemas.Response(success=False, message="工作流名称不能为空")
# 解析JSON数据添加错误处理
try:
actions = json.loads(workflow.actions or "[]")
except json.JSONDecodeError:
return schemas.Response(success=False, message="actions字段JSON格式错误")
try:
flows = json.loads(workflow.flows or "[]")
except json.JSONDecodeError:
return schemas.Response(success=False, message="flows字段JSON格式错误")
try:
context = json.loads(workflow.context or "{}")
except json.JSONDecodeError:
return schemas.Response(success=False, message="context字段JSON格式错误")
# 创建工作流
workflow_dict = {
"name": workflow.name,
"description": workflow.description,
"timer": workflow.timer,
"actions": actions,
"flows": flows,
"context": context,
"state": "P" # 默认暂停状态
}
# 检查名称是否重复
if Workflow.get_by_name(db, workflow_dict["name"]):
return schemas.Response(success=False, message="已存在相同名称的工作流")
# 创建新工作流
workflow = Workflow(**workflow_dict)
workflow.create(db)
# 更新复用次数
if workflow.id:
WorkflowHelper().workflow_fork(share_id=workflow.id)
return schemas.Response(success=True, message="复用成功")
@router.get("/shares", summary="查询分享的工作流", response_model=List[schemas.WorkflowShare])
def workflow_shares(
name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
查询分享的工作流
"""
return WorkflowHelper().get_shares(name=name, page=page, count=count)
@router.post("/{workflow_id}/run", summary="执行工作流", response_model=schemas.Response)
@@ -123,7 +178,8 @@ def start_workflow(workflow_id: int,
"""
启用工作流
"""
workflow = Workflow.get(db, workflow_id)
from app.db.workflow_oper import WorkflowOper
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 添加定时任务
@@ -140,7 +196,8 @@ def pause_workflow(workflow_id: int,
"""
停用工作流
"""
workflow = Workflow.get(db, workflow_id)
from app.db.workflow_oper import WorkflowOper
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
@@ -159,7 +216,8 @@ def reset_workflow(workflow_id: int,
"""
重置工作流
"""
workflow = Workflow.get(db, workflow_id)
from app.db.workflow_oper import WorkflowOper
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 停止工作流
@@ -169,3 +227,52 @@ def reset_workflow(workflow_id: int,
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True)
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
def get_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
获取工作流详情
"""
from app.db.workflow_oper import WorkflowOper
return WorkflowOper(db).get(workflow_id)
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
更新工作流
"""
from app.db.workflow_oper import WorkflowOper
if not workflow.id:
return schemas.Response(success=False, message="工作流ID不能为空")
wf = WorkflowOper(db).get(workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
wf.update(db, workflow.dict())
return schemas.Response(success=True, message="更新成功")
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
删除工作流
"""
from app.db.workflow_oper import WorkflowOper
workflow = WorkflowOper(db).get(workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
Scheduler().remove_workflow_job(workflow)
# 删除工作流
from app.db.models.workflow import Workflow as WorkflowModel
WorkflowModel.delete(db, workflow_id)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True, message="删除成功")
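A hedged client-side sketch of the new share/fork flow (base URL, port and bearer-token auth are assumptions about the deployment):

# Hedged sketch of the share -> browse -> fork flow; auth details are assumptions.
import requests

BASE = "http://localhost:3001/api/v1/workflow"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

# Share an existing workflow
requests.post(f"{BASE}/share", headers=HEADERS, json={
    "id": 1, "share_title": "Nightly cleanup", "share_user": "demo",
}, timeout=10)

# Browse shares, then fork one into a local workflow (created in paused state "P")
shares = requests.get(f"{BASE}/shares", headers=HEADERS, timeout=10).json()
if shares:
    requests.post(f"{BASE}/fork", headers=HEADERS, json=shares[0], timeout=10)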


@@ -94,9 +94,8 @@ class ChainBase(metaclass=ABCMeta):
return ret is None
result = None
plugin_modules = self.pluginmanager.get_plugin_modules()
# 插件模块
for plugin, module_dict in plugin_modules.items():
for plugin, module_dict in self.pluginmanager.get_plugin_modules().items():
plugin_id, plugin_name = plugin
if method in module_dict:
func = module_dict[method]
@@ -138,10 +137,7 @@ class ChainBase(metaclass=ABCMeta):
# 系统模块
logger.debug(f"请求系统模块执行:{method} ...")
modules = self.modulemanager.get_running_modules(method)
# 按优先级排序
modules = sorted(modules, key=lambda x: x.get_priority())
for module in modules:
for module in sorted(self.modulemanager.get_running_modules(method), key=lambda x: x.get_priority()):
module_id = module.__class__.__name__
try:
module_name = module.get_name()


@@ -188,6 +188,9 @@ class DownloadChain(ChainBase):
f"Resource download canceled by event: {event_data.source},"
f"Reason: {event_data.reason}")
return None
# 如果事件修改了下载路径,使用新路径
if event_data.options and event_data.options.get("save_path"):
save_path = event_data.options.get("save_path")
# 补充完整的media数据
if not _media.genre_ids:


@@ -19,7 +19,6 @@ from app.utils.string import StringUtils
recognize_lock = Lock()
scraping_lock = Lock()
scraping_files = []
class MediaChain(ChainBase):
@@ -35,25 +34,25 @@ class MediaChain(ChainBase):
switchs = SystemConfigOper().get(SystemConfigKey.ScrapingSwitchs) or {}
# 默认配置
default_switchs = {
'movie_nfo': True, # 电影NFO
'movie_poster': True, # 电影海报
'movie_backdrop': True, # 电影背景图
'movie_logo': True, # 电影Logo
'movie_disc': True, # 电影光盘图
'movie_banner': True, # 电影横幅图
'movie_thumb': True, # 电影缩略图
'tv_nfo': True, # 电视剧NFO
'tv_poster': True, # 电视剧海报
'tv_backdrop': True, # 电视剧背景图
'tv_banner': True, # 电视剧横幅图
'tv_logo': True, # 电视剧Logo
'tv_thumb': True, # 电视剧缩略图
'season_nfo': True, # 季NFO
'season_poster': True, # 季海报
'season_banner': True, # 季横幅图
'season_thumb': True, # 季缩略图
'episode_nfo': True, # 集NFO
'episode_thumb': True # 集缩略图
'movie_nfo': True, # 电影NFO
'movie_poster': True, # 电影海报
'movie_backdrop': True, # 电影背景图
'movie_logo': True, # 电影Logo
'movie_disc': True, # 电影光盘图
'movie_banner': True, # 电影横幅图
'movie_thumb': True, # 电影缩略图
'tv_nfo': True, # 电视剧NFO
'tv_poster': True, # 电视剧海报
'tv_backdrop': True, # 电视剧背景图
'tv_banner': True, # 电视剧横幅图
'tv_logo': True, # 电视剧Logo
'tv_thumb': True, # 电视剧缩略图
'season_nfo': True, # 季NFO
'season_poster': True, # 季海报
'season_banner': True, # 季横幅图
'season_thumb': True, # 季缩略图
'episode_nfo': True, # 集NFO
'episode_thumb': True # 集缩略图
}
# 合并用户配置和默认配置
for key, default_value in default_switchs.items():
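The loop above presumably backfills any switch the user has not set with its default; a condensed sketch (setdefault semantics are an assumption, since the loop body falls outside this hunk):

# Condensed sketch of the merge (assumed setdefault semantics).
default_switchs = {"movie_nfo": True, "tv_nfo": True}  # truncated for brevity
switchs = {"tv_nfo": False}  # user configuration
for key, default_value in default_switchs.items():
    switchs.setdefault(key, default_value)
print(switchs)  # {'tv_nfo': False, 'movie_nfo': True}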
@@ -344,23 +343,49 @@ class MediaChain(ChainBase):
return
event_data = event.event_data or {}
fileitem: FileItem = event_data.get("fileitem")
file_list: List[str] = event_data.get("file_list", [])
meta: MetaBase = event_data.get("meta")
mediainfo: MediaInfo = event_data.get("mediainfo")
overwrite = event_data.get("overwrite", False)
if not fileitem:
return
# 刮削锁
with scraping_lock:
if fileitem.path in scraping_files:
# 检查文件项是否存在
storagechain = StorageChain()
if not storagechain.get_item(fileitem):
logger.warn(f"文件项不存在:{fileitem.path}")
return
scraping_files.append(fileitem.path)
try:
# 执行刮削
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo, overwrite=overwrite)
finally:
# 释放锁
with scraping_lock:
scraping_files.remove(fileitem.path)
# 检查是否为目录
if fileitem.type == "file":
# 单个文件刮削
self.scrape_metadata(fileitem=fileitem,
mediainfo=mediainfo,
init_folder=False,
parent=storagechain.get_parent_item(fileitem),
overwrite=overwrite)
else:
# 检查目的目录下是否已经有nfo刮削文件
has_nfo_file = storagechain.any_files(fileitem, extensions=['.nfo'])
if has_nfo_file and file_list:
logger.info(f"目录 {fileitem.path} 已有NFO文件开始增量刮削...")
for file_path in file_list:
file_item = storagechain.get_file_item(storage=fileitem.storage,
path=Path(file_path))
if file_item:
# 对于电视剧文件,应该保存到与视频文件相同的目录
# 而不是电视剧根目录
self.scrape_metadata(fileitem=file_item,
mediainfo=mediainfo,
init_folder=False,
parent=None, # 让函数内部自动获取正确的父目录
overwrite=overwrite)
else:
# 执行全量刮削
logger.info(f"开始全量刮削目录 {fileitem.path} ...")
self.scrape_metadata(fileitem=fileitem, meta=meta, init_folder=True,
mediainfo=mediainfo, overwrite=overwrite)
def scrape_metadata(self, fileitem: schemas.FileItem,
meta: MetaBase = None, mediainfo: MediaInfo = None,
@@ -436,6 +461,9 @@ class MediaChain(ChainBase):
logger.error(f"{_url} 图片下载失败:{str(err)}")
return None
if not fileitem:
return
# 当前文件路径
filepath = Path(fileitem.path)
if fileitem.type == "file" \
@@ -448,7 +476,7 @@ class MediaChain(ChainBase):
if not mediainfo:
logger.warn(f"{filepath} 无法识别文件媒体信息!")
return
# 获取刮削开关配置
scraping_switchs = self._get_scraping_switchs()
logger.info(f"开始刮削:{filepath} ...")
@@ -464,6 +492,8 @@ class MediaChain(ChainBase):
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到上级目录
if not parent:
parent = storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
@@ -494,8 +524,9 @@ class MediaChain(ChainBase):
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem,
mediainfo=mediainfo,
init_folder=False,
parent=fileitem,
overwrite=overwrite)
# 生成目录内图片文件
if init_folder:
@@ -520,7 +551,7 @@ class MediaChain(ChainBase):
should_scrape = scraping_switchs.get('movie_thumb', True)
else:
should_scrape = True # 未知类型默认刮削
if should_scrape:
image_path = filepath.with_name(image_name)
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
@@ -587,11 +618,11 @@ class MediaChain(ChainBase):
else:
logger.info("集缩略图刮削已关闭,跳过")
else:
# 当前为目录,处理目录内的文件
# 当前为电视剧目录,处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
mediainfo=mediainfo,
parent=fileitem if file.type == "file" else None,
init_folder=True if file.type == "dir" else False,
overwrite=overwrite)
@@ -653,13 +684,14 @@ class MediaChain(ChainBase):
should_scrape = scraping_switchs.get('season_thumb', True)
else:
should_scrape = True # 未知类型默认刮削
if should_scrape:
image_path = filepath.with_name(image_name)
# 只下载当前刮削季的图片
image_season = "00" if "specials" in image_name else image_name[6:8]
if image_season != str(season_meta.begin_season).rjust(2, '0'):
logger.info(f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
logger.info(
f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
continue
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
@@ -714,7 +746,7 @@ class MediaChain(ChainBase):
should_scrape = scraping_switchs.get('tv_thumb', True)
else:
should_scrape = True # 未知类型默认刮削
if should_scrape:
image_path = filepath / image_name
if overwrite or not storagechain.get_file_item(storage=fileitem.storage,


@@ -136,371 +136,400 @@ class MessageChain(ChainBase):
logger.info(f'收到用户消息内容,用户:{userid},内容:{text}')
# 加载缓存
user_cache: Dict[str, dict] = self.load_cache(self._cache_file) or {}
# 保存消息
if not text.startswith('CALLBACK:'):
self.messagehelper.put(
CommingMessage(
userid=userid,
username=username,
channel=channel,
source=source,
text=text
), role="user")
self.messageoper.add(
channel=channel,
source=source,
userid=username or userid,
text=text,
action=0
)
# 处理消息
if text.startswith('CALLBACK:'):
# 处理按钮回调(适配支持回调的渠道)
if ChannelCapabilityManager.supports_callbacks(channel):
self._handle_callback(text=text, channel=channel, source=source,
userid=userid, username=username,
original_message_id=original_message_id, original_chat_id=original_chat_id)
else:
logger.warning(f"渠道 {channel.value} 不支持回调,但收到了回调消息:{text}")
elif text.startswith('/'):
# 执行命令
self.eventmanager.send_event(
EventType.CommandExcute,
{
"cmd": text,
"user": userid,
"channel": channel,
"source": source
}
)
elif text.isdigit():
# 用户选择了具体的条目
# 缓存
cache_data: dict = user_cache.get(userid).copy()
# 选择项目
if not cache_data \
or not cache_data.get('items') \
or len(cache_data.get('items')) < int(text):
# 发送消息
self.post_message(Notification(channel=channel, source=source, title="输入有误!", userid=userid))
return
# 选择的序号
_choice = int(text) + _current_page * self._page_size - 1
# 缓存类型
cache_type: str = cache_data.get('type')
# 缓存列表
cache_list: list = cache_data.get('items').copy()
# 选择
if cache_type in ["Search", "ReSearch"]:
# 当前媒体信息
mediainfo: MediaInfo = cache_list[_choice]
_current_media = mediainfo
# 查询缺失的媒体信息
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag and cache_type == "Search":
# 媒体库中已存在
self.post_message(
Notification(channel=channel,
source=source,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 名称 或 下载 名称】",
userid=userid))
return
elif exist_flag:
# 没有缺失,但要全量重新搜索和下载
no_exists = self.__get_noexits_info(_current_meta, _current_media)
# 发送缺失的媒体信息
messages = []
if no_exists and cache_type == "Search":
# 发送缺失消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季缺失 {StringUtils.str_series(no_exist.episodes) if no_exist.episodes else no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
elif no_exists:
# 发送总集数的消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季总 {no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
if messages:
self.post_message(Notification(channel=channel,
source=source,
title=f"{mediainfo.title_year}\n" + "\n".join(messages),
userid=userid))
# 搜索种子,过滤掉不需要的剧集,以便选择
logger.info(f"开始搜索 {mediainfo.title_year} ...")
self.post_message(
Notification(channel=channel,
source=source,
title=f"开始搜索 {mediainfo.type.value} {mediainfo.title_year} ...",
userid=userid))
# 开始搜索
contexts = SearchChain().process(mediainfo=mediainfo,
no_exists=no_exists)
if not contexts:
# 没有数据
self.post_message(Notification(
try:
# 保存消息
if not text.startswith('CALLBACK:'):
self.messagehelper.put(
CommingMessage(
userid=userid,
username=username,
channel=channel,
source=source,
title=f"{mediainfo.title}"
f"{_current_meta.sea} 未搜索到需要的资源!",
userid=userid))
return
# 搜索结果排序
contexts = TorrentHelper().sort_torrents(contexts)
# 判断是否设置自动下载
auto_download_user = settings.AUTO_DOWNLOAD_USER
# 匹配到自动下载用户
if auto_download_user \
and (auto_download_user == "all"
or any(userid == user for user in auto_download_user.split(","))):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载 ...")
# 自动选择下载
self.__auto_download(channel=channel,
source=source,
cache_list=contexts,
userid=userid,
username=username,
no_exists=no_exists)
text=text
), role="user")
self.messageoper.add(
channel=channel,
source=source,
userid=username or userid,
text=text,
action=0
)
# 处理消息
if text.startswith('CALLBACK:'):
# 处理按钮回调(适配支持回调的渠道)
if ChannelCapabilityManager.supports_callbacks(channel):
self._handle_callback(text=text, channel=channel, source=source,
userid=userid, username=username,
original_message_id=original_message_id, original_chat_id=original_chat_id)
else:
# 更新缓存
user_cache[userid] = {
"type": "Torrent",
"items": contexts
}
_current_page = 0
# 保存缓存
self.save_cache(user_cache, self._cache_file)
# 删除原消息
if (original_message_id and original_chat_id and
ChannelCapabilityManager.supports_deletion(channel)):
self.delete_message(
channel=channel,
source=source,
message_id=original_message_id,
chat_id=original_chat_id
)
# 发送种子数据
logger.info(f"搜索到 {len(contexts)} 条数据,开始发送选择消息 ...")
self.__post_torrents_message(channel=channel,
source=source,
title=mediainfo.title,
items=contexts[:self._page_size],
userid=userid,
total=len(contexts))
elif cache_type in ["Subscribe", "ReSubscribe"]:
# 订阅或洗版媒体
mediainfo: MediaInfo = cache_list[_choice]
# 洗版标识
best_version = False
# 查询缺失的媒体信息
if cache_type == "Subscribe":
exist_flag, _ = DownloadChain().get_no_exists_info(meta=_current_meta,
mediainfo=mediainfo)
if exist_flag:
self.post_message(Notification(
channel=channel,
source=source,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX】",
userid=userid))
return
else:
best_version = True
# 转换用户名
mp_name = UserOper().get_name(**{f"{channel.name.lower()}_userid": userid}) if channel else None
# 添加订阅状态为N
SubscribeChain().add(title=mediainfo.title,
year=mediainfo.year,
mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
season=_current_meta.begin_season,
channel=channel,
source=source,
userid=userid,
username=mp_name or username,
best_version=best_version)
elif cache_type == "Torrent":
if int(text) == 0:
# 自动选择下载,强制下载模式
self.__auto_download(channel=channel,
source=source,
cache_list=cache_list,
userid=userid,
username=username)
else:
# 下载种子
context: Context = cache_list[_choice]
# 下载
DownloadChain().download_single(context, channel=channel, source=source,
userid=userid, username=username)
elif text.lower() == "p":
# 上一页
cache_data: dict = user_cache.get(userid).copy()
if not cache_data:
# 没有缓存
self.post_message(Notification(
channel=channel, source=source, title="输入有误!", userid=userid))
return
if _current_page == 0:
# 第一页
self.post_message(Notification(
channel=channel, source=source, title="已经是第一页了!", userid=userid))
return
# 减一页
_current_page -= 1
cache_type: str = cache_data.get('type')
# 产生副本,避免修改原值
cache_list: list = cache_data.get('items').copy()
if _current_page == 0:
start = 0
end = self._page_size
else:
start = _current_page * self._page_size
end = start + self._page_size
if cache_type == "Torrent":
# 发送种子数据
self.__post_torrents_message(channel=channel,
source=source,
title=_current_media.title,
items=cache_list[start:end],
userid=userid,
total=len(cache_list),
original_message_id=original_message_id,
original_chat_id=original_chat_id)
else:
# 发送媒体数据
self.__post_medias_message(channel=channel,
source=source,
title=_current_meta.name,
items=cache_list[start:end],
userid=userid,
total=len(cache_list),
original_message_id=original_message_id,
original_chat_id=original_chat_id)
elif text.lower() == "n":
# 下一页
cache_data: dict = user_cache.get(userid).copy()
if not cache_data:
# 没有缓存
self.post_message(Notification(
channel=channel, source=source, title="输入有误!", userid=userid))
return
cache_type: str = cache_data.get('type')
# 产生副本,避免修改原值
cache_list: list = cache_data.get('items').copy()
total = len(cache_list)
# 加一页
cache_list = cache_list[
(_current_page + 1) * self._page_size:(_current_page + 2) * self._page_size]
if not cache_list:
# 没有数据
self.post_message(Notification(
channel=channel, source=source, title="已经是最后一页了!", userid=userid))
return
else:
# 加一页
_current_page += 1
if cache_type == "Torrent":
# 发送种子数据
self.__post_torrents_message(channel=channel,
source=source,
title=_current_media.title,
items=cache_list,
userid=userid,
total=total,
original_message_id=original_message_id,
original_chat_id=original_chat_id)
else:
# 发送媒体数据
self.__post_medias_message(channel=channel,
source=source,
title=_current_meta.name,
items=cache_list,
userid=userid,
total=total,
original_message_id=original_message_id,
original_chat_id=original_chat_id)
else:
# 搜索或订阅
if text.startswith("订阅"):
# 订阅
content = re.sub(r"订阅[:\s]*", "", text)
action = "Subscribe"
elif text.startswith("洗版"):
# 洗版
content = re.sub(r"洗版[:\s]*", "", text)
action = "ReSubscribe"
elif text.startswith("搜索") or text.startswith("下载"):
# 重新搜索/下载
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
action = "ReSearch"
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
or StringUtils.count_words(text) > 10 \
or text.find("继续") != -1:
# 聊天
content = text
action = "Chat"
elif StringUtils.is_link(text):
# 链接
content = text
action = "Link"
else:
# 搜索
content = text
action = "Search"
if action in ["Search", "ReSearch", "Subscribe", "ReSubscribe"]:
# 搜索
meta, medias = MediaChain().search(content)
# 识别
if not meta.name:
self.post_message(Notification(
channel=channel, source=source, title="无法识别输入内容!", userid=userid))
return
# 开始搜索
if not medias:
self.post_message(Notification(
channel=channel, source=source, title=f"{meta.name} 没有找到对应的媒体信息!", userid=userid))
return
logger.info(f"搜索到 {len(medias)} 条相关媒体信息")
# 记录当前状态
_current_meta = meta
# 保存缓存
user_cache[userid] = {
'type': action,
'items': medias
}
self.save_cache(user_cache, self._cache_file)
_current_page = 0
_current_media = None
# 发送媒体列表
self.__post_medias_message(channel=channel,
source=source,
title=meta.name,
items=medias[:self._page_size],
userid=userid, total=len(medias))
else:
# 广播事件
logger.warning(f"渠道 {channel.value} 不支持回调,但收到了回调消息:{text}")
elif text.startswith('/'):
# 执行命令
self.eventmanager.send_event(
EventType.UserMessage,
EventType.CommandExcute,
{
"text": content,
"userid": userid,
"cmd": text,
"user": userid,
"channel": channel,
"source": source
}
)
elif text.isdigit():
# 用户选择了具体的条目
# 缓存
cache_data: dict = user_cache.get(userid).copy()
# 选择项目
if not cache_data \
or not cache_data.get('items') \
or len(cache_data.get('items')) < int(text):
# 发送消息
self.post_message(Notification(channel=channel, source=source, title="输入有误!", userid=userid))
return
try:
# 选择的序号
_choice = int(text) + _current_page * self._page_size - 1
# 缓存类型
cache_type: str = cache_data.get('type')
# 缓存列表
cache_list: list = cache_data.get('items').copy()
# 选择
try:
if cache_type in ["Search", "ReSearch"]:
# 当前媒体信息
mediainfo: MediaInfo = cache_list[_choice]
_current_media = mediainfo
# 查询缺失的媒体信息
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag and cache_type == "Search":
# 媒体库中已存在
self.post_message(
Notification(channel=channel,
source=source,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 名称 或 下载 名称】",
userid=userid))
return
elif exist_flag:
# 没有缺失,但要全量重新搜索和下载
no_exists = self.__get_noexits_info(_current_meta, _current_media)
# 发送缺失的媒体信息
messages = []
if no_exists and cache_type == "Search":
# 发送缺失消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季缺失 {StringUtils.str_series(no_exist.episodes) if no_exist.episodes else no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
elif no_exists:
# 发送总集数的消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季总 {no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
if messages:
self.post_message(Notification(channel=channel,
source=source,
title=f"{mediainfo.title_year}\n" + "\n".join(messages),
userid=userid))
# 搜索种子,过滤掉不需要的剧集,以便选择
logger.info(f"开始搜索 {mediainfo.title_year} ...")
self.post_message(
Notification(channel=channel,
source=source,
title=f"开始搜索 {mediainfo.type.value} {mediainfo.title_year} ...",
userid=userid))
# 开始搜索
contexts = SearchChain().process(mediainfo=mediainfo,
no_exists=no_exists)
if not contexts:
# 没有数据
self.post_message(Notification(
channel=channel,
source=source,
title=f"{mediainfo.title}"
f"{_current_meta.sea} 未搜索到需要的资源!",
userid=userid))
return
# 搜索结果排序
contexts = TorrentHelper().sort_torrents(contexts)
try:
# 判断是否设置自动下载
auto_download_user = settings.AUTO_DOWNLOAD_USER
# 匹配到自动下载用户
if auto_download_user \
and (auto_download_user == "all"
or any(userid == user for user in auto_download_user.split(","))):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载 ...")
# 自动选择下载
self.__auto_download(channel=channel,
source=source,
cache_list=contexts,
userid=userid,
username=username,
no_exists=no_exists)
else:
# 更新缓存
user_cache[userid] = {
"type": "Torrent",
"items": contexts
}
_current_page = 0
# 保存缓存
self.save_cache(user_cache, self._cache_file)
# 删除原消息
if (original_message_id and original_chat_id and
ChannelCapabilityManager.supports_deletion(channel)):
self.delete_message(
channel=channel,
source=source,
message_id=original_message_id,
chat_id=original_chat_id
)
# 发送种子数据
logger.info(f"搜索到 {len(contexts)} 条数据,开始发送选择消息 ...")
self.__post_torrents_message(channel=channel,
source=source,
title=mediainfo.title,
items=contexts[:self._page_size],
userid=userid,
total=len(contexts))
finally:
contexts.clear()
del contexts
elif cache_type in ["Subscribe", "ReSubscribe"]:
# 订阅或洗版媒体
mediainfo: MediaInfo = cache_list[_choice]
# 洗版标识
best_version = False
# 查询缺失的媒体信息
if cache_type == "Subscribe":
exist_flag, _ = DownloadChain().get_no_exists_info(meta=_current_meta,
mediainfo=mediainfo)
if exist_flag:
self.post_message(Notification(
channel=channel,
source=source,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX】",
userid=userid))
return
else:
best_version = True
# 转换用户名
mp_name = UserOper().get_name(**{f"{channel.name.lower()}_userid": userid}) if channel else None
# 添加订阅状态为N
SubscribeChain().add(title=mediainfo.title,
year=mediainfo.year,
mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
season=_current_meta.begin_season,
channel=channel,
source=source,
userid=userid,
username=mp_name or username,
best_version=best_version)
elif cache_type == "Torrent":
if int(text) == 0:
# 自动选择下载,强制下载模式
self.__auto_download(channel=channel,
source=source,
cache_list=cache_list,
userid=userid,
username=username)
else:
# 下载种子
context: Context = cache_list[_choice]
# 下载
DownloadChain().download_single(context, channel=channel, source=source,
userid=userid, username=username)
finally:
cache_list.clear()
del cache_list
finally:
cache_data.clear()
del cache_data
elif text.lower() == "p":
# 上一页
cache_data: dict = user_cache.get(userid).copy()
if not cache_data:
# 没有缓存
self.post_message(Notification(
channel=channel, source=source, title="输入有误!", userid=userid))
return
try:
if _current_page == 0:
# 第一页
self.post_message(Notification(
channel=channel, source=source, title="已经是第一页了!", userid=userid))
return
# 减一页
_current_page -= 1
cache_type: str = cache_data.get('type')
# 产生副本,避免修改原值
cache_list: list = cache_data.get('items').copy()
try:
if _current_page == 0:
start = 0
end = self._page_size
else:
start = _current_page * self._page_size
end = start + self._page_size
if cache_type == "Torrent":
# 发送种子数据
self.__post_torrents_message(channel=channel,
source=source,
title=_current_media.title,
items=cache_list[start:end],
userid=userid,
total=len(cache_list),
original_message_id=original_message_id,
original_chat_id=original_chat_id)
else:
# 发送媒体数据
self.__post_medias_message(channel=channel,
source=source,
title=_current_meta.name,
items=cache_list[start:end],
userid=userid,
total=len(cache_list),
original_message_id=original_message_id,
original_chat_id=original_chat_id)
finally:
cache_list.clear()
del cache_list
finally:
cache_data.clear()
del cache_data
elif text.lower() == "n":
# 下一页
cache_data: dict = user_cache.get(userid).copy()
if not cache_data:
# 没有缓存
self.post_message(Notification(
channel=channel, source=source, title="输入有误!", userid=userid))
return
try:
cache_type: str = cache_data.get('type')
# 产生副本,避免修改原值
cache_list: list = cache_data.get('items').copy()
total = len(cache_list)
# 加一页
cache_list = cache_list[(_current_page + 1) * self._page_size:(_current_page + 2) * self._page_size]
if not cache_list:
# 没有数据
self.post_message(Notification(
channel=channel, source=source, title="已经是最后一页了!", userid=userid))
return
else:
try:
# 加一页
_current_page += 1
if cache_type == "Torrent":
# 发送种子数据
self.__post_torrents_message(channel=channel,
source=source,
title=_current_media.title,
items=cache_list,
userid=userid,
total=total,
original_message_id=original_message_id,
original_chat_id=original_chat_id)
else:
# 发送媒体数据
self.__post_medias_message(channel=channel,
source=source,
title=_current_meta.name,
items=cache_list,
userid=userid,
total=total,
original_message_id=original_message_id,
original_chat_id=original_chat_id)
finally:
cache_list.clear()
del cache_list
finally:
cache_data.clear()
del cache_data
else:
# 搜索或订阅
if text.startswith("订阅"):
# 订阅
content = re.sub(r"订阅[:\s]*", "", text)
action = "Subscribe"
elif text.startswith("洗版"):
# 洗版
content = re.sub(r"洗版[:\s]*", "", text)
action = "ReSubscribe"
elif text.startswith("搜索") or text.startswith("下载"):
# 重新搜索/下载
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
action = "ReSearch"
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
or StringUtils.count_words(text) > 10 \
or text.find("继续") != -1:
# 聊天
content = text
action = "Chat"
elif StringUtils.is_link(text):
# 链接
content = text
action = "Link"
else:
# 搜索
content = text
action = "Search"
if action in ["Search", "ReSearch", "Subscribe", "ReSubscribe"]:
# 搜索
meta, medias = MediaChain().search(content)
# 识别
if not meta.name:
self.post_message(Notification(
channel=channel, source=source, title="无法识别输入内容!", userid=userid))
return
# 开始搜索
if not medias:
self.post_message(Notification(
channel=channel, source=source, title=f"{meta.name} 没有找到对应的媒体信息!", userid=userid))
return
logger.info(f"搜索到 {len(medias)} 条相关媒体信息")
try:
# 记录当前状态
_current_meta = meta
# 保存缓存
user_cache[userid] = {
'type': action,
'items': medias
}
self.save_cache(user_cache, self._cache_file)
_current_page = 0
_current_media = None
# 发送媒体列表
self.__post_medias_message(channel=channel,
source=source,
title=meta.name,
items=medias[:self._page_size],
userid=userid, total=len(medias))
finally:
medias.clear()
del medias
else:
# 广播事件
self.eventmanager.send_event(
EventType.UserMessage,
{
"text": content,
"userid": userid,
"channel": channel,
"source": source
}
)
finally:
user_cache.clear()
del user_cache
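The "p"/"n" paging and the numeric selection handled above reduce to simple index arithmetic over the cached list. A minimal sketch, assuming a zero-based page counter and a fixed page size (names here are illustrative, not the chain's actual attributes):

from typing import List

def page_slice(items: List[str], page: int, page_size: int) -> List[str]:
    # Window of items shown for a zero-based page, as in the "p"/"n" branches.
    start = page * page_size
    return items[start:start + page_size]

def absolute_choice(text: str, page: int, page_size: int) -> int:
    # Map the user's 1-based on-page number to a 0-based index into the full list.
    return int(text) + page * page_size - 1

items = [f"torrent-{i}" for i in range(23)]
assert page_slice(items, 0, 10) == items[:10]   # first page
assert page_slice(items, 2, 10) == items[20:]   # last, partial page
assert page_slice(items, 3, 10) == []           # past the end -> "已经是最后一页了!"
assert absolute_choice("3", 1, 10) == 12        # third entry on the second page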
def _handle_callback(self, text: str, channel: MessageChannel, source: str,
userid: Union[str, int], username: str,

View File

@@ -202,16 +202,15 @@ class SearchChain(ChainBase):
# 过滤完成
progress.update(value=50, text=f'过滤完成,剩余 {len(torrents)} 个资源', key=ProgressKey.Search)
# 开始匹配
_match_torrents = []
# 总数
_total = len(torrents)
# 已处理数
_count = 0
# 开始匹配
_match_torrents = []
torrenthelper = TorrentHelper()
if mediainfo:
try:
# 英文标题应该在别名/原标题中,不需要再匹配
logger.info(f"开始匹配结果 标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
progress.update(value=51, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
@@ -256,16 +255,18 @@ class SearchChain(ChainBase):
progress.update(value=97,
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
key=ProgressKey.Search)
else:
_match_torrents = [(t, MetaInfo(title=t.title, subtitle=t.description)) for t in torrents]
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(torrent_info=t[0],
media_info=mediainfo,
meta_info=t[1]) for t in _match_torrents]
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(torrent_info=t[0],
media_info=mediainfo,
meta_info=t[1]) for t in _match_torrents]
finally:
torrents.clear()
del torrents
_match_torrents.clear()
del _match_torrents
# 排序
progress.update(value=99,
@@ -364,6 +365,7 @@ class SearchChain(ChainBase):
logger.info(f"站点搜索完成,有效资源数:{len(results)},总耗时 {(end_time - start_time).seconds}")
# 结束进度
progress.end(ProgressKey.Search)
# 返回
return results
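The clear()/del pairs added in this hunk follow a pattern applied throughout the changeset: drop references to large intermediate lists as soon as the result has been computed, even on the exception path. Reduced to its shape (a sketch, not the actual SearchChain code):

def match_and_release(torrents: list) -> int:
    _match_torrents = []
    try:
        for t in torrents:
            if t % 2 == 0:               # stand-in for the real matching logic
                _match_torrents.append(t)
        return len(_match_torrents)      # evaluated before the finally block runs
    finally:
        # Drop references promptly so the lists can be reclaimed even if an
        # exception unwinds this frame.
        torrents.clear()
        del torrents
        _match_torrents.clear()
        del _match_torrents

print(match_and_release(list(range(10))))  # 5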

View File

@@ -1,4 +1,5 @@
import base64
import gc
import re
from datetime import datetime
from typing import Optional, Tuple, Union, Dict
@@ -92,10 +93,9 @@ class SiteChain(ChainBase):
"""
刷新所有站点的用户数据
"""
sites = SitesHelper().get_indexers()
any_site_updated = False
result = {}
for site in sites:
for site in SitesHelper().get_indexers():
if global_vars.is_system_stopped:
return None
if site.get("is_active"):
@@ -107,6 +107,11 @@ class SiteChain(ChainBase):
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": "*"
})
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
return result
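The new BIG_MEMORY_MODE guard (repeated below in SubscribeChain and TransferChain) trades an explicit garbage-collection pause for a smaller resident set on low-memory hosts. The bare idea, with settings stood in by a namespace:

import gc
from types import SimpleNamespace

settings = SimpleNamespace(BIG_MEMORY_MODE=False)  # stand-in for app.core.config.settings

def refresh_userdatas() -> dict:
    result = {}
    # ... refresh each site and accumulate into result ...
    if not settings.BIG_MEMORY_MODE:
        # Small-memory deployments: collect right after the batch instead of
        # waiting for the interpreter's automatic thresholds.
        print(f"gc.collect() reclaimed {gc.collect()} objects")
    return result

refresh_userdatas()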
def is_special_site(self, domain: str) -> bool:
@@ -266,16 +271,20 @@ class SiteChain(ChainBase):
logger.error(f"获取站点页面失败:{url}")
return favicon_url, None
html = etree.HTML(html_text)
if StringUtils.is_valid_html_element(html):
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
try:
if StringUtils.is_valid_html_element(html):
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=15, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
res = RequestUtils(cookies=cookie, timeout=15, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
finally:
if html is not None:
del html
return favicon_url, None
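The favicon lookup now wraps the XPath step in try/finally so the parsed tree is always released. Its core is runnable in isolation with lxml (cookie/UA plumbing omitted):

from urllib.parse import urljoin
from lxml import etree

def find_favicon_url(url: str, html_text: str) -> str:
    favicon_url = urljoin(url, "favicon.ico")  # default fallback
    html = etree.HTML(html_text)
    try:
        if html is not None:
            fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
            if fav_link:
                favicon_url = urljoin(url, fav_link[0])
    finally:
        if html is not None:
            del html  # drop the reference to the parsed tree
    return favicon_url

print(find_favicon_url("https://example.org/index.php",
                       '<html><head><link rel="shortcut icon" href="/logo.png"></head></html>'))
# -> https://example.org/logo.png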
def sync_cookies(self, manual=False) -> Tuple[bool, str]:
@@ -351,9 +360,10 @@ class SiteChain(ChainBase):
ua=settings.USER_AGENT
).get_res(url=domain_url)
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
content = res.text
if not indexer.get("public") and not SiteUtils.is_logged_in(content):
_fail_count += 1
if under_challenge(res.text):
if under_challenge(content):
logger.warn(f"站点 {indexer.get('name')} 被Cloudflare防护无法登录无法添加站点")
continue
logger.warn(
@@ -571,8 +581,9 @@ class SiteChain(ChainBase):
).get_res(url=site_url)
# 判断登录状态
if res and res.status_code in [200, 500, 403]:
if not public and not SiteUtils.is_logged_in(res.text):
if under_challenge(res.text):
content = res.text
if not public and not SiteUtils.is_logged_in(content):
if under_challenge(content):
msg = "站点被Cloudflare防护请打开站点浏览器仿真"
elif res.status_code == 200:
msg = "Cookie已失效"

View File

@@ -110,11 +110,17 @@ class StorageChain(ChainBase):
"""
return self.run_module("get_parent_item", fileitem=fileitem)
def snapshot_storage(self, storage: str, path: Path) -> Optional[Dict[str, float]]:
def snapshot_storage(self, storage: str, path: Path,
last_snapshot_time: float = None, max_depth: int = 5) -> Optional[Dict[str, Dict]]:
"""
快照存储
:param storage: 存储类型
:param path: 路径
:param last_snapshot_time: 上次快照时间,用于增量快照
:param max_depth: 最大递归深度,避免过深遍历
"""
return self.run_module("snapshot_storage", storage=storage, path=path)
return self.run_module("snapshot_storage", storage=storage, path=path,
last_snapshot_time=last_snapshot_time, max_depth=max_depth)
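With the two new parameters, an implementing storage module can walk incrementally and stop at a bounded depth. A local-filesystem sketch of what such a snapshot might look like — the shape of the returned dict is an assumption here, not taken from the storage modules:

import os
from pathlib import Path
from typing import Dict, Optional

def snapshot_local(path: Path, last_snapshot_time: Optional[float] = None,
                   max_depth: int = 5, _depth: int = 0) -> Dict[str, Dict]:
    result: Dict[str, Dict] = {}
    if _depth > max_depth:
        return result
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            result.update(snapshot_local(Path(entry.path), last_snapshot_time,
                                         max_depth, _depth + 1))
        else:
            stat = entry.stat()
            # Incremental mode: skip files untouched since the last snapshot.
            if last_snapshot_time and stat.st_mtime <= last_snapshot_time:
                continue
            result[entry.path] = {"size": stat.st_size, "modify_time": stat.st_mtime}
    return result

# e.g. snapshot_local(Path("/downloads"), last_snapshot_time=None, max_depth=5)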
def storage_usage(self, storage: str) -> Optional[schemas.StorageUsage]:
"""
@@ -172,15 +178,14 @@ class StorageChain(ChainBase):
if mtype:
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mtype == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
rename_format = settings.RENAME_FORMAT(mtype)
media_path = DirectoryHelper.get_media_root_path(
rename_format, rename_path=Path(fileitem.path)
)
if not media_path:
return True
# 处理媒体文件根目录
dir_item = self.get_file_item(storage=fileitem.storage,
path=Path(fileitem.path).parents[rename_format_level - 1])
dir_item = self.get_file_item(storage=fileitem.storage, path=media_path)
else:
# 处理上级目录
dir_item = self.get_parent_item(fileitem)
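DirectoryHelper.get_media_root_path replaces the parents[level - 1] arithmetic shown on the old lines. That arithmetic, reduced to a standalone sketch (not the helper's actual implementation):

from pathlib import Path
from typing import Optional

def media_root_from_format(rename_format: str, rename_path: Path) -> Optional[Path]:
    # Folder depth = separators in the rename format, excluding the file name itself.
    level = len(rename_format.split("/")) - 1
    if level < 1:
        return None
    # Climb that many levels from the renamed file back to the media root.
    return rename_path.parents[level - 1]

fmt = "{title}/Season {season}/{title} - {episode}.{fileExt}"
p = Path("/library/Show (2020)/Season 1/Show (2020) - S01E01.mkv")
print(media_root_from_format(fmt, p))  # /library/Show (2020)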

View File

@@ -1,4 +1,5 @@
import copy
import gc
import json
import random
import threading
@@ -38,6 +39,8 @@ class SubscribeChain(ChainBase):
"""
_rlock = threading.RLock()
# 避免莫名原因导致长时间持有锁
_LOCK_TIMOUT = 3600 * 2
def add(self, title: str, year: str,
mtype: MediaType = None,
@@ -278,155 +281,179 @@ class SubscribeChain(ChainBase):
:param manual: 是否手动搜索
:return: 更新订阅状态为R或删除订阅
"""
with self._rlock:
logger.debug(f"search lock acquired at {datetime.now()}")
lock_acquired = False
try:
if lock_acquired := self._rlock.acquire(
blocking=True, timeout=self._LOCK_TIMOUT
):
logger.debug(f"search lock acquired at {datetime.now()}")
else:
logger.warn("search上锁超时")
subscribeoper = SubscribeOper()
if sid:
subscribe = subscribeoper.get(sid)
subscribes = [subscribe] if subscribe else []
else:
subscribes = subscribeoper.list(self.get_states_for_search(state))
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
mediakey = subscribe.tmdbid or subscribe.doubanid
custom_word_list = subscribe.custom_words.split("\n") if subscribe.custom_words else None
# 校验当前时间减订阅创建时间是否大于1分钟,否则跳过,先留出编辑订阅的时间
if subscribe.date:
now = datetime.now()
subscribe_time = datetime.strptime(subscribe.date, '%Y-%m-%d %H:%M:%S')
if (now - subscribe_time).total_seconds() < 60:
logger.debug(f"订阅标题:{subscribe.name} 新增小于1分钟暂不搜索...")
continue
# 随机休眠1-5分钟
if not sid and state in ['R', 'P']:
sleep_time = random.randint(60, 300)
logger.info(f'订阅搜索随机休眠 {sleep_time} 秒 ...')
time.sleep(sleep_time)
try:
logger.info(f'开始搜索订阅,标题:{subscribe.name} ...')
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
mediakey = subscribe.tmdbid or subscribe.doubanid
custom_word_list = subscribe.custom_words.split("\n") if subscribe.custom_words else None
# 校验当前时间减订阅创建时间是否大于1分钟,否则跳过,先留出编辑订阅的时间
if subscribe.date:
now = datetime.now()
subscribe_time = datetime.strptime(subscribe.date, '%Y-%m-%d %H:%M:%S')
if (now - subscribe_time).total_seconds() < 60:
logger.debug(f"订阅标题:{subscribe.name} 新增小于1分钟暂不搜索...")
continue
# 随机休眠1-5分钟
if not sid and state in ['R', 'P']:
sleep_time = random.randint(60, 300)
logger.info(f'订阅搜索随机休眠 {sleep_time} 秒 ...')
time.sleep(sleep_time)
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
logger.info(f'开始搜索订阅,标题:{subscribe.name} ...')
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 如果媒体已存在或已下载完毕,跳过当前订阅处理
exist_flag, no_exists = self.check_and_handle_existing_media(subscribe=subscribe,
meta=meta,
mediainfo=mediainfo,
mediakey=mediakey)
if exist_flag:
continue
# 如果媒体已存在或已下载完毕,跳过当前订阅处理
exist_flag, no_exists = self.check_and_handle_existing_media(subscribe=subscribe,
meta=meta,
mediainfo=mediainfo,
mediakey=mediakey)
if exist_flag:
continue
# 站点范围
sites = self.get_sub_sites(subscribe)
# 站点范围
sites = self.get_sub_sites(subscribe)
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or SystemConfigOper().get(SystemConfigKey.BestVersionFilterRuleGroups) or []
else:
rule_groups = subscribe.filter_groups \
or SystemConfigOper().get(SystemConfigKey.SubscribeFilterRuleGroups) or []
# 搜索,同时电视剧会过滤掉不需要的剧集
contexts = SearchChain().process(mediainfo=mediainfo,
keyword=subscribe.keyword,
no_exists=no_exists,
sites=sites,
rule_groups=rule_groups,
area="imdbid" if subscribe.search_imdbid else "title",
custom_words=custom_word_list,
filter_params=self.get_params(subscribe))
if not contexts:
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 过滤搜索结果
matched_contexts = []
for context in contexts:
if global_vars.is_system_stopped:
break
torrent_meta = context.meta_info
torrent_info = context.torrent_info
torrent_mediainfo = context.media_info
# 洗版
# 优先级过滤规则
if subscribe.best_version:
# 洗版时,非整季不要
if torrent_mediainfo.type == MediaType.TV:
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 洗版时,优先级小于等于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
# 更新订阅自定义属性
if subscribe.media_category:
torrent_mediainfo.category = subscribe.media_category
if subscribe.episode_group:
torrent_mediainfo.episode_group = subscribe.episode_group
matched_contexts.append(context)
rule_groups = subscribe.filter_groups \
or SystemConfigOper().get(SystemConfigKey.BestVersionFilterRuleGroups) or []
else:
rule_groups = subscribe.filter_groups \
or SystemConfigOper().get(SystemConfigKey.SubscribeFilterRuleGroups) or []
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 搜索,同时电视剧会过滤掉不需要的剧集
contexts = SearchChain().process(mediainfo=mediainfo,
keyword=subscribe.keyword,
no_exists=no_exists,
sites=sites,
rule_groups=rule_groups,
area="imdbid" if subscribe.search_imdbid else "title",
custom_words=custom_word_list,
filter_params=self.get_params(subscribe))
if not contexts:
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 自动下载
downloads, lefts = DownloadChain().batch_download(
contexts=matched_contexts,
no_exists=no_exists,
username=subscribe.username,
save_path=subscribe.save_path,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
)
# 过滤搜索结果
matched_contexts = []
try:
for context in contexts:
if global_vars.is_system_stopped:
break
torrent_meta = context.meta_info
torrent_info = context.torrent_info
torrent_mediainfo = context.media_info
# 同步外部修改,更新订阅信息
subscribe = subscribeoper.get(subscribe.id)
# 洗版
if subscribe.best_version:
# 洗版时,非整季不要
if torrent_mediainfo.type == MediaType.TV:
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 洗版时,优先级小于等于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
# 更新订阅自定义属性
if subscribe.media_category:
torrent_mediainfo.category = subscribe.media_category
if subscribe.episode_group:
torrent_mediainfo.episode_group = subscribe.episode_group
matched_contexts.append(context)
finally:
contexts.clear()
del contexts
# 判断是否应完成订阅
if subscribe:
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
finally:
# 如果状态为N则更新为R
if subscribe and subscribe.state == 'N':
subscribeoper.update(subscribe.id, {'state': 'R'})
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 手动触发时发送系统消息
if manual:
if subscribes:
if sid:
self.messagehelper.put(f'{subscribes[0].name} 搜索完成!', title="订阅搜索", role="system")
# 自动下载
downloads, lefts = DownloadChain().batch_download(
contexts=matched_contexts,
no_exists=no_exists,
username=subscribe.username,
save_path=subscribe.save_path,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
)
# 同步外部修改,更新订阅信息
subscribe = subscribeoper.get(subscribe.id)
# 判断是否应完成订阅
if subscribe:
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
finally:
# 如果状态为N则更新为R
if subscribe and subscribe.state == 'N':
subscribeoper.update(subscribe.id, {'state': 'R'})
# 手动触发时发送系统消息
if manual:
if subscribes:
if sid:
self.messagehelper.put(f'{subscribes[0].name} 搜索完成!', title="订阅搜索", role="system")
else:
self.messagehelper.put('所有订阅搜索完成!', title="订阅搜索", role="system")
else:
self.messagehelper.put('所有订阅搜索完成!', title="订阅搜索", role="system")
else:
self.messagehelper.put('没有找到订阅!', title="订阅搜索", role="system")
logger.debug(f"search Lock released at {datetime.now()}")
self.messagehelper.put('没有找到订阅!', title="订阅搜索", role="system")
finally:
subscribes.clear()
del subscribes
finally:
if lock_acquired:
self._rlock.release()
logger.debug(f"search Lock released at {datetime.now()}")
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
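The lock handling above replaces a blanket with-statement on self._rlock with acquire-plus-timeout and a guarded release, so one wedged run can no longer hold the lock indefinitely. The bare pattern:

import threading
from datetime import datetime

_rlock = threading.RLock()
_LOCK_TIMEOUT = 3600 * 2  # two hours

def search():
    lock_acquired = False
    try:
        if lock_acquired := _rlock.acquire(blocking=True, timeout=_LOCK_TIMEOUT):
            print(f"lock acquired at {datetime.now()}")
        else:
            print("lock acquisition timed out")
        # ... the work itself proceeds either way, as in the chain code ...
    finally:
        if lock_acquired:
            _rlock.release()
            print(f"lock released at {datetime.now()}")

search()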
def update_subscribe_priority(self, subscribe: Subscribe, meta: MetaBase,
mediainfo: MediaInfo, downloads: Optional[List[Context]]):
@@ -499,6 +526,9 @@ class SubscribeChain(ChainBase):
self.match(
TorrentsChain().refresh(sites=sites)
)
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
@staticmethod
def get_sub_sites(subscribe: Subscribe) -> List[int]:
@@ -527,13 +557,9 @@ class SubscribeChain(ChainBase):
获取订阅中涉及的所有站点清单(节约资源)
:return: 返回[]代表所有站点命中,返回None代表没有订阅
"""
# 查询所有订阅
subscribes = SubscribeOper().list(self.get_states_for_search('R'))
if not subscribes:
return None
ret_sites = []
# 刷新订阅选中的Rss站点
for subscribe in subscribes:
for subscribe in SubscribeOper().list(self.get_states_for_search('R')):
# 刷新选中的站点
ret_sites.extend(self.get_sub_sites(subscribe))
# 去重
@@ -550,10 +576,14 @@ class SubscribeChain(ChainBase):
logger.warn('没有缓存资源,无法匹配订阅')
return
with self._rlock:
logger.debug(f"match lock acquired at {datetime.now()}")
# 所有订阅
subscribes = SubscribeOper().list(self.get_states_for_search('R'))
lock_acquired = False
try:
if lock_acquired := self._rlock.acquire(
blocking=True, timeout=self._LOCK_TIMOUT
):
logger.debug(f"match lock acquired at {datetime.now()}")
else:
logger.warn("match上锁超时")
# 预识别所有未识别的种子
processed_torrents: Dict[str, List[Context]] = {}
@@ -576,233 +606,243 @@ class SubscribeChain(ChainBase):
# 添加已预处理
processed_torrents[domain].append(context)
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
logger.info(f'开始匹配订阅,标题:{subscribe.name} ...')
mediakey = subscribe.tmdbid or subscribe.doubanid
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 订阅的站点域名列表
domains = []
if subscribe.sites:
domains = SiteOper().get_domains_by_ids(subscribe.sites)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 如果媒体已存在或已下载完毕,跳过当前订阅处理
exist_flag, no_exists = self.check_and_handle_existing_media(subscribe=subscribe, meta=meta,
mediainfo=mediainfo,
mediakey=mediakey)
if exist_flag:
continue
# 清理多余信息
mediainfo.clear()
# 订阅识别词
if subscribe.custom_words:
custom_words_list = subscribe.custom_words.split("\n")
else:
custom_words_list = None
# 遍历预识别后的种子
_match_context = []
torrenthelper = TorrentHelper()
systemconfig = SystemConfigOper()
wordsmatcher = WordsMatcher()
for domain, contexts in processed_torrents.items():
# 所有订阅
subscribes = SubscribeOper().list(self.get_states_for_search('R'))
try:
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
if domains and domain not in domains:
logger.info(f'开始匹配订阅,标题:{subscribe.name} ...')
mediakey = subscribe.tmdbid or subscribe.doubanid
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
logger.debug(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
# 订阅的站点域名列表
domains = []
if subscribe.sites:
domains = SiteOper().get_domains_by_ids(subscribe.sites)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 如果媒体已存在或已下载完毕,跳过当前订阅处理
exist_flag, no_exists = self.check_and_handle_existing_media(subscribe=subscribe, meta=meta,
mediainfo=mediainfo,
mediakey=mediakey)
if exist_flag:
continue
# 清理多余信息
mediainfo.clear()
# 订阅识别词
if subscribe.custom_words:
custom_words_list = subscribe.custom_words.split("\n")
else:
custom_words_list = None
# 遍历预识别后的种子
_match_context = []
torrenthelper = TorrentHelper()
systemconfig = SystemConfigOper()
wordsmatcher = WordsMatcher()
for domain, contexts in processed_torrents.items():
if global_vars.is_system_stopped:
break
# 提取信息
_context = copy.copy(context)
torrent_meta = _context.meta_info
torrent_mediainfo = _context.media_info
torrent_info = _context.torrent_info
# 不在订阅站点范围的不处理
sub_sites = self.get_sub_sites(subscribe)
if sub_sites and torrent_info.site not in sub_sites:
logger.debug(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
if domains and domain not in domains:
continue
logger.debug(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
if global_vars.is_system_stopped:
break
# 提取信息
_context = copy.copy(context)
torrent_meta = _context.meta_info
torrent_mediainfo = _context.media_info
torrent_info = _context.torrent_info
# 有自定义识别词时,需要判断是否需要重新识别
if custom_words_list:
# 使用org_string应用一次后理论上不能再次应用
_, apply_words = wordsmatcher.prepare(torrent_meta.org_string,
custom_words=custom_words_list)
if apply_words:
# 不在订阅站点范围的不处理
sub_sites = self.get_sub_sites(subscribe)
if sub_sites and torrent_info.site not in sub_sites:
logger.debug(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
continue
# 有自定义识别词时,需要判断是否需要重新识别
if custom_words_list:
# 使用org_string应用一次后理论上不能再次应用
_, apply_words = wordsmatcher.prepare(torrent_meta.org_string,
custom_words=custom_words_list)
if apply_words:
logger.info(
f'{torrent_info.site_name} - {torrent_info.title} 因订阅存在自定义识别词,重新识别元数据...')
# 重新识别元数据
torrent_meta = MetaInfo(title=torrent_info.title, subtitle=torrent_info.description,
custom_words=custom_words_list)
# 更新元数据缓存
_context.meta_info = torrent_meta
# 重新识别媒体信息
torrent_mediainfo = self.recognize_media(meta=torrent_meta,
episode_group=subscribe.episode_group)
if torrent_mediainfo:
# 清理多余信息
torrent_mediainfo.clear()
# 更新种子缓存
_context.media_info = torrent_mediainfo
# 如果仍然没有识别到媒体信息,尝试标题匹配
if not torrent_mediainfo or (
not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
logger.info(
f'{torrent_info.site_name} - {torrent_info.title} 因订阅存在自定义识别词,重新识别元数据...')
# 重新识别元数据
torrent_meta = MetaInfo(title=torrent_info.title, subtitle=torrent_info.description,
custom_words=custom_words_list)
# 更新元数据缓存
_context.meta_info = torrent_meta
# 重新识别媒体信息
torrent_mediainfo = self.recognize_media(meta=torrent_meta,
episode_group=subscribe.episode_group)
if torrent_mediainfo:
# 清理多余信息
torrent_mediainfo.clear()
f'{torrent_info.site_name} - {torrent_info.title} 重新识别失败,尝试通过标题匹配...')
if torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent_info):
# 匹配成功
logger.info(
f'{mediainfo.title_year} 通过标题匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
torrent_mediainfo = mediainfo
# 更新种子缓存
_context.media_info = torrent_mediainfo
_context.media_info = mediainfo
else:
continue
# 如果仍然没有识别到媒体信息,尝试标题匹配
if not torrent_mediainfo or (not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
logger.info(
f'{torrent_info.site_name} - {torrent_info.title} 重新识别失败,尝试通过标题匹配...')
if torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent_info):
# 匹配成功
# 直接比对媒体信息
if torrent_mediainfo and (torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id):
if torrent_mediainfo.type != mediainfo.type:
continue
if torrent_mediainfo.tmdb_id \
and torrent_mediainfo.tmdb_id != mediainfo.tmdb_id:
continue
if torrent_mediainfo.douban_id \
and torrent_mediainfo.douban_id != mediainfo.douban_id:
continue
logger.info(
f'{mediainfo.title_year} 通过标题匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
torrent_mediainfo = mediainfo
# 更新种子缓存
_context.media_info = mediainfo
f'{mediainfo.title_year} 通过媒体信息ID匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
else:
continue
# 直接比对媒体信息
if torrent_mediainfo and (torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id):
if torrent_mediainfo.type != mediainfo.type:
continue
if torrent_mediainfo.tmdb_id \
and torrent_mediainfo.tmdb_id != mediainfo.tmdb_id:
continue
if torrent_mediainfo.douban_id \
and torrent_mediainfo.douban_id != mediainfo.douban_id:
continue
logger.info(
f'{mediainfo.title_year} 通过媒体信息ID匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
else:
continue
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
# 有多季的不要
if len(torrent_meta.season_list) > 1:
logger.debug(f'{torrent_info.title} 有多季,不处理')
continue
# 比对季
if torrent_meta.begin_season:
if meta.begin_season != torrent_meta.begin_season:
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
# 有多季的不要
if len(torrent_meta.season_list) > 1:
logger.debug(f'{torrent_info.title} 有多季,不处理')
continue
# 比对季
if torrent_meta.begin_season:
if meta.begin_season != torrent_meta.begin_season:
logger.debug(f'{torrent_info.title} 季不匹配')
continue
elif meta.begin_season != 1:
logger.debug(f'{torrent_info.title} 季不匹配')
continue
elif meta.begin_season != 1:
logger.debug(f'{torrent_info.title} 季不匹配')
continue
# 非洗版
if not subscribe.best_version:
# 不是缺失的剧集不要
if no_exists and no_exists.get(mediakey):
# 缺失集
no_exists_info = no_exists.get(mediakey).get(subscribe.season)
if no_exists_info:
# 是否有交集
if no_exists_info.episodes and \
torrent_meta.episode_list and \
not set(no_exists_info.episodes).intersection(
set(torrent_meta.episode_list)
):
logger.debug(
f'{torrent_info.title} 对应剧集 {torrent_meta.episode_list} 未包含缺失的剧集'
)
# 非洗版
if not subscribe.best_version:
# 不是缺失的剧集不要
if no_exists and no_exists.get(mediakey):
# 缺失集
no_exists_info = no_exists.get(mediakey).get(subscribe.season)
if no_exists_info:
# 是否有交
if no_exists_info.episodes and \
torrent_meta.episode_list and \
not set(no_exists_info.episodes).intersection(
set(torrent_meta.episode_list)
):
logger.debug(
f'{torrent_info.title} 对应剧集 {torrent_meta.episode_list} 未包含缺失的剧集'
)
continue
else:
# 洗版时,非整季不要
if meta.type == MediaType.TV:
if torrent_meta.episode_list:
logger.debug(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
else:
# 洗版时,非整季不要
if meta.type == MediaType.TV:
if torrent_meta.episode_list:
logger.debug(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 匹配订阅附加参数
if not torrenthelper.filter_torrent(torrent_info=torrent_info,
filter_params=self.get_params(subscribe)):
continue
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or systemconfig.get(SystemConfigKey.BestVersionFilterRuleGroups)
else:
rule_groups = subscribe.filter_groups \
or systemconfig.get(SystemConfigKey.SubscribeFilterRuleGroups)
result: List[TorrentInfo] = self.filter_torrents(
rule_groups=rule_groups,
torrent_list=[torrent_info],
mediainfo=torrent_mediainfo)
if result is not None and not result:
# 不符合过滤规则
logger.debug(f"{torrent_info.title} 不匹配过滤规则")
continue
# 洗版时,优先级小于已下载优先级的不要
if subscribe.best_version:
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
# 匹配订阅附加参数
if not torrenthelper.filter_torrent(torrent_info=torrent_info,
filter_params=self.get_params(subscribe)):
continue
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
# 自定义属性
if subscribe.media_category:
torrent_mediainfo.category = subscribe.media_category
if subscribe.episode_group:
torrent_mediainfo.episode_group = subscribe.episode_group
_match_context.append(_context)
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or systemconfig.get(SystemConfigKey.BestVersionFilterRuleGroups)
else:
rule_groups = subscribe.filter_groups \
or systemconfig.get(SystemConfigKey.SubscribeFilterRuleGroups)
result: List[TorrentInfo] = self.filter_torrents(
rule_groups=rule_groups,
torrent_list=[torrent_info],
mediainfo=torrent_mediainfo)
if result is not None and not result:
# 不符合过滤规则
logger.debug(f"{torrent_info.title} 不匹配过滤规则")
continue
if not _match_context:
# 未匹配到资源
logger.info(f'{mediainfo.title_year} 未匹配到符合条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 洗版时,优先级小于已下载优先级的不要
if subscribe.best_version:
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
# 开始批量择优下载
logger.info(f'{mediainfo.title_year} 匹配完成,共匹配到{len(_match_context)}个资源')
downloads, lefts = DownloadChain().batch_download(contexts=_match_context,
no_exists=no_exists,
username=subscribe.username,
save_path=subscribe.save_path,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
)
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
# 自定义属性
if subscribe.media_category:
torrent_mediainfo.category = subscribe.media_category
if subscribe.episode_group:
torrent_mediainfo.episode_group = subscribe.episode_group
_match_context.append(_context)
# 同步外部修改,更新订阅信息
subscribe = SubscribeOper().get(subscribe.id)
if not _match_context:
# 未匹配到资源
logger.info(f'{mediainfo.title_year} 未匹配到符合条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 判断是否要完成订阅
if subscribe:
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
# 开始批量择优下载
logger.info(f'{mediainfo.title_year} 匹配完成,共匹配到{len(_match_context)}个资源')
downloads, lefts = DownloadChain().batch_download(contexts=_match_context,
no_exists=no_exists,
username=subscribe.username,
save_path=subscribe.save_path,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
)
logger.debug(f"match Lock released at {datetime.now()}")
# 同步外部修改,更新订阅信息
subscribe = SubscribeOper().get(subscribe.id)
# 判断是否要完成订阅
if subscribe:
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
finally:
processed_torrents.clear()
del processed_torrents
subscribes.clear()
del subscribes
finally:
if lock_acquired:
self._rlock.release()
logger.debug(f"match Lock released at {datetime.now()}")
def check(self):
"""
@@ -810,12 +850,8 @@ class SubscribeChain(ChainBase):
"""
# 查询所有订阅
subscribeoper = SubscribeOper()
subscribes = subscribeoper.list()
if not subscribes:
# 没有订阅不运行
return
# 遍历订阅
for subscribe in subscribes:
for subscribe in subscribeoper.list():
if global_vars.is_system_stopped:
break
logger.info(f'开始更新订阅元数据:{subscribe.name} ...')
@@ -871,11 +907,10 @@ class SubscribeChain(ChainBase):
follow_users: List[str] = SystemConfigOper().get(SystemConfigKey.FollowSubscribers)
if not follow_users:
return
share_subs = SubscribeHelper().get_shares()
logger.info(f'开始刷新follow用户分享订阅 ...')
success_count = 0
subscribeoper = SubscribeOper()
for share_sub in share_subs:
for share_sub in SubscribeHelper().get_shares():
if global_vars.is_system_stopped:
break
uid = share_sub.get("share_uid")

View File

@@ -6,11 +6,13 @@ from typing import Union, Optional
from app.chain import ChainBase
from app.core.config import settings
from app.core.plugin import PluginManager
from app.log import logger
from app.schemas import Notification, MessageChannel
from app.utils.http import RequestUtils
from app.utils.system import SystemUtils
from app.helper.system import SystemHelper
from app.helper.plugin import PluginHelper
from version import FRONTEND_VERSION, APP_VERSION
@@ -34,7 +36,7 @@ class SystemChain(ChainBase):
重启系统
"""
from app.core.config import global_vars
if channel and userid:
self.post_message(Notification(channel=channel, source=source,
title="系统正在重启,请耐心等候!", userid=userid))
@@ -147,6 +149,9 @@ class SystemChain(ChainBase):
logger.info(f"插件恢复完成,共恢复 {restored_count} 个项目")
# 安装缺少的依赖
PluginManager.install_plugin_missing_dependencies()
# 删除备份目录
try:
shutil.rmtree(backup_dir)

View File

@@ -98,6 +98,7 @@ class TorrentsChain(ChainBase):
if not site.get("rss"):
logger.error(f'站点 {domain} 未配置RSS地址')
return []
# 解析RSS
rss_items = RssHelper().parse(site.get("rss"), True if site.get("proxy") else False,
timeout=int(site.get("timeout") or 30))
if rss_items is None:
@@ -109,25 +110,28 @@ class TorrentsChain(ChainBase):
return []
# 组装种子
ret_torrents: List[TorrentInfo] = []
for item in rss_items:
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
ret_torrents.append(torrentinfo)
try:
for item in rss_items:
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
ret_torrents.append(torrentinfo)
finally:
rss_items.clear()
del rss_items
return ret_torrents
def refresh(self, stype: Optional[str] = None, sites: List[int] = None) -> Dict[str, List[Context]]:
@@ -136,6 +140,16 @@ class TorrentsChain(ChainBase):
:param stype: 强制指定缓存类型,spider:爬虫缓存,rss:rss缓存
:param sites: 强制指定站点ID列表,为空则读取设置的订阅站点
"""
def __is_no_cache_site(_domain: str) -> bool:
"""
判断站点是否不需要缓存
"""
for url_key in settings.NO_CACHE_SITE_KEY.split(','):
if url_key in _domain:
return True
return False
# 刷新类型
if not stype:
stype = settings.SUBSCRIBE_MODE
@@ -152,13 +166,10 @@ class TorrentsChain(ChainBase):
torrents_cache[_domain] = [_torrent for _torrent in _torrents
if not TorrentHelper().is_invalid(_torrent.torrent_info.enclosure)]
# 所有站点索引
indexers = SitesHelper().get_indexers()
# 需要刷新的站点domain
domains = []
# 遍历站点缓存资源
for indexer in indexers:
for indexer in SitesHelper().get_indexers():
if global_vars.is_system_stopped:
break
# 未开启的站点不刷新
@@ -175,48 +186,57 @@ class TorrentsChain(ChainBase):
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条
torrents = torrents[:settings.CONF["refresh"]]
torrents = torrents[:settings.CONF.refresh]
if torrents:
# 过滤出没有处理过的种子 - 优化:使用集合查找,避免重复创建字符串列表
cached_signatures = {f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []}
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}' not in cached_signatures]
if __is_no_cache_site(domain):
# 不需要缓存的站点,直接处理
logger.info(f'{indexer.get("name")}{len(torrents)} 个种子 (不缓存)')
torrents_cache[domain] = []
else:
# 过滤出没有处理过的种子 - 优化:使用集合查找,避免重复创建字符串列表
cached_signatures = {f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []}
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}' not in cached_signatures]
if torrents:
logger.info(f'{indexer.get("name")}{len(torrents)} 个新种子')
else:
logger.info(f'{indexer.get("name")} 没有新种子')
continue
for torrent in torrents:
if global_vars.is_system_stopped:
break
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != meta.org_string:
logger.info(f'种子名称应用识别词后发生改变:{torrent.title} => {meta.org_string}')
# 使用站点种子分类,校正类型识别
if meta.type != MediaType.TV \
and torrent.category == MediaType.TV.value:
meta.type = MediaType.TV
# 识别媒体信息
mediainfo: MediaInfo = MediaChain().recognize_by_meta(meta)
if not mediainfo:
logger.warn(f'{torrent.title} 未识别到媒体信息')
# 存储空的媒体信息
mediainfo = MediaInfo()
# 清理多余数据,减少内存占用
mediainfo.clear()
# 上下文
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# 添加到缓存
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
else:
torrents_cache[domain].append(context)
# 如果超过了限制条数则移除掉前面的
if len(torrents_cache[domain]) > settings.CONF["torrents"]:
torrents_cache[domain] = torrents_cache[domain][-settings.CONF["torrents"]:]
try:
for torrent in torrents:
if global_vars.is_system_stopped:
break
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != meta.org_string:
logger.info(f'种子名称应用识别词后发生改变:{torrent.title} => {meta.org_string}')
# 使用站点种子分类,校正类型识别
if meta.type != MediaType.TV \
and torrent.category == MediaType.TV.value:
meta.type = MediaType.TV
# 识别媒体信息
mediainfo: MediaInfo = MediaChain().recognize_by_meta(meta)
if not mediainfo:
logger.warn(f'{torrent.title} 未识别到媒体信息')
# 存储空的媒体信息
mediainfo = MediaInfo()
# 清理多余数据,减少内存占用
mediainfo.clear()
# 上下文
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# 添加到缓存
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
else:
torrents_cache[domain].append(context)
# 如果超过了限制条数则移除掉前面的
if len(torrents_cache[domain]) > settings.CONF.torrents:
torrents_cache[domain] = torrents_cache[domain][-settings.CONF.torrents:]
finally:
torrents.clear()
del torrents
else:
logger.info(f'{indexer.get("name")} 没有获取到种子')

View File

@@ -1,10 +1,10 @@
import gc
import queue
import re
import threading
import traceback
from copy import deepcopy
from pathlib import Path
from queue import Queue
from time import sleep
from typing import List, Optional, Tuple, Union, Dict, Callable
@@ -15,9 +15,9 @@ from app.chain.storage import StorageChain
from app.chain.tmdb import TmdbChain
from app.core.config import settings, global_vars
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfoPath
from app.core.event import eventmanager
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
@@ -27,11 +27,11 @@ from app.helper.directory import DirectoryHelper
from app.helper.format import FormatParser
from app.helper.progress import ProgressHelper
from app.log import logger
from app.schemas import StorageOperSelectionEventData
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat, FileItem, TransferDirectoryConf, \
TransferTask, TransferQueue, TransferJob, TransferJobTask
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey, ChainEventType, ContentType
from app.schemas import StorageOperSelectionEventData
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
@@ -212,6 +212,7 @@ class JobManager:
set(self._season_episodes[mediaid]) - set(task.meta.episode_list)
)
return task
return None
def remove_job(self, task: TransferTask) -> Optional[TransferJob]:
"""
@@ -225,6 +226,7 @@ class JobManager:
if __mediaid__ in self._season_episodes:
self._season_episodes.pop(__mediaid__)
return self._job_view.pop(__mediaid__)
return None
def is_done(self, task: TransferTask) -> bool:
"""
@@ -310,7 +312,7 @@ class JobManager:
def count(self, media: MediaInfo, season: Optional[int] = None) -> int:
"""
获取某项任务总数
获取某项任务成功总数
"""
__mediaid__ = self.__get_media_id(media=media, season=season)
with job_lock:
@@ -321,7 +323,7 @@ class JobManager:
def size(self, media: MediaInfo, season: Optional[int] = None) -> int:
"""
获取某项任务总大小
获取某项任务成功文件总大小
"""
__mediaid__ = self.__get_media_id(media=media, season=season)
with job_lock:
@@ -358,22 +360,20 @@ class TransferChain(ChainBase, metaclass=Singleton):
文件整理处理链
"""
# 可处理的文件后缀
all_exts = settings.RMT_MEDIAEXT
# 待整理任务队列
_queue = Queue()
# 文件整理线程
_transfer_thread = None
# 队列间隔时间(秒)
_transfer_interval = 15
def __init__(self):
super().__init__()
# 可处理的文件后缀
self.all_exts = settings.RMT_MEDIAEXT
# 待整理任务队列
self._queue = queue.Queue()
# 文件整理线程
self._transfer_thread = None
# 队列间隔时间(秒)
self._transfer_interval = 15
# 事件管理器
self.jobview = JobManager()
# 转移成功的文件清单
self._success_target_files: Dict[str, List[str]] = {}
# 启动整理任务
self.__init()
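Moving the queue, worker thread and interval from class attributes into __init__ makes them per-instance state. The consumer shape, reduced to a runnable sketch (handler and interval are illustrative):

import queue
import threading
import time

class TransferWorker:
    def __init__(self, interval: float = 15):
        self._queue: "queue.Queue[str]" = queue.Queue()
        self._interval = interval
        self._thread = threading.Thread(target=self.__run, daemon=True)
        self._thread.start()

    def put(self, task: str):
        self._queue.put(task)

    def __run(self):
        while True:
            task = self._queue.get()    # block until a task arrives
            print(f"transferring {task}")
            self._queue.task_done()
            time.sleep(self._interval)  # pace the next transfer

worker = TransferWorker(interval=0.1)
worker.put("movie.mkv")
worker.put("episode.mkv")
worker._queue.join()                    # wait until both tasks are handled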
@@ -390,6 +390,44 @@ class TransferChain(ChainBase, metaclass=Singleton):
"""
整理完成后处理
"""
def __do_finished():
"""
完成时发送消息、刮削事件、移除任务等
"""
# 更新文件数量
transferinfo.file_count = self.jobview.count(task.mediainfo, task.meta.begin_season) or 1
# 更新文件大小
transferinfo.total_size = self.jobview.size(task.mediainfo,
task.meta.begin_season) or task.fileitem.size
# 更新文件清单
transferinfo.file_list_new = self._success_target_files.pop(transferinfo.target_diritem.path, [])
# 发送通知,实时手动整理时不发
if transferinfo.need_notify and (task.background or not task.manual):
se_str = None
if task.mediainfo.type == MediaType.TV:
season_episodes = self.jobview.season_episodes(task.mediainfo, task.meta.begin_season)
if season_episodes:
se_str = f"{task.meta.season} {StringUtils.format_ep(season_episodes)}"
else:
se_str = f"{task.meta.season}"
self.send_transfer_message(meta=task.meta,
mediainfo=task.mediainfo,
transferinfo=transferinfo,
season_episode=se_str,
username=task.username)
# 刮削事件
if transferinfo.need_scrape:
self.eventmanager.send_event(EventType.MetadataScrape, {
'meta': task.meta,
'mediainfo': task.mediainfo,
'fileitem': transferinfo.target_diritem,
'file_list': transferinfo.file_list_new,
'overwrite': False
})
# 移除已完成的任务
self.jobview.remove_job(task)
transferhis = TransferHistoryOper()
if not transferinfo.success:
# 转移失败
@@ -415,6 +453,10 @@ class TransferChain(ChainBase, metaclass=Singleton):
))
# 整理失败
self.jobview.fail_task(task)
with task_lock:
# 整理完成且有成功的任务时
if self.jobview.is_finished(task):
__do_finished()
return False, transferinfo.message
# 转移成功
@@ -443,55 +485,49 @@ class TransferChain(ChainBase, metaclass=Singleton):
})
with task_lock:
# 登记转移成功文件清单
target_dir_path = transferinfo.target_diritem.path
target_files = transferinfo.file_list_new
if self._success_target_files.get(target_dir_path):
self._success_target_files[target_dir_path].extend(target_files)
else:
self._success_target_files[target_dir_path] = target_files
# 全部整理成功时
if self.jobview.is_success(task):
# 移动模式删除空目录
if transferinfo.transfer_type in ["move"]:
# 所有成功的业务
tasks = self.jobview.success_tasks(task.mediainfo, task.meta.begin_season)
# 记录已处理的种子hash
processed_hashes = set()
storagechain = StorageChain()
downloadhistoryoper = DownloadHistoryOper()
for t in tasks:
# 下载器hash
if t.download_hash and t.download_hash not in processed_hashes:
processed_hashes.add(t.download_hash)
if self.remove_torrents(t.download_hash, downloader=t.downloader):
logger.info(f"移动模式删除种子成功:{t.download_hash} ")
# 删除残留目录
if t.fileitem:
storagechain.delete_media_file(t.fileitem, delete_self=False)
if not t.download_hash:
continue
# 通过download_hash获取种子保存目录
download_history = downloadhistoryoper.get_by_hash(t.download_hash)
if download_history and download_history.path:
# 检查种子目录下是否还有有效媒体文件
seed_dir_item = storagechain.get_file_item(storage=t.fileitem.storage,
path=Path(download_history.path))
if seed_dir_item and seed_dir_item.type == "dir":
remain_files = storagechain.list_files(seed_dir_item, recursion=True)
has_media = any(
f.extension and f.extension.lower() in [ext.lstrip('.') for ext in self.all_exts]
for f in remain_files if f.type == "file"
)
if not has_media:
if self.remove_torrents(t.download_hash, downloader=t.downloader):
logger.info(f"移动模式删除种子成功:{t.download_hash} ")
# 删除残留目录
if t.fileitem:
storagechain.delete_media_file(t.fileitem, delete_self=False)
else:
logger.info(
f"种子目录 {download_history.path} 还有未整理的媒体文件,暂不删除种子和残留目录")
# 整理完成且有成功的任务时
if self.jobview.is_finished(task):
# 发送通知,实时手动整理时不发
if transferinfo.need_notify and (task.background or not task.manual):
se_str = None
if task.mediainfo.type == MediaType.TV:
season_episodes = self.jobview.season_episodes(task.mediainfo, task.meta.begin_season)
if season_episodes:
se_str = f"{task.meta.season} {StringUtils.format_ep(season_episodes)}"
else:
se_str = f"{task.meta.season}"
# 更新文件数量
transferinfo.file_count = self.jobview.count(task.mediainfo, task.meta.begin_season) or 1
# 更新文件大小
transferinfo.total_size = self.jobview.size(task.mediainfo,
task.meta.begin_season) or task.fileitem.size
self.send_transfer_message(meta=task.meta,
mediainfo=task.mediainfo,
transferinfo=transferinfo,
season_episode=se_str,
username=task.username)
# 刮削事件
if transferinfo.need_scrape:
self.eventmanager.send_event(EventType.MetadataScrape, {
'meta': task.meta,
'mediainfo': task.mediainfo,
'fileitem': transferinfo.target_diritem
})
# 移除已完成的任务
self.jobview.remove_job(task)
__do_finished()
return True, ""
@@ -788,6 +824,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
for dir_info in download_dirs):
return True
logger.info("开始整理下载器中已经完成下载的文件 ...")
# 从下载器获取种子列表
torrents: Optional[List[TransferTorrent]] = self.list_torrents(status=TorrentStatus.TRANSFER)
if not torrents:
@@ -796,70 +833,78 @@ class TransferChain(ChainBase, metaclass=Singleton):
logger.info(f"获取到 {len(torrents)} 个已完成的下载任务")
for torrent in torrents:
if global_vars.is_system_stopped:
break
# 文件路径
file_path = torrent.path
if not file_path.exists():
logger.warn(f"文件不存在:{file_path}")
continue
# 检查是否为下载器监控目录中的文件
is_downloader_monitor = False
for dir_info in download_dirs:
if dir_info.monitor_type != "downloader":
continue
if not dir_info.download_path:
continue
if file_path.is_relative_to(Path(dir_info.download_path)):
is_downloader_monitor = True
try:
for torrent in torrents:
if global_vars.is_system_stopped:
break
if not is_downloader_monitor:
logger.debug(f"文件 {file_path} 不在下载器监控目录中,不通过下载器进行整理")
continue
# 查询下载记录识别情况
downloadhis: DownloadHistory = DownloadHistoryOper().get_by_hash(torrent.hash)
if downloadhis:
# 类型
try:
mtype = MediaType(downloadhis.type)
except ValueError:
mtype = MediaType.TV
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid,
doubanid=downloadhis.doubanid,
episode_group=downloadhis.episode_group)
if mediainfo:
# 补充图片
self.obtain_images(mediainfo)
# 更新自定义媒体类别
if downloadhis.media_category:
mediainfo.category = downloadhis.media_category
else:
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 文件路径
file_path = torrent.path
if not file_path.exists():
logger.warn(f"文件不存在:{file_path}")
continue
# 检查是否为下载器监控目录中的文件
is_downloader_monitor = False
for dir_info in download_dirs:
if dir_info.monitor_type != "downloader":
continue
if not dir_info.download_path:
continue
if file_path.is_relative_to(Path(dir_info.download_path)):
is_downloader_monitor = True
break
if not is_downloader_monitor:
logger.debug(f"文件 {file_path} 不在下载器监控目录中,不通过下载器进行整理")
continue
# 查询下载记录识别情况
downloadhis: DownloadHistory = DownloadHistoryOper().get_by_hash(torrent.hash)
if downloadhis:
# 类型
try:
mtype = MediaType(downloadhis.type)
except ValueError:
mtype = MediaType.TV
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid,
doubanid=downloadhis.doubanid,
episode_group=downloadhis.episode_group)
if mediainfo:
# 补充图片
self.obtain_images(mediainfo)
# 更新自定义媒体类别
if downloadhis.media_category:
mediainfo.category = downloadhis.media_category
else:
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 执行实时整理,匹配源目录
state, errmsg = self.do_transfer(
fileitem=FileItem(
storage="local",
path=str(file_path).replace("\\", "/"),
type="dir" if not file_path.is_file() else "file",
name=file_path.name,
size=file_path.stat().st_size,
extension=file_path.suffix.lstrip('.'),
),
mediainfo=mediainfo,
downloader=torrent.downloader,
download_hash=torrent.hash,
background=False,
)
# 执行实时整理,匹配源目录
state, errmsg = self.do_transfer(
fileitem=FileItem(
storage="local",
path=str(file_path).replace("\\", "/"),
type="dir" if not file_path.is_file() else "file",
name=file_path.name,
size=file_path.stat().st_size,
extension=file_path.suffix.lstrip('.'),
),
mediainfo=mediainfo,
downloader=torrent.downloader,
download_hash=torrent.hash,
background=False,
)
# 设置下载任务状态
if not state:
logger.warn(f"整理下载器任务失败:{torrent.hash} - {errmsg}")
self.transfer_completed(hashs=torrent.hash, downloader=torrent.downloader)
# 设置下载任务状态
if not state:
logger.warn(f"整理下载器任务失败:{torrent.hash} - {errmsg}")
self.transfer_completed(hashs=torrent.hash, downloader=torrent.downloader)
finally:
torrents.clear()
del torrents
# 如果不是大内存模式,进行垃圾回收
if not settings.BIG_MEMORY_MODE:
gc.collect()
# 结束
logger.info("所有下载器中下载完成的文件已整理完成")
@@ -870,7 +915,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
) -> List[Tuple[FileItem, bool]]:
"""
获取整理目录或文件列表
:param fileitem: 文件项
:param depth: 递归深度,默认为1
"""
@@ -1032,111 +1077,116 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 整理所有文件
transfer_tasks: List[TransferTask] = []
for file_item, bluray_dir in file_items:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
continue
# 整理成功的不再处理
if not force:
transferd = TransferHistoryOper().get_by_src(file_item.path, storage=file_item.storage)
if transferd:
if not transferd.status:
all_success = False
logger.info(f"{file_item.path} 已整理过,如需重新处理,请删除整理记录。")
err_msgs.append(f"{file_item.name} 已整理过")
try:
for file_item, bluray_dir in file_items:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
continue
if not meta:
# 文件元数据
file_meta = MetaInfoPath(file_path)
else:
file_meta = meta
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
continue
# 合并季
if season is not None:
file_meta.begin_season = season
# 整理成功的不再处理
if not force:
transferd = TransferHistoryOper().get_by_src(file_item.path, storage=file_item.storage)
if transferd:
if not transferd.status:
all_success = False
logger.info(f"{file_item.path} 已整理过,如需重新处理,请删除整理记录。")
err_msgs.append(f"{file_item.name} 已整理过")
continue
if not file_meta:
all_success = False
logger.error(f"{file_path.name} 无法识别有效信息")
err_msgs.append(f"{file_path.name} 无法识别有效信息")
continue
if not meta:
# 文件元数据
file_meta = MetaInfoPath(file_path)
else:
file_meta = meta
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_name=file_path.name, file_meta=file_meta)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 合并季
if season is not None:
file_meta.begin_season = season
# 根据父路径获取下载历史
download_history = None
downloadhis = DownloadHistoryOper()
if bluray_dir:
# 蓝光原盘,按目录名查询
download_history = downloadhis.get_by_path(str(file_path))
else:
# 按文件全路径查询
download_file = downloadhis.get_file_by_fullpath(str(file_path))
if download_file:
download_history = downloadhis.get_by_hash(download_file.download_hash)
if not file_meta:
all_success = False
logger.error(f"{file_path.name} 无法识别有效信息")
err_msgs.append(f"{file_path.name} 无法识别有效信息")
continue
# 获取下载Hash
if download_history and (not downloader or not download_hash):
downloader = download_history.downloader
download_hash = download_history.download_hash
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_name=file_path.name,
file_meta=file_meta)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 后台整理
transfer_task = TransferTask(
fileitem=file_item,
meta=file_meta,
mediainfo=mediainfo,
target_directory=target_directory,
target_storage=target_storage,
target_path=target_path,
transfer_type=transfer_type,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
downloader=downloader,
download_hash=download_hash,
download_history=download_history,
manual=manual,
background=background
)
if background:
self.put_to_queue(task=transfer_task)
logger.info(f"{file_path.name} 已添加到整理队列")
else:
# 加入列表
self.__put_to_jobview(transfer_task)
transfer_tasks.append(transfer_task)
# 根据父路径获取下载历史
download_history = None
downloadhis = DownloadHistoryOper()
if bluray_dir:
# 蓝光原盘,按目录名查询
download_history = downloadhis.get_by_path(str(file_path))
else:
# 按文件全路径查询
download_file = downloadhis.get_file_by_fullpath(str(file_path))
if download_file:
download_history = downloadhis.get_by_hash(download_file.download_hash)
# 获取下载Hash
if download_history and (not downloader or not download_hash):
downloader = download_history.downloader
download_hash = download_history.download_hash
# 后台整理
transfer_task = TransferTask(
fileitem=file_item,
meta=file_meta,
mediainfo=mediainfo,
target_directory=target_directory,
target_storage=target_storage,
target_path=target_path,
transfer_type=transfer_type,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
downloader=downloader,
download_hash=download_hash,
download_history=download_history,
manual=manual,
background=background
)
if background:
self.put_to_queue(task=transfer_task)
logger.info(f"{file_path.name} 已添加到整理队列")
else:
# 加入列表
self.__put_to_jobview(transfer_task)
transfer_tasks.append(transfer_task)
finally:
file_items.clear()
del file_items
# 实时整理
if transfer_tasks:
@@ -1155,29 +1205,32 @@ class TransferChain(ChainBase, metaclass=Singleton):
progress.update(value=0,
text=__process_msg,
key=ProgressKey.FileTransfer)
try:
for transfer_task in transfer_tasks:
if global_vars.is_system_stopped:
break
if continue_callback and not continue_callback():
break
# 更新进度
__process_msg = f"正在整理 {processed_num + fail_num + 1}/{total_num}{transfer_task.fileitem.name} ..."
logger.info(__process_msg)
progress.update(value=(processed_num + fail_num) / total_num * 100,
text=__process_msg,
key=ProgressKey.FileTransfer)
state, err_msg = self.__handle_transfer(
task=transfer_task,
callback=self.__default_callback
)
if not state:
all_success = False
logger.warn(f"{transfer_task.fileitem.name} {err_msg}")
err_msgs.append(f"{transfer_task.fileitem.name} {err_msg}")
fail_num += 1
else:
processed_num += 1
finally:
transfer_tasks.clear()
del transfer_tasks
# 整理结束
__end_msg = f"整理队列处理完成,共整理 {total_num} 个文件,失败 {fail_num}"
@@ -1187,7 +1240,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
key=ProgressKey.FileTransfer)
progress.end(ProgressKey.FileTransfer)
return all_success, "".join(err_msgs)
error_msg = "".join(err_msgs[:2]) + (f",等{len(err_msgs)}个文件错误!" if len(err_msgs) > 2 else "")
return all_success, error_msg
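The new return value condenses the error list instead of joining every message. A standalone sketch of the same truncation rule (the separator is parameterized here, since the one used in the diff did not survive extraction):

```python
from typing import List

def summarize_errors(err_msgs: List[str], sep: str = "") -> str:
    """Join at most two messages and append a count when more were collected."""
    summary = sep.join(err_msgs[:2])
    if len(err_msgs) > 2:
        summary += f",等{len(err_msgs)}个文件错误!"
    return summary

assert summarize_errors(["A 失败", "B 失败", "C 失败"]).endswith("等3个文件错误!")
```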
def remote_transfer(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: Optional[str] = None):

View File

@@ -225,6 +225,9 @@ class Command(metaclass=Singleton):
添加命令集合
"""
for cmd, command in source.items():
if not command.get("show", True):
continue
command_data = {
"type": command_type,
"description": command.get("description"),
@@ -261,6 +264,7 @@ class Command(metaclass=Singleton):
"func": self.send_plugin_event,
"description": command.get("desc"),
"category": command.get("category"),
"show": command.get("show", True),
"data": {
"etype": command.get("event"),
"data": command.get("data")
@@ -335,7 +339,8 @@ class Command(metaclass=Singleton):
return self._commands.get(cmd, {})
def register(self, cmd: str, func: Any, data: Optional[dict] = None,
desc: Optional[str] = None, category: Optional[str] = None) -> None:
desc: Optional[str] = None, category: Optional[str] = None,
show: bool = True) -> None:
"""
注册单个命令
"""
@@ -344,7 +349,8 @@ class Command(metaclass=Singleton):
"func": func,
"description": desc,
"category": category,
"data": data or {}
"data": data or {},
"show": show
}
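A runnable sketch of how the new show flag interacts with menu building, using a trimmed stand-in registry (the real Command class also tracks handler data, categories and event types):

```python
from typing import Callable, Dict, Optional

class MiniCommandRegistry:
    """Trimmed stand-in for Command, illustrating the `show` flag only."""

    def __init__(self):
        self._commands: Dict[str, dict] = {}

    def register(self, cmd: str, func: Callable, desc: Optional[str] = None,
                 category: Optional[str] = None, show: bool = True) -> None:
        self._commands[cmd] = {"func": func, "description": desc,
                               "category": category, "show": show}

    def menu_commands(self) -> Dict[str, dict]:
        # Mirrors the loop above: hidden commands stay registered but unlisted
        return {cmd: c for cmd, c in self._commands.items() if c.get("show", True)}

reg = MiniCommandRegistry()
reg.register("/ping", lambda: "pong", desc="连通性测试")
reg.register("/debug_dump", lambda: "...", desc="内部调试", show=False)
assert "/debug_dump" in reg._commands and "/debug_dump" not in reg.menu_commands()
```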
def execute(self, cmd: str, data_str: Optional[str] = "",

View File

@@ -131,7 +131,7 @@ class CacheToolsBackend(CacheBackend):
- 不支持按 `key` 独立隔离 TTL 和 Maxsize,仅支持作用于 region 级别
"""
def __init__(self, maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800):
def __init__(self, maxsize: Optional[int] = 512, ttl: Optional[int] = 1800):
"""
初始化缓存实例
@@ -150,7 +150,7 @@ class CacheToolsBackend(CacheBackend):
region = self.get_region(region)
return self._region_caches.get(region)
def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存值,支持每个 key 独立配置 TTL 和 Maxsize
@@ -357,7 +357,7 @@ class RedisBackend(CacheBackend):
region = self.get_region(quote(region))
return f"{region}:key:{quote(key)}"
def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存
@@ -454,7 +454,7 @@ class RedisBackend(CacheBackend):
self.client.close()
def get_cache_backend(maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800) -> CacheBackend:
def get_cache_backend(maxsize: Optional[int] = 512, ttl: Optional[int] = 1800) -> CacheBackend:
"""
根据配置获取缓存后端实例
@@ -482,13 +482,13 @@ def get_cache_backend(maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800)
return CacheToolsBackend(maxsize=maxsize, ttl=ttl)
def cached(region: Optional[str] = None, maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800,
def cached(region: Optional[str] = None, maxsize: Optional[int] = 512, ttl: Optional[int] = 1800,
skip_none: Optional[bool] = True, skip_empty: Optional[bool] = False):
"""
自定义缓存装饰器,支持为每个 key 动态传递 maxsize 和 ttl
:param region: 缓存的区域
:param maxsize: 缓存的最大条目数,默认值为 1000
:param maxsize: 缓存的最大条目数,默认值为 512
:param ttl: 缓存的存活时间,单位秒,默认值为 1800
:param skip_none: 跳过 None 缓存,默认为 True
:param skip_empty: 跳过空值缓存(如 None, [], {}, "", set()),默认为 False
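To make the region semantics above concrete, here is a minimal sketch of the region-level TTL/maxsize idea built directly on cachetools; it illustrates the backend's behavior and is not the project's actual decorator:

```python
from cachetools import TTLCache

_regions: dict = {}

def get_region(name: str, maxsize: int = 512, ttl: int = 1800) -> TTLCache:
    """One TTLCache per region; TTL and maxsize apply to the whole region,
    matching the CacheToolsBackend limitation noted above."""
    if name not in _regions:
        _regions[name] = TTLCache(maxsize=maxsize, ttl=ttl)
    return _regions[name]

tmdb = get_region("tmdb", maxsize=512)
tmdb["movie:603"] = {"title": "The Matrix"}
assert get_region("tmdb")["movie:603"]["title"] == "The Matrix"
```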

View File

@@ -1,6 +1,8 @@
import copy
import json
import os
import platform
import re
import secrets
import sys
import threading
@@ -11,8 +13,38 @@ from dotenv import set_key
from pydantic import BaseModel, BaseSettings, validator, Field
from app.log import logger, log_settings, LogConfigModel
from app.schemas import MediaType
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
from version import APP_VERSION
class SystemConfModel(BaseModel):
"""
系统关键资源大小配置
"""
# 缓存种子数量
torrents: int = 0
# 订阅刷新处理数量
refresh: int = 0
# TMDB请求缓存数量
tmdb: int = 0
# 豆瓣请求缓存数量
douban: int = 0
# Bangumi请求缓存数量
bangumi: int = 0
# Fanart请求缓存数量
fanart: int = 0
# 元数据缓存过期时间(秒)
meta: int = 0
# 调度器数量
scheduler: int = 0
# 线程池大小
threadpool: int = 0
# 数据库连接池大小
dbpool: int = 0
# 数据库连接池溢出数量
dbpooloverflow: int = 0
class ConfigModel(BaseModel):
@@ -57,16 +89,12 @@ class ConfigModel(BaseModel):
DB_ECHO: bool = False
# 数据库连接池类型QueuePool, NullPool
DB_POOL_TYPE: str = "QueuePool"
# 是否在获取连接时进行预先 ping 操作,默认关闭
DB_POOL_PRE_PING: bool = False
# 数据库连接池的大小,默认 100
DB_POOL_SIZE: int = 100
# 数据库连接的回收时间(秒),默认 1800 秒
DB_POOL_RECYCLE: int = 1800
# 数据库连接池获取连接的超时时间(秒),默认 60 秒
DB_POOL_TIMEOUT: int = 60
# 数据库连接池最大溢出连接数,默认 500
DB_MAX_OVERFLOW: int = 500
# 是否在获取连接时进行预先 ping 操作
DB_POOL_PRE_PING: bool = True
# 数据库连接的回收时间(秒)
DB_POOL_RECYCLE: int = 300
# 数据库连接池获取连接的超时时间(秒)
DB_POOL_TIMEOUT: int = 30
# SQLite 的 busy_timeout 参数,默认为 60 秒
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认开启
@@ -187,6 +215,8 @@ class ConfigModel(BaseModel):
SITEDATA_REFRESH_INTERVAL: int = 6
# 读取和发送站点消息
SITE_MESSAGE: bool = True
# 不能缓存站点资源的站点域名,多个使用,分隔
NO_CACHE_SITE_KEY: str = "m-team"
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载站点字幕
@@ -205,8 +235,6 @@ class ConfigModel(BaseModel):
COOKIECLOUD_INTERVAL: Optional[int] = 60 * 24
# CookieCloud同步黑名单,多个域名,分割
COOKIECLOUD_BLACKLIST: Optional[str] = None
# CookieCloud对应的浏览器UA
USER_AGENT: str = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 电影重命名格式
MOVIE_RENAME_FORMAT: str = "{{title}}{% if year %} ({{year}}){% endif %}" \
"/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}" \
@@ -250,12 +278,6 @@ class ConfigModel(BaseModel):
REPO_GITHUB_TOKEN: Optional[str] = None
# 大内存模式
BIG_MEMORY_MODE: bool = False
# 是否启用内存监控
MEMORY_ANALYSIS: bool = False
# 内存快照间隔(分钟)
MEMORY_SNAPSHOT_INTERVAL: int = 60
# 保留的内存快照文件数量
MEMORY_SNAPSHOT_KEEP_COUNT: int = 20
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 是否启用编码探测的性能模式
@@ -287,6 +309,8 @@ class ConfigModel(BaseModel):
DEFAULT_SUB: Optional[str] = "zh-cn"
# Docker Client API地址
DOCKER_CLIENT_API: Optional[str] = "tcp://127.0.0.1:38379"
# 工作流数据共享
WORKFLOW_STATISTIC_SHARE: bool = True
class Settings(BaseSettings, ConfigModel, LogConfigModel):
@@ -486,6 +510,13 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
"""
return "v2"
@property
def USER_AGENT(self) -> str:
"""
全局用户代理字符串
"""
return f"{self.PROJECT_NAME}/{APP_VERSION[1:]} ({platform.system()} {platform.release()}; {SystemUtils.cpu_arch()})"
@property
def INNER_CONFIG_PATH(self):
return self.ROOT_PATH / "config"
@@ -525,43 +556,37 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return self.CONFIG_PATH / "cookies"
@property
def CONF(self):
def CONF(self) -> SystemConfModel:
"""
{
"torrents": "缓存种子数量",
"refresh": "订阅刷新处理数量",
"tmdb": "TMDB请求缓存数量",
"douban": "豆瓣请求缓存数量",
"fanart": "Fanart请求缓存数量",
"meta": "元数据缓存过期时间(秒)",
"memory": "最大占用内存MB",
"scheduler": "调度器缓存数量"
"threadpool": "线程池数量"
}
根据内存模式返回系统配置
"""
if self.BIG_MEMORY_MODE:
return {
"torrents": 200,
"refresh": 100,
"tmdb": 1024,
"douban": 512,
"bangumi": 512,
"fanart": 512,
"meta": (self.META_CACHE_EXPIRE or 24) * 3600,
"scheduler": 100,
"threadpool": 100
}
return {
"torrents": 100,
"refresh": 50,
"tmdb": 256,
"douban": 256,
"bangumi": 256,
"fanart": 128,
"meta": (self.META_CACHE_EXPIRE or 2) * 3600,
"scheduler": 50,
"threadpool": 50
}
return SystemConfModel(
torrents=200,
refresh=100,
tmdb=1024,
douban=512,
bangumi=512,
fanart=512,
meta=(self.META_CACHE_EXPIRE or 24) * 3600,
scheduler=100,
threadpool=100,
dbpool=100,
dbpooloverflow=50
)
return SystemConfModel(
torrents=100,
refresh=50,
tmdb=256,
douban=256,
bangumi=256,
fanart=128,
meta=(self.META_CACHE_EXPIRE or 2) * 3600,
scheduler=50,
threadpool=50,
dbpool=50,
dbpooloverflow=20
)
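Switching CONF from a plain dict to a pydantic model turns string-key lookups into validated attribute access. A minimal before/after sketch with a trimmed copy of the model:

```python
from pydantic import BaseModel

class SystemConfModel(BaseModel):
    """Trimmed copy of the model above, for illustration."""
    dbpool: int = 0
    dbpooloverflow: int = 0

conf = SystemConfModel(dbpool=50, dbpooloverflow=20)
assert conf.dbpool == 50          # was: settings.CONF["dbpool"] on the old dict
```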
@property
def PROXY(self):
@@ -637,6 +662,23 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return None
return UrlUtils.combine_url(host=self.APP_DOMAIN, path=url)
def RENAME_FORMAT(self, media_type: MediaType):
"""
获取指定类型的重命名格式
:param media_type: MediaType.TV 或 MediaType.Movie
:return: 重命名格式
"""
rename_format = (
self.TV_RENAME_FORMAT
if media_type == MediaType.TV
else self.MOVIE_RENAME_FORMAT
)
# 规范重命名格式
rename_format = rename_format.replace("\\", "/")
rename_format = re.sub(r'/+', '/', rename_format)
return rename_format.strip("/")
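A standalone sketch of the normalization RENAME_FORMAT performs before the format is split into path levels:

```python
import re

def normalize_rename_format(rename_format: str) -> str:
    """Backslashes to slashes, collapse repeats, trim leading/trailing slashes."""
    rename_format = rename_format.replace("\\", "/")
    rename_format = re.sub(r"/+", "/", rename_format)
    return rename_format.strip("/")

assert normalize_rename_format("\\电影//{{title}}/") == "电影/{{title}}"
```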
# 实例化配置
settings = Settings()

View File

@@ -193,7 +193,7 @@ class MediaInfo:
# LOGO
logo_path: str = None
# 评分
vote_average: float = 0.0
vote_average: float = None
# 描述
overview: str = None
# 风格ID
@@ -237,9 +237,9 @@ class MediaInfo:
# 流媒体平台
networks: list = field(default_factory=list)
# 集数
number_of_episodes: int = 0
number_of_episodes: int = None
# 季数
number_of_seasons: int = 0
number_of_seasons: int = None
# 原产国
origin_country: list = field(default_factory=list)
# 原名
@@ -255,9 +255,9 @@ class MediaInfo:
# 标签
tagline: str = None
# 评价数量
vote_count: int = 0
vote_count: int = None
# 流行度
popularity: int = 0
popularity: int = None
# 时长
runtime: int = None
# 下一集
@@ -474,7 +474,16 @@ class MediaInfo:
self.names = info.get('names') or []
# 剩余属性赋值
for key, value in info.items():
if hasattr(self, key) and not getattr(self, key):
if not value:
continue
if not hasattr(self, key):
continue
current_value = getattr(self, key)
if current_value:
continue
if current_value is None:
setattr(self, key, value)
elif type(current_value) == type(value):
setattr(self, key, value)
def set_douban_info(self, info: dict):
@@ -606,7 +615,16 @@ class MediaInfo:
self.production_countries = [{"id": country, "name": country} for country in info.get("countries") or []]
# 剩余属性赋值
for key, value in info.items():
if not value:
continue
if not hasattr(self, key):
continue
current_value = getattr(self, key)
if current_value:
continue
if current_value is None:
setattr(self, key, value)
elif type(current_value) == type(value):
setattr(self, key, value)
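Both loops above (TMDB and Douban) apply the same merge rule: ignore falsy incoming values, fill attributes that are still None, and replace a falsy-but-typed default only when the incoming type matches. A standalone sketch of the rule:

```python
def merge_attr(current, incoming):
    """Decide what an attribute becomes when merging a new info value."""
    if not incoming:                 # never overwrite with a falsy value
        return current
    if current:                      # keep an already-populated attribute
        return current
    if current is None:              # unset: take the incoming value
        return incoming
    if type(current) == type(incoming):  # falsy default (0, "", []) of same type
        return incoming
    return current

assert merge_attr(None, 8.3) == 8.3   # None filled (the new field defaults)
assert merge_attr(0, 12) == 12        # falsy int default replaced by an int
assert merge_attr(0, "12") == 0       # type mismatch keeps the default
```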
def set_bangumi_info(self, info: dict):

View File

@@ -6,7 +6,6 @@ import threading
import time
import traceback
import uuid
from functools import lru_cache
from queue import Empty, PriorityQueue
from typing import Callable, Dict, List, Optional, Union
@@ -70,9 +69,6 @@ class EventManager(metaclass=Singleton):
EventManager 负责管理和调度广播事件和链式事件,包括订阅、发送和处理事件
"""
# 退出事件
__event = threading.Event()
def __init__(self):
self.__executor = ThreadHelper() # 动态线程池,用于消费事件
self.__consumer_threads = [] # 用于保存启动的事件消费者线程
@@ -82,6 +78,7 @@ class EventManager(metaclass=Singleton):
self.__disabled_handlers = set() # 禁用的事件处理器集合
self.__disabled_classes = set() # 禁用的事件处理器类集合
self.__lock = threading.Lock() # 线程锁
self.__event = threading.Event() # 退出事件
def start(self):
"""
@@ -263,7 +260,6 @@ class EventManager(metaclass=Singleton):
return handler_info
@classmethod
@lru_cache(maxsize=1000)
def __get_handler_identifier(cls, target: Union[Callable, type]) -> Optional[str]:
"""
获取处理器或处理器类的唯一标识符,包括模块名和类名/方法名
@@ -279,7 +275,6 @@ class EventManager(metaclass=Singleton):
return f"{module_name}.{qualname}"
@classmethod
@lru_cache(maxsize=1000)
def __get_class_from_callable(cls, handler: Callable) -> Optional[str]:
"""
获取可调用对象所属类的唯一标识符

View File

@@ -9,8 +9,6 @@ class CustomizationMatcher(metaclass=Singleton):
"""
识别自定义占位符
"""
customization = None
custom_separator = None
def __init__(self):
self.systemconfig = SystemConfigOper()

View File

@@ -55,6 +55,8 @@ class MetaBase(object):
resource_team: Optional[str] = None
# 识别的自定义占位符
customization: Optional[str] = None
# 识别的流媒体平台
web_source: Optional[str] = None
# 视频编码
video_encode: Optional[str] = None
# 音频编码

View File

@@ -67,7 +67,6 @@ class MetaVideo(MetaBase):
original_title = title
self._source = ""
self._effect = []
self.web_source = None
self._index = 0
# 判断是否纯数字命名
if isfile \
@@ -140,9 +139,6 @@ class MetaVideo(MetaBase):
self.resource_effect = " ".join(self._effect)
if self._source:
self.resource_type = self._source.strip()
# 添加流媒体平台
if self.web_source:
self.resource_type = f"{self.web_source} {self.resource_type}"
# 提取原盘DIY
if self.resource_type and "BluRay" in self.resource_type:
if (self.subtitle and re.findall(r'D[Ii]Y', self.subtitle)) \

View File

@@ -9,7 +9,6 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"""
识别制作组、字幕组
"""
__release_groups: str = None
# 内置组
RELEASE_GROUPS: dict = {
"0ff": ['FF(?:(?:A|WE)B|CD|E(?:DU|B)|TV)'],

View File

@@ -18,7 +18,7 @@ class StreamingPlatforms(metaclass=Singleton):
("PMTP", "Paramount+"),
("HMAX", "Max"),
("", "Max"),
("HULU", "Hulu"),
("HULU", "Hulu Networks"),
("MA", "Movies Anywhere"),
("BCORE", "Bravia Core"),
("MS", "Microsoft Store"),
@@ -26,18 +26,17 @@ class StreamingPlatforms(metaclass=Singleton):
("STAN", "Stan"),
("PCOK", "Peacock"),
("SKST", "SkyShowtime"),
("NOW", "Now TV"),
("NOW", "Now"),
("FXTL", "Foxtel Now"),
("BNGE", "Binge"),
("CRKL", "Crackle"),
("RKTN", "Rakuten TV"),
("ALL4", "All 4"),
("ALL4", "Channel 4"),
("AS", "Adult Swim"),
("BRTB", "Brtb TV"),
("CNLP", "Canal+"),
("CRIT", "Criterion Channel"),
("DSCP", "Discovery+"),
("", "ESPN"),
("FOOD", "Food Network"),
("MUBI", "Mubi"),
("PLAY", "Google Play"),
@@ -48,9 +47,11 @@ class StreamingPlatforms(metaclass=Singleton):
("", "LiTV"),
("", "MyVideo"),
("Hami", "Hami Video"),
("", "meWATCH"),
("HamiVideo", "Hami Video"),
("MW", "meWATCH"),
("CATCHPLAY", "CATCHPLAY+"),
("", "LINE TV"),
("CPP", "CATCHPLAY+"),
("LINETV", "LINE TV"),
("VIU", "Viu"),
("IQ", ""),
("", "WeTV"),
@@ -65,6 +66,215 @@ class StreamingPlatforms(metaclass=Singleton):
("FUNi", "Funimation"),
("HIDI", "HIDIVE"),
("UNXT", "U-NEXT"),
("FAA", "Filmarchiv Austria"),
("CC", "Comedy Central"),
("iP", "BBC iPlayer"),
("9NOW", "9Now"),
("ABC", ""),
("", "AMC"),
("", "ZEE5"),
("", "WAVO"),
("SHAHID", "Shahid"),
("Flixole", "FlixOlé"),
("TOU", "Ici TOU.TV"),
("ROKU", "Roku"),
("KNPY", "Kanopy"),
("SNXT", "Sun NXT"),
("CUR", "Curiosity Stream"),
("MY5", "Channel 5"),
("AHA", "aha"),
("WOWP", "WOW Presents Plus"),
("JC", "JioCinema"),
("", "Dekkoo"),
("FILMZIE", "Filmzie"),
("HoiChoi", "Hoichoi"),
("VIKI", "Rakuten Viki"),
("SF", "SF Anytime"),
("PLEX", "Plex"),
("SHDR", "Shudder"),
("CRAV", "Crave"),
("CPE", "Cineplex Entertainment"),
("JF HC", ""),
("JF", ""),
("JFFP", ""),
("VIAP", "Viaplay"),
("TUBI", "TubiTV"),
("", "PBS"),
("PBSK", "PBS KIDS"),
("LGP", "Lionsgate Play"),
("", "CTV"),
("", "Cineverse"),
("LN", "Love Nature"),
("MP", "Movistar Plus+"),
("RUNTIME", "Runtime"),
("STZ", "STARZ"),
("FUBO", "fuboTV"),
("TENK", "Tënk"),
("KNOW", "Knowledge Network"),
("TVO", "tvo"),
("", "OVID"),
("CBC", "CBC Gem"),
("FANDOR", "fandor"),
("CW", "The CW"),
("KNPY", "Kanopy"),
("FREE", "Freeform"),
("AE", "A&E"),
("LIFE", "Lifetime"),
("WWEN", "WWE Network"),
("CMAX", "Cinemax"),
("HLMK", "Hallmark"),
("BYU", "BYUtv"),
("", "ViX"),
("VICE", "Viceland"),
("", "TVING"),
("USAN", "USA Network"),
("FOX", ""),
("", "TCM"),
("BRAV", "BravoTV"),
("", "TNT"),
("", "ZDF"),
("", "IndieFlix"),
("", "TLC"),
("", "HGTV"),
("ANPL", "Animal Planet"),
("TRVL", "Travel Channel"),
("", "VH1"),
("SAINA", "Saina Play"),
("SP", "Saina Play"),
("OXGN", "Oxygen"),
("PSN", "PlayStation Network"),
("PMNT", "Paramount Network"),
("FAWESOME", "Fawesome"),
("KLASSIKI", "Klassiki"),
("STRP", "Star+"),
("NATG", "National Geographic"),
("REVEEL", "Reveel"),
("FYI", "FYI Network"),
("WatchiT", "WATCH IT"),
("ITVX", "ITV"),
("GAIA", "Gaia"),
("", "FlixLatino"),
("CNNP", "CNN+"),
("TROMA", "Troma"),
("IVI", "Ivi"),
("9NOW", "9Now"),
("A3P", "Atresplayer"),
("7PLUS", "7plus"),
("", "SBS"),
("TEN", "10Play"),
("AUBC", ""),
("DSNY", "Disney Networks"),
("OSN", "OSN+"),
("SVT", "Sveriges Television"),
("LACINETEK", "LaCinetek"),
("", "Maxdome"),
("RTL", "RTL+"),
("ARTE", "Arte"),
("JOYN", "Joyn"),
("TV2", "TV 2"),
("3SAT", "3sat"),
("FILMINGO", "filmingo"),
("", "WOW"),
("OKKO", "Okko"),
("", "Go3"),
("ARGP", "Argo"),
("VOYO", "Voyo"),
("VMAX", "vivamax"),
("FILMIN", "Filmin"),
("", "Mitele"),
("MY5", "Channel 5"),
("", "ARD"),
("BK", "Bentkey"),
("BOOM", "Boomerang"),
("", "CBS"),
("CLBI", "Club illico"),
("CMOR", "C More"),
("CMT", ""),
("", "CNBC"),
("COOK", "Cooking Channel"),
("CWS", "CW Seed"),
("DCU", "DC Universe"),
("DDY", "Digiturk Dilediğin Yerde"),
("DEST", "Destination America"),
("DISC", "Discovery Channel"),
("DW", "DailyWire+"),
("DLWP", "DailyWire+"),
("DPLY", "dplay"),
("DRPO", "Dropout"),
("EPIX", "EPIX MGM+"),
("ESQ", "Esquire"),
("ETV", "E!"),
("FBWatch", "Facebook Watch"),
("FPT", "FPT Play"),
("FTV", "France.tv"),
("GLOB", "GloboSat Play"),
("GLBO", "Globoplay"),
("GO90", "go90"),
("HIST", "History Channel"),
("HPLAY", "Hungama Play"),
("KS", "Kaleidescape"),
("", "MBC"),
("MMAX", "ManoramaMAX"),
("MNBC", "MSNBC"),
("MTOD", "Motor Trend OnDemand"),
("NBC", ""),
("NBLA", "Nebula"),
("NICK", "Nickelodeon"),
("ODK", "OnDemandKorea"),
("POGO", "PokerGO"),
("PUHU", "puhutv"),
("QIBI", "Quibi"),
("RTE", "RTÉ"),
("SESO", "Seeso"),
("SPIK", "Spike"),
("SS", "Simply South"),
("SYFY", "SyFy"),
("TIMV", "TIMvision"),
("TK", "Tentkotta"),
("", "TV4"),
("TVL", "TV Land"),
("", "TVNZ"),
("", "UKTV"),
("VLCT", "Discovery Velocity"),
("VMEO", "Vimeo"),
("VRV", "VRV Defunct"),
("WTCH", "Watcha"),
("", "NowPlayer"),
("HuluJP", "Hulu Networks"),
("Gaga", "GagaOOLala"),
("MyTVS", "MyTVSuper"),
("", "BBC"),
("CC", "Comedy Central"),
("NowE", "Now E"),
("WAVVE", "Wavve"),
("SE", ""),
("", "BritBox"),
("AOD", "Anime on Demand"),
("AF", ""),
("BCH", "Bandai Channel"),
("VMJ", "VideoMarket"),
("LFTL", "Laftel"),
("WAKA", "Wakanim"),
("WAKANIM", "Wakanim"),
("AO", "AnimeOnegai"),
("", "Lemino"),
("VIDIO", "Vidio"),
("TVER", "TVer"),
("", "MBS"),
("LFTLNET", "Laftel"),
("JONU", "Jonu Play"),
("PlutoTV", "Pluto TV"),
("AbemaTV", "Abema"),
("", "dTV"),
("NYMEY", "Nymey"),
("SMNS", "SAMANSA"),
("CTHP", "CATCHPLAY+"),
("HBOGO", "HBO GO"),
("HBO", "HBO"),
("FPTP", "FPT Play"),
("", "LOCIPO"),
("DANT", "DANET"),
("OV", "OceanVeil"),
]
def __init__(self):

View File

@@ -154,35 +154,35 @@ def find_metainfo(title: str) -> Tuple[str, dict]:
# 去除title中该部分
if tmdbid or mtype or begin_season or end_season or begin_episode or end_episode:
title = title.replace(f"{{[{result}]}}", '')
# 支持Emby格式的ID标签
# 1. [tmdbid=xxxx] 或 [tmdbid-xxxx] 格式
tmdb_match = re.search(r'\[tmdbid[=\-](\d+)\]', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\[tmdbid[=\-](\d+)\]', '', title).strip()
# 2. [tmdb=xxxx] 或 [tmdb-xxxx] 格式
if not metainfo['tmdbid']:
tmdb_match = re.search(r'\[tmdb[=\-](\d+)\]', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\[tmdb[=\-](\d+)\]', '', title).strip()
# 3. {tmdbid=xxxx} 或 {tmdbid-xxxx} 格式
if not metainfo['tmdbid']:
tmdb_match = re.search(r'\{tmdbid[=\-](\d+)\}', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\{tmdbid[=\-](\d+)\}', '', title).strip()
# 4. {tmdb=xxxx} 或 {tmdb-xxxx} 格式
if not metainfo['tmdbid']:
tmdb_match = re.search(r'\{tmdb[=\-](\d+)\}', title)
if tmdb_match:
metainfo['tmdbid'] = tmdb_match.group(1)
title = re.sub(r'\{tmdb[=\-](\d+)\}', '', title).strip()
# 计算季集总数
if metainfo.get('begin_season') and metainfo.get('end_season'):
if metainfo['begin_season'] > metainfo['end_season']:
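The four branches above differ only in bracket style ([ ] vs { }) and tag name (tmdbid vs tmdb). As an illustration, not the project's code, the same behavior can be sketched with a single pattern:

```python
import re
from typing import Optional, Tuple

# Matches [tmdbid=123], [tmdb-123], {tmdbid=123}, {tmdb-123}
TMDB_TAG = re.compile(r"[\[{]tmdb(?:id)?[=\-](\d+)[]}]")

def extract_tmdbid(title: str) -> Tuple[str, Optional[str]]:
    """Return the title with the tag stripped, plus the TMDB id if present."""
    match = TMDB_TAG.search(title)
    if not match:
        return title, None
    return TMDB_TAG.sub("", title).strip(), match.group(1)

assert extract_tmdbid("The Matrix [tmdbid=603]") == ("The Matrix", "603")
assert extract_tmdbid("Breaking Bad {tmdb-1396}") == ("Breaking Bad", "1396")
```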

View File

@@ -16,14 +16,14 @@ class ModuleManager(metaclass=Singleton):
模块管理器
"""
# 模块列表
_modules: dict = {}
# 运行态模块列表
_running_modules: dict = {}
# 子模块类型集合
SubType = Union[DownloaderType, MediaServerType, MessageChannel, StorageSchema, OtherModulesType]
def __init__(self):
# 模块列表
self._modules: dict = {}
# 运行态模块列表
self._running_modules: dict = {}
self.load_modules()
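This hunk, like the PluginManager and WorkFlowManager hunks below, moves mutable state from class attributes into __init__. A minimal sketch of the pitfall being fixed: a class-level dict is shared across every instance and never released with any one of them:

```python
class Leaky:
    registry: dict = {}              # class attribute: one dict shared by all

    def add(self, key, value):
        self.registry[key] = value   # mutates the shared class-level dict

class Clean:
    def __init__(self):
        self.registry: dict = {}     # instance attribute: owned per object

a, b = Leaky(), Leaky()
a.add("x", 1)
assert b.registry == {"x": 1}        # state leaked between instances

c, d = Clean(), Clean()
c.registry["x"] = 1
assert d.registry == {}              # isolated, and freed with the instance
```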
def load_modules(self):

View File

@@ -3,12 +3,14 @@ import concurrent.futures
import importlib.util
import inspect
import os
import sys
import time
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
from typing import Any, Dict, List, Optional, Type, Union, Callable, Tuple
from app.helper.sites import SitesHelper
from fastapi import HTTPException
from starlette import status
from watchdog.events import FileSystemEventHandler
@@ -20,7 +22,6 @@ from app.core.event import eventmanager, Event
from app.db.plugindata_oper import PluginDataOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.plugin import PluginHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas.types import EventType, SystemConfigKey
from app.utils.crypto import RSAUtils
@@ -87,16 +88,15 @@ class PluginManager(metaclass=Singleton):
插件管理器
"""
# 插件列表
_plugins: dict = {}
# 运行态插件列表
_running_plugins: dict = {}
# 配置Key
_config_key: str = "plugin.%s"
# 监听器
_observer: Observer = None
def __init__(self):
# 插件列表
self._plugins: dict = {}
# 运行态插件列表
self._running_plugins: dict = {}
# 配置Key
self._config_key: str = "plugin.%s"
# 监听器
self._observer: Observer = None
# 开发者模式监测插件修改
if settings.DEV or settings.PLUGIN_AUTO_RELOAD:
self.__start_monitor()
@@ -198,10 +198,14 @@ class PluginManager(metaclass=Singleton):
# 清空指定插件
self._plugins.pop(pid, None)
self._running_plugins.pop(pid, None)
# 清除插件模块缓存,包括所有子模块
self._clear_plugin_modules(pid)
else:
# 清空
self._plugins = {}
self._running_plugins = {}
# 清除所有插件模块缓存
self._clear_plugin_modules()
logger.info("插件停止完成")
@staticmethod
@@ -366,25 +370,51 @@ class PluginManager(metaclass=Singleton):
"""
self.stop(plugin_id)
# 从模块列表中移除插件
from sys import modules
try:
del modules[f"app.plugins.{plugin_id.lower()}"]
except KeyError:
pass
def reload_plugin(self, plugin_id: str):
"""
将一个插件重新加载到内存
:param plugin_id: 插件ID
"""
# 先移除
# 先移除插件实例
self.stop(plugin_id)
# 重新加载
self.start(plugin_id)
# 广播事件
eventmanager.send_event(EventType.PluginReload, data={"plugin_id": plugin_id})
@staticmethod
def _clear_plugin_modules(plugin_id: Optional[str] = None):
"""
清除插件及其所有子模块的缓存
:param plugin_id: 插件ID
"""
# 构建插件模块前缀
if plugin_id:
plugin_module_prefix = f"app.plugins.{plugin_id.lower()}"
else:
plugin_module_prefix = "app.plugins"
# 收集需要删除的模块名(创建模块名列表的副本以避免迭代时修改字典)
modules_to_remove = []
for module_name in list(sys.modules.keys()):
if module_name == plugin_module_prefix or module_name.startswith(plugin_module_prefix + "."):
modules_to_remove.append(module_name)
# 删除模块
for module_name in modules_to_remove:
try:
del sys.modules[module_name]
logger.debug(f"已清除插件模块缓存:{module_name}")
except KeyError:
# 模块可能已经被删除
pass
if plugin_id:
if modules_to_remove:
logger.info(f"插件 {plugin_id} 共清除 {len(modules_to_remove)} 个模块缓存:{modules_to_remove}")
else:
logger.debug(f"插件 {plugin_id} 没有找到需要清除的模块缓存")
def sync(self) -> List[str]:
"""
安装本地不存在或需要更新的插件
@@ -1416,8 +1446,9 @@ class PluginManager(metaclass=Singleton):
content = f.read()
# 替换CSS中可能的类名引用
content = content.replace(original_class_name.lower(), clone_class_name.lower())
content = content.replace(original_class_name, clone_class_name)
content = content.replace(original_class_name.lower(),
clone_class_name.lower()).replace(original_class_name,
clone_class_name)
with open(file_path, 'w', encoding='utf-8') as f:
f.write(content)

View File

@@ -13,10 +13,9 @@ class WorkFlowManager(metaclass=Singleton):
工作流管理器
"""
# 所有动作定义
_actions: Dict[str, Any] = {}
def __init__(self):
# 所有动作定义
self._actions: Dict[str, Any] = {}
self.init()
def init(self):

View File

@@ -24,9 +24,9 @@ db_kwargs = {
# 当使用 QueuePool 时,添加 QueuePool 特有的参数
if pool_class == QueuePool:
db_kwargs.update({
"pool_size": settings.DB_POOL_SIZE,
"pool_size": settings.CONF.dbpool,
"pool_timeout": settings.DB_POOL_TIMEOUT,
"max_overflow": settings.DB_MAX_OVERFLOW
"max_overflow": settings.CONF.dbpooloverflow
})
# 创建数据库引擎
Engine = create_engine(**db_kwargs)
@@ -221,8 +221,7 @@ class Base:
@classmethod
@db_query
def list(cls, db: Session) -> List[Self]:
result = db.query(cls).all()
return list(result)
return db.query(cls).all()
def to_dict(self):
return {c.name: getattr(self, c.name, None) for c in self.__table__.columns} # noqa

View File

@@ -1,7 +1,7 @@
import time
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, JSON, or_
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -74,8 +74,7 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(DownloadHistory).offset((page - 1) * count).limit(count).all()
return list(result)
return db.query(DownloadHistory).offset((page - 1) * count).limit(count).all()
@staticmethod
@db_query
@@ -91,50 +90,47 @@ class DownloadHistory(Base):
根据tmdbid、season、season_episode查询下载记录
tmdbid + mtype 或 title + year
"""
result = None
# TMDBID + 类型
if tmdbid and mtype:
# 电视剧某季某集
if season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
elif season:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
else:
# 电视剧所有季集/电影
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype).order_by(
DownloadHistory.id.desc()).all()
# 标题 + 年份
elif title and year:
# 电视剧某季某集
if season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
elif season:
result = db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
else:
# 电视剧所有季集/电影
result = db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
DownloadHistory.id.desc()).all()
if result:
return list(result)
return []
@staticmethod
@@ -144,13 +140,12 @@ class DownloadHistory(Base):
查询某用户某时间之后的下载历史
"""
if username:
result = db.query(DownloadHistory).filter(DownloadHistory.date < date,
DownloadHistory.username == username).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.date < date,
DownloadHistory.username == username).order_by(
DownloadHistory.id.desc()).all()
else:
result = db.query(DownloadHistory).filter(DownloadHistory.date < date).order_by(
return db.query(DownloadHistory).filter(DownloadHistory.date < date).order_by(
DownloadHistory.id.desc()).all()
return list(result)
@staticmethod
@db_query
@@ -173,12 +168,11 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(DownloadHistory) \
return db.query(DownloadHistory) \
.filter(DownloadHistory.type == mtype,
DownloadHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
class DownloadFiles(Base):
@@ -205,12 +199,10 @@ class DownloadFiles(Base):
@db_query
def get_by_hash(db: Session, download_hash: str, state: Optional[int] = None):
if state:
result = db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()
else:
result = db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash).all()
return list(result)
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash).all()
@staticmethod
@db_query
@@ -225,8 +217,7 @@ class DownloadFiles(Base):
@staticmethod
@db_query
def get_by_savepath(db: Session, savepath: str):
result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
return list(result)
return db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
@staticmethod
@db_update

View File

@@ -37,7 +37,4 @@ class Message(Base):
@staticmethod
@db_query
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
count).all()
result.sort(key=lambda x: x.reg_time, reverse=False)
return list(result)
return db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(count).all()

View File

@@ -16,8 +16,7 @@ class PluginData(Base):
@staticmethod
@db_query
def get_plugin_data(db: Session, plugin_id: str):
result = db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()
return list(result)
return db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()
@staticmethod
@db_query
@@ -37,5 +36,4 @@ class PluginData(Base):
@staticmethod
@db_query
def get_plugin_data_by_plugin_id(db: Session, plugin_id: str):
result = db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()
return list(result)
return db.query(PluginData).filter(PluginData.plugin_id == plugin_id).all()

View File

@@ -62,20 +62,17 @@ class Site(Base):
@staticmethod
@db_query
def get_actives(db: Session):
result = db.query(Site).filter(Site.is_active == 1).all()
return list(result)
return db.query(Site).filter(Site.is_active == 1).all()
@staticmethod
@db_query
def list_order_by_pri(db: Session):
result = db.query(Site).order_by(Site.pri).all()
return list(result)
return db.query(Site).order_by(Site.pri).all()
@staticmethod
@db_query
def get_domains_by_ids(db: Session, ids: list):
result = db.query(Site.domain).filter(Site.id.in_(ids)).all()
return [r[0] for r in result]
return [r[0] for r in db.query(Site.domain).filter(Site.id.in_(ids)).all()]
@staticmethod
@db_update

View File

@@ -104,12 +104,10 @@ class Subscribe(Base):
def get_by_state(db: Session, state: str):
# 如果 state 为空或 None返回所有订阅
if not state:
result = db.query(Subscribe).all()
return db.query(Subscribe).all()
else:
# 如果传入的状态不为空,拆分成多个状态
states = state.split(',')
result = db.query(Subscribe).filter(Subscribe.state.in_(states)).all()
return list(result)
return db.query(Subscribe).filter(Subscribe.state.in_(state.split(','))).all()
@staticmethod
@db_query
@@ -123,11 +121,10 @@ class Subscribe(Base):
@db_query
def get_by_tmdbid(db: Session, tmdbid: int, season: Optional[int] = None):
if season:
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid,
Subscribe.season == season).all()
return db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid,
Subscribe.season == season).all()
else:
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid).all()
return list(result)
return db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid).all()
@staticmethod
@db_query
@@ -170,26 +167,24 @@ class Subscribe(Base):
def list_by_username(db: Session, username: str, state: Optional[str] = None, mtype: Optional[str] = None):
if mtype:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username,
Subscribe.type == mtype).all()
return db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username,
Subscribe.type == mtype).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username,
Subscribe.type == mtype).all()
return db.query(Subscribe).filter(Subscribe.username == username,
Subscribe.type == mtype).all()
else:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username).all()
return db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username).all()
return list(result)
return db.query(Subscribe).filter(Subscribe.username == username).all()
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(Subscribe) \
return db.query(Subscribe) \
.filter(Subscribe.type == mtype,
Subscribe.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)

View File

@@ -75,12 +75,11 @@ class SubscribeHistory(Base):
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(SubscribeHistory).filter(
return db.query(SubscribeHistory).filter(
SubscribeHistory.type == mtype
).order_by(
SubscribeHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@db_query

View File

@@ -63,35 +63,33 @@ class TransferHistory(Base):
@db_query
def list_by_title(db: Session, title: str, page: Optional[int] = 1, count: Optional[int] = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(
return db.query(TransferHistory).filter(
TransferHistory.status == status
).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
else:
result = db.query(TransferHistory).filter(or_(
return db.query(TransferHistory).filter(or_(
TransferHistory.title.like(f'%{title}%'),
TransferHistory.src.like(f'%{title}%'),
TransferHistory.dest.like(f'%{title}%'),
)).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@db_query
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(
return db.query(TransferHistory).filter(
TransferHistory.status == status
).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
else:
result = db.query(TransferHistory).order_by(
return db.query(TransferHistory).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@db_query
@@ -115,8 +113,7 @@ class TransferHistory(Base):
@staticmethod
@db_query
def list_by_hash(db: Session, download_hash: str):
result = db.query(TransferHistory).filter(TransferHistory.download_hash == download_hash).all()
return list(result)
return db.query(TransferHistory).filter(TransferHistory.download_hash == download_hash).all()
@staticmethod
@db_query
@@ -128,8 +125,7 @@ class TransferHistory(Base):
TransferHistory.id.label('id')).filter(
TransferHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * days))).subquery()
result = db.query(sub_query.c.date, func.count(sub_query.c.id)).group_by(sub_query.c.date).all()
return list(result)
return db.query(sub_query.c.date, func.count(sub_query.c.id)).group_by(sub_query.c.date).all()
@staticmethod
@db_query
@@ -153,70 +149,67 @@ class TransferHistory(Base):
@staticmethod
@db_query
def list_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None, year: Optional[str] = None, season: Optional[str] = None,
def list_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None, year: Optional[str] = None,
season: Optional[str] = None,
episode: Optional[str] = None, tmdbid: Optional[int] = None, dest: Optional[str] = None):
"""
根据tmdbid、season、season_episode查询转移记录
tmdbid + mtype 或 title + year 必输
"""
result = None
# TMDBID + 类型
if tmdbid and mtype:
# 电视剧某季某集
if season and episode:
result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
# 电视剧某季
elif season:
result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season).all()
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season).all()
else:
if dest:
# 电影
result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.dest == dest).all()
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.dest == dest).all()
else:
# 电视剧所有季集
result = db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).all()
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).all()
# 标题 + 年份
elif title and year:
# 电视剧某季某集
if season and episode:
result = db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
# 电视剧某季
elif season:
result = db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
else:
if dest:
# 电影
result = db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.dest == dest).all()
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.dest == dest).all()
else:
# 电视剧所有季集
result = db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
# 类型 + 转移路径emby webhook season无tmdbid场景
elif mtype and season and dest:
# 电视剧某季
result = db.query(TransferHistory).filter(TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.dest.like(f"{dest}%")).all()
if result:
return list(result)
return db.query(TransferHistory).filter(TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.dest.like(f"{dest}%")).all()
return []
@staticmethod

View File

@@ -37,6 +37,11 @@ class Workflow(Base):
# 最后执行时间
last_time = Column(String)
@staticmethod
@db_query
def list(db):
return db.query(Workflow).all()
@staticmethod
@db_query
def get_enabled_workflows(db):

View File

@@ -78,7 +78,3 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
if conf:
conf.delete(self._db, conf.id)
return True
def __del__(self):
if self._db:
self._db.close()

View File

@@ -50,10 +50,6 @@ class UserConfigOper(DbOper, metaclass=Singleton):
return self.__get_config_caches(username=username)
return self.__get_config_cache(username=username, key=key)
def __del__(self):
if self._db:
self._db.close()
def __set_config_cache(self, username: str, key: str, value: Any):
"""
设置配置缓存

View File

@@ -25,6 +25,12 @@ class WorkflowOper(DbOper):
"""
return Workflow.get(self._db, wid)
def list(self) -> List[Workflow]:
"""
获取所有工作流列表
"""
return Workflow.list(self._db)
def list_enabled(self) -> List[Workflow]:
"""
获取启用的工作流列表

View File

@@ -46,17 +46,17 @@ class PlaywrightHelper:
browser = playwright[self.browser_type].launch(headless=headless)
context = browser.new_context(user_agent=ua, proxy=proxies)
page = context.new_page()
if cookies:
page.set_extra_http_headers({"cookie": cookies})
if not self.__pass_cloudflare(url, page):
logger.warn("cloudflare challenge fail")
page.wait_for_load_state("networkidle", timeout=timeout * 1000)
# 回调函数
result = callback(page)
except Exception as e:
logger.error(f"网页操作失败: {str(e)}")
finally:
@@ -69,7 +69,7 @@ class PlaywrightHelper:
browser.close()
except Exception as e:
logger.error(f"Playwright初始化失败: {str(e)}")
return result
def get_page_source(self, url: str,
@@ -97,16 +97,16 @@ class PlaywrightHelper:
browser = playwright[self.browser_type].launch(headless=headless)
context = browser.new_context(user_agent=ua, proxy=proxies)
page = context.new_page()
if cookies:
page.set_extra_http_headers({"cookie": cookies})
if not self.__pass_cloudflare(url, page):
logger.warn("cloudflare challenge fail")
page.wait_for_load_state("networkidle", timeout=timeout * 1000)
source = page.content()
except Exception as e:
logger.error(f"获取网页源码失败: {str(e)}")
source = None
@@ -120,7 +120,7 @@ class PlaywrightHelper:
browser.close()
except Exception as e:
logger.error(f"Playwright初始化失败: {str(e)}")
return source
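A self-contained sketch of the pattern these two methods share, with the cookie header, the network-idle wait and the guaranteed browser close reduced to their essentials (requires playwright plus an installed chromium; the Cloudflare handling is omitted):

```python
from typing import Optional

from playwright.sync_api import sync_playwright

def fetch_page_source(url: str, cookies: Optional[str] = None,
                      ua: Optional[str] = None, timeout: int = 20) -> Optional[str]:
    """Open a page with an optional cookie header and return its HTML."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        try:
            context = browser.new_context(user_agent=ua)
            page = context.new_page()
            if cookies:
                page.set_extra_http_headers({"cookie": cookies})
            page.goto(url)
            page.wait_for_load_state("networkidle", timeout=timeout * 1000)
            return page.content()
        finally:
            browser.close()

# html = fetch_page_source("https://example.org")
```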

View File

@@ -96,53 +96,58 @@ class CookieHelper:
return None, None, "获取源码失败"
# 查找用户名输入框
html = etree.HTML(html_text)
username_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("username"):
if html.xpath(xpath):
username_xpath = xpath
break
if not username_xpath:
return None, None, "未找到用户名输入框"
# 查找密码输入框
password_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("password"):
if html.xpath(xpath):
password_xpath = xpath
break
if not password_xpath:
return None, None, "未找到密码输入框"
# 处理二步验证码
otp_code = TwoFactorAuth(two_step_code).get_code()
# 查找二步验证码输入框
twostep_xpath = None
if otp_code:
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
try:
username_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("username"):
if html.xpath(xpath):
twostep_xpath = xpath
username_xpath = xpath
break
# 查找验证码输入框
captcha_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("captcha"):
if html.xpath(xpath):
captcha_xpath = xpath
break
# 查找验证码图片
captcha_img_url = None
if captcha_xpath:
for xpath in self._SITE_LOGIN_XPATH.get("captcha_img"):
if not username_xpath:
return None, None, "未找到用户名输入框"
# 查找密码输入框
password_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("password"):
if html.xpath(xpath):
captcha_img_url = html.xpath(xpath)[0]
password_xpath = xpath
break
if not captcha_img_url:
return None, None, "未找到验证码图片"
# 查找登录按钮
submit_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("submit"):
if html.xpath(xpath):
submit_xpath = xpath
break
if not submit_xpath:
return None, None, "未找到登录按钮"
if not password_xpath:
return None, None, "未找到密码输入框"
# 处理二步验证码
otp_code = TwoFactorAuth(two_step_code).get_code()
# 查找二步验证码输入框
twostep_xpath = None
if otp_code:
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
twostep_xpath = xpath
break
# 查找验证码输入框
captcha_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("captcha"):
if html.xpath(xpath):
captcha_xpath = xpath
break
# 查找验证码图片
captcha_img_url = None
if captcha_xpath:
for xpath in self._SITE_LOGIN_XPATH.get("captcha_img"):
if html.xpath(xpath):
captcha_img_url = html.xpath(xpath)[0]
break
if not captcha_img_url:
return None, None, "未找到验证码图片"
# 查找登录按钮
submit_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("submit"):
if html.xpath(xpath):
submit_xpath = xpath
break
if not submit_xpath:
return None, None, "未找到登录按钮"
finally:
if html is not None:
del html
# 点击登录按钮
try:
# 等待登录按钮准备好
@@ -185,19 +190,23 @@ class CookieHelper:
if not otp_code:
return None, None, "需要二次验证码"
html = etree.HTML(page.content())
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
try:
# 刷新一下 2fa code
otp_code = TwoFactorAuth(two_step_code).get_code()
page.fill(xpath, otp_code)
# 登录按钮 xpath 理论上相同,不再重复查找
page.click(submit_xpath)
page.wait_for_load_state("networkidle", timeout=30 * 1000)
except Exception as e:
logger.error(f"二次验证码输入失败:{str(e)}")
return None, None, f"二次验证码输入失败:{str(e)}"
break
try:
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
try:
# 刷新一下 2fa code
otp_code = TwoFactorAuth(two_step_code).get_code()
page.fill(xpath, otp_code)
# 登录按钮 xpath 理论上相同,不再重复查找
page.click(submit_xpath)
page.wait_for_load_state("networkidle", timeout=30 * 1000)
except Exception as e:
logger.error(f"二次验证码输入失败:{str(e)}")
return None, None, f"二次验证码输入失败:{str(e)}"
break
finally:
if html is not None:
del html
# 登录后的源码
html_text = page.content()
if not html_text:

View File

@@ -1,12 +1,16 @@
import re
from pathlib import Path
from typing import List, Optional
from app import schemas
from app.core.context import MediaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.system import SystemUtils
JINJA2_VAR_PATTERN = re.compile(r"\{\{.*?\}\}", re.DOTALL)
class DirectoryHelper:
"""
@@ -109,3 +113,42 @@ class DirectoryHelper:
return matched_dir
return matched_dirs[0]
return None
@staticmethod
def get_media_root_path(rename_format: str, rename_path: Path) -> Optional[Path]:
"""
获取重命名后的媒体文件根路径
:param rename_format: 重命名格式
:param rename_path: 重命名后的路径
:return: 媒体文件根路径
"""
if not rename_format:
logger.error("重命名格式不能为空")
return None
# 计算重命名中的文件夹层数
rename_list = rename_format.split("/")
rename_format_level = len(rename_list) - 1
# 查找标题参数所在层
for level, name in enumerate(rename_list):
matchs = JINJA2_VAR_PATTERN.findall(name)
if not matchs:
continue
# 处理特例,有的人重命名的第一层是年份、分辨率
if any("title" in m for m in matchs):
# 找出含标题的这一层作为媒体根目录
rename_format_level -= level
break
else:
# 假定第一层目录是媒体根目录
logger.warn(f"重命名格式 {rename_format} 缺少标题参数")
if rename_format_level > len(rename_path.parents):
# 通常因为路径以/结尾被Path规范化删除了
logger.error(f"路径 {rename_path} 不匹配重命名格式 {rename_format}")
return None
if rename_format_level <= 0:
# 所有媒体文件都存在一个目录内的特殊需求
rename_format_level = 1
# 媒体根路径
media_root = rename_path.parents[rename_format_level - 1]
return media_root
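A worked example of the level arithmetic above, assuming an illustrative three-segment rename format whose first segment contains {{title}}:

```python
from pathlib import Path

# Illustrative format: 3 segments, so 2 directory levels below the media root
rename_format = "{{title}} ({{year}})/Season {{season}}/{{title}} - {{season_episode}}.{{fileExt}}"
rename_path = Path("/media/tv/怪奇物语 (2016)/Season 1/怪奇物语 - S01E01.mkv")

levels = len(rename_format.split("/")) - 1   # 2; the title sits at level 0
media_root = rename_path.parents[levels - 1] # parents[1]
assert media_root == Path("/media/tv/怪奇物语 (2016)")
```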

View File

@@ -8,7 +8,6 @@ import os
class DisplayHelper(metaclass=Singleton):
_display: Display = None
def __init__(self):
if not SystemUtils.is_docker():

View File

@@ -70,6 +70,9 @@ def enable_doh(enable: bool):
class DohHelper(metaclass=Singleton):
"""
DoH帮助类用于处理DNS over HTTPS解析。
"""
def __init__(self):
enable_doh(settings.DOH_ENABLE)

View File

@@ -1,457 +0,0 @@
import gc
import sys
import threading
import time
from datetime import datetime
from typing import Optional
import psutil
from pympler import muppy, summary, asizeof
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.log import logger
from app.schemas import ConfigChangeEventData
from app.schemas.types import EventType
from app.utils.singleton import Singleton
class MemoryHelper(metaclass=Singleton):
"""
内存管理工具类,用于监控和优化内存使用
"""
def __init__(self):
# 检查间隔(秒) - 从配置获取默认5分钟
self._check_interval = settings.MEMORY_SNAPSHOT_INTERVAL * 60
self._monitoring = False
self._monitor_thread: Optional[threading.Thread] = None
# 内存快照保存目录
self._memory_snapshot_dir = settings.LOG_PATH / "memory_snapshots"
# 保留的快照文件数量
self._keep_count = settings.MEMORY_SNAPSHOT_KEEP_COUNT
@eventmanager.register(EventType.ConfigChanged)
def handle_config_changed(self, event: Event):
"""
处理配置变更事件,更新内存监控设置
:param event: 事件对象
"""
if not event:
return
event_data: ConfigChangeEventData = event.event_data
if event_data.key not in ['MEMORY_ANALYSIS', 'MEMORY_SNAPSHOT_INTERVAL', 'MEMORY_SNAPSHOT_KEEP_COUNT']:
return
# 更新配置
if event_data.key == 'MEMORY_SNAPSHOT_INTERVAL':
self._check_interval = settings.MEMORY_SNAPSHOT_INTERVAL * 60
elif event_data.key == 'MEMORY_SNAPSHOT_KEEP_COUNT':
self._keep_count = settings.MEMORY_SNAPSHOT_KEEP_COUNT
self.stop_monitoring()
self.start_monitoring()
def start_monitoring(self):
"""
开始内存监控
"""
if not settings.MEMORY_ANALYSIS:
return
if self._monitoring:
return
# 创建内存快照目录
self._memory_snapshot_dir.mkdir(parents=True, exist_ok=True)
# 初始化内存分析器
self._monitoring = True
self._monitor_thread = threading.Thread(target=self._monitor_loop, daemon=True)
self._monitor_thread.start()
logger.info("内存监控已启动")
def stop_monitoring(self):
"""
停止内存监控
"""
self._monitoring = False
if self._monitor_thread:
self._monitor_thread.join(timeout=5)
logger.info("内存监控已停止")
def _monitor_loop(self):
"""
内存监控循环
"""
logger.info("内存监控循环开始")
while self._monitoring:
try:
# 生成内存快照
self._create_memory_snapshot()
time.sleep(self._check_interval)
except Exception as e:
logger.error(f"内存监控出错: {e}")
# 出错后等待1分钟再继续
time.sleep(60)
logger.info("内存监控循环结束")
def _create_memory_snapshot(self):
"""
创建内存快照并保存到文件
"""
try:
# 获取当前时间戳
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
snapshot_file = self._memory_snapshot_dir / f"memory_snapshot_{timestamp}.txt"
# 获取系统内存使用情况
memory_usage = psutil.Process().memory_info().rss
logger.info(f"开始创建内存快照: {snapshot_file}")
# 第一步:写入基本信息和对象类型统计
self._write_basic_info(snapshot_file, memory_usage)
# 第二步:分析并写入类实例内存使用情况
self._append_class_analysis(snapshot_file)
# 第三步:分析并写入大内存变量详情
self._append_variable_analysis(snapshot_file)
logger.info(f"内存快照已保存: {snapshot_file}, 当前内存使用: {memory_usage / 1024 / 1024:.2f} MB")
# 清理过期的快照文件保留最近30个
self._cleanup_old_snapshots()
except Exception as e:
logger.error(f"创建内存快照失败: {e}")
@staticmethod
def _write_basic_info(snapshot_file, memory_usage):
"""
写入基本信息和对象类型统计
"""
# 获取当前进程的内存使用情况
all_objects = muppy.get_objects()
sum1 = summary.summarize(all_objects)
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(f"内存快照时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write(f"当前进程内存使用: {memory_usage / 1024 / 1024:.2f} MB\n")
f.write("=" * 80 + "\n")
f.write("对象类型统计:\n")
f.write("-" * 80 + "\n")
# 写入对象统计信息
for line in summary.format_(sum1):
f.write(line + "\n")
# 立即刷新到磁盘
f.flush()
logger.debug("基本信息已写入快照文件")
def _append_class_analysis(self, snapshot_file):
"""
分析并追加类实例内存使用情况
"""
with open(snapshot_file, 'a', encoding='utf-8') as f:
f.write("\n" + "=" * 80 + "\n")
f.write("类实例内存使用情况 (按内存大小排序):\n")
f.write("-" * 80 + "\n")
f.write("正在分析中...\n")
# 立即刷新,让用户知道这部分开始了
f.flush()
try:
logger.debug("开始分析类实例内存使用情况")
class_objects = self._get_class_memory_usage()
# 重新打开文件,移除"正在分析中..."并写入实际结果
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
# 替换"正在分析中..."
content = content.replace("正在分析中...\n", "")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
if class_objects:
# 只显示前100个类
for i, class_info in enumerate(class_objects[:100], 1):
f.write(f"{i:3d}. {class_info['name']:<50} "
f"{class_info['size_mb']:>8.2f} MB ({class_info['count']} 个实例)\n")
else:
f.write("未找到有效的类实例信息\n")
f.flush()
except Exception as e:
logger.error(f"获取类实例信息失败: {e}")
# 即使出错也要更新文件
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
content = content.replace("正在分析中...\n", f"获取类实例信息失败: {e}\n")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
f.flush()
logger.debug("类实例分析已完成并写入")
def _append_variable_analysis(self, snapshot_file):
"""
分析并追加大内存变量详情
"""
with open(snapshot_file, 'a', encoding='utf-8') as f:
f.write("\n" + "=" * 80 + "\n")
f.write("大内存变量详情 (前100个):\n")
f.write("-" * 80 + "\n")
f.write("正在分析中...\n")
# 立即刷新,让用户知道这部分开始了
f.flush()
try:
logger.debug("开始分析大内存变量")
large_variables = self._get_large_variables(100)
# 重新打开文件,移除"正在分析中..."并写入实际结果
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
# 替换最后的"正在分析中..."
content = content.replace("正在分析中...\n", "")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
if large_variables:
for i, var_info in enumerate(large_variables, 1):
f.write(
f"{i:3d}. {var_info['name']:<30} {var_info['type']:<15} {var_info['size_mb']:>8.2f} MB\n")
else:
f.write("未找到大内存变量\n")
f.flush()
except Exception as e:
logger.error(f"获取大内存变量信息失败: {e}")
# 即使出错也要更新文件
with open(snapshot_file, 'r', encoding='utf-8') as f:
content = f.read()
content = content.replace("正在分析中...\n", f"获取变量信息失败: {e}\n")
with open(snapshot_file, 'w', encoding='utf-8') as f:
f.write(content)
f.flush()
logger.debug("大内存变量分析已完成并写入")
def _cleanup_old_snapshots(self):
"""
清理过期的内存快照文件,只保留最近的指定数量文件
"""
try:
snapshot_files = list(self._memory_snapshot_dir.glob("memory_snapshot_*.txt"))
if len(snapshot_files) > self._keep_count:
# 按修改时间排序,删除最旧的文件
snapshot_files.sort(key=lambda x: x.stat().st_mtime)
for old_file in snapshot_files[:-self._keep_count]:
old_file.unlink()
logger.debug(f"已删除过期内存快照: {old_file}")
except Exception as e:
logger.error(f"清理过期快照失败: {e}")
@staticmethod
def _get_class_memory_usage():
"""
获取所有类实例的内存使用情况,按内存大小排序
"""
class_info = {}
processed_count = 0
error_count = 0
# 获取所有对象
all_objects = muppy.get_objects()
logger.debug(f"开始分析 {len(all_objects)} 个对象的类实例内存使用情况")
for obj in all_objects:
try:
# 跳过类对象本身,统计类的实例
if isinstance(obj, type):
continue
# 获取对象的类名 - 这里可能会出错
obj_class = type(obj)
# 安全地获取类名
try:
if hasattr(obj_class, '__module__') and hasattr(obj_class, '__name__'):
class_name = f"{obj_class.__module__}.{obj_class.__name__}"
else:
class_name = str(obj_class)
except Exception as e:
# 如果获取类名失败,使用简单的类型描述
class_name = f"<unknown_class_{id(obj_class)}>"
logger.debug(f"获取类名失败: {e}")
# 计算对象本身的内存使用(不包括引用对象,避免重复计算)
size_bytes = sys.getsizeof(obj)
if size_bytes < 100: # 跳过太小的对象
continue
size_mb = size_bytes / 1024 / 1024
processed_count += 1
if class_name in class_info:
class_info[class_name]['size_mb'] += size_mb
class_info[class_name]['count'] += 1
else:
class_info[class_name] = {
'name': class_name,
'size_mb': size_mb,
'count': 1
}
except Exception as e:
# 捕获所有可能的异常,包括SQLAlchemy、ORM等框架的异常
error_count += 1
if error_count <= 5: # 只记录前5个错误,避免日志过多
logger.debug(f"分析对象时出错: {e}")
continue
logger.debug(f"类实例分析完成: 处理了 {processed_count} 个对象, 遇到 {error_count} 个错误")
# 按内存大小排序
sorted_classes = sorted(class_info.values(), key=lambda x: x['size_mb'], reverse=True)
return sorted_classes
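sys.getsizeof() returns only an object's own footprint, not anything it references, so the per-class totals above deliberately undercount containers in exchange for never double-counting shared children. A self-contained sketch of the same aggregation, using the same 100-byte cutoff:

import sys
from collections import defaultdict

def aggregate_by_class(objects, min_bytes=100):
    # Shallow per-class totals: skip class objects and tiny instances,
    # then rank classes by accumulated size, as in the method above.
    totals = defaultdict(lambda: {"size_mb": 0.0, "count": 0})
    for obj in objects:
        if isinstance(obj, type):
            continue
        size = sys.getsizeof(obj)
        if size < min_bytes:
            continue
        name = f"{type(obj).__module__}.{type(obj).__name__}"
        totals[name]["size_mb"] += size / 1024 / 1024
        totals[name]["count"] += 1
    rows = [{"name": n, **v} for n, v in totals.items()]
    return sorted(rows, key=lambda r: r["size_mb"], reverse=True)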
def _get_large_variables(self, limit=100):
"""
获取大内存变量信息,按内存大小排序
使用已计算对象集合避免重复计算
"""
large_vars = []
processed_count = 0
calculated_objects = set() # 避免重复计算
# 获取所有对象
all_objects = muppy.get_objects()
logger.debug(f"开始分析 {len(all_objects)} 个对象的内存使用情况")
for obj in all_objects:
# 跳过类对象
if isinstance(obj, type):
continue
# 跳过已经计算过的对象
obj_id = id(obj)
if obj_id in calculated_objects:
continue
try:
# 首先使用 sys.getsizeof 快速筛选
shallow_size = sys.getsizeof(obj)
if shallow_size < 1024: # 只处理大于1KB的对象
continue
# 对于较大的对象,使用 asizeof 进行深度计算
size_bytes = asizeof.asizeof(obj)
# 只处理大于10KB的对象,提高分析效率
if size_bytes < 10240:
continue
size_mb = size_bytes / 1024 / 1024
processed_count += 1
calculated_objects.add(obj_id)
# 获取对象信息
var_info = self._get_variable_info(obj, size_mb)
if var_info:
large_vars.append(var_info)
# 如果已经找到足够多的大对象,可以提前结束
if len(large_vars) >= limit * 2: # 多收集一些,后面排序筛选
break
except Exception as e:
# 更广泛的异常捕获
logger.debug(f"分析对象失败: {e}")
continue
logger.debug(f"处理了 {processed_count} 个大对象,找到 {len(large_vars)} 个有效变量")
# 按内存大小排序并返回前N个
large_vars.sort(key=lambda x: x['size_mb'], reverse=True)
return large_vars[:limit]
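The two-stage filter above exists because pympler's asizeof() walks the whole reference graph and is far more expensive than sys.getsizeof(); the cheap shallow check prunes most objects before the deep walk. A small demo of the gap between the two measurements (exact sizes vary by platform):

import sys
from pympler import asizeof

payload = {"items": list(range(100_000))}
shallow = sys.getsizeof(payload)   # dict header only; the list is not followed
deep = asizeof.asizeof(payload)    # walks references: dict + list + every int
print(f"shallow={shallow} B, deep={deep / 1024 / 1024:.2f} MB")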
def _get_variable_info(self, obj, size_mb):
"""
获取变量的描述信息
"""
try:
obj_type = type(obj).__name__
# 尝试获取变量名
var_name = self._get_variable_name(obj)
# 生成描述性信息
if isinstance(obj, dict):
key_count = len(obj)
if key_count > 0:
sample_keys = list(obj.keys())[:3]
var_name += f" ({key_count}项, 键: {sample_keys})"
elif isinstance(obj, (list, tuple, set)):
var_name += f" ({len(obj)}个元素)"
elif isinstance(obj, str):
if len(obj) > 50:
var_name += f" (长度: {len(obj)}, 内容: '{obj[:50]}...')"
else:
var_name += f" ('{obj}')"
elif hasattr(obj, '__class__') and hasattr(obj.__class__, '__name__'):
if hasattr(obj, '__dict__'):
attr_count = len(obj.__dict__)
var_name += f" ({attr_count}个属性)"
return {
'name': var_name,
'type': obj_type,
'size_mb': size_mb
}
except Exception as e:
logger.debug(f"获取变量信息失败: {e}")
return None
@staticmethod
def _get_variable_name(obj):
"""
尝试获取变量名
"""
try:
# 尝试通过gc获取引用该对象的变量名
referrers = gc.get_referrers(obj)
for referrer in referrers:
if isinstance(referrer, dict):
# 检查是否在某个模块的全局变量中
for name, value in referrer.items():
if value is obj and isinstance(name, str):
return name
elif hasattr(referrer, '__dict__'):
# 检查是否在某个实例的属性中
for name, value in referrer.__dict__.items():
if value is obj and isinstance(name, str):
return f"{type(referrer).__name__}.{name}"
# 如果找不到变量名,返回对象类型和id
return f"{type(obj).__name__}_{id(obj)}"
except Exception as e:
logger.debug(f"获取变量名失败: {e}")
return f"{type(obj).__name__}_{id(obj)}"

View File

@@ -183,6 +183,8 @@ class TemplateContextBuilder:
"videoCodec": meta.video_encode,
# 音频编码
"audioCodec": meta.audio_encode,
# 流媒体平台
"webSource": meta.web_source,
}
self._context.update({**meta_info, **tech_metadata, **episode_data})
@@ -539,8 +541,6 @@ class MessageQueueManager(metaclass=SingletonClass):
消息发送队列管理器
"""
schedule_periods: List[tuple[int, int, int, int]] = []
def __init__(
self,
send_callback: Optional[Callable] = None,
@@ -552,6 +552,8 @@ class MessageQueueManager(metaclass=SingletonClass):
:param send_callback: 实际发送消息的回调函数
:param check_interval: 时间检查间隔(秒)
"""
self.schedule_periods: List[tuple[int, int, int, int]] = []
self.init_config()
self.queue: queue.Queue[Any] = queue.Queue()
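This hunk moves schedule_periods from a class attribute into __init__. A mutable class attribute is shared by every instance (and survives singleton re-initialization), which is exactly the pitfall the change removes. A minimal demonstration:

class Shared:
    periods = []                # one list shared by every instance

class PerInstance:
    def __init__(self):
        self.periods = []       # a fresh list per instance

a, b = Shared(), Shared()
a.periods.append((8, 0, 12, 0))
print(b.periods)                # [(8, 0, 12, 0)] -- state leaked across instances

c, d = PerInstance(), PerInstance()
c.periods.append((8, 0, 12, 0))
print(d.periods)                # []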

View File

@@ -9,7 +9,7 @@ class OcrHelper:
_ocr_b64_url = f"{settings.OCR_HOST}/captcha/base64"
def get_captcha_text(self, image_url: Optional[str] = None, image_b64: Optional[str] = None,
cookie: Optional[str] = None, ua: Optional[str] = None):
"""
根据图片地址,获取验证码图片,并识别内容

View File

@@ -1,15 +1,16 @@
import sys
import importlib
import json
import shutil
import traceback
import site
import importlib
import sys
import traceback
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Set
from typing import Dict, List, Optional, Tuple, Set
from packaging.specifiers import SpecifierSet, InvalidSpecifier
from packaging.version import Version, InvalidVersion
from pkg_resources import Requirement, working_set
from requests import Response
from app.core.cache import cached
from app.core.config import settings
@@ -17,14 +18,14 @@ from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
PLUGIN_DIR = Path(settings.ROOT_PATH) / "app" / "plugins"
class PluginHelper(metaclass=Singleton):
class PluginHelper(metaclass=WeakSingleton):
"""
插件市场管理,下载安装插件到本地
"""
@@ -52,10 +53,10 @@ class PluginHelper(metaclass=Singleton):
# 如果强制刷新,直接调用不带缓存的版本
if force:
return self._get_plugins_uncached(repo_url, package_version)
# 正常情况下调用带缓存的版本
return self._get_plugins_cached(repo_url, package_version)
@cached(maxsize=64, ttl=1800)
def _get_plugins_cached(self, repo_url: str, package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
"""
@@ -64,7 +65,7 @@ class PluginHelper(metaclass=Singleton):
:param package_version: 首选插件版本 (如 "v2", "v3"),如果不指定则获取 v1 版本
"""
return self._get_plugins_uncached(repo_url, package_version)
def _get_plugins_uncached(self, repo_url: str, package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
"""
获取Github所有最新插件列表(不使用缓存)
@@ -85,11 +86,13 @@ class PluginHelper(metaclass=Singleton):
if res is None:
return None
if res:
content = res.text
try:
return json.loads(res.text)
return json.loads(content)
except json.JSONDecodeError:
logger.error(f"插件包数据解析失败:{res.text}")
return None
if "404: Not Found" not in content:
logger.warn(f"插件包数据解析失败:{content}")
return None
return {}
def get_plugin_package_version(self, pid: str, repo_url: str, package_version: Optional[str] = None) -> Optional[str]:
@@ -109,14 +112,11 @@ class PluginHelper(metaclass=Singleton):
package_version = settings.VERSION_FLAG
# 优先检查指定版本的插件,即 package.v(x).json 文件中是否存在该插件,如果存在,返回该版本号
plugins = self.get_plugins(repo_url, package_version)
if pid in plugins:
if pid in (self.get_plugins(repo_url, package_version) or []):
return package_version
# 如果指定版本的插件不存在,检查全局 package.json 文件,查看插件是否兼容指定的版本
global_plugins = self.get_plugins(repo_url)
plugin = global_plugins.get(pid, None)
plugin = (self.get_plugins(repo_url) or {}).get(pid, None)
# 检查插件是否明确支持当前指定的版本(如 v2 或 v3),如果支持,返回空字符串表示使用 package.json(v1)
if plugin and plugin.get(package_version) is True:
return ""
@@ -308,7 +308,7 @@ class PluginHelper(metaclass=Singleton):
return None, "连接仓库失败"
elif res.status_code != 200:
return None, f"连接仓库失败:{res.status_code} - " \
f"{'超出速率限制,请置GITHUB_TOKEN环境变量或稍后重试' if res.status_code == 403 else res.reason}"
f"{'超出速率限制,请置Github Token或稍后重试' if res.status_code == 403 else res.reason}"
try:
ret = res.json()
@@ -317,7 +317,7 @@ class PluginHelper(metaclass=Singleton):
else:
return None, "插件在仓库中不存在或返回数据格式不正确"
except Exception as e:
logger.error(f"插件数据解析失败:{res.text}{e}")
logger.error(f"插件数据解析失败:{e}")
return None, "插件数据解析失败"
def __download_files(self, pid: str, file_list: List[dict], user_repo: str,
@@ -399,8 +399,7 @@ class PluginHelper(metaclass=Singleton):
with open(requirements_file_path, "w", encoding="utf-8") as f:
f.write(requirements_txt)
success, message = self.__pip_install_with_fallback(requirements_file_path)
return success, message
return self.pip_install_with_fallback(requirements_file_path)
return True, "" # 如果 requirements.txt 为空,视作成功
@@ -417,7 +416,7 @@ class PluginHelper(metaclass=Singleton):
# 检查是否存在 requirements.txt 文件
if requirements_file.exists():
logger.info(f"{pid} 存在依赖,开始尝试安装依赖")
success, error_message = self.__pip_install_with_fallback(requirements_file)
success, error_message = self.pip_install_with_fallback(requirements_file)
if success:
return True, True, ""
else:
@@ -475,7 +474,7 @@ class PluginHelper(metaclass=Singleton):
shutil.rmtree(plugin_dir, ignore_errors=True)
@staticmethod
def __pip_install_with_fallback(requirements_file: Path) -> Tuple[bool, str]:
def pip_install_with_fallback(requirements_file: Path) -> Tuple[bool, str]:
"""
使用自动降级策略安装依赖,并确保新安装的包可被动态导入
:param requirements_file: 依赖的 requirements.txt 文件路径
@@ -520,7 +519,7 @@ class PluginHelper(metaclass=Singleton):
def __request_with_fallback(url: str,
headers: Optional[dict] = None,
timeout: Optional[int] = 60,
is_api: bool = False) -> Optional[Any]:
is_api: bool = False) -> Optional[Response]:
"""
使用自动降级策略,请求资源,优先级依次为镜像站、代理、直连
:param url: 目标URL
@@ -596,7 +595,6 @@ class PluginHelper(metaclass=Singleton):
def install_dependencies(self, dependencies: List[str]) -> Tuple[bool, str]:
"""
安装指定的依赖项列表
:param dependencies: 需要安装或更新的依赖项列表
:return: (success, message)
"""
@@ -611,12 +609,12 @@ class PluginHelper(metaclass=Singleton):
with open(requirements_temp_file, "w", encoding="utf-8") as f:
for dep in dependencies:
f.write(dep + "\n")
# 使用自动降级策略安装依赖
success, message = self.__pip_install_with_fallback(requirements_temp_file)
# 删除临时文件
requirements_temp_file.unlink()
return success, message
try:
# 使用自动降级策略安装依赖
return self.pip_install_with_fallback(requirements_temp_file)
finally:
# 删除临时文件
requirements_temp_file.unlink()
except Exception as e:
logger.error(f"安装依赖项时发生错误:{e}")
return False, f"安装依赖项时发生错误:{e}"
@@ -651,10 +649,20 @@ class PluginHelper(metaclass=Singleton):
"""
dependencies = {}
try:
install_plugins = {
plugin_id.lower() # 对应插件的小写目录名
for plugin_id in SystemConfigOper().get(
SystemConfigKey.UserInstalledPlugins
) or []
}
for plugin_dir in PLUGIN_DIR.iterdir():
if plugin_dir.is_dir():
requirements_file = plugin_dir / "requirements.txt"
if requirements_file.exists():
if plugin_dir.name not in install_plugins:
# 这个插件不在安装列表中,忽略它的依赖
logger.debug(f"忽略插件 {plugin_dir.name} 的依赖")
continue
# 解析当前插件的 requirements.txt,获取依赖项
plugin_deps = self.__parse_requirements(requirements_file)
for pkg_name, version_specifiers in plugin_deps.items():

View File

@@ -1,12 +1,11 @@
from enum import Enum
from typing import Union, Dict, Optional
from typing import Union, Optional
from app.schemas.types import ProgressKey
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
class ProgressHelper(metaclass=Singleton):
_process_detail: Dict[str, dict] = {}
class ProgressHelper(metaclass=WeakSingleton):
def __init__(self):
self._process_detail = {}

View File

@@ -8,6 +8,7 @@ from app.log import logger
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
from version import APP_VERSION
class ResourceHelper:
@@ -15,8 +16,8 @@ class ResourceHelper:
检测和更新资源包
"""
# 资源包的git仓库地址
_repo = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.json"
_files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources"
_repo = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.v2.json"
_files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources.v2"
_base_dir: Path = settings.ROOT_PATH
def __init__(self):
@@ -58,6 +59,12 @@ class ResourceHelper:
if rtype == "auth":
# 站点认证资源
local_version = SitesHelper().auth_version
# 阻断站点认证资源v2.3.0以下的版本直接更新,避免无限重启
if StringUtils.compare_version(local_version, "<", "2.3.0"):
continue
# 阻断主程序版本v2.6.3以下的版本直接更新,避免搜索异常
if StringUtils.compare_version(APP_VERSION, "<", "2.6.3"):
continue
elif rtype == "sites":
# 站点索引资源
local_version = SitesHelper().indexer_version
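The guards above call StringUtils.compare_version(local, op, target) to block updates on old versions. A sketch of a comparator with those semantics, built on packaging.version (an assumption about the helper's behavior, not its actual implementation):

from packaging.version import Version

def compare_version(left: str, op: str, right: str) -> bool:
    # Normalize a leading "v"/"V" and compare as PEP 440 versions.
    lv, rv = Version(left.lstrip("vV")), Version(right.lstrip("vV"))
    return {"<": lv < rv, "<=": lv <= rv, ">": lv > rv,
            ">=": lv >= rv, "==": lv == rv}[op]

assert compare_version("v2.2.9", "<", "2.3.0")      # auth resource gate
assert not compare_version("v2.6.6", "<", "2.6.3")  # main program gate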
@@ -78,6 +85,8 @@ class ResourceHelper:
elif not r:
return None, "连接仓库失败"
files_info = r.json()
# 下载资源文件
success = True
for item in files_info:
save_path = need_updates.get(item.get("name"))
if not save_path:
@@ -90,16 +99,23 @@ class ResourceHelper:
timeout=180).get_res(download_url)
if not res:
logger.error(f"文件 {item.get('name')} 下载失败!")
success = False
break
elif res.status_code != 200:
logger.error(f"下载文件 {item.get('name')} 失败:{res.status_code} - {res.reason}")
success = False
break
# 创建资源文件夹
file_path = self._base_dir / save_path / item.get("name")
if not file_path.parent.exists():
file_path.parent.mkdir(parents=True, exist_ok=True)
# 写入文件
file_path.write_bytes(res.content)
logger.info("资源包更新完成,开始重启服务...")
SystemHelper.restart()
if success:
logger.info("资源包更新完成,开始重启服务...")
SystemHelper.restart()
else:
logger.warn("资源包更新失败,跳过升级!")
else:
logger.info("所有资源已最新,无需更新")
except json.JSONDecodeError:

View File

@@ -246,12 +246,17 @@ class RssHelper:
ret = RequestUtils(proxies=settings.PROXY if proxy else None,
timeout=timeout, headers=headers).get_res(url)
if not ret:
logger.error(f"获取RSS失败请求返回空值URL: {url}")
return False
except Exception as err:
logger.error(f"获取RSS失败{str(err)} - {traceback.format_exc()}")
return False
if ret:
# 检查HTTP状态码
if ret.status_code != 200:
logger.error(f"RSS请求失败状态码: {ret.status_code}, URL: {url}")
return False
ret_xml = None
root = None
try:
@@ -280,6 +285,17 @@ class RssHelper:
if not ret_xml:
ret_xml = ret.text
# 验证RSS内容是否有效
if not ret_xml or not ret_xml.strip():
logger.error("RSS内容为空")
return False
# 检查是否包含基本的RSS/XML结构
ret_xml_stripped = ret_xml.strip()
if not ret_xml_stripped.startswith('<'):
logger.error("RSS内容不是有效的XML格式")
return False
# 使用lxml.etree解析XML
parser = None
try:
@@ -292,7 +308,8 @@ class RssHelper:
huge_tree=False # 禁用大文档解析,避免内存问题
)
root = etree.fromstring(ret_xml.encode('utf-8'), parser=parser)
except etree.XMLSyntaxError:
except etree.XMLSyntaxError as xml_error:
logger.debug(f"XML解析失败{str(xml_error)}尝试HTML解析")
# 如果XML解析失败尝试作为HTML解析
try:
root = etree.HTML(ret_xml)
@@ -304,8 +321,15 @@ class RssHelper:
except Exception as e:
logger.error(f"HTML解析也失败{str(e)}")
return False
except Exception as general_error:
logger.error(f"解析RSS时发生未预期错误{str(general_error)}")
return False
finally:
if parser is not None:
try:
parser.close()
except Exception as close_error:
logger.debug(f"关闭解析器时出错:{str(close_error)}")
del parser
if root is None:
@@ -319,70 +343,72 @@ class RssHelper:
items_count = min(len(items), self.MAX_RSS_ITEMS)
if len(items) > self.MAX_RSS_ITEMS:
logger.warning(f"RSS条目过多: {len(items)},仅处理前{self.MAX_RSS_ITEMS}")
try:
for item in items[:items_count]:
try:
# 使用xpath提取信息更高效
title_nodes = item.xpath('.//title')
title = title_nodes[0].text if title_nodes and title_nodes[0].text else ""
if not title:
continue
for item in items[:items_count]:
try:
# 使用xpath提取信息更高效
title_nodes = item.xpath('.//title')
title = title_nodes[0].text if title_nodes and title_nodes[0].text else ""
if not title:
continue
# 描述
desc_nodes = item.xpath('.//description | .//summary')
description = desc_nodes[0].text if desc_nodes and desc_nodes[0].text else ""
# 种子页面
link_nodes = item.xpath('.//link')
if link_nodes:
link = link_nodes[0].text if hasattr(link_nodes[0], 'text') and link_nodes[0].text else link_nodes[0].get('href', '')
else:
link = ""
# 种子链接
enclosure_nodes = item.xpath('.//enclosure')
enclosure = enclosure_nodes[0].get('url', '') if enclosure_nodes else ""
if not enclosure and not link:
continue
# 部分RSS只有link,没有enclosure
if not enclosure and link:
enclosure = link
# 大小
size = 0
if enclosure_nodes:
size_attr = enclosure_nodes[0].get('length', '0')
if size_attr and str(size_attr).isdigit():
size = int(size_attr)
# 发布日期
pubdate_nodes = item.xpath('.//pubDate | .//published | .//updated')
pubdate = ""
if pubdate_nodes and pubdate_nodes[0].text:
pubdate = StringUtils.get_time(pubdate_nodes[0].text)
# 获取豆瓣昵称
nickname_nodes = item.xpath('.//*[local-name()="creator"]')
nickname = nickname_nodes[0].text if nickname_nodes and nickname_nodes[0].text else ""
# 返回对象
tmp_dict = {
'title': title,
'enclosure': enclosure,
'size': size,
'description': description,
'link': link,
'pubdate': pubdate
}
# 如果豆瓣昵称不为空,返回数据增加豆瓣昵称,供doubansync插件获取
if nickname:
tmp_dict['nickname'] = nickname
ret_array.append(tmp_dict)
except Exception as e1:
logger.debug(f"解析RSS条目失败:{str(e1)} - {traceback.format_exc()}")
continue
# 描述
desc_nodes = item.xpath('.//description | .//summary')
description = desc_nodes[0].text if desc_nodes and desc_nodes[0].text else ""
# 种子页面
link_nodes = item.xpath('.//link')
if link_nodes:
link = link_nodes[0].text if hasattr(link_nodes[0], 'text') and link_nodes[0].text else \
link_nodes[0].get('href', '')
else:
link = ""
# 种子链接
enclosure_nodes = item.xpath('.//enclosure')
enclosure = enclosure_nodes[0].get('url', '') if enclosure_nodes else ""
if not enclosure and not link:
continue
# 部分RSS只有link,没有enclosure
if not enclosure and link:
enclosure = link
# 大小
size = 0
if enclosure_nodes:
size_attr = enclosure_nodes[0].get('length', '0')
if size_attr and str(size_attr).isdigit():
size = int(size_attr)
# 发布日期
pubdate_nodes = item.xpath('.//pubDate | .//published | .//updated')
pubdate = ""
if pubdate_nodes and pubdate_nodes[0].text:
pubdate = StringUtils.get_time(pubdate_nodes[0].text)
# 获取豆瓣昵称
nickname_nodes = item.xpath('.//*[local-name()="creator"]')
nickname = nickname_nodes[0].text if nickname_nodes and nickname_nodes[0].text else ""
# 返回对象
tmp_dict = {
'title': title,
'enclosure': enclosure,
'size': size,
'description': description,
'link': link,
'pubdate': pubdate
}
# 如果豆瓣昵称不为空,返回数据增加豆瓣昵称,供doubansync插件获取
if nickname:
tmp_dict['nickname'] = nickname
ret_array.append(tmp_dict)
except Exception as e1:
logger.debug(f"解析RSS条目失败:{str(e1)} - {traceback.format_exc()}")
continue
finally:
items.clear()
del items
except Exception as e2:
logger.error(f"解析RSS失败{str(e2)} - {traceback.format_exc()}")

View File

@@ -107,8 +107,7 @@ class ServiceBaseHelper(Generic[TConf]):
迭代所有模块的实例及其对应的配置,返回 ServiceInfo 实例
"""
configs = self.get_configs()
modules = self.modulemanager.get_running_type_modules(self.module_type)
for module in modules:
for module in self.modulemanager.get_running_type_modules(self.module_type):
if not module:
continue
module_instances = module.get_instances()

View File

@@ -8,11 +8,11 @@ from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
from app.utils.system import SystemUtils
class SubscribeHelper(metaclass=Singleton):
class SubscribeHelper(metaclass=WeakSingleton):
"""
订阅数据统计/订阅分享等
"""
@@ -29,6 +29,8 @@ class SubscribeHelper(metaclass=Singleton):
_sub_shares = f"{settings.MP_SERVER_HOST}/subscribe/shares"
_sub_share_statistic = f"{settings.MP_SERVER_HOST}/subscribe/share/statistics"
_sub_fork = f"{settings.MP_SERVER_HOST}/subscribe/fork/%s"
_shares_cache_region = "subscribe_share"
@@ -58,7 +60,7 @@ class SubscribeHelper(metaclass=Singleton):
self.get_user_uuid()
self.get_github_user()
@cached(maxsize=20, ttl=1800)
@cached(maxsize=5, ttl=1800)
def get_statistic(self, stype: str, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
获取订阅统计数据
@@ -215,6 +217,18 @@ class SubscribeHelper(metaclass=Singleton):
return res.json()
return []
@cached(maxsize=1, ttl=1800)
def get_share_statistics(self) -> List[dict]:
"""
获取订阅分享统计数据
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return []
res = RequestUtils(proxies=settings.PROXY, timeout=15).get_res(self._sub_share_statistic)
if res and res.status_code == 200:
return res.json()
return []
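get_share_statistics is wrapped in @cached(maxsize=1, ttl=1800), so at most one result is kept for 30 minutes. A hedged sketch of such a maxsize/ttl decorator (the real app.core.cache.cached also supports regions and backend clearing):

import time
from functools import wraps

def cached(maxsize=128, ttl=1800):
    def decorator(func):
        store = {}
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            hit = store.get(key)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]          # fresh cache hit
            value = func(*args, **kwargs)
            if len(store) >= maxsize:
                store.pop(next(iter(store)))  # drop the oldest insertion
            store[key] = (value, now)
            return value
        return wrapper
    return decorator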
def get_user_uuid(self) -> str:
"""
获取用户uuid

View File

@@ -91,10 +91,10 @@ class SystemHelper:
# 检查是否有有效的重启策略
auto_restart_policies = ['always', 'unless-stopped', 'on-failure']
has_restart_policy = policy_name in auto_restart_policies
logger.info(f"容器重启策略: {policy_name}, 支持自动重启: {has_restart_policy}")
return has_restart_policy
except Exception as e:
logger.warning(f"检查重启策略失败: {str(e)}")
return False
@@ -106,7 +106,7 @@ class SystemHelper:
"""
if not SystemUtils.is_docker():
return False, "非Docker环境无法重启"
try:
# 检查容器是否配置了自动重启策略
has_restart_policy = SystemHelper._check_restart_policy()

View File

@@ -1,8 +1,7 @@
from concurrent.futures import ThreadPoolExecutor
from typing import Optional
from app.utils.singleton import Singleton
from app.core.config import settings
from app.utils.singleton import Singleton
class ThreadHelper(metaclass=Singleton):
@@ -10,7 +9,7 @@ class ThreadHelper(metaclass=Singleton):
线程池管理
"""
def __init__(self):
self.pool = ThreadPoolExecutor(max_workers=settings.CONF['threadpool'])
self.pool = ThreadPoolExecutor(max_workers=settings.CONF.threadpool)
def submit(self, func, *args, **kwargs):
"""
@@ -28,6 +27,3 @@ class ThreadHelper(metaclass=Singleton):
:return:
"""
self.pool.shutdown()
def __del__(self):
self.shutdown()
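The removed __del__ relied on finalizer order at interpreter shutdown, which is undefined; stopping a ThreadPoolExecutor from __del__ can run too late or not at all. One deterministic alternative, as a sketch (not necessarily what MoviePilot does):

import atexit
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)
# Registered hooks run in reverse order at normal interpreter exit,
# giving a predictable shutdown point instead of a finalizer.
atexit.register(pool.shutdown, wait=False)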

View File

@@ -1,10 +1,9 @@
import datetime
import re
from pathlib import Path
from typing import Tuple, Optional, List, Union, Dict
from typing import Tuple, Optional, List, Union, Dict, Any
from urllib.parse import unquote
from requests import Response
from torrentool.api import Torrent
from app.core.config import settings
@@ -16,17 +15,17 @@ from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import MediaType, SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
from app.utils.string import StringUtils
class TorrentHelper(metaclass=Singleton):
class TorrentHelper(metaclass=WeakSingleton):
"""
种子帮助类
"""
# 失败的种子:站点链接
_invalid_torrents = []
def __init__(self):
self._invalid_torrents = []
def download_torrent(self, url: str,
cookie: Optional[str] = None,
@@ -40,6 +39,22 @@ class TorrentHelper(metaclass=Singleton):
"""
if url.startswith("magnet:"):
return None, url, "", [], f"磁力链接"
# 构建 torrent 种子文件的存储路径
file_path = (Path(settings.TEMP_PATH) / StringUtils.md5_hash(url)).with_suffix(".torrent")
if file_path.exists():
try:
# 获取种子目录和文件清单
folder_name, file_list = self.get_torrent_info(file_path)
# 无法获取信息,则认为缓存文件无效
if not folder_name and not file_list:
raise ValueError("无效的缓存种子文件")
# 获取种子数据
content = file_path.read_bytes()
# 成功拿到种子数据
return file_path, content, folder_name, file_list, ""
except Exception as err:
logger.error(f"处理缓存的种子文件 {file_path} 时出错: {err},将重新下载")
file_path.unlink(missing_ok=True)
# 请求种子文件
req = RequestUtils(
ua=ua,
@@ -106,10 +121,6 @@ class TorrentHelper(metaclass=Singleton):
if req.content:
# 检查是不是种子文件,如果不是仍然抛出异常
try:
# 读取种子文件名
file_name = self.get_url_filename(req, url)
# 种子文件路径
file_path = Path(settings.TEMP_PATH) / file_name
# 保存到文件
file_path.write_bytes(req.content)
# 获取种子目录和文件清单
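The cache check added at the top of this hunk derives the temp filename from StringUtils.md5_hash(url), so repeated downloads of the same torrent URL resolve to the same path. A sketch of that content-addressed scheme (TEMP_PATH is a hypothetical stand-in for settings.TEMP_PATH, and a plain MD5 hex digest is assumed):

import hashlib
from pathlib import Path

TEMP_PATH = Path("/tmp/moviepilot")  # hypothetical stand-in for settings.TEMP_PATH

def torrent_cache_path(url: str) -> Path:
    # Stable, URL-derived filename: same URL always maps to the same file.
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return (TEMP_PATH / digest).with_suffix(".torrent")

print(torrent_cache_path("https://example.org/download.php?id=1"))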
@@ -170,7 +181,7 @@ class TorrentHelper(metaclass=Singleton):
return "", []
@staticmethod
def get_url_filename(req: Response, url: str) -> str:
def get_url_filename(req: Any, url: str) -> str:
"""
从下载请求中获取种子文件名
"""
@@ -308,7 +319,7 @@ class TorrentHelper(metaclass=Singleton):
self._invalid_torrents.append(url)
@staticmethod
def match_torrent(mediainfo: MediaInfo, torrent_meta: MetaInfo, torrent: TorrentInfo) -> bool:
def match_torrent(mediainfo: MediaInfo, torrent_meta: MetaBase, torrent: TorrentInfo) -> bool:
"""
检查种子是否匹配媒体信息
:param mediainfo: 需要匹配的媒体信息

View File

@@ -18,14 +18,14 @@ class WallpaperHelper(metaclass=Singleton):
获取登录页面壁纸
"""
if settings.WALLPAPER == "bing":
url = self.get_bing_wallpaper()
return self.get_bing_wallpaper()
elif settings.WALLPAPER == "mediaserver":
url = self.get_mediaserver_wallpaper()
return self.get_mediaserver_wallpaper()
elif settings.WALLPAPER == "customize":
url = self.get_customize_wallpaper()
else:
url = self.get_tmdb_wallpaper()
return url
return self.get_customize_wallpaper()
elif settings.WALLPAPER == "tmdb":
return self.get_tmdb_wallpaper()
return ''
def get_wallpapers(self, num: int = 10) -> List[str]:
"""
@@ -37,8 +37,9 @@ class WallpaperHelper(metaclass=Singleton):
return self.get_mediaserver_wallpapers(num)
elif settings.WALLPAPER == "customize":
return self.get_customize_wallpapers()
else:
elif settings.WALLPAPER == "tmdb":
return self.get_tmdb_wallpapers(num)
return []
@cached(maxsize=1, ttl=3600)
def get_tmdb_wallpaper(self) -> Optional[str]:

app/helper/workflow.py Normal file
View File

@@ -0,0 +1,132 @@
import json
from typing import List, Tuple, Optional
from app.core.cache import cached, cache_backend
from app.core.config import settings
from app.db.workflow_oper import WorkflowOper
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.singleton import WeakSingleton
from app.utils.system import SystemUtils
class WorkflowHelper(metaclass=WeakSingleton):
"""
工作流分享等
"""
_workflow_share = f"{settings.MP_SERVER_HOST}/workflow/share"
_workflow_shares = f"{settings.MP_SERVER_HOST}/workflow/shares"
_workflow_fork = f"{settings.MP_SERVER_HOST}/workflow/fork/%s"
_shares_cache_region = "workflow_share"
_share_user_id = None
def __init__(self):
self.get_user_uuid()
def workflow_share(self, workflow_id: int,
share_title: str, share_comment: str, share_user: str) -> Tuple[bool, str]:
"""
分享工作流
"""
if not settings.WORKFLOW_STATISTIC_SHARE: # 使用独立的工作流分享开关
return False, "当前没有开启工作流数据共享功能"
# 获取工作流信息
workflow = WorkflowOper().get(workflow_id)
if not workflow:
return False, "工作流不存在"
if not workflow.actions or not workflow.flows:
return False, "请分享有动作和流程的工作流"
workflow_dict = workflow.to_dict()
workflow_dict.pop("id", None)
workflow_dict.pop("context", None)
workflow_dict['actions'] = json.dumps(workflow_dict['actions'] or [])
workflow_dict['flows'] = json.dumps(workflow_dict['flows'] or [])
# 发送分享请求
res = RequestUtils(proxies=settings.PROXY or {}, content_type="application/json",
timeout=10).post(self._workflow_share,
json={
"share_title": share_title,
"share_comment": share_comment,
"share_user": share_user,
"share_uid": self._share_user_id,
**workflow_dict
})
if res is None:
return False, "连接MoviePilot服务器失败"
if res.ok:
# 清除 get_shares 的缓存,以便实时看到结果
cache_backend.clear(region=self._shares_cache_region)
return True, ""
else:
return False, res.json().get("message")
def share_delete(self, share_id: int) -> Tuple[bool, str]:
"""
删除分享
"""
if not settings.WORKFLOW_STATISTIC_SHARE: # 使用独立的工作流分享开关
return False, "当前没有开启工作流数据共享功能"
res = RequestUtils(proxies=settings.PROXY or {},
timeout=5).delete_res(f"{self._workflow_share}/{share_id}",
params={"share_uid": self._share_user_id})
if res is None:
return False, "连接MoviePilot服务器失败"
if res.ok:
# 清除 get_shares 的缓存,以便实时看到结果
cache_backend.clear(region=self._shares_cache_region)
return True, ""
else:
return False, res.json().get("message")
def workflow_fork(self, share_id: int) -> Tuple[bool, str]:
"""
复用分享的工作流
"""
if not settings.WORKFLOW_STATISTIC_SHARE: # 使用独立的工作流分享开关
return False, "当前没有开启工作流数据共享功能"
res = RequestUtils(proxies=settings.PROXY or {}, timeout=5, headers={
"Content-Type": "application/json"
}).get_res(self._workflow_fork % share_id)
if res is None:
return False, "连接MoviePilot服务器失败"
if res.ok:
return True, ""
else:
return False, res.json().get("message")
@cached(region=_shares_cache_region)
def get_shares(self, name: Optional[str] = None, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
获取工作流分享数据
"""
if not settings.WORKFLOW_STATISTIC_SHARE: # 使用独立的工作流分享开关
return []
res = RequestUtils(proxies=settings.PROXY or {}, timeout=15).get_res(self._workflow_shares, params={
"name": name,
"page": page,
"count": count
})
if res and res.status_code == 200:
return res.json()
return []
def get_user_uuid(self) -> str:
"""
获取用户uuid
"""
if not self._share_user_id:
self._share_user_id = SystemUtils.generate_user_unique_id()
logger.info(f"当前用户UUID: {self._share_user_id}")
return self._share_user_id or ""

View File

@@ -1,5 +1,6 @@
import multiprocessing
import os
import setproctitle
import signal
import sys
import threading
@@ -19,6 +20,9 @@ if SystemUtils.is_frozen():
from app.core.config import settings
from app.db.init import init_db, update_db
# 设置进程名
setproctitle.setproctitle(settings.PROJECT_NAME)
# uvicorn服务
Server = uvicorn.Server(Config(app, host=settings.HOST, port=settings.PORT,
reload=settings.DEV, workers=multiprocessing.cpu_count(),
@@ -83,7 +87,7 @@ if __name__ == '__main__':
# 注册信号处理器
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
# 启动托盘
start_tray()
# 初始化数据库

View File

@@ -51,7 +51,7 @@ class BangumiModule(_ModuleBase):
获取模块子类型
"""
return MediaRecognizeType.Bangumi
@staticmethod
def get_priority() -> int:
"""

View File

@@ -30,7 +30,7 @@ class BangumiApi(object):
self._session = requests.Session()
self._req = RequestUtils(session=self._session)
@cached(maxsize=settings.CONF["bangumi"], ttl=settings.CONF["meta"])
@cached(maxsize=settings.CONF.bangumi, ttl=settings.CONF.meta)
def __invoke(self, url, key: Optional[str] = None, **kwargs):
req_url = self._base_url + url
params = {}

View File

@@ -12,10 +12,10 @@ import requests
from app.core.cache import cached
from app.core.config import settings
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
class DoubanApi(metaclass=Singleton):
class DoubanApi(metaclass=WeakSingleton):
_urls = {
# 搜索类
# sort=U:近期热门 T:标记最多 S:评分最高 R:最新上映
@@ -151,7 +151,6 @@ class DoubanApi(metaclass=Singleton):
_api_key2 = "0ab215a8b1977939201640fa14c66bab"
_base_url = "https://frodo.douban.com/api/v2"
_api_url = "https://api.douban.com/v2"
_session = None
def __init__(self):
self._session = requests.Session()
@@ -171,14 +170,14 @@ class DoubanApi(metaclass=Singleton):
).digest()
).decode()
@cached(maxsize=settings.CONF["douban"], ttl=settings.CONF["meta"])
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta)
def __invoke_recommend(self, url: str, **kwargs) -> dict:
"""
推荐/发现类API
"""
return self.__invoke(url, **kwargs)
@cached(maxsize=settings.CONF["douban"], ttl=settings.CONF["meta"])
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta)
def __invoke_search(self, url: str, **kwargs) -> dict:
"""
搜索类API
@@ -213,7 +212,7 @@ class DoubanApi(metaclass=Singleton):
return resp.json()
return resp.json() if resp else {}
@cached(maxsize=settings.CONF["douban"], ttl=settings.CONF["meta"])
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta)
def __post(self, url: str, **kwargs) -> dict:
"""
POST请求

View File

@@ -10,16 +10,16 @@ from app.core.config import settings
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
from app.schemas.types import MediaType
lock = RLock()
CACHE_EXPIRE_TIMESTAMP_STR = "cache_expire_timestamp"
EXPIRE_TIMESTAMP = settings.CONF["meta"]
EXPIRE_TIMESTAMP = settings.CONF.meta
class DoubanCache(metaclass=Singleton):
class DoubanCache(metaclass=WeakSingleton):
"""
豆瓣缓存数据
{
@@ -29,9 +29,6 @@ class DoubanCache(metaclass=Singleton):
"type": MediaType
}
"""
_meta_data: dict = {}
# 缓存文件路径
_meta_path: Path = None
# TMDB缓存过期
_tmdb_cache_expire: bool = True
@@ -233,3 +230,6 @@ class DoubanCache(metaclass=Singleton):
if not cache_media_info:
return
self._meta_data[key]['title'] = cn_title
def __del__(self):
self.save()

View File

@@ -166,7 +166,8 @@ class Emby:
type=library_type,
image=image,
link=f'{self._playhost or self._host}web/index.html'
f'#!/videos?serverId={self.serverid}&parentId={library.get("Id")}'
f'#!/videos?serverId={self.serverid}&parentId={library.get("Id")}',
server_type="emby"
)
)
return libraries
@@ -1167,7 +1168,8 @@ class Emby:
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
percent=item.get("UserData", {}).get("PlayedPercentage"),
server_type='emby'
))
return ret_resume
else:
@@ -1219,7 +1221,8 @@ class Emby:
type=item_type,
image=image,
link=link,
BackdropImageTags=item.get("BackdropImageTags")
BackdropImageTags=item.get("BackdropImageTags"),
server_type='emby'
))
return ret_latest
else:

View File

@@ -440,7 +440,7 @@ class FanartModule(_ModuleBase):
return result
@classmethod
@cached(maxsize=settings.CONF["fanart"], ttl=settings.CONF["meta"])
@cached(maxsize=settings.CONF.fanart, ttl=settings.CONF.meta)
def __request_fanart(cls, media_type: MediaType, queryid: Union[str, int]) -> Optional[dict]:
if media_type == MediaType.MOVIE:
image_url = cls._movie_url % queryid

View File

@@ -1,6 +1,7 @@
from pathlib import Path
from typing import Optional, List, Tuple, Union, Dict, Callable
from app.chain.tmdb import TmdbChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
@@ -139,13 +140,29 @@ class FileManagerModule(_ModuleBase):
"""
handler = TransHandler()
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
rename_format = settings.RENAME_FORMAT(mediainfo.type)
# 获取集信息
episodes_info: Optional[List[TmdbEpisode]] = None
if mediainfo.type == MediaType.TV:
# 判断季数,注意season为0的情况
season_num = mediainfo.season
if season_num is None and meta.season_seq:
if meta.season_seq.isdigit():
season_num = int(meta.season_seq)
# 默认值1
if season_num is None:
season_num = 1
episodes_info = TmdbChain().tmdb_episodes(
tmdbid=mediainfo.tmdb_id,
season=season_num,
episode_group=mediainfo.episode_group,
)
# 获取重命名后的名称
path = handler.get_rename_path(
template_string=rename_format,
rename_dict=handler.get_naming_dict(meta=meta,
mediainfo=mediainfo,
episodes_info=episodes_info,
file_ext=Path(meta.title).suffix)
)
return str(path)
@@ -344,9 +361,14 @@ class FileManagerModule(_ModuleBase):
return None
return storage_oper.get_parent(fileitem)
def snapshot_storage(self, storage: str, path: Path) -> Optional[Dict[str, float]]:
def snapshot_storage(self, storage: str, path: Path,
last_snapshot_time: float = None, max_depth: int = 5) -> Optional[Dict[str, Dict]]:
"""
快照存储
:param storage: 存储类型
:param path: 路径
:param last_snapshot_time: 上次快照时间,用于增量快照
:param max_depth: 最大递归深度,避免过深遍历
"""
if storage not in self._support_storages:
return None
@@ -354,7 +376,7 @@ class FileManagerModule(_ModuleBase):
if not storage_oper:
logger.error(f"不支持 {storage} 的快照处理")
return None
return storage_oper.snapshot(path)
return storage_oper.snapshot(path, last_snapshot_time=last_snapshot_time, max_depth=max_depth)
def storage_usage(self, storage: str) -> Optional[StorageUsage]:
"""
@@ -406,6 +428,12 @@ class FileManagerModule(_ModuleBase):
message=f"{target_path} 不是有效目录")
# 获取目标路径
if target_directory:
# 目标媒体库目录未设置
if not target_directory.library_path:
logger.error(f"目标媒体库目录未设置,无法整理文件,源路径:{fileitem.path}")
return TransferInfo(success=False,
fileitem=fileitem,
message="目标媒体库目录未设置")
# 整理方式
if not transfer_type:
transfer_type = target_directory.transfer_type
@@ -505,19 +533,35 @@ class FileManagerModule(_ModuleBase):
# 媒体分类路径
dir_path = handler.get_dest_dir(mediainfo=mediainfo, target_dir=dest_dir)
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
rename_format = settings.RENAME_FORMAT(mediainfo.type)
# 元数据补上常用属性,尽可能确保重命名后的路径不出现空白
meta = MetaInfo(mediainfo.title)
if meta.type == MediaType.UNKNOWN and mediainfo.type is not None:
meta.type = mediainfo.type
if meta.year is None:
meta.year = mediainfo.year
if meta.begin_season is None:
meta.begin_season = 1
if meta.begin_episode is None:
meta.begin_episode = 1
# 获取路径(重命名路径)
target_path = handler.get_rename_path(
path=dir_path,
template_string=rename_format,
rename_dict=handler.get_naming_dict(meta=MetaInfo(mediainfo.title),
rename_dict=handler.get_naming_dict(meta=meta,
mediainfo=mediainfo)
)
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
# 取相对路径的第1层目录
media_path = target_path.parents[rename_format_level - 1]
# 获取重命名后的媒体文件根路径
media_path = DirectoryHelper.get_media_root_path(
rename_format, rename_path=target_path
)
if not media_path:
# 忽略
continue
if dir_path != media_path and dir_path.is_relative_to(media_path):
# 兜底检查,避免不必要的扫盘
logger.warn(f"{media_path} 是媒体库目录 {dir_path} 的父目录,忽略获取媒体文件列表,请检查重命名格式!")
continue
# 检索媒体文件
fileitem = storage_oper.get_item(media_path)
if not fileitem:
@@ -543,9 +587,12 @@ class FileManagerModule(_ModuleBase):
if not settings.LOCAL_EXISTS_SEARCH:
return None
logger.debug(f"正在本地媒体库中查找 {mediainfo.title_year}...")
# 检查媒体库
fileitems = self.media_files(mediainfo)
if not fileitems:
logger.debug(f"{mediainfo.title_year} 不在本地媒体库中")
return None
if mediainfo.type == MediaType.MOVIE:

View File

@@ -4,6 +4,7 @@ from typing import Optional, List, Dict, Tuple
from app import schemas
from app.helper.storage import StorageHelper
from app.log import logger
class StorageBase(metaclass=ABCMeta):
@@ -135,7 +136,8 @@ class StorageBase(metaclass=ABCMeta):
pass
@abstractmethod
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path,
new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
@@ -192,21 +194,44 @@ class StorageBase(metaclass=ABCMeta):
"""
pass
def snapshot(self, path: Path) -> Dict[str, float]:
def snapshot(self, path: Path, last_snapshot_time: float = None, max_depth: int = 5) -> Dict[str, Dict]:
"""
快照文件系统,输出所有层级文件信息(不含目录)
:param path: 路径
:param last_snapshot_time: 上次快照时间,用于增量快照
:param max_depth: 最大递归深度,避免过深遍历
"""
files_info = {}
def __snapshot_file(_fileitm: schemas.FileItem):
def __snapshot_file(_fileitm: schemas.FileItem, current_depth: int = 0):
"""
递归获取文件信息
"""
if _fileitm.type == "dir":
for sub_file in self.list(_fileitm):
__snapshot_file(sub_file)
else:
files_info[_fileitm.path] = _fileitm.size
try:
if _fileitm.type == "dir":
# 检查递归深度限制
if current_depth >= max_depth:
return
# 增量检查:如果目录修改时间早于上次快照,跳过
if (last_snapshot_time and
_fileitm.modify_time and
_fileitm.modify_time <= last_snapshot_time):
return
# 遍历子文件
sub_files = self.list(_fileitm)
for sub_file in sub_files:
__snapshot_file(sub_file, current_depth + 1)
else:
# 记录文件的完整信息用于比对
files_info[_fileitm.path] = {
'size': _fileitm.size or 0,
'modify_time': getattr(_fileitm, 'modify_time', 0),
'type': _fileitm.type
}
except Exception as e:
logger.debug(f"Snapshot error for {_fileitm.path}: {e}")
fileitem = self.get_item(path)
if not fileitem:
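A local-filesystem analogue of the incremental snapshot above, as a sketch: directories whose mtime predates last_snapshot_time are pruned and recursion stops at max_depth. It inherits the same trade-off as the code above: files inside an unchanged directory are skipped, so the result is a delta rather than a full listing.

from pathlib import Path
from typing import Dict, Optional

def snapshot_local(root: Path, last_snapshot_time: Optional[float] = None,
                   max_depth: int = 5) -> Dict[str, dict]:
    files: Dict[str, dict] = {}
    def walk(path: Path, depth: int = 0) -> None:
        for entry in path.iterdir():
            stat = entry.stat()
            if entry.is_dir():
                if depth >= max_depth:
                    continue  # depth cap, as in the method above
                if last_snapshot_time and stat.st_mtime <= last_snapshot_time:
                    continue  # unchanged since last snapshot: prune subtree
                walk(entry, depth + 1)
            else:
                files[str(entry)] = {"size": stat.st_size,
                                     "modify_time": stat.st_mtime,
                                     "type": "file"}
    walk(root)
    return files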

View File

@@ -15,7 +15,7 @@ from app.core.config import settings
from app.log import logger
from app.modules.filemanager import StorageBase
from app.schemas.types import StorageSchema
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
from app.utils.string import StringUtils
lock = threading.Lock()
@@ -29,7 +29,7 @@ class SessionInvalidException(Exception):
pass
class AliPan(StorageBase, metaclass=Singleton):
class AliPan(StorageBase, metaclass=WeakSingleton):
"""
阿里云盘相关操作
"""
@@ -43,17 +43,12 @@ class AliPan(StorageBase, metaclass=Singleton):
"copy": "复制"
}
# 验证参数
_auth_state = {}
# 上传进度值
_last_progress = 0
# 基础url
base_url = "https://openapi.alipan.com"
def __init__(self):
super().__init__()
self._auth_state = {}
self.session = requests.Session()
self._init_session()
@@ -244,6 +239,7 @@ class AliPan(StorageBase, metaclass=Singleton):
conf = self.get_conf()
conf.update(result)
self.set_config(conf)
return None
def _request_api(self, method: str, endpoint: str,
result_key: Optional[str] = None, **kwargs) -> Optional[Union[dict, list]]:
@@ -369,7 +365,7 @@ class AliPan(StorageBase, metaclass=Singleton):
break
next_marker = resp.get("next_marker")
for item in resp.get("items", []):
items.append(self.__get_fileitem(item, parent=fileitem.path))
items.append(self.__get_fileitem(item, parent=str(fileitem.path)))
if len(resp.get("items")) < 100:
break
return items

View File

@@ -1,10 +1,9 @@
import json
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict
from typing import Optional, List
import requests
from requests import Response
from app import schemas
from app.core.cache import cached
@@ -13,14 +12,14 @@ from app.log import logger
from app.modules.filemanager.storages import StorageBase
from app.schemas.types import StorageSchema
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.singleton import WeakSingleton
from app.utils.url import UrlUtils
class Alist(StorageBase, metaclass=Singleton):
class Alist(StorageBase, metaclass=WeakSingleton):
"""
Alist相关操作
api文档https://alist.nn.ci/zh/guide/api
api文档https://oplist.org/zh/
"""
# 存储类型
@@ -39,7 +38,7 @@ class Alist(StorageBase, metaclass=Singleton):
"""
初始化
"""
pass
self.__generate_token.clear_cache() # noqa
@property
def __get_base_url(self) -> str:
@@ -77,7 +76,7 @@ class Alist(StorageBase, metaclass=Singleton):
token = conf.get("token")
if token:
return str(token)
resp: Response = RequestUtils(headers={
resp = RequestUtils(headers={
'Content-Type': 'application/json'
}).post_res(
self.__get_api_url("/api/auth/login"),
@@ -102,20 +101,20 @@ class Alist(StorageBase, metaclass=Singleton):
"""
if resp is None:
logger.warning("alist】请求登录失败无法连接alist服务")
logger.warning("OpenList】请求登录失败无法连接alist服务")
return ""
if resp.status_code != 200:
logger.warning(f"alist】更新令牌请求发送失败状态码{resp.status_code}")
logger.warning(f"OpenList】更新令牌请求发送失败状态码{resp.status_code}")
return ""
result = resp.json()
if result["code"] != 200:
logger.critical(f'alist】更新令牌错误信息{result["message"]}')
logger.critical(f'OpenList】更新令牌错误信息{result["message"]}')
return ""
logger.debug("alist】AList获取令牌成功")
logger.debug("OpenList】AList获取令牌成功")
return result["data"]["token"]
def __get_header_with_token(self) -> dict:
@@ -128,7 +127,7 @@ class Alist(StorageBase, metaclass=Singleton):
"""
检查存储是否可用
"""
pass
# 需要实际调用令牌生成方法:对绑定方法本身求真值恒为True
return bool(self.__generate_token())
def list(
self,
@@ -151,7 +150,7 @@ class Alist(StorageBase, metaclass=Singleton):
if item:
return [item]
return []
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/list"),
@@ -200,11 +199,11 @@ class Alist(StorageBase, metaclass=Singleton):
"""
if resp is None:
logger.warn(f"alist】请求获取目录 {fileitem.path} 的文件列表失败无法连接alist服务")
logger.warn(f"OpenList】请求获取目录 {fileitem.path} 的文件列表失败无法连接alist服务")
return []
if resp.status_code != 200:
logger.warn(
f"alist】请求获取目录 {fileitem.path} 的文件列表失败,状态码:{resp.status_code}"
f"OpenList】请求获取目录 {fileitem.path} 的文件列表失败,状态码:{resp.status_code}"
)
return []
@@ -212,7 +211,7 @@ class Alist(StorageBase, metaclass=Singleton):
if result["code"] != 200:
logger.warn(
f'【alist】获取目录 {fileitem.path} 的文件列表失败,错误信息:{result["message"]}'
f'【OpenList】获取目录 {fileitem.path} 的文件列表失败,错误信息:{result["message"]}'
)
return []
@@ -240,7 +239,7 @@ class Alist(StorageBase, metaclass=Singleton):
:param name: 目录名
"""
path = Path(fileitem.path) / name
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/mkdir"),
@@ -258,15 +257,15 @@ class Alist(StorageBase, metaclass=Singleton):
}
"""
if resp is None:
logger.warn(f"alist】请求创建目录 {path} 失败无法连接alist服务")
logger.warn(f"OpenList】请求创建目录 {path} 失败无法连接alist服务")
return None
if resp.status_code != 200:
logger.warn(f"alist】请求创建目录 {path} 失败,状态码:{resp.status_code}")
logger.warn(f"OpenList】请求创建目录 {path} 失败,状态码:{resp.status_code}")
return None
result = resp.json()
if result["code"] != 200:
logger.warn(f'alist】创建目录 {path} 失败,错误信息:{result["message"]}')
logger.warn(f'OpenList】创建目录 {path} 失败,错误信息:{result["message"]}')
return None
return self.get_item(path)
@@ -304,7 +303,7 @@ class Alist(StorageBase, metaclass=Singleton):
:param per_page: 每页数量
:param refresh: 是否刷新
"""
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/get"),
@@ -348,15 +347,15 @@ class Alist(StorageBase, metaclass=Singleton):
}
"""
if resp is None:
logger.warn(f"alist】请求获取文件 {path} 失败无法连接alist服务")
logger.warn(f"OpenList】请求获取文件 {path} 失败无法连接alist服务")
return None
if resp.status_code != 200:
logger.warn(f"alist】请求获取文件 {path} 失败,状态码:{resp.status_code}")
logger.warn(f"OpenList】请求获取文件 {path} 失败,状态码:{resp.status_code}")
return None
result = resp.json()
if result["code"] != 200:
logger.debug(f'alist】获取文件 {path} 失败,错误信息:{result["message"]}')
logger.debug(f'OpenList】获取文件 {path} 失败,错误信息:{result["message"]}')
return None
return schemas.FileItem(
@@ -377,11 +376,47 @@ class Alist(StorageBase, metaclass=Singleton):
"""
return self.get_folder(Path(fileitem.path).parent)
def __is_empty_dir(self, fileitem: schemas.FileItem) -> bool:
"""
判断目录是否为空
"""
if fileitem.type != "dir":
return False
# 获取目录内容
items = self.list(fileitem)
return len(items) == 0
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
删除文件
删除文件或目录空目录用专用API
"""
resp: Response = RequestUtils(
# 如果是空目录,优先用 remove_empty_directory
if fileitem.type == "dir" and self.__is_empty_dir(fileitem):
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/remove_empty_directory"),
json={
"src_dir": fileitem.path,
},
)
if resp is None:
logger.warn(f"【OpenList】请求删除空目录 {fileitem.path} 失败无法连接alist服务")
return False
if resp.status_code != 200:
logger.warn(
f"【OpenList】请求删除空目录 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logger.warn(
f'【OpenList】删除空目录 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
return True
# 其它情况(文件或非空目录)
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/remove"),
@@ -390,33 +425,18 @@ class Alist(StorageBase, metaclass=Singleton):
"names": [fileitem.name],
},
)
"""
{
"names": [
"string"
],
"dir": "string"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if resp is None:
logger.warn(f"alist】请求删除文件 {fileitem.path} 失败无法连接alist服务")
logger.warn(f"OpenList】请求删除文件 {fileitem.path} 失败无法连接alist服务")
return False
if resp.status_code != 200:
logger.warn(
f"alist】请求删除文件 {fileitem.path} 失败,状态码:{resp.status_code}"
f"OpenList】请求删除文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logger.warn(
f'alist】删除文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f'OpenList】删除文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
return True
@@ -425,7 +445,7 @@ class Alist(StorageBase, metaclass=Singleton):
"""
重命名文件
"""
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/rename"),
@@ -447,18 +467,18 @@ class Alist(StorageBase, metaclass=Singleton):
}
"""
if not resp:
logger.warn(f"alist】请求重命名文件 {fileitem.path} 失败无法连接alist服务")
logger.warn(f"OpenList】请求重命名文件 {fileitem.path} 失败无法连接alist服务")
return False
if resp.status_code != 200:
logger.warn(
f"alist】请求重命名文件 {fileitem.path} 失败,状态码:{resp.status_code}"
f"OpenList】请求重命名文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logger.warn(
f'alist】重命名文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f'OpenList】重命名文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
@@ -476,7 +496,7 @@ class Alist(StorageBase, metaclass=Singleton):
:param path: 文件保存路径
:param password: 文件密码
"""
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/get"),
@@ -512,15 +532,15 @@ class Alist(StorageBase, metaclass=Singleton):
}
"""
if not resp:
logger.warn(f"alist】请求获取文件 {path} 失败无法连接alist服务")
logger.warn(f"OpenList】请求获取文件 {path} 失败无法连接alist服务")
return None
if resp.status_code != 200:
logger.warn(f"alist】请求获取文件 {path} 失败,状态码:{resp.status_code}")
logger.warn(f"OpenList】请求获取文件 {path} 失败,状态码:{resp.status_code}")
return None
result = resp.json()
if result["code"] != 200:
logger.warn(f'alist】获取文件 {path} 失败,错误信息:{result["message"]}')
logger.warn(f'OpenList】获取文件 {path} 失败,错误信息:{result["message"]}')
return None
if result["data"]["raw_url"]:
@@ -561,13 +581,13 @@ class Alist(StorageBase, metaclass=Singleton):
headers.setdefault("As-Task", str(task).lower())
headers.setdefault("File-Path", encoded_path)
with open(path, "rb") as f:
resp: Response = RequestUtils(headers=headers).put_res(
resp = RequestUtils(headers=headers).put_res(
self.__get_api_url("/api/fs/put"),
data=f,
)
if resp.status_code != 200:
logger.warn(f"alist】请求上传文件 {path} 失败,状态码:{resp.status_code}")
logger.warn(f"OpenList】请求上传文件 {path} 失败,状态码:{resp.status_code}")
return None
new_item = self.get_item(Path(fileitem.path) / path.name)
@@ -590,7 +610,7 @@ class Alist(StorageBase, metaclass=Singleton):
:param path: 目标目录
:param new_name: 新文件名
"""
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/copy"),
@@ -617,19 +637,19 @@ class Alist(StorageBase, metaclass=Singleton):
"""
if resp is None:
logger.warn(
f"alist】请求复制文件 {fileitem.path} 失败无法连接alist服务"
f"OpenList】请求复制文件 {fileitem.path} 失败无法连接alist服务"
)
return False
if resp.status_code != 200:
logger.warn(
f"alist】请求复制文件 {fileitem.path} 失败,状态码:{resp.status_code}"
f"OpenList】请求复制文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logger.warn(
f'alist】复制文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f'OpenList】复制文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
# 重命名
@@ -649,7 +669,7 @@ class Alist(StorageBase, metaclass=Singleton):
# 先重命名
if fileitem.name != new_name:
self.rename(fileitem, new_name)
resp: Response = RequestUtils(
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/move"),
@@ -676,19 +696,19 @@ class Alist(StorageBase, metaclass=Singleton):
"""
if resp is None:
logger.warn(
f"alist】请求移动文件 {fileitem.path} 失败无法连接alist服务"
f"OpenList】请求移动文件 {fileitem.path} 失败无法连接alist服务"
)
return False
if resp.status_code != 200:
logger.warn(
f"alist】请求移动文件 {fileitem.path} 失败,状态码:{resp.status_code}"
f"OpenList】请求移动文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logger.warn(
f'alist】移动文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f'OpenList】移动文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
return True
@@ -711,30 +731,6 @@ class Alist(StorageBase, metaclass=Singleton):
"""
pass
def snapshot(self, path: Path) -> Dict[str, float]:
"""
快照文件系统,输出所有层级文件信息(不含目录)
"""
files_info = {}
def __snapshot_file(_fileitm: schemas.FileItem):
"""
递归获取文件信息
"""
if _fileitm.type == "dir":
for sub_file in self.list(_fileitm):
__snapshot_file(sub_file)
else:
files_info[_fileitm.path] = _fileitm.size
fileitem = self.get_item(path)
if not fileitem:
return {}
__snapshot_file(fileitem)
return files_info
@staticmethod
def __parse_timestamp(time_str: str) -> float:
"""

View File

@@ -191,7 +191,8 @@ class LocalStorage(StorageBase):
"""
return Path(fileitem.path)
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path,
new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
@@ -260,8 +261,11 @@ class LocalStorage(StorageBase):
"""
存储使用情况
"""
library_dirs = DirectoryHelper().get_local_library_dirs()
total_storage, free_storage = SystemUtils.space_usage([Path(d.library_path) for d in library_dirs])
directory_helper = DirectoryHelper()
total_storage, free_storage = SystemUtils.space_usage(
[Path(d.download_path) for d in directory_helper.get_local_download_dirs() if d.download_path] +
[Path(d.library_path) for d in directory_helper.get_local_library_dirs() if d.library_path]
)
return schemas.StorageUsage(
total=total_storage,
available=free_storage

View File

@@ -0,0 +1,551 @@
import threading
import time
from pathlib import Path
from typing import List, Optional, Union
import smbclient
from smbclient import ClientConfig, register_session, reset_connection_cache
from smbprotocol.exceptions import SMBException, SMBResponseException, SMBAuthenticationError
from app import schemas
from app.core.config import settings
from app.log import logger
from app.modules.filemanager import StorageBase
from app.schemas.types import StorageSchema
from app.utils.singleton import WeakSingleton
lock = threading.Lock()
class SMBConnectionError(Exception):
"""
SMB 连接错误
"""
pass
class SMB(StorageBase, metaclass=WeakSingleton):
"""
SMB网络挂载存储相关操作 - 使用 smbclient 高级接口
"""
# 存储类型
schema = StorageSchema.SMB
# 支持的整理方式
transtype = {
"move": "移动",
"copy": "复制",
}
def __init__(self):
super().__init__()
self._connected = False
self._server_path = None
self._host = None
self._username = None
self._password = None
self._init_connection()
def _init_connection(self):
"""
初始化SMB连接配置
"""
try:
conf = self.get_conf()
if not conf:
return
self._host = conf.get("host")
self._username = conf.get("username")
self._password = conf.get("password")
domain = conf.get("domain", "")
share = conf.get("share", "")
port = conf.get("port", 445)
if not all([self._host, share]):
logger.error("【SMB】缺少必要的连接参数host 和 share")
return
# 构建服务器路径
self._server_path = f"\\\\{self._host}\\{share}"
# 配置全局客户端设置
ClientConfig(
username=self._username,
password=self._password,
domain=domain if domain else None,
connection_timeout=60,
port=port,
auth_protocol="negotiate", # 使用协商认证
require_secure_negotiate=False # 匿名访问时可能需要关闭安全协商
)
# 注册会话以启用连接池
register_session(
self._host,
username=self._username,
password=self._password,
port=port,
encrypt=False, # 根据需要启用加密
connection_timeout=60
)
# 测试连接
self._test_connection()
self._connected = True
# 判断是否为匿名访问
if self._is_anonymous_access():
logger.info(f"【SMB】匿名连接成功{self._server_path}")
else:
logger.info(f"【SMB】认证连接成功{self._server_path} (用户:{self._username})")
except Exception as e:
logger.error(f"【SMB】连接初始化失败{e}")
self._connected = False
def _test_connection(self):
"""
测试SMB连接
"""
try:
# 尝试列出根目录来测试连接
smbclient.listdir(self._server_path)
except SMBAuthenticationError as e:
raise SMBConnectionError(f"SMB认证失败{e}")
except SMBResponseException as e:
raise SMBConnectionError(f"SMB响应错误{e}")
except SMBException as e:
raise SMBConnectionError(f"SMB连接错误{e}")
except Exception as e:
raise SMBConnectionError(f"连接测试失败:{e}")
def _is_anonymous_access(self) -> bool:
"""
检查是否为匿名访问
"""
return not self._username and not self._password
def _check_connection(self):
"""
检查SMB连接状态
"""
if not self._connected or not self._server_path:
raise SMBConnectionError("【SMB】连接未建立或已断开请检查配置")
def _normalize_path(self, path: Union[str, Path]) -> str:
"""
标准化路径格式为SMB路径
"""
path_str = str(path)
# 处理根路径
if path_str in ["/", "\\"]:
return self._server_path
# 去除前导斜杠
if path_str.startswith("/"):
path_str = path_str[1:]
# 构建完整的SMB路径
if path_str:
# f-string 表达式中包含反斜杠在 Python 3.12 之前是语法错误,先替换再拼接
win_path = path_str.replace('/', '\\')
return f"{self._server_path}\\{win_path}"
else:
return self._server_path
def _create_fileitem(self, stat_result, file_path: str, name: str) -> schemas.FileItem:
"""
创建文件项
"""
try:
# 检查是否为目录
is_directory = smbclient.path.isdir(file_path)
# 处理路径
relative_path = file_path.replace(self._server_path, "").replace("\\", "/")
if not relative_path.startswith("/"):
relative_path = "/" + relative_path
if is_directory and not relative_path.endswith("/"):
relative_path += "/"
# 获取时间戳
try:
modify_time = int(stat_result.st_mtime)
except (AttributeError, TypeError):
modify_time = int(time.time())
if is_directory:
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path=relative_path,
name=name,
basename=name,
modify_time=modify_time
)
else:
return schemas.FileItem(
storage=self.schema.value,
type="file",
path=relative_path,
name=name,
basename=Path(name).stem,
extension=Path(name).suffix[1:] if Path(name).suffix else None,
size=getattr(stat_result, 'st_size', 0),
modify_time=modify_time
)
except Exception as e:
logger.error(f"【SMB】创建文件项失败:{e}")
# 返回基本的文件项信息
return schemas.FileItem(
storage=self.schema.value,
type="file",
path=file_path.replace(self._server_path, "").replace("\\", "/"),
name=name,
basename=Path(name).stem,
modify_time=int(time.time())
)
def init_storage(self):
"""
初始化存储
"""
# 重置连接缓存
reset_connection_cache()
self._init_connection()
def check(self) -> bool:
"""
检查存储是否可用
"""
if not self._connected:
return False
try:
self._test_connection()
return True
except Exception as e:
logger.debug(f"【SMB】连接检查失败:{e}")
self._connected = False
return False
def list(self, fileitem: schemas.FileItem) -> List[schemas.FileItem]:
"""
浏览文件
"""
try:
self._check_connection()
if fileitem.type == "file":
item = self.detail(fileitem)
if item:
return [item]
return []
# 构建SMB路径
smb_path = self._normalize_path(fileitem.path.rstrip("/"))
# 列出目录内容
try:
entries = smbclient.listdir(smb_path)
except SMBResponseException as e:
logger.error(f"【SMB】列出目录失败: {smb_path} - {e}")
return []
except SMBException as e:
logger.error(f"【SMB】列出目录失败: {smb_path} - {e}")
return []
items = []
for entry in entries:
if entry in [".", ".."]:
continue
entry_path = f"{smb_path}\\{entry}"
try:
stat_result = smbclient.stat(entry_path)
item = self._create_fileitem(stat_result, entry_path, entry)
items.append(item)
except Exception as e:
logger.debug(f"【SMB】获取文件信息失败: {entry_path} - {e}")
continue
return items
except Exception as e:
logger.error(f"【SMB】列出文件失败: {e}")
return []
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
"""
try:
self._check_connection()
parent_path = self._normalize_path(fileitem.path.rstrip("/"))
new_path = f"{parent_path}\\{name}"
# 创建目录
smbclient.mkdir(new_path)
# 返回创建的目录信息
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path=f"{fileitem.path.rstrip('/')}/{name}/",
name=name,
basename=name,
modify_time=int(time.time())
)
except Exception as e:
logger.error(f"【SMB】创建目录失败: {e}")
return None
def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
"""
获取目录,如目录不存在则创建
"""
# 检查目录是否存在
folder = self.get_item(path)
if folder:
return folder
# 逐级创建目录
parts = path.parts
current_path = Path("/")
for part in parts[1:]: # 跳过根目录
current_path = current_path / part
folder = self.get_item(current_path)
if not folder:
parent_folder = self.get_item(current_path.parent)
if not parent_folder:
logger.error(f"【SMB】父目录不存在: {current_path.parent}")
return None
folder = self.create_folder(parent_folder, part)
if not folder:
return None
return folder
def get_item(self, path: Path) -> Optional[schemas.FileItem]:
"""
获取文件或目录,不存在时返回None
"""
try:
self._check_connection()
# 处理根目录
if str(path) == "/":
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path="/",
name="",
basename="",
modify_time=int(time.time())
)
smb_path = self._normalize_path(str(path).rstrip("/"))
# 检查路径是否存在
if not smbclient.path.exists(smb_path):
return None
stat_result = smbclient.stat(smb_path)
file_name = Path(path).name
return self._create_fileitem(stat_result, smb_path, file_name)
except Exception as e:
logger.debug(f"【SMB】获取文件项失败: {e}")
return None
def detail(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取文件详情
"""
return self.get_item(Path(fileitem.path))
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
删除文件或目录
"""
try:
self._check_connection()
smb_path = self._normalize_path(fileitem.path.rstrip("/"))
if fileitem.type == "dir":
# 删除目录
smbclient.rmdir(smb_path)
else:
# 删除文件
smbclient.remove(smb_path)
logger.info(f"【SMB】删除成功: {fileitem.path}")
return True
except Exception as e:
logger.error(f"【SMB】删除失败: {e}")
return False
def rename(self, fileitem: schemas.FileItem, name: str) -> bool:
"""
重命名文件
"""
try:
self._check_connection()
old_path = self._normalize_path(fileitem.path.rstrip("/"))
parent_path = Path(fileitem.path).parent
new_path = self._normalize_path(str(parent_path / name))
# 重命名
smbclient.rename(old_path, new_path)
logger.info(f"【SMB】重命名成功: {fileitem.path} -> {name}")
return True
except Exception as e:
logger.error(f"【SMB】重命名失败: {e}")
return False
def download(self, fileitem: schemas.FileItem, path: Path = None) -> Optional[Path]:
"""
下载文件
"""
try:
self._check_connection()
smb_path = self._normalize_path(fileitem.path)
local_path = path or settings.TEMP_PATH / fileitem.name
# 确保本地目录存在
local_path.parent.mkdir(parents=True, exist_ok=True)
# 使用更高效的文件传输方式
with smbclient.open_file(smb_path, mode="rb") as src_file:
with open(local_path, "wb") as dst_file:
# 使用更大的缓冲区提高性能
buffer_size = 1024 * 1024 # 1MB
while True:
chunk = src_file.read(buffer_size)
if not chunk:
break
dst_file.write(chunk)
logger.info(f"【SMB】下载成功: {fileitem.path} -> {local_path}")
return local_path
except Exception as e:
logger.error(f"【SMB】下载失败: {e}")
return None
def upload(self, fileitem: schemas.FileItem, path: Path,
new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
"""
上传文件
"""
try:
self._check_connection()
target_name = new_name or path.name
target_path = Path(fileitem.path) / target_name
smb_path = self._normalize_path(str(target_path))
# 使用更高效的文件传输方式
with open(path, "rb") as src_file:
with smbclient.open_file(smb_path, mode="wb") as dst_file:
# 使用更大的缓冲区提高性能
buffer_size = 1024 * 1024 # 1MB
while True:
chunk = src_file.read(buffer_size)
if not chunk:
break
dst_file.write(chunk)
logger.info(f"【SMB】上传成功: {path} -> {target_path}")
# 返回上传后的文件信息
return self.get_item(target_path)
except Exception as e:
logger.error(f"【SMB】上传失败: {e}")
return None
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
复制文件
"""
try:
# 下载到临时文件
temp_file = self.download(fileitem)
if not temp_file:
return False
# 获取目标目录
target_folder = self.get_item(path)
if not target_folder:
return False
# 上传到目标位置
result = self.upload(target_folder, temp_file, new_name)
# 删除临时文件
if temp_file.exists():
temp_file.unlink()
return result is not None
except Exception as e:
logger.error(f"【SMB】复制失败: {e}")
return False
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
"""
try:
# 先复制
if not self.copy(fileitem, path, new_name):
return False
# 再删除原文件
if not self.delete(fileitem):
logger.warn(f"【SMB】删除原文件失败: {fileitem.path}")
return False
return True
except Exception as e:
logger.error(f"【SMB】移动失败: {e}")
return False
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
def softlink(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
def usage(self) -> Optional[schemas.StorageUsage]:
"""
存储使用情况
"""
try:
self._check_connection()
volume_stat = smbclient.stat_volume(self._server_path)
return schemas.StorageUsage(
total=volume_stat.total_size,
available=volume_stat.caller_available_size
)
except Exception as e:
logger.error(f"【SMB】获取存储使用情况失败: {e}")
return None
def __del__(self):
"""
析构函数,清理连接
"""
try:
# smbclient 自动管理连接池,但我们可以重置缓存
if hasattr(self, '_connected') and self._connected:
reset_connection_cache()
except Exception as e:
logger.debug(f"【SMB】清理连接失败: {e}")
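A minimal usage sketch of the class above (the share layout and the /tmp target are assumptions for illustration; get_conf() must already supply a valid host and share):

    from pathlib import Path
    from app import schemas

    smb = SMB()  # connects in __init__ via get_conf()
    if smb.check():
        root = schemas.FileItem(storage="smb", type="dir", path="/")
        for item in smb.list(root):
            print(item.type, item.path, item.size)
        files = [i for i in smb.list(root) if i.type == "file"]
        if files:
            # Download the first file into a local directory
            print(smb.download(files[0], path=Path("/tmp") / files[0].name))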

View File

@@ -18,7 +18,7 @@ from app.core.config import settings
 from app.log import logger
 from app.modules.filemanager import StorageBase
 from app.schemas.types import StorageSchema
-from app.utils.singleton import Singleton
+from app.utils.singleton import WeakSingleton
 from app.utils.string import StringUtils
 lock = threading.Lock()
@@ -28,7 +28,7 @@ class NoCheckInException(Exception):
 pass
-class U115Pan(StorageBase, metaclass=Singleton):
+class U115Pan(StorageBase, metaclass=WeakSingleton):
 """
 115相关操作
 """
@@ -41,18 +41,12 @@ class U115Pan(StorageBase, metaclass=Singleton):
 "move": "移动",
 "copy": "复制"
 }
-# 验证参数
-_auth_state = {}
-# 上传进度值
-_last_progress = 0
 # 基础url
 base_url = "https://proapi.115.com"
 def __init__(self):
 super().__init__()
+self._auth_state = {}
 self.session = requests.Session()
 self._init_session()
@@ -219,8 +213,11 @@
 # 处理速率限制
 if resp.status_code == 429:
-reset_time = int(resp.headers.get("X-RateLimit-Reset", 60))
-time.sleep(reset_time + 5)
+reset_time = 5 + int(resp.headers.get("X-RateLimit-Reset", 60))
+logger.debug(
+f"【115】{method} 请求 {endpoint} 限流,等待{reset_time}秒后重试"
+)
+time.sleep(reset_time)
 return self._request_api(method, endpoint, result_key, **kwargs)
 # 处理请求错误
@@ -489,7 +486,8 @@
 type="file" if info_resp["file_category"] == "1" else "dir",
 name=info_resp["file_name"],
 basename=Path(info_resp["file_name"]).stem,
-extension=Path(info_resp["file_name"]).suffix[1:] if info_resp["file_category"] == "1" else None,
+extension=Path(info_resp["file_name"]).suffix[1:] if info_resp[
+"file_category"] == "1" else None,
 pickcode=info_resp["pick_code"],
 size=StringUtils.num_filesize(info_resp['size']) if info_resp["file_category"] == "1" else None,
 modify_time=info_resp["utime"]
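The retry branch above now logs the wait and folds the 5-second safety margin into it before recursing. The same pattern in isolation (a sketch, not MoviePilot code; the header name follows the hunk above):

    import time
    import requests

    def get_with_rate_limit(session: requests.Session, url: str, **kwargs) -> requests.Response:
        resp = session.get(url, **kwargs)
        if resp.status_code == 429:
            # Honour the server's reset hint plus a 5s margin, as in the hunk above
            wait = 5 + int(resp.headers.get("X-RateLimit-Reset", 60))
            time.sleep(wait)
            return get_with_rate_limit(session, url, **kwargs)
        return resp

Like the original, this retries without a cap; a hardened version would bound the recursion depth.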

View File

@@ -10,6 +10,7 @@ from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfoPath
from app.helper.directory import DirectoryHelper
from app.helper.message import TemplateHelper
from app.log import logger
from app.modules.filemanager.storages import StorageBase
@@ -26,11 +27,10 @@ class TransHandler:
文件转移整理类
"""
result: Optional[TransferInfo] = None
inner_lock: Lock = Lock()
def __init__(self):
self.__reset_result()
self.result = None
def __reset_result(self):
"""
@@ -103,185 +103,211 @@ class TransHandler:
# 重置结果
self.__reset_result()
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
try:
# 判断是否为文件夹
if fileitem.type == "dir":
# 整理整个目录,一般为蓝光原盘
if need_rename:
new_path = self.get_rename_path(
path=target_path,
template_string=rename_format,
rename_dict=self.get_naming_dict(meta=in_meta,
mediainfo=mediainfo)
).parent
else:
new_path = target_path / fileitem.name
# 整理目录
new_diritem, errmsg = self.__transfer_dir(fileitem=fileitem,
mediainfo=mediainfo,
source_oper=source_oper,
target_oper=target_oper,
target_storage=target_storage,
target_path=new_path,
transfer_type=transfer_type)
if not new_diritem:
logger.error(f"文件夹 {fileitem.path} 整理失败:{errmsg}")
self.__set_result(success=False,
message=errmsg,
fileitem=fileitem,
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
# 重命名格式
rename_format = settings.RENAME_FORMAT(mediainfo.type)
logger.info(f"文件夹 {fileitem.path} 整理成功")
# 计算目录下所有文件大小
total_size = sum(file.stat().st_size for file in Path(fileitem.path).rglob('*') if file.is_file())
# 返回整理后的路径
self.__set_result(success=True,
fileitem=fileitem,
target_item=new_diritem,
target_diritem=new_diritem,
total_size=total_size,
need_scrape=need_scrape,
need_notify=need_notify,
transfer_type=transfer_type)
return self.result
else:
# 整理单个文件
if mediainfo.type == MediaType.TV:
# 电视剧
if in_meta.begin_episode is None:
logger.warn(f"文件 {fileitem.path} 整理失败:未识别到文件集数")
# 判断是否为文件夹
if fileitem.type == "dir":
# 整理整个目录,一般为蓝光原盘
if need_rename:
new_path = self.get_rename_path(
path=target_path,
template_string=rename_format,
rename_dict=self.get_naming_dict(meta=in_meta,
mediainfo=mediainfo)
)
new_path = DirectoryHelper.get_media_root_path(
rename_format, rename_path=new_path
)
if not new_path:
self.__set_result(
success=False,
message="重命名格式无效",
fileitem=fileitem,
transfer_type=transfer_type,
need_notify=need_notify,
)
return self.result.copy()
else:
new_path = target_path / fileitem.name
# 整理目录
new_diritem, errmsg = self.__transfer_dir(fileitem=fileitem,
mediainfo=mediainfo,
source_oper=source_oper,
target_oper=target_oper,
target_storage=target_storage,
target_path=new_path,
transfer_type=transfer_type)
if not new_diritem:
logger.error(f"文件夹 {fileitem.path} 整理失败:{errmsg}")
self.__set_result(success=False,
message=f"未识别到文件集数",
message=errmsg,
fileitem=fileitem,
transfer_type=transfer_type,
need_notify=need_notify)
return self.result.copy()
logger.info(f"文件夹 {fileitem.path} 整理成功")
# 计算目录下所有文件大小
total_size = sum(file.stat().st_size for file in Path(fileitem.path).rglob('*') if file.is_file())
# 返回整理后的路径
self.__set_result(success=True,
fileitem=fileitem,
target_item=new_diritem,
target_diritem=new_diritem,
total_size=total_size,
need_scrape=need_scrape,
need_notify=need_notify,
transfer_type=transfer_type)
return self.result.copy()
else:
# 整理单个文件
if mediainfo.type == MediaType.TV:
# 电视剧
if in_meta.begin_episode is None:
logger.warn(f"文件 {fileitem.path} 整理失败:未识别到文件集数")
self.__set_result(success=False,
message="未识别到文件集数",
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result.copy()
# 文件结束季为空
in_meta.end_season = None
# 文件总季数为1
if in_meta.total_season:
in_meta.total_season = 1
# 文件不可能超过2集
if in_meta.total_episode > 2:
in_meta.total_episode = 1
in_meta.end_episode = None
# 目的文件名
if need_rename:
new_file = self.get_rename_path(
path=target_path,
template_string=rename_format,
rename_dict=self.get_naming_dict(
meta=in_meta,
mediainfo=mediainfo,
episodes_info=episodes_info,
file_ext=f".{fileitem.extension}"
)
)
folder_path = DirectoryHelper.get_media_root_path(
rename_format, rename_path=new_file
)
if not folder_path:
self.__set_result(
success=False,
message="重命名格式无效",
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify,
)
return self.result.copy()
else:
new_file = target_path / fileitem.name
folder_path = target_path
# 判断是否要覆盖
overflag = False
# 目标目录
target_diritem = target_oper.get_folder(folder_path)
if not target_diritem:
logger.error(f"目标目录 {folder_path} 获取失败")
self.__set_result(success=False,
message=f"目标目录 {folder_path} 获取失败",
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
# 文件结束季为空
in_meta.end_season = None
# 文件总季数为1
if in_meta.total_season:
in_meta.total_season = 1
# 文件不可能超过2集
if in_meta.total_episode > 2:
in_meta.total_episode = 1
in_meta.end_episode = None
# 目的文件名
if need_rename:
new_file = self.get_rename_path(
path=target_path,
template_string=rename_format,
rename_dict=self.get_naming_dict(
meta=in_meta,
mediainfo=mediainfo,
episodes_info=episodes_info,
file_ext=f".{fileitem.extension}"
)
)
else:
new_file = target_path / fileitem.name
# 判断是否要覆盖
overflag = False
# 计算重命名中的文件夹层级
rename_format_level = len(rename_format.split("/")) - 1
folder_path = new_file.parents[rename_format_level - 1]
# 目标目录
target_diritem = target_oper.get_folder(folder_path)
if not target_diritem:
logger.error(f"目标目录 {folder_path} 获取失败")
self.__set_result(success=False,
message=f"目标目录 {folder_path} 获取失败",
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
# 目标文件
target_item = target_oper.get_item(new_file)
if target_item:
# 目标文件已存在
target_file = new_file
if target_storage == "local" and new_file.is_symlink():
target_file = new_file.readlink()
if not target_file.exists():
overflag = True
if not overflag:
return self.result.copy()
# 目标文件
target_item = target_oper.get_item(new_file)
if target_item:
# 目标文件已存在
logger.info(f"目的文件系统中已经存在同名文件 {target_file},当前整理覆盖模式设置为 {overwrite_mode}")
if overwrite_mode == 'always':
# 总是覆盖同名文件
overflag = True
elif overwrite_mode == 'size':
# 存在时大覆盖小
if target_item.size < fileitem.size:
logger.info(f"目标文件文件大小更小,将覆盖:{new_file}")
target_file = new_file
if target_storage == "local" and new_file.is_symlink():
target_file = new_file.readlink()
if not target_file.exists():
overflag = True
else:
if not overflag:
# 目标文件已存在
logger.info(f"目的文件系统中已经存在同名文件 {target_file},当前整理覆盖模式设置为 {overwrite_mode}")
if overwrite_mode == 'always':
# 总是覆盖同名文件
overflag = True
elif overwrite_mode == 'size':
# 存在时大覆盖小
if target_item.size < fileitem.size:
logger.info(f"目标文件文件大小更小,将覆盖:{new_file}")
overflag = True
else:
self.__set_result(success=False,
message=f"媒体库存在同名文件,且质量更好",
fileitem=fileitem,
target_item=target_item,
target_diritem=target_diritem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result.copy()
elif overwrite_mode == 'never':
# 存在不覆盖
self.__set_result(success=False,
message=f"媒体库存在同名文件,且质量更好",
message=f"媒体库存在同名文件,当前覆盖模式为不覆盖",
fileitem=fileitem,
target_item=target_item,
target_diritem=target_diritem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
elif overwrite_mode == 'never':
# 存在不覆盖
self.__set_result(success=False,
message=f"媒体库存在同名文件,当前覆盖模式为不覆盖",
fileitem=fileitem,
target_item=target_item,
target_diritem=target_diritem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
elif overwrite_mode == 'latest':
# 仅保留最新版本
logger.info(f"当前整理覆盖模式设置为仅保留最新版本,将覆盖:{new_file}")
overflag = True
else:
if overwrite_mode == 'latest':
# 文件不存在,但仅保留最新版本
logger.info(f"当前整理覆盖模式设置为 {overwrite_mode},仅保留最新版本,正在删除已有版本文件 ...")
self.__delete_version_files(target_oper, new_file)
# 整理文件
new_item, err_msg = self.__transfer_file(fileitem=fileitem,
mediainfo=mediainfo,
target_storage=target_storage,
target_file=new_file,
transfer_type=transfer_type,
over_flag=overflag,
source_oper=source_oper,
target_oper=target_oper)
if not new_item:
logger.error(f"文件 {fileitem.path} 整理失败:{err_msg}")
self.__set_result(success=False,
message=err_msg,
return self.result.copy()
elif overwrite_mode == 'latest':
# 仅保留最新版本
logger.info(f"当前整理覆盖模式设置为仅保留最新版本,将覆盖:{new_file}")
overflag = True
else:
if overwrite_mode == 'latest':
# 文件不存在,但仅保留最新版本
logger.info(f"当前整理覆盖模式设置为 {overwrite_mode},仅保留最新版本,正在删除已有版本文件 ...")
self.__delete_version_files(target_oper, new_file)
# 整理文件
new_item, err_msg = self.__transfer_file(fileitem=fileitem,
mediainfo=mediainfo,
target_storage=target_storage,
target_file=new_file,
transfer_type=transfer_type,
over_flag=overflag,
source_oper=source_oper,
target_oper=target_oper)
if not new_item:
logger.error(f"文件 {fileitem.path} 整理失败:{err_msg}")
self.__set_result(success=False,
message=err_msg,
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
return self.result.copy()
logger.info(f"文件 {fileitem.path} 整理成功")
self.__set_result(success=True,
fileitem=fileitem,
fail_list=[fileitem.path],
target_item=new_item,
target_diritem=target_diritem,
need_scrape=need_scrape,
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
logger.info(f"文件 {fileitem.path} 整理成功")
self.__set_result(success=True,
fileitem=fileitem,
target_item=new_item,
target_diritem=target_diritem,
need_scrape=need_scrape,
transfer_type=transfer_type,
need_notify=need_notify)
return self.result
return self.result.copy()
finally:
self.result = None
@staticmethod
def __transfer_command(fileitem: FileItem, target_storage: str,
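The recurring edit in this hunk is that every exit path now returns self.result.copy() and the new trailing finally resets self.result to None, so a shared TransHandler instance can neither leak one transfer's state into the next call nor hand two callers the same mutable object. The shape of the pattern, reduced to a sketch (a plain dict stands in for the TransferInfo model):

    from typing import Optional

    class Handler:
        def __init__(self):
            self.result: Optional[dict] = None  # per-call scratch state

        def run(self, ok: bool) -> dict:
            self.result = {"success": ok}
            try:
                # ...many early returns, each returning a snapshot...
                return self.result.copy()  # caller gets an independent copy
            finally:
                self.result = None  # nothing carries over to the next call

The return value is evaluated before the finally block runs, so the caller's copy is taken before the reset.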

View File

@@ -54,7 +54,7 @@ class RuleParser:
if __name__ == '__main__':
# 测试代码
expression_str = """
SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & CNVOI & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & CNVOI & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & CNVOI & 4K & WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & WEBDL & !DOLBY & !3D > SPECSUB & 4K & WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & WEBDL & !DOLBY & !3D > CNSUB & 4K & WEBDL & !DOLBY & !3D > SPECSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & !3D > SPECSUB & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & !BLU & !WEBDL & !DOLBY & !SDR & !3D > CNSUB & 4K & !BLU & !WEBDL & !DOLBY & !SDR & !3D > 4K & !BLU & !REMUX & !DOLBY & HDR & !3D > 4K & !BLURAY & !REMUX & !DOLBY & !3D > SPECSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & 1080P & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & !BLU & !WEBDL & !DOLBY & !3D > CNSUB & 1080P & !BLU & !WEBDL & !DOLBY & !3D > SPECSUB & 1080P & WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & WEBDL & !DOLBY & !3D > CNSUB & 1080P & WEBDL & !DOLBY & !3D > 1080P & !BLU & !REMUX & !DOLBY & HDR & !3D > 1080P & !BLU & !REMUX & !DOLBY & !3D
SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & CNVOI & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & CNVOI & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & CNVOI & 4K & WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & WEBDL & !DOLBY & !3D > SPECSUB & 4K & WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & WEBDL & !DOLBY & !3D > CNSUB & 4K & WEBDL & !DOLBY & !3D > SPECSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & !3D > SPECSUB & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & !BLU & !WEBDL & !DOLBY & !SDR & !3D > CNSUB & 4K & !BLU & !WEBDL & !DOLBY & !SDR & !3D > 4K & !BLU & !REMUX & !DOLBY & HDR & !3D > 4K & !BLURAY & !REMUX & !DOLBY & !3D > SPECSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & 1080P & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & !BLU & !WEBDL & !DOLBY & !3D > CNSUB & 1080P & !BLU & !WEBDL & !DOLBY & !3D > SPECSUB & 1080P & WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & WEBDL & !DOLBY & !3D > CNSUB & 1080P & WEBDL & !DOLBY & !3D > 1080P & !BLU & !REMUX & !DOLBY & HDR & !3D > 1080P & !BLU & !REMUX & !DOLBY & !3D
"""
for exp in expression_str.split('>'):
parsed_expr = RuleParser().parse(exp.strip())
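For context on the test above: > chains priority groups that are tried in order, while & and ! combine and negate rule tokens inside one group; the loop simply splits on > and parses each group on its own. A toy version of that outer split (token names are placeholders, and the group semantics are as understood here, not quoted from the parser):

    expression = "CNSUB & 4K & !REMUX > CNSUB & 1080P > 1080P"
    for priority, group in enumerate(expression.split(">"), start=1):
        tokens = [t.strip() for t in group.split("&")]
        required = [t for t in tokens if not t.startswith("!")]
        excluded = [t.lstrip("!") for t in tokens if t.startswith("!")]
        print(f"priority {priority}: require={required}, exclude={excluded}")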

View File

@@ -5,10 +5,11 @@ from app.core.config import settings
 from app.core.context import TorrentInfo
 from app.db.site_oper import SiteOper
 from app.helper.module import ModuleHelper
-from app.helper.sites import SitesHelper, SiteSpider
+from app.helper.sites import SitesHelper
 from app.log import logger
 from app.modules import _ModuleBase
 from app.modules.indexer.parser import SiteParserBase
+from app.modules.indexer.spider import SiteSpider
 from app.modules.indexer.spider.haidan import HaiDanSpider
 from app.modules.indexer.spider.hddolby import HddolbySpider
 from app.modules.indexer.spider.mtorrent import MTorrentSpider
@@ -286,26 +287,29 @@ class IndexerModule(_ModuleBase):
 return None
 # 获取用户数据
-logger.info(f"站点 {site.get('name')} 开始以 {site.get('schema')} 模型解析数据...")
-site_obj.parse()
-logger.debug(f"站点 {site.get('name')} 数据解析完成")
-return SiteUserData(
-domain=StringUtils.get_url_domain(site.get("url")),
-userid=site_obj.userid,
-username=site_obj.username,
-user_level=site_obj.user_level,
-join_at=site_obj.join_at,
-upload=site_obj.upload,
-download=site_obj.download,
-ratio=site_obj.ratio,
-bonus=site_obj.bonus,
-seeding=site_obj.seeding,
-seeding_size=site_obj.seeding_size,
-seeding_info=site_obj.seeding_info or [],
-leeching=site_obj.leeching,
-leeching_size=site_obj.leeching_size,
-message_unread=site_obj.message_unread,
-message_unread_contents=site_obj.message_unread_contents or [],
-updated_day=datetime.now().strftime('%Y-%m-%d'),
-err_msg=site_obj.err_msg
-)
+try:
+logger.info(f"站点 {site.get('name')} 开始以 {site.get('schema')} 模型解析数据...")
+site_obj.parse()
+logger.debug(f"站点 {site.get('name')} 数据解析完成")
+return SiteUserData(
+domain=StringUtils.get_url_domain(site.get("url")),
+userid=site_obj.userid,
+username=site_obj.username,
+user_level=site_obj.user_level,
+join_at=site_obj.join_at,
+upload=site_obj.upload,
+download=site_obj.download,
+ratio=site_obj.ratio,
+bonus=site_obj.bonus,
+seeding=site_obj.seeding,
+seeding_size=site_obj.seeding_size,
+seeding_info=site_obj.seeding_info.copy() if site_obj.seeding_info else [],
+leeching=site_obj.leeching,
+leeching_size=site_obj.leeching_size,
+message_unread=site_obj.message_unread,
+message_unread_contents=site_obj.message_unread_contents.copy() if site_obj.message_unread_contents else [],
+updated_day=datetime.now().strftime('%Y-%m-%d'),
+err_msg=site_obj.err_msg
+)
+finally:
+site_obj.clear()
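The .copy() calls added above only matter because of the new finally: site_obj.clear() — returning the parser's lists directly would hand the caller references that clear() empties a moment later. The failure mode in miniature:

    def without_copy() -> list:
        data = [1, 2, 3]
        try:
            return data          # caller receives this very list object...
        finally:
            data.clear()         # ...which is already empty when they look

    def with_copy() -> list:
        data = [1, 2, 3]
        try:
            return data.copy()   # snapshot taken before the cleanup runs
        finally:
            data.clear()

    print(without_copy())  # []
    print(with_copy())     # [1, 2, 3]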

View File

@@ -156,53 +156,57 @@ class SiteParserBase(metaclass=ABCMeta):
解析站点信息
:return:
"""
# Cookie模式时获取站点首页html
if self.request_mode == "apikey":
if not self.apikey and not self.token:
logger.warn(f"{self._site_name} 未设置cookie 或 apikey/token跳过后续操作")
return
self._index_html = {}
else:
# 检查是否已经登录
self._index_html = self._get_page_content(url=self._site_url)
if not self._parse_logged_in(self._index_html):
return
# 解析站点页面
self._parse_site_page(self._index_html)
# 解析用户基础信息
if self._user_basic_page:
self._parse_user_base_info(
self._get_page_content(
url=urljoin(self._base_url, self._user_basic_page),
params=self._user_basic_params,
headers=self._user_basic_headers
try:
# Cookie模式时获取站点首页html
if self.request_mode == "apikey":
if not self.apikey and not self.token:
logger.warn(f"{self._site_name} 未设置cookie 或 apikey/token跳过后续操作")
return
self._index_html = {}
else:
# 检查是否已经登录
self._index_html = self._get_page_content(url=self._site_url)
if not self._parse_logged_in(self._index_html):
return
# 解析站点页面
self._parse_site_page(self._index_html)
# 解析用户基础信息
if self._user_basic_page:
self._parse_user_base_info(
self._get_page_content(
url=urljoin(self._base_url, self._user_basic_page),
params=self._user_basic_params,
headers=self._user_basic_headers
)
)
)
else:
self._parse_user_base_info(self._index_html)
# 解析用户详细信息
if self._user_detail_page:
self._parse_user_detail_info(
self._get_page_content(
url=urljoin(self._base_url, self._user_detail_page),
params=self._user_detail_params,
headers=self._user_detail_headers
else:
self._parse_user_base_info(self._index_html)
# 解析用户详细信息
if self._user_detail_page:
self._parse_user_detail_info(
self._get_page_content(
url=urljoin(self._base_url, self._user_detail_page),
params=self._user_detail_params,
headers=self._user_detail_headers
)
)
)
# 解析用户未读消息
if settings.SITE_MESSAGE:
self._pase_unread_msgs()
# 解析用户上传、下载、分享率等信息
if self._user_traffic_page:
self._parse_user_traffic_info(
self._get_page_content(
url=urljoin(self._base_url, self._user_traffic_page),
params=self._user_traffic_params,
headers=self._user_traffic_headers
# 解析用户未读消息
if settings.SITE_MESSAGE:
self._pase_unread_msgs()
# 解析用户上传、下载、分享率等信息
if self._user_traffic_page:
self._parse_user_traffic_info(
self._get_page_content(
url=urljoin(self._base_url, self._user_traffic_page),
params=self._user_traffic_params,
headers=self._user_traffic_headers
)
)
)
# 解析用户做种信息
self._parse_seeding_pages()
# 解析用户做种信息
self._parse_seeding_pages()
finally:
# 关闭连接
self.close()
def _pase_unread_msgs(self):
"""
@@ -430,6 +434,22 @@ class SiteParserBase(metaclass=ABCMeta):
"""
pass
def close(self):
"""
关闭会话
"""
if self._session:
self._session.close()
self._session = None
def clear(self):
"""
清除当前解析器的所有信息
"""
self._index_html = ""
self.seeding_info.clear()
self.message_unread_contents.clear()
def to_dict(self):
"""
转化为字典
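Together the new methods give the parser an explicit lifecycle: parse() always releases its HTTP session via the finally added above, and the caller invokes clear() once it has copied what it needs, as the IndexerModule change earlier in this compare does. A self-contained stand-in with the same close()/clear() surface (illustrative only, not the real SiteParserBase):

    class DemoParser:
        def __init__(self):
            self.seeding = 0
            self.seeding_info: list = []
            self._session = object()       # stands in for an HTTP session

        def parse(self):
            try:
                self.seeding_info.append([1, 1024])  # pretend page parsing
                self.seeding = len(self.seeding_info)
            finally:
                self.close()               # always release the session, as above

        def close(self):
            self._session = None

        def clear(self):
            self.seeding_info.clear()

    parser = DemoParser()
    try:
        parser.parse()
        snapshot = parser.seeding_info.copy()  # copy before clear() empties it
    finally:
        parser.clear()
    print(parser.seeding, snapshot)  # 1 [[1, 1024]]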

View File

@@ -14,15 +14,18 @@ class DiscuzUserInfo(SiteParserBase):
def _parse_user_base_info(self, html_text: str):
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
user_info = html.xpath('//a[contains(@href, "&uid=")]')
if user_info:
user_id_match = re.search(r"&uid=(\d+)", user_info[0].attrib['href'])
if user_id_match and user_id_match.group().strip():
self.userid = user_id_match.group(1)
self._torrent_seeding_page = f"forum.php?&mod=torrents&cat_5up=on"
self._user_detail_page = user_info[0].attrib['href']
self.username = user_info[0].text.strip()
try:
user_info = html.xpath('//a[contains(@href, "&uid=")]')
if user_info:
user_id_match = re.search(r"&uid=(\d+)", user_info[0].attrib['href'])
if user_id_match and user_id_match.group().strip():
self.userid = user_id_match.group(1)
self._torrent_seeding_page = f"forum.php?&mod=torrents&cat_5up=on"
self._user_detail_page = user_info[0].attrib['href']
self.username = user_info[0].text.strip()
finally:
if html is not None:
del html
def _parse_site_page(self, html_text: str):
pass
@@ -34,40 +37,44 @@ class DiscuzUserInfo(SiteParserBase):
:return:
"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return None
try:
if not StringUtils.is_valid_html_element(html):
return None
# 用户等级
user_levels_text = html.xpath('//a[contains(@href, "usergroup")]/text()')
if user_levels_text:
self.user_level = user_levels_text[-1].strip()
# 用户等级
user_levels_text = html.xpath('//a[contains(@href, "usergroup")]/text()')
if user_levels_text:
self.user_level = user_levels_text[-1].strip()
# 加入日期
join_at_text = html.xpath('//li[em[text()="注册时间"]]/text()')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].strip())
# 加入日期
join_at_text = html.xpath('//li[em[text()="注册时间"]]/text()')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].strip())
# 分享率
ratio_text = html.xpath('//li[contains(.//text(), "分享率")]//text()')
if ratio_text:
ratio_match = re.search(r"\(([\d,.]+)\)", ratio_text[0])
if ratio_match and ratio_match.group(1).strip():
self.bonus = StringUtils.str_float(ratio_match.group(1))
# 分享率
ratio_text = html.xpath('//li[contains(.//text(), "分享率")]//text()')
if ratio_text:
ratio_match = re.search(r"\(([\d,.]+)\)", ratio_text[0])
if ratio_match and ratio_match.group(1).strip():
self.bonus = StringUtils.str_float(ratio_match.group(1))
# 积分
bouns_text = html.xpath('//li[em[text()="积分"]]/text()')
if bouns_text:
self.bonus = StringUtils.str_float(bouns_text[0].strip())
# 积分
bouns_text = html.xpath('//li[em[text()="积分"]]/text()')
if bouns_text:
self.bonus = StringUtils.str_float(bouns_text[0].strip())
# 上传
upload_text = html.xpath('//li[em[contains(text(),"上传量")]]/text()')
if upload_text:
self.upload = StringUtils.num_filesize(upload_text[0].strip().split('/')[-1])
# 上传
upload_text = html.xpath('//li[em[contains(text(),"上传量")]]/text()')
if upload_text:
self.upload = StringUtils.num_filesize(upload_text[0].strip().split('/')[-1])
# 下载
download_text = html.xpath('//li[em[contains(text(),"下载量")]]/text()')
if download_text:
self.download = StringUtils.num_filesize(download_text[0].strip().split('/')[-1])
# 下载
download_text = html.xpath('//li[em[contains(text(),"下载量")]]/text()')
if download_text:
self.download = StringUtils.num_filesize(download_text[0].strip().split('/')[-1])
finally:
if html is not None:
del html
def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: bool = False) -> Optional[str]:
"""
@@ -77,44 +84,48 @@ class DiscuzUserInfo(SiteParserBase):
:return: 下页地址
"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return None
try:
if not StringUtils.is_valid_html_element(html):
return None
size_col = 3
seeders_col = 4
# 搜索size列
if html.xpath('//tr[position()=1]/td[.//img[@class="size"] and .//img[@alt="size"]]'):
size_col = len(html.xpath('//tr[position()=1]/td[.//img[@class="size"] '
'and .//img[@alt="size"]]/preceding-sibling::td')) + 1
# 搜索seeders列
if html.xpath('//tr[position()=1]/td[.//img[@class="seeders"] and .//img[@alt="seeders"]]'):
seeders_col = len(html.xpath('//tr[position()=1]/td[.//img[@class="seeders"] '
'and .//img[@alt="seeders"]]/preceding-sibling::td')) + 1
size_col = 3
seeders_col = 4
# 搜索size列
if html.xpath('//tr[position()=1]/td[.//img[@class="size"] and .//img[@alt="size"]]'):
size_col = len(html.xpath('//tr[position()=1]/td[.//img[@class="size"] '
'and .//img[@alt="size"]]/preceding-sibling::td')) + 1
# 搜索seeders列
if html.xpath('//tr[position()=1]/td[.//img[@class="seeders"] and .//img[@alt="seeders"]]'):
seeders_col = len(html.xpath('//tr[position()=1]/td[.//img[@class="seeders"] '
'and .//img[@alt="seeders"]]/preceding-sibling::td')) + 1
page_seeding = 0
page_seeding_size = 0
page_seeding_info = []
seeding_sizes = html.xpath(f'//tr[position()>1]/td[{size_col}]')
seeding_seeders = html.xpath(f'//tr[position()>1]/td[{seeders_col}]//text()')
if seeding_sizes and seeding_seeders:
page_seeding = len(seeding_sizes)
page_seeding = 0
page_seeding_size = 0
page_seeding_info = []
seeding_sizes = html.xpath(f'//tr[position()>1]/td[{size_col}]')
seeding_seeders = html.xpath(f'//tr[position()>1]/td[{seeders_col}]//text()')
if seeding_sizes and seeding_seeders:
page_seeding = len(seeding_sizes)
for i in range(0, len(seeding_sizes)):
size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
seeders = StringUtils.str_int(seeding_seeders[i])
for i in range(0, len(seeding_sizes)):
size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
seeders = StringUtils.str_int(seeding_seeders[i])
page_seeding_size += size
page_seeding_info.append([seeders, size])
page_seeding_size += size
page_seeding_info.append([seeders, size])
self.seeding += page_seeding
self.seeding_size += page_seeding_size
self.seeding_info.extend(page_seeding_info)
self.seeding += page_seeding
self.seeding_size += page_seeding_size
self.seeding_info.extend(page_seeding_info)
# 是否存在下页数据
next_page = None
next_page_text = html.xpath('//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁")]/@href')
if next_page_text:
next_page = next_page_text[-1].strip()
# 是否存在下页数据
next_page = None
next_page_text = html.xpath('//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁")]/@href')
if next_page_text:
next_page = next_page_text[-1].strip()
finally:
if html is not None:
del html
return next_page
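Every parser file in this change wraps its lxml work in try/finally with del html. etree.HTML() materializes the entire document as a C-level tree, and an exception traceback can keep the frame (and with it the local) alive, so unbinding the name in the finally lets large pages be reclaimed promptly. The shape of the pattern (a sketch; the None guard mirrors the is_valid_html_element checks above):

    from lxml import etree

    def extract_links(html_text: str) -> list:
        html = etree.HTML(html_text)        # full in-memory document tree
        try:
            if html is None:                # guard against a failed parse
                return []
            return html.xpath("//a/@href")  # evaluated before finally runs
        finally:
            if html is not None:
                del html                    # drop the reference so the tree can be freed now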

View File

@@ -24,10 +24,13 @@ class FileListSiteUserInfo(SiteParserBase):
def _parse_user_base_info(self, html_text: str):
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
ret = html.xpath(f'//a[contains(@href, "userdetails") and contains(@href, "{self.userid}")]//text()')
if ret:
self.username = str(ret[0])
try:
ret = html.xpath(f'//a[contains(@href, "userdetails") and contains(@href, "{self.userid}")]//text()')
if ret:
self.username = str(ret[0])
finally:
if html is not None:
del html
def _parse_user_traffic_info(self, html_text: str):
"""
@@ -40,39 +43,41 @@ class FileListSiteUserInfo(SiteParserBase):
def _parse_user_detail_info(self, html_text: str):
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
try:
upload_html = html.xpath('//table//tr/td[text()="Uploaded"]/following-sibling::td//text()')
if upload_html:
self.upload = StringUtils.num_filesize(upload_html[0])
download_html = html.xpath('//table//tr/td[text()="Downloaded"]/following-sibling::td//text()')
if download_html:
self.download = StringUtils.num_filesize(download_html[0])
upload_html = html.xpath('//table//tr/td[text()="Uploaded"]/following-sibling::td//text()')
if upload_html:
self.upload = StringUtils.num_filesize(upload_html[0])
download_html = html.xpath('//table//tr/td[text()="Downloaded"]/following-sibling::td//text()')
if download_html:
self.download = StringUtils.num_filesize(download_html[0])
ratio_html = html.xpath('//table//tr/td[text()="Share ratio"]/following-sibling::td//text()')
if ratio_html:
share_ratio = StringUtils.str_float(ratio_html[0])
else:
share_ratio = 0
self.ratio = 0 if self.download == 0 else share_ratio
ratio_html = html.xpath('//table//tr/td[text()="Share ratio"]/following-sibling::td//text()')
if ratio_html:
share_ratio = StringUtils.str_float(ratio_html[0])
else:
share_ratio = 0
self.ratio = 0 if self.download == 0 else share_ratio
seed_html = html.xpath('//table//tr/td[text()="Seed bonus"]/following-sibling::td//text()')
if seed_html:
self.seeding = StringUtils.str_int(seed_html[1])
self.seeding_size = StringUtils.num_filesize(seed_html[3])
seed_html = html.xpath('//table//tr/td[text()="Seed bonus"]/following-sibling::td//text()')
if seed_html:
self.seeding = StringUtils.str_int(seed_html[1])
self.seeding_size = StringUtils.num_filesize(seed_html[3])
user_level_html = html.xpath('//table//tr/td[text()="Class"]/following-sibling::td//text()')
if user_level_html:
self.user_level = user_level_html[0].strip()
user_level_html = html.xpath('//table//tr/td[text()="Class"]/following-sibling::td//text()')
if user_level_html:
self.user_level = user_level_html[0].strip()
join_at_html = html.xpath('//table//tr/td[contains(text(), "Join")]/following-sibling::td//text()')
if join_at_html:
join_at = (join_at_html[0].split("("))[0].strip()
self.join_at = StringUtils.unify_datetime_str(join_at)
join_at_html = html.xpath('//table//tr/td[contains(text(), "Join")]/following-sibling::td//text()')
if join_at_html:
join_at = (join_at_html[0].split("("))[0].strip()
self.join_at = StringUtils.unify_datetime_str(join_at)
bonus_html = html.xpath('//a[contains(@href, "shop.php")]')
if bonus_html:
self.bonus = StringUtils.str_float(bonus_html[0].xpath("string(.)").strip())
pass
bonus_html = html.xpath('//a[contains(@href, "shop.php")]')
if bonus_html:
self.bonus = StringUtils.str_float(bonus_html[0].xpath("string(.)").strip())
finally:
if html is not None:
del html
def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: Optional[bool] = False) -> Optional[str]:
"""
@@ -82,28 +87,32 @@ class FileListSiteUserInfo(SiteParserBase):
:return: 下页地址
"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return None
try:
if not StringUtils.is_valid_html_element(html):
return None
size_col = 6
seeders_col = 7
size_col = 6
seeders_col = 7
page_seeding_size = 0
page_seeding_info = []
seeding_sizes = html.xpath(f'//table/tr[position()>1]/td[{size_col}]')
seeding_seeders = html.xpath(f'//table/tr[position()>1]/td[{seeders_col}]')
if seeding_sizes and seeding_seeders:
for i in range(0, len(seeding_sizes)):
size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
seeders = StringUtils.str_int(seeding_seeders[i].xpath("string(.)").strip())
page_seeding_size = 0
page_seeding_info = []
seeding_sizes = html.xpath(f'//table/tr[position()>1]/td[{size_col}]')
seeding_seeders = html.xpath(f'//table/tr[position()>1]/td[{seeders_col}]')
if seeding_sizes and seeding_seeders:
for i in range(0, len(seeding_sizes)):
size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
seeders = StringUtils.str_int(seeding_seeders[i].xpath("string(.)").strip())
page_seeding_size += size
page_seeding_info.append([seeders, size])
page_seeding_size += size
page_seeding_info.append([seeders, size])
self.seeding_info.extend(page_seeding_info)
self.seeding_info.extend(page_seeding_info)
# 是否存在下页数据
next_page = None
# 是否存在下页数据
next_page = None
finally:
if html is not None:
del html
return next_page

View File

@@ -14,46 +14,49 @@ class GazelleSiteUserInfo(SiteParserBase):
def _parse_user_base_info(self, html_text: str):
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
try:
tmps = html.xpath('//a[contains(@href, "user.php?id=")]')
if tmps:
user_id_match = re.search(r"user.php\?id=(\d+)", tmps[0].attrib['href'])
if user_id_match and user_id_match.group().strip():
self.userid = user_id_match.group(1)
self._torrent_seeding_page = f"torrents.php?type=seeding&userid={self.userid}"
self._user_detail_page = f"user.php?id={self.userid}"
self.username = tmps[0].text.strip()
tmps = html.xpath('//a[contains(@href, "user.php?id=")]')
if tmps:
user_id_match = re.search(r"user.php\?id=(\d+)", tmps[0].attrib['href'])
if user_id_match and user_id_match.group().strip():
self.userid = user_id_match.group(1)
self._torrent_seeding_page = f"torrents.php?type=seeding&userid={self.userid}"
self._user_detail_page = f"user.php?id={self.userid}"
self.username = tmps[0].text.strip()
tmps = html.xpath('//*[@id="header-uploaded-value"]/@data-value')
if tmps:
self.upload = StringUtils.num_filesize(tmps[0])
else:
tmps = html.xpath('//li[@id="stats_seeding"]/span/text()')
tmps = html.xpath('//*[@id="header-uploaded-value"]/@data-value')
if tmps:
self.upload = StringUtils.num_filesize(tmps[0])
else:
tmps = html.xpath('//li[@id="stats_seeding"]/span/text()')
if tmps:
self.upload = StringUtils.num_filesize(tmps[0])
tmps = html.xpath('//*[@id="header-downloaded-value"]/@data-value')
if tmps:
self.download = StringUtils.num_filesize(tmps[0])
else:
tmps = html.xpath('//li[@id="stats_leeching"]/span/text()')
tmps = html.xpath('//*[@id="header-downloaded-value"]/@data-value')
if tmps:
self.download = StringUtils.num_filesize(tmps[0])
else:
tmps = html.xpath('//li[@id="stats_leeching"]/span/text()')
if tmps:
self.download = StringUtils.num_filesize(tmps[0])
self.ratio = 0.0 if self.download <= 0.0 else round(self.upload / self.download, 3)
self.ratio = 0.0 if self.download <= 0.0 else round(self.upload / self.download, 3)
tmps = html.xpath('//a[contains(@href, "bonus.php")]/@data-tooltip')
if tmps:
bonus_match = re.search(r"([\d,.]+)", tmps[0])
if bonus_match and bonus_match.group(1).strip():
self.bonus = StringUtils.str_float(bonus_match.group(1))
else:
tmps = html.xpath('//a[contains(@href, "bonus.php")]')
tmps = html.xpath('//a[contains(@href, "bonus.php")]/@data-tooltip')
if tmps:
bonus_text = tmps[0].xpath("string(.)")
bonus_match = re.search(r"([\d,.]+)", bonus_text)
bonus_match = re.search(r"([\d,.]+)", tmps[0])
if bonus_match and bonus_match.group(1).strip():
self.bonus = StringUtils.str_float(bonus_match.group(1))
else:
tmps = html.xpath('//a[contains(@href, "bonus.php")]')
if tmps:
bonus_text = tmps[0].xpath("string(.)")
bonus_match = re.search(r"([\d,.]+)", bonus_text)
if bonus_match and bonus_match.group(1).strip():
self.bonus = StringUtils.str_float(bonus_match.group(1))
finally:
if html is not None:
del html
def _parse_site_page(self, html_text: str):
pass
@@ -65,27 +68,31 @@ class GazelleSiteUserInfo(SiteParserBase):
:return:
"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return None
try:
if not StringUtils.is_valid_html_element(html):
return None
# 用户等级
user_levels_text = html.xpath('//*[@id="class-value"]/@data-value')
if user_levels_text:
self.user_level = user_levels_text[0].strip()
else:
user_levels_text = html.xpath('//li[contains(text(), "用户等级")]/text()')
# 用户等级
user_levels_text = html.xpath('//*[@id="class-value"]/@data-value')
if user_levels_text:
self.user_level = user_levels_text[0].split(':')[1].strip()
self.user_level = user_levels_text[0].strip()
else:
user_levels_text = html.xpath('//li[contains(text(), "用户等级")]/text()')
if user_levels_text:
self.user_level = user_levels_text[0].split(':')[1].strip()
# 加入日期
join_at_text = html.xpath('//*[@id="join-date-value"]/@data-value')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].strip())
else:
join_at_text = html.xpath(
'//div[contains(@class, "box_userinfo_stats")]//li[contains(text(), "加入时间")]/span/text()')
# 加入日期
join_at_text = html.xpath('//*[@id="join-date-value"]/@data-value')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].strip())
else:
join_at_text = html.xpath(
'//div[contains(@class, "box_userinfo_stats")]//li[contains(text(), "加入时间")]/span/text()')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].strip())
finally:
if html is not None:
del html
def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: Optional[bool] = False) -> Optional[str]:
"""
@@ -95,48 +102,52 @@ class GazelleSiteUserInfo(SiteParserBase):
:return: 下页地址
"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return None
try:
if not StringUtils.is_valid_html_element(html):
return None
size_col = 3
# 搜索size列
if html.xpath('//table[contains(@id, "torrent")]//tr[1]/td'):
size_col = len(html.xpath('//table[contains(@id, "torrent")]//tr[1]/td')) - 3
# 搜索seeders列
seeders_col = size_col + 2
size_col = 3
# 搜索size列
if html.xpath('//table[contains(@id, "torrent")]//tr[1]/td'):
size_col = len(html.xpath('//table[contains(@id, "torrent")]//tr[1]/td')) - 3
# 搜索seeders列
seeders_col = size_col + 2
page_seeding = 0
page_seeding_size = 0
page_seeding_info = []
seeding_sizes = html.xpath(f'//table[contains(@id, "torrent")]//tr[position()>1]/td[{size_col}]')
seeding_seeders = html.xpath(f'//table[contains(@id, "torrent")]//tr[position()>1]/td[{seeders_col}]/text()')
if seeding_sizes and seeding_seeders:
page_seeding = len(seeding_sizes)
page_seeding = 0
page_seeding_size = 0
page_seeding_info = []
seeding_sizes = html.xpath(f'//table[contains(@id, "torrent")]//tr[position()>1]/td[{size_col}]')
seeding_seeders = html.xpath(f'//table[contains(@id, "torrent")]//tr[position()>1]/td[{seeders_col}]/text()')
if seeding_sizes and seeding_seeders:
page_seeding = len(seeding_sizes)
for i in range(0, len(seeding_sizes)):
size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
seeders = int(seeding_seeders[i])
for i in range(0, len(seeding_sizes)):
size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
seeders = int(seeding_seeders[i])
page_seeding_size += size
page_seeding_info.append([seeders, size])
page_seeding_size += size
page_seeding_info.append([seeders, size])
if multi_page:
self.seeding += page_seeding
self.seeding_size += page_seeding_size
self.seeding_info.extend(page_seeding_info)
else:
if not self.seeding:
self.seeding = page_seeding
if not self.seeding_size:
self.seeding_size = page_seeding_size
if not self.seeding_info:
self.seeding_info = page_seeding_info
if multi_page:
self.seeding += page_seeding
self.seeding_size += page_seeding_size
self.seeding_info.extend(page_seeding_info)
else:
if not self.seeding:
self.seeding = page_seeding
if not self.seeding_size:
self.seeding_size = page_seeding_size
if not self.seeding_info:
self.seeding_info = page_seeding_info
# 是否存在下页数据
next_page = None
next_page_text = html.xpath('//a[contains(.//text(), "Next") or contains(.//text(), "下一页")]/@href')
if next_page_text:
next_page = next_page_text[-1].strip()
# 是否存在下页数据
next_page = None
next_page_text = html.xpath('//a[contains(.//text(), "Next") or contains(.//text(), "下一页")]/@href')
if next_page_text:
next_page = next_page_text[-1].strip()
finally:
if html is not None:
del html
return next_page

View File

@@ -14,67 +14,79 @@ class IptSiteUserInfo(SiteParserBase):
def _parse_user_base_info(self, html_text: str):
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
tmps = html.xpath('//a[contains(@href, "/u/")]//text()')
tmps_id = html.xpath('//a[contains(@href, "/u/")]/@href')
if tmps:
self.username = str(tmps[-1])
if tmps_id:
user_id_match = re.search(r"/u/(\d+)", tmps_id[0])
if user_id_match and user_id_match.group().strip():
self.userid = user_id_match.group(1)
self._user_detail_page = f"user.php?u={self.userid}"
self._torrent_seeding_page = f"peers?u={self.userid}"
try:
tmps = html.xpath('//a[contains(@href, "/u/")]//text()')
tmps_id = html.xpath('//a[contains(@href, "/u/")]/@href')
if tmps:
self.username = str(tmps[-1])
if tmps_id:
user_id_match = re.search(r"/u/(\d+)", tmps_id[0])
if user_id_match and user_id_match.group().strip():
self.userid = user_id_match.group(1)
self._user_detail_page = f"user.php?u={self.userid}"
self._torrent_seeding_page = f"peers?u={self.userid}"
tmps = html.xpath('//div[@class = "stats"]/div/div')
if tmps:
self.upload = StringUtils.num_filesize(str(tmps[0].xpath('span/text()')[1]).strip())
self.download = StringUtils.num_filesize(str(tmps[0].xpath('span/text()')[2]).strip())
self.seeding = StringUtils.str_int(tmps[0].xpath('a')[2].xpath('text()')[0])
self.leeching = StringUtils.str_int(tmps[0].xpath('a')[2].xpath('text()')[1])
self.ratio = StringUtils.str_float(str(tmps[0].xpath('span/text()')[0]).strip().replace('-', '0'))
self.bonus = StringUtils.str_float(tmps[0].xpath('a')[3].xpath('text()')[0])
tmps = html.xpath('//div[@class = "stats"]/div/div')
if tmps:
self.upload = StringUtils.num_filesize(str(tmps[0].xpath('span/text()')[1]).strip())
self.download = StringUtils.num_filesize(str(tmps[0].xpath('span/text()')[2]).strip())
self.seeding = StringUtils.str_int(tmps[0].xpath('a')[2].xpath('text()')[0])
self.leeching = StringUtils.str_int(tmps[0].xpath('a')[2].xpath('text()')[1])
self.ratio = StringUtils.str_float(str(tmps[0].xpath('span/text()')[0]).strip().replace('-', '0'))
self.bonus = StringUtils.str_float(tmps[0].xpath('a')[3].xpath('text()')[0])
finally:
if html is not None:
del html
def _parse_site_page(self, html_text: str):
pass
def _parse_user_detail_info(self, html_text: str):
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
try:
if not StringUtils.is_valid_html_element(html):
return
user_levels_text = html.xpath('//tr/th[text()="Class"]/following-sibling::td[1]/text()')
if user_levels_text:
self.user_level = user_levels_text[0].strip()
user_levels_text = html.xpath('//tr/th[text()="Class"]/following-sibling::td[1]/text()')
if user_levels_text:
self.user_level = user_levels_text[0].strip()
# 加入日期
join_at_text = html.xpath('//tr/th[text()="Join date"]/following-sibling::td[1]/text()')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0])
# 加入日期
join_at_text = html.xpath('//tr/th[text()="Join date"]/following-sibling::td[1]/text()')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0])
finally:
if html is not None:
del html
def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: bool = False) -> Optional[str]:
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return None
# seeding start
seeding_end_pos = 3
if html.xpath('//tr/td[text() = "Leechers"]'):
seeding_end_pos = len(html.xpath('//tr/td[text() = "Leechers"]/../preceding-sibling::tr')) + 1
seeding_end_pos = seeding_end_pos - 3
try:
if not StringUtils.is_valid_html_element(html):
return None
# seeding start
seeding_end_pos = 3
if html.xpath('//tr/td[text() = "Leechers"]'):
seeding_end_pos = len(html.xpath('//tr/td[text() = "Leechers"]/../preceding-sibling::tr')) + 1
seeding_end_pos = seeding_end_pos - 3
page_seeding = 0
page_seeding_size = 0
seeding_torrents = html.xpath('//tr/td[text() = "Seeders"]/../following-sibling::tr/td[position()=6]/text()')
if seeding_torrents:
page_seeding = seeding_end_pos
for per_size in seeding_torrents[:seeding_end_pos]:
if '(' in per_size and ')' in per_size:
per_size = per_size.split('(')[-1]
per_size = per_size.split(')')[0]
page_seeding = 0
page_seeding_size = 0
seeding_torrents = html.xpath('//tr/td[text() = "Seeders"]/../following-sibling::tr/td[position()=6]/text()')
if seeding_torrents:
page_seeding = seeding_end_pos
for per_size in seeding_torrents[:seeding_end_pos]:
if '(' in per_size and ')' in per_size:
per_size = per_size.split('(')[-1]
per_size = per_size.split(')')[0]
page_seeding_size += StringUtils.num_filesize(per_size)
page_seeding_size += StringUtils.num_filesize(per_size)
self.seeding = page_seeding
self.seeding_size = page_seeding_size
self.seeding = page_seeding
self.seeding_size = page_seeding_size
finally:
if html is not None:
del html
def _parse_user_traffic_info(self, html_text: str):
pass

View File

@@ -23,12 +23,16 @@ class NexusAudiencesSiteUserInfo(NexusPhpSiteUserInfo):
 if not html_text:
 return
 html = etree.HTML(html_text)
-if not StringUtils.is_valid_html_element(html):
-return
-total_row = html.xpath('//table[@class="table table-bordered"]//tr[td[1][normalize-space()="Total"]]')
-if not total_row:
-return
-seeding_count = total_row[0].xpath('./td[2]/text()')
-seeding_size = total_row[0].xpath('./td[3]/text()')
-self.seeding = StringUtils.str_int(seeding_count[0]) if seeding_count else 0
-self.seeding_size = StringUtils.num_filesize(seeding_size[0].strip()) if seeding_size else 0
+try:
+if not StringUtils.is_valid_html_element(html):
+return
+total_row = html.xpath('//table[@class="table table-bordered"]//tr[td[1][normalize-space()="Total"]]')
+if not total_row:
+return
+seeding_count = total_row[0].xpath('./td[2]/text()')
+seeding_size = total_row[0].xpath('./td[3]/text()')
+self.seeding = StringUtils.str_int(seeding_count[0]) if seeding_count else 0
+self.seeding_size = StringUtils.num_filesize(seeding_size[0].strip()) if seeding_size else 0
+finally:
+if html is not None:
+del html

View File

@@ -17,21 +17,25 @@ class NexusHhanclubSiteUserInfo(NexusPhpSiteUserInfo):
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
# 上传、下载、分享率
upload_match = re.search(r"[_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+[KMGTPI]*B)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[2]/div[4]/text()')[0])
download_match = re.search(r"[_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+[KMGTPI]*B)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[2]/div[5]/text()')[0])
ratio_match = re.search(r"分享率][:_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[1]/div[1]/div/text()')[0])
try:
# 上传、下载、分享率
upload_match = re.search(r"[_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+[KMGTPI]*B)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[2]/div[4]/text()')[0])
download_match = re.search(r"[_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+[KMGTPI]*B)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[2]/div[5]/text()')[0])
ratio_match = re.search(r"分享率][:_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[1]/div[1]/div/text()')[0])
# 计算分享率
self.upload = StringUtils.num_filesize(upload_match.group(1).strip()) if upload_match else 0
self.download = StringUtils.num_filesize(download_match.group(1).strip()) if download_match else 0
# 优先使用页面上的分享率
calc_ratio = 0.0 if self.download <= 0.0 else round(self.upload / self.download, 3)
self.ratio = StringUtils.str_float(ratio_match.group(1)) if (
ratio_match and ratio_match.group(1).strip()) else calc_ratio
# 计算分享率
self.upload = StringUtils.num_filesize(upload_match.group(1).strip()) if upload_match else 0
self.download = StringUtils.num_filesize(download_match.group(1).strip()) if download_match else 0
# 优先使用页面上的分享率
calc_ratio = 0.0 if self.download <= 0.0 else round(self.upload / self.download, 3)
self.ratio = StringUtils.str_float(ratio_match.group(1)) if (
ratio_match and ratio_match.group(1).strip()) else calc_ratio
finally:
if html is not None:
del html
def _parse_user_detail_info(self, html_text: str):
"""
@@ -42,12 +46,16 @@ class NexusHhanclubSiteUserInfo(NexusPhpSiteUserInfo):
super()._parse_user_detail_info(html_text)
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
# 加入时间
join_at_text = html.xpath('//*[@id="mainContent"]/div/div[2]/div[4]/div[3]/span[2]/text()[1]')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0].strip())
try:
if not StringUtils.is_valid_html_element(html):
return
# 加入时间
join_at_text = html.xpath('//*[@id="mainContent"]/div/div[2]/div[4]/div[3]/span[2]/text()[1]')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0].strip())
finally:
if html is not None:
del html
def _get_user_level(self, html):
super()._get_user_level(html)

View File

@@ -34,21 +34,25 @@ class NexusPhpSiteUserInfo(SiteParserBase):
         :return:
         """
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return
-        message_labels = html.xpath('//a[@href="messages.php"]/..')
-        message_labels.extend(html.xpath('//a[contains(@href, "messages.php")]/..'))
-        if message_labels:
-            message_text = message_labels[0].xpath("string(.)")
-            logger.debug(f"{self._site_name} 消息原始信息 {message_text}")
-            message_unread_match = re.findall(r"[^Date](信息箱\s*|\((?![^)]*:)|你有\xa0)(\d+)", message_text)
-            if message_unread_match and len(message_unread_match[-1]) == 2:
-                self.message_unread = StringUtils.str_int(message_unread_match[-1][1])
-            elif message_text.isdigit():
-                self.message_unread = StringUtils.str_int(message_text)
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return
+            message_labels = html.xpath('//a[@href="messages.php"]/..')
+            message_labels.extend(html.xpath('//a[contains(@href, "messages.php")]/..'))
+            if message_labels:
+                message_text = message_labels[0].xpath("string(.)")
+                logger.debug(f"{self._site_name} 消息原始信息 {message_text}")
+                message_unread_match = re.findall(r"[^Date](信息箱\s*|\((?![^)]*:)|你有\xa0)(\d+)", message_text)
+                if message_unread_match and len(message_unread_match[-1]) == 2:
+                    self.message_unread = StringUtils.str_int(message_unread_match[-1][1])
+                elif message_text.isdigit():
+                    self.message_unread = StringUtils.str_int(message_text)
+        finally:
+            if html is not None:
+                del html

     def _parse_user_base_info(self, html_text: str):
         """
@@ -61,18 +65,23 @@ class NexusPhpSiteUserInfo(SiteParserBase):
         self._parse_message_unread(html_text)
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return
-        ret = html.xpath(f'//a[contains(@href, "userdetails") and contains(@href, "{self.userid}")]//b//text()')
-        if ret:
-            self.username = str(ret[0])
-            return
-        ret = html.xpath(f'//a[contains(@href, "userdetails") and contains(@href, "{self.userid}")]//text()')
-        if ret:
-            self.username = str(ret[0])
-        ret = html.xpath('//a[contains(@href, "userdetails")]//strong//text()')
-        if ret:
-            self.username = str(ret[0])
-            return
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return
+            ret = html.xpath(f'//a[contains(@href, "userdetails") and contains(@href, "{self.userid}")]//b//text()')
+            if ret:
+                self.username = str(ret[0])
+                return
+            ret = html.xpath(f'//a[contains(@href, "userdetails") and contains(@href, "{self.userid}")]//text()')
+            if ret:
+                self.username = str(ret[0])
+            ret = html.xpath('//a[contains(@href, "userdetails")]//strong//text()')
+            if ret:
+                self.username = str(ret[0])
+                return
+        finally:
+            if html is not None:
+                del html
@@ -98,28 +107,32 @@ class NexusPhpSiteUserInfo(SiteParserBase):
         self.leeching = StringUtils.str_int(leeching_match.group(2)) if leeching_match and leeching_match.group(
             2).strip() else 0
         html = etree.HTML(html_text)
-        has_ucoin, self.bonus = self._parse_ucoin(html)
-        if has_ucoin:
-            return
-        tmps = html.xpath('//a[contains(@href,"mybonus")]/text()') if html else None
-        if tmps:
-            bonus_text = str(tmps[0]).strip()
-            bonus_match = re.search(r"([\d,.]+)", bonus_text)
-            if bonus_match and bonus_match.group(1).strip():
-                self.bonus = StringUtils.str_float(bonus_match.group(1))
-                return
-        bonus_match = re.search(r"mybonus.[\[\]:<>/a-zA-Z_\-=\"'\s#;.(使用魔力值豆]+\s*([\d,.]+)[<()&\s]", html_text)
-        try:
-            if bonus_match and bonus_match.group(1).strip():
-                self.bonus = StringUtils.str_float(bonus_match.group(1))
-                return
-            bonus_match = re.search(r"[魔力值|\]][\[\]:<>/a-zA-Z_\-=\"'\s#;]+\s*([\d,.]+|\"[\d,.]+\")[<>()&\s]",
-                                    html_text,
-                                    flags=re.S)
-            if bonus_match and bonus_match.group(1).strip():
-                self.bonus = StringUtils.str_float(bonus_match.group(1).strip('"'))
-        except Exception as err:
-            logger.error(f"{self._site_name} 解析魔力值出错, 错误信息: {str(err)}")
+        try:
+            has_ucoin, self.bonus = self._parse_ucoin(html)
+            if has_ucoin:
+                return
+            tmps = html.xpath('//a[contains(@href,"mybonus")]/text()') if html else None
+            if tmps:
+                bonus_text = str(tmps[0]).strip()
+                bonus_match = re.search(r"([\d,.]+)", bonus_text)
+                if bonus_match and bonus_match.group(1).strip():
+                    self.bonus = StringUtils.str_float(bonus_match.group(1))
+                    return
+            bonus_match = re.search(r"mybonus.[\[\]:<>/a-zA-Z_\-=\"'\s#;.(使用魔力值豆]+\s*([\d,.]+)[<()&\s]", html_text)
+            try:
+                if bonus_match and bonus_match.group(1).strip():
+                    self.bonus = StringUtils.str_float(bonus_match.group(1))
+                    return
+                bonus_match = re.search(r"[魔力值|\]][\[\]:<>/a-zA-Z_\-=\"'\s#;]+\s*([\d,.]+|\"[\d,.]+\")[<>()&\s]",
+                                        html_text,
+                                        flags=re.S)
+                if bonus_match and bonus_match.group(1).strip():
+                    self.bonus = StringUtils.str_float(bonus_match.group(1).strip('"'))
+            except Exception as err:
+                logger.error(f"{self._site_name} 解析魔力值出错, 错误信息: {str(err)}")
+        finally:
+            if html is not None:
+                del html

     @staticmethod
     def _parse_ucoin(html):
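
The bonus hunk keeps three strategies in order: the structured _parse_ucoin widget, the text of the mybonus link, then progressively looser regexes over the raw markup, with the whole chain now guarded by the outer try/finally. A condensed sketch of that fallback chain (the helper names and regexes here are simplified placeholders, not the project's own):

    import re
    from typing import Optional

    from lxml import etree


    def parse_bonus(html_text: str) -> Optional[float]:
        """Try the mybonus link text first, then sweep the raw markup."""
        html = etree.HTML(html_text)
        try:
            if html is not None:
                link_text = html.xpath('//a[contains(@href, "mybonus")]/text()')
                if link_text:
                    m = re.search(r"([\d,.]+)", str(link_text[0]))
                    if m:
                        return float(m.group(1).replace(",", ""))
            # last resort: a loose pass over the page source itself
            m = re.search(r"mybonus[^\d]*(\d[\d,.]*)", html_text)
            return float(m.group(1).replace(",", "")) if m else None
        finally:
            if html is not None:
                del html

Each fallback only runs if the previous one produced nothing, mirroring the early-return style of the original.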
@@ -155,72 +168,76 @@ class NexusPhpSiteUserInfo(SiteParserBase):
         :return: next page URL
         """
         html = etree.HTML(str(html_text).replace(r'\/', '/'))
-        if not StringUtils.is_valid_html_element(html):
-            return None
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None

-        # the front page may expose an extended seeding link; use it when present
-        seeding_url_text = html.xpath('//a[contains(@href,"torrents.php") '
-                                      'and contains(@href,"seeding")]/@href')
-        if multi_page is False and seeding_url_text and seeding_url_text[0].strip():
-            self._torrent_seeding_page = seeding_url_text[0].strip()
-            return self._torrent_seeding_page
+            # the front page may expose an extended seeding link; use it when present
+            seeding_url_text = html.xpath('//a[contains(@href,"torrents.php") '
+                                          'and contains(@href,"seeding")]/@href')
+            if multi_page is False and seeding_url_text and seeding_url_text[0].strip():
+                self._torrent_seeding_page = seeding_url_text[0].strip()
+                return self._torrent_seeding_page

-        size_col = 3
-        seeders_col = 4
-        # locate the size column
-        size_col_xpath = '//tr[position()=1]/' \
-                         'td[(img[@class="size"] and img[@alt="size"])' \
-                         ' or (text() = "大小")' \
-                         ' or (a/img[@class="size" and @alt="size"])]'
-        if html.xpath(size_col_xpath):
-            size_col = len(html.xpath(f'{size_col_xpath}/preceding-sibling::td')) + 1
-        # locate the seeders column
-        seeders_col_xpath = '//tr[position()=1]/' \
-                            'td[(img[@class="seeders"] and img[@alt="seeders"])' \
-                            ' or (text() = "在做种")' \
-                            ' or (a/img[@class="seeders" and @alt="seeders"])]'
-        if html.xpath(seeders_col_xpath):
-            seeders_col = len(html.xpath(f'{seeders_col_xpath}/preceding-sibling::td')) + 1
+            size_col = 3
+            seeders_col = 4
+            # locate the size column
+            size_col_xpath = '//tr[position()=1]/' \
+                             'td[(img[@class="size"] and img[@alt="size"])' \
+                             ' or (text() = "大小")' \
+                             ' or (a/img[@class="size" and @alt="size"])]'
+            if html.xpath(size_col_xpath):
+                size_col = len(html.xpath(f'{size_col_xpath}/preceding-sibling::td')) + 1
+            # locate the seeders column
+            seeders_col_xpath = '//tr[position()=1]/' \
+                                'td[(img[@class="seeders"] and img[@alt="seeders"])' \
+                                ' or (text() = "在做种")' \
+                                ' or (a/img[@class="seeders" and @alt="seeders"])]'
+            if html.xpath(seeders_col_xpath):
+                seeders_col = len(html.xpath(f'{seeders_col_xpath}/preceding-sibling::td')) + 1

-        page_seeding = 0
-        page_seeding_size = 0
-        page_seeding_info = []
-        # if a table with class="torrents" exists, scope the xpath to it
-        table_class = '//table[@class="torrents"]' if html.xpath('//table[@class="torrents"]') else ''
-        seeding_sizes = html.xpath(f'{table_class}//tr[position()>1]/td[{size_col}]')
-        seeding_seeders = html.xpath(f'{table_class}//tr[position()>1]/td[{seeders_col}]/b/a/text()')
-        if not seeding_seeders:
-            seeding_seeders = html.xpath(f'{table_class}//tr[position()>1]/td[{seeders_col}]//text()')
-        if seeding_sizes and seeding_seeders:
-            page_seeding = len(seeding_sizes)
+            page_seeding = 0
+            page_seeding_size = 0
+            page_seeding_info = []
+            # if a table with class="torrents" exists, scope the xpath to it
+            table_class = '//table[@class="torrents"]' if html.xpath('//table[@class="torrents"]') else ''
+            seeding_sizes = html.xpath(f'{table_class}//tr[position()>1]/td[{size_col}]')
+            seeding_seeders = html.xpath(f'{table_class}//tr[position()>1]/td[{seeders_col}]/b/a/text()')
+            if not seeding_seeders:
+                seeding_seeders = html.xpath(f'{table_class}//tr[position()>1]/td[{seeders_col}]//text()')
+            if seeding_sizes and seeding_seeders:
+                page_seeding = len(seeding_sizes)

-            for i in range(0, len(seeding_sizes)):
-                size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
-                seeders = StringUtils.str_int(seeding_seeders[i])
+                for i in range(0, len(seeding_sizes)):
+                    size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
+                    seeders = StringUtils.str_int(seeding_seeders[i])

-                page_seeding_size += size
-                page_seeding_info.append([seeders, size])
+                    page_seeding_size += size
+                    page_seeding_info.append([seeders, size])

-        self.seeding += page_seeding
-        self.seeding_size += page_seeding_size
-        self.seeding_info.extend(page_seeding_info)
+            self.seeding += page_seeding
+            self.seeding_size += page_seeding_size
+            self.seeding_info.extend(page_seeding_info)

-        # check for a next page
-        next_page = None
-        next_page_text = html.xpath(
-            '//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁") or contains(.//text(), ">")]/@href')
-        # avoid matching a details page link
-        while next_page_text:
-            next_page = next_page_text.pop().strip()
-            if not next_page.startswith('details.php'):
-                break
+            # check for a next page
+            next_page = None
+            next_page_text = html.xpath(
+                '//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁") or contains(.//text(), ">")]/@href')
+            # avoid matching a details page link
+            while next_page_text:
+                next_page = next_page_text.pop().strip()
+                if not next_page.startswith('details.php'):
+                    break
+                next_page = None

-        # fix up page url
-        if next_page:
-            if self.userid not in next_page:
-                next_page = f'{next_page}&userid={self.userid}&type=seeding'
+            # fix up page url
+            if next_page:
+                if self.userid not in next_page:
+                    next_page = f'{next_page}&userid={self.userid}&type=seeding'
+        finally:
+            if html is not None:
+                del html

         return next_page
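
Rather than hard-coding column positions, the parser above finds the header cell for size/seeders and derives a 1-based column index from the number of preceding sibling cells, which survives themes that reorder columns. The same trick in isolation (the markup is a made-up example):

    from lxml import etree

    html = etree.HTML("""
    <table>
      <tr><td>名称</td><td>大小</td><td>在做种</td></tr>
      <tr><td>demo</td><td>1.5 GB</td><td>12</td></tr>
    </table>
    """)

    size_col = 3  # fallback, as in the hunk above
    header = html.xpath('//tr[position()=1]/td[text() = "大小"]')
    if header:
        # XPath positions are 1-based: preceding siblings + 1 = column index
        size_col = len(header[0].xpath('./preceding-sibling::td')) + 1

    print(size_col)                                                 # 2
    print(html.xpath(f'//tr[position()>1]/td[{size_col}]/text()'))  # ['1.5 GB']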
@@ -231,57 +248,61 @@ class NexusPhpSiteUserInfo(SiteParserBase):
         :return:
         """
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return

-        self._get_user_level(html)
+            self._get_user_level(html)

-        self._fixup_traffic_info(html)
+            self._fixup_traffic_info(html)

-        # join date
-        join_at_text = html.xpath(
-            '//tr/td[text()="加入日期" or text()="注册日期" or *[text()="加入日期"]]/following-sibling::td[1]//text()'
-            '|//div/b[text()="加入日期"]/../text()')
-        if join_at_text:
-            self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0].strip())
+            # join date
+            join_at_text = html.xpath(
+                '//tr/td[text()="加入日期" or text()="注册日期" or *[text()="加入日期"]]/following-sibling::td[1]//text()'
+                '|//div/b[text()="加入日期"]/../text()')
+            if join_at_text:
+                self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0].strip())

-        # seeding size & count
-        # fetched again here in case the seeding page yielded nothing
-        seeding_sizes = html.xpath('//tr/td[text()="当前上传"]/following-sibling::td[1]//'
-                                   'table[tr[1][td[4 and text()="尺寸"]]]//tr[position()>1]/td[4]')
-        seeding_seeders = html.xpath('//tr/td[text()="当前上传"]/following-sibling::td[1]//'
-                                     'table[tr[1][td[5 and text()="做种者"]]]//tr[position()>1]/td[5]//text()')
-        tmp_seeding = len(seeding_sizes)
-        tmp_seeding_size = 0
-        tmp_seeding_info = []
-        for i in range(0, len(seeding_sizes)):
-            size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
-            seeders = StringUtils.str_int(seeding_seeders[i])
+            # seeding size & count
+            # fetched again here in case the seeding page yielded nothing
+            seeding_sizes = html.xpath('//tr/td[text()="当前上传"]/following-sibling::td[1]//'
+                                       'table[tr[1][td[4 and text()="尺寸"]]]//tr[position()>1]/td[4]')
+            seeding_seeders = html.xpath('//tr/td[text()="当前上传"]/following-sibling::td[1]//'
+                                         'table[tr[1][td[5 and text()="做种者"]]]//tr[position()>1]/td[5]//text()')
+            tmp_seeding = len(seeding_sizes)
+            tmp_seeding_size = 0
+            tmp_seeding_info = []
+            for i in range(0, len(seeding_sizes)):
+                size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
+                seeders = StringUtils.str_int(seeding_seeders[i])

-            tmp_seeding_size += size
-            tmp_seeding_info.append([seeders, size])
+                tmp_seeding_size += size
+                tmp_seeding_info.append([seeders, size])

-        if not self.seeding_size:
-            self.seeding_size = tmp_seeding_size
-        if not self.seeding:
-            self.seeding = tmp_seeding
-        if not self.seeding_info:
-            self.seeding_info = tmp_seeding_info
+            if not self.seeding_size:
+                self.seeding_size = tmp_seeding_size
+            if not self.seeding:
+                self.seeding = tmp_seeding
+            if not self.seeding_info:
+                self.seeding_info = tmp_seeding_info

-        seeding_sizes = html.xpath('//tr/td[text()="做种统计"]/following-sibling::td[1]//text()')
-        if seeding_sizes:
-            seeding_match = re.search(r"总做种数:\s+(\d+)", seeding_sizes[0], re.IGNORECASE)
-            seeding_size_match = re.search(r"总做种体积:\s+([\d,.\s]+[KMGTPI]*B)", seeding_sizes[0], re.IGNORECASE)
-            tmp_seeding = StringUtils.str_int(seeding_match.group(1)) if (
-                    seeding_match and seeding_match.group(1)) else 0
-            tmp_seeding_size = StringUtils.num_filesize(
-                seeding_size_match.group(1).strip()) if seeding_size_match else 0
-            if not self.seeding_size:
-                self.seeding_size = tmp_seeding_size
-            if not self.seeding:
-                self.seeding = tmp_seeding
+            seeding_sizes = html.xpath('//tr/td[text()="做种统计"]/following-sibling::td[1]//text()')
+            if seeding_sizes:
+                seeding_match = re.search(r"总做种数:\s+(\d+)", seeding_sizes[0], re.IGNORECASE)
+                seeding_size_match = re.search(r"总做种体积:\s+([\d,.\s]+[KMGTPI]*B)", seeding_sizes[0], re.IGNORECASE)
+                tmp_seeding = StringUtils.str_int(seeding_match.group(1)) if (
+                        seeding_match and seeding_match.group(1)) else 0
+                tmp_seeding_size = StringUtils.num_filesize(
+                    seeding_size_match.group(1).strip()) if seeding_size_match else 0
+                if not self.seeding_size:
+                    self.seeding_size = tmp_seeding_size
+                if not self.seeding:
+                    self.seeding = tmp_seeding

-        self._fixup_torrent_seeding_page(html)
+            self._fixup_torrent_seeding_page(html)
+        finally:
+            if html is not None:
+                del html

     def _fixup_torrent_seeding_page(self, html):
         """
@@ -348,43 +369,51 @@ class NexusPhpSiteUserInfo(SiteParserBase):
     def _parse_message_unread_links(self, html_text: str, msg_links: list) -> Optional[str]:
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return None
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None

-        message_links = html.xpath('//tr[not(./td/img[@alt="Read"])]/td/a[contains(@href, "viewmessage")]/@href')
-        msg_links.extend(message_links)
-        # check for a next page
-        next_page = None
-        next_page_text = html.xpath('//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁")]/@href')
-        if next_page_text:
-            next_page = next_page_text[-1].strip()
+            message_links = html.xpath('//tr[not(./td/img[@alt="Read"])]/td/a[contains(@href, "viewmessage")]/@href')
+            msg_links.extend(message_links)
+            # check for a next page
+            next_page = None
+            next_page_text = html.xpath('//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁")]/@href')
+            if next_page_text:
+                next_page = next_page_text[-1].strip()
+        finally:
+            if html is not None:
+                del html

         return next_page

     def _parse_message_content(self, html_text):
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return None, None, None
-        # title
-        message_head_text = None
-        message_head = html.xpath('//h1/text()'
-                                  '|//div[@class="layui-card-header"]/span[1]/text()')
-        if message_head:
-            message_head_text = message_head[-1].strip()
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None, None, None
+            # title
+            message_head_text = None
+            message_head = html.xpath('//h1/text()'
+                                      '|//div[@class="layui-card-header"]/span[1]/text()')
+            if message_head:
+                message_head_text = message_head[-1].strip()

-        # message date
-        message_date_text = None
-        message_date = html.xpath('//h1/following-sibling::table[.//tr/td[@class="colhead"]]//tr[2]/td[2]'
-                                  '|//div[@class="layui-card-header"]/span[2]/span[2]')
-        if message_date:
-            message_date_text = message_date[0].xpath("string(.)").strip()
+            # message date
+            message_date_text = None
+            message_date = html.xpath('//h1/following-sibling::table[.//tr/td[@class="colhead"]]//tr[2]/td[2]'
+                                      '|//div[@class="layui-card-header"]/span[2]/span[2]')
+            if message_date:
+                message_date_text = message_date[0].xpath("string(.)").strip()

-        # message body
-        message_content_text = None
-        message_content = html.xpath('//h1/following-sibling::table[.//tr/td[@class="colhead"]]//tr[3]/td'
-                                     '|//div[contains(@class,"layui-card-body")]')
-        if message_content:
-            message_content_text = message_content[0].xpath("string(.)").strip()
+            # message body
+            message_content_text = None
+            message_content = html.xpath('//h1/following-sibling::table[.//tr/td[@class="colhead"]]//tr[3]/td'
+                                         '|//div[contains(@class,"layui-card-body")]')
+            if message_content:
+                message_content_text = message_content[0].xpath("string(.)").strip()
+        finally:
+            if html is not None:
+                del html

         return message_head_text, message_date_text, message_content_text
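
Both methods above follow the contract used across these parsers: consume one page, accumulate state on the instance, and hand back the next page URL or None. A generic driver for that contract, assuming a fetch(url) callable that returns page HTML (not part of this diff):

    from typing import Callable, Optional


    def crawl_pages(first_url: str,
                    fetch: Callable[[str], str],
                    parse: Callable[[str], Optional[str]],
                    max_pages: int = 100) -> int:
        """Follow 'next page' links until the parser returns None."""
        url: Optional[str] = first_url
        pages = 0
        while url and pages < max_pages:  # page cap guards against link loops
            html_text = fetch(url)
            url = parse(html_text)        # parser accumulates, returns next URL
            pages += 1
        return pages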


@@ -114,48 +114,56 @@ class NexusRabbitSiteUserInfo(SiteParserBase):
     def _parse_user_base_info(self, html_text: str):
         """Only the candy (bonus) balance has to come from the base page; everything else is on the detail page"""
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return
-        bonus = html.xpath(
-            '//div[contains(text(), "奶糖余额")]/following-sibling::div[1]/text()'
-        )
-        if bonus:
-            self.bonus = StringUtils.str_float(bonus[0].strip())
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return
+            bonus = html.xpath(
+                '//div[contains(text(), "奶糖余额")]/following-sibling::div[1]/text()'
+            )
+            if bonus:
+                self.bonus = StringUtils.str_float(bonus[0].strip())
+        finally:
+            if html is not None:
+                del html

     def _parse_user_detail_info(self, html_text: str):
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return
-        # narrow the scope: all the fields live in this div
-        user_info = html.xpath('//div[contains(@class, "layui-hares-user-info-right")]')
-        if not user_info:
-            return
-        user_info = user_info[0]
-        # username
-        if username := user_info.xpath(
-            './/span[contains(text(), "用户名")]/a/span/text()'
-        ):
-            self.username = username[0].strip()
-        # user level
-        if user_level := user_info.xpath('.//span[contains(text(), "等级")]/b/text()'):
-            self.user_level = user_level[0].strip()
-        # join date
-        if join_date := user_info.xpath('.//span[contains(text(), "注册日期")]/text()'):
-            join_date = join_date[0].strip().split("\r")[0].removeprefix("注册日期:")
-            self.join_at = StringUtils.unify_datetime_str(join_date)
-        # uploaded
-        if upload := user_info.xpath('.//span[contains(text(), "上传量")]/text()'):
-            self.upload = StringUtils.num_filesize(
-                upload[0].strip().removeprefix("上传量:")
-            )
-        # downloaded
-        if download := user_info.xpath('.//span[contains(text(), "下载量")]/text()'):
-            self.download = StringUtils.num_filesize(
-                download[0].strip().removeprefix("下载量:")
-            )
-        # share ratio
-        if ratio := user_info.xpath('.//span[contains(text(), "分享率")]/em/text()'):
-            self.ratio = StringUtils.str_float(ratio[0].strip())
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return
+            # narrow the scope: all the fields live in this div
+            user_info = html.xpath('//div[contains(@class, "layui-hares-user-info-right")]')
+            if not user_info:
+                return
+            user_info = user_info[0]
+            # username
+            if username := user_info.xpath(
+                './/span[contains(text(), "用户名")]/a/span/text()'
+            ):
+                self.username = username[0].strip()
+            # user level
+            if user_level := user_info.xpath('.//span[contains(text(), "等级")]/b/text()'):
+                self.user_level = user_level[0].strip()
+            # join date
+            if join_date := user_info.xpath('.//span[contains(text(), "注册日期")]/text()'):
+                join_date = join_date[0].strip().split("\r")[0].removeprefix("注册日期:")
+                self.join_at = StringUtils.unify_datetime_str(join_date)
+            # uploaded
+            if upload := user_info.xpath('.//span[contains(text(), "上传量")]/text()'):
+                self.upload = StringUtils.num_filesize(
+                    upload[0].strip().removeprefix("上传量:")
+                )
+            # downloaded
+            if download := user_info.xpath('.//span[contains(text(), "下载量")]/text()'):
+                self.download = StringUtils.num_filesize(
+                    download[0].strip().removeprefix("下载量:")
+                )
+            # share ratio
+            if ratio := user_info.xpath('.//span[contains(text(), "分享率")]/em/text()'):
+                self.ratio = StringUtils.str_float(ratio[0].strip())
+        finally:
+            if html is not None:
+                del html

     def _parse_message_content(self, html_text):
         """


@@ -24,9 +24,13 @@ class SmallHorseSiteUserInfo(SiteParserBase):
     def _parse_user_base_info(self, html_text: str):
         html_text = self._prepare_html_text(html_text)
         html = etree.HTML(html_text)
-        ret = html.xpath('//a[contains(@href, "user.php")]//text()')
-        if ret:
-            self.username = str(ret[0])
+        try:
+            ret = html.xpath('//a[contains(@href, "user.php")]//text()')
+            if ret:
+                self.username = str(ret[0])
+        finally:
+            if html is not None:
+                del html

     def _parse_user_traffic_info(self, html_text: str):
         """
@@ -36,21 +40,25 @@ class SmallHorseSiteUserInfo(SiteParserBase):
         """
         html_text = self._prepare_html_text(html_text)
         html = etree.HTML(html_text)
-        tmps = html.xpath('//ul[@class = "stats nobullet"]')
-        if tmps:
-            if tmps[1].xpath("li") and tmps[1].xpath("li")[0].xpath("span//text()"):
-                self.join_at = StringUtils.unify_datetime_str(tmps[1].xpath("li")[0].xpath("span//text()")[0])
-            self.upload = StringUtils.num_filesize(str(tmps[1].xpath("li")[2].xpath("text()")[0]).split(":")[1].strip())
-            self.download = StringUtils.num_filesize(
-                str(tmps[1].xpath("li")[3].xpath("text()")[0]).split(":")[1].strip())
-            if tmps[1].xpath("li")[4].xpath("span//text()"):
-                self.ratio = StringUtils.str_float(str(tmps[1].xpath("li")[4].xpath("span//text()")[0]).replace('∞', '0'))
-            else:
-                self.ratio = StringUtils.str_float(str(tmps[1].xpath("li")[5].xpath("text()")[0]).split(":")[1])
-            self.bonus = StringUtils.str_float(str(tmps[1].xpath("li")[5].xpath("text()")[0]).split(":")[1])
-            self.user_level = str(tmps[3].xpath("li")[0].xpath("text()")[0]).split(":")[1].strip()
-            self.leeching = StringUtils.str_int(
-                (tmps[4].xpath("li")[6].xpath("text()")[0]).split(":")[1].replace("[", ""))
+        try:
+            tmps = html.xpath('//ul[@class = "stats nobullet"]')
+            if tmps:
+                if tmps[1].xpath("li") and tmps[1].xpath("li")[0].xpath("span//text()"):
+                    self.join_at = StringUtils.unify_datetime_str(tmps[1].xpath("li")[0].xpath("span//text()")[0])
+                self.upload = StringUtils.num_filesize(str(tmps[1].xpath("li")[2].xpath("text()")[0]).split(":")[1].strip())
+                self.download = StringUtils.num_filesize(
+                    str(tmps[1].xpath("li")[3].xpath("text()")[0]).split(":")[1].strip())
+                if tmps[1].xpath("li")[4].xpath("span//text()"):
+                    self.ratio = StringUtils.str_float(str(tmps[1].xpath("li")[4].xpath("span//text()")[0]).replace('∞', '0'))
+                else:
+                    self.ratio = StringUtils.str_float(str(tmps[1].xpath("li")[5].xpath("text()")[0]).split(":")[1])
+                self.bonus = StringUtils.str_float(str(tmps[1].xpath("li")[5].xpath("text()")[0]).split(":")[1])
+                self.user_level = str(tmps[3].xpath("li")[0].xpath("text()")[0]).split(":")[1].strip()
+                self.leeching = StringUtils.str_int(
+                    (tmps[4].xpath("li")[6].xpath("text()")[0]).split(":")[1].replace("[", ""))
+        finally:
+            if html is not None:
+                del html

     def _parse_user_detail_info(self, html_text: str):
         pass
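
Note the .replace('∞', '0') in the ratio branch (the ∞ glyph was dropped in the page rendering and is restored above as the most likely original; the TorrentLeech parser below does the same). Sites show an infinite ratio for accounts with no download, which a plain float conversion cannot parse, so the glyph is mapped before converting. A simplified stand-in for StringUtils.str_float shows the effect:

    def str_float(text: str) -> float:
        """Simplified stand-in for StringUtils.str_float."""
        try:
            return float(str(text).replace(",", "").strip())
        except ValueError:
            return 0.0


    for raw in ("2.45", "1,234.5", "∞"):
        print(raw, "->", str_float(raw.replace("∞", "0")))
    # 2.45 -> 2.45, 1,234.5 -> 1234.5, ∞ -> 0.0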
@@ -63,39 +71,42 @@ class SmallHorseSiteUserInfo(SiteParserBase):
         :return: next page URL
         """
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return None
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None

-        size_col = 6
-        seeders_col = 8
+            size_col = 6
+            seeders_col = 8

-        page_seeding = 0
-        page_seeding_size = 0
-        page_seeding_info = []
-        seeding_sizes = html.xpath(f'//table[@id="torrent_table"]//tr[position()>1]/td[{size_col}]')
-        seeding_seeders = html.xpath(f'//table[@id="torrent_table"]//tr[position()>1]/td[{seeders_col}]')
-        if seeding_sizes and seeding_seeders:
-            page_seeding = len(seeding_sizes)
+            page_seeding = 0
+            page_seeding_size = 0
+            page_seeding_info = []
+            seeding_sizes = html.xpath(f'//table[@id="torrent_table"]//tr[position()>1]/td[{size_col}]')
+            seeding_seeders = html.xpath(f'//table[@id="torrent_table"]//tr[position()>1]/td[{seeders_col}]')
+            if seeding_sizes and seeding_seeders:
+                page_seeding = len(seeding_sizes)

-            for i in range(0, len(seeding_sizes)):
-                size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
-                seeders = StringUtils.str_int(seeding_seeders[i].xpath("string(.)").strip())
+                for i in range(0, len(seeding_sizes)):
+                    size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
+                    seeders = StringUtils.str_int(seeding_seeders[i].xpath("string(.)").strip())

-                page_seeding_size += size
-                page_seeding_info.append([seeders, size])
+                    page_seeding_size += size
+                    page_seeding_info.append([seeders, size])

-        self.seeding += page_seeding
-        self.seeding_size += page_seeding_size
-        self.seeding_info.extend(page_seeding_info)
+            self.seeding += page_seeding
+            self.seeding_size += page_seeding_size
+            self.seeding_info.extend(page_seeding_info)

-        # check for a next page
-        next_page = None
-        next_pages = html.xpath('//ul[@class="pagination"]/li[contains(@class,"active")]/following-sibling::li')
-        if next_pages and len(next_pages) > 1:
-            page_num = next_pages[0].xpath("string(.)").strip()
-            if page_num.isdigit():
-                next_page = f"{self._torrent_seeding_page}&page={page_num}"
+            # check for a next page
+            next_page = None
+            next_pages = html.xpath('//ul[@class="pagination"]/li[contains(@class,"active")]/following-sibling::li')
+            if next_pages and len(next_pages) > 1:
+                page_num = next_pages[0].xpath("string(.)").strip()
+                if page_num.isdigit():
+                    next_page = f"{self._torrent_seeding_page}&page={page_num}"
+        finally:
+            if html is not None:
+                del html

         return next_page

     def _parse_message_unread_links(self, html_text: str, msg_links: list) -> Optional[str]:


@@ -32,29 +32,33 @@ class TorrentLeechSiteUserInfo(SiteParserBase):
         """
         html_text = self._prepare_html_text(html_text)
         html = etree.HTML(html_text)
-        upload_html = html.xpath('//div[contains(@class,"profile-uploaded")]//span/text()')
-        if upload_html:
-            self.upload = StringUtils.num_filesize(upload_html[0])
-        download_html = html.xpath('//div[contains(@class,"profile-downloaded")]//span/text()')
-        if download_html:
-            self.download = StringUtils.num_filesize(download_html[0])
-        ratio_html = html.xpath('//div[contains(@class,"profile-ratio")]//span/text()')
-        if ratio_html:
-            self.ratio = StringUtils.str_float(ratio_html[0].replace('∞', '0'))
+        try:
+            upload_html = html.xpath('//div[contains(@class,"profile-uploaded")]//span/text()')
+            if upload_html:
+                self.upload = StringUtils.num_filesize(upload_html[0])
+            download_html = html.xpath('//div[contains(@class,"profile-downloaded")]//span/text()')
+            if download_html:
+                self.download = StringUtils.num_filesize(download_html[0])
+            ratio_html = html.xpath('//div[contains(@class,"profile-ratio")]//span/text()')
+            if ratio_html:
+                self.ratio = StringUtils.str_float(ratio_html[0].replace('∞', '0'))

-        user_level_html = html.xpath('//table[contains(@class, "profileViewTable")]'
-                                     '//tr/td[text()="Class"]/following-sibling::td/text()')
-        if user_level_html:
-            self.user_level = user_level_html[0].strip()
+            user_level_html = html.xpath('//table[contains(@class, "profileViewTable")]'
+                                         '//tr/td[text()="Class"]/following-sibling::td/text()')
+            if user_level_html:
+                self.user_level = user_level_html[0].strip()

-        join_at_html = html.xpath('//table[contains(@class, "profileViewTable")]'
-                                  '//tr/td[text()="Registration date"]/following-sibling::td/text()')
-        if join_at_html:
-            self.join_at = StringUtils.unify_datetime_str(join_at_html[0].strip())
+            join_at_html = html.xpath('//table[contains(@class, "profileViewTable")]'
+                                      '//tr/td[text()="Registration date"]/following-sibling::td/text()')
+            if join_at_html:
+                self.join_at = StringUtils.unify_datetime_str(join_at_html[0].strip())

-        bonus_html = html.xpath('//span[contains(@class, "total-TL-points")]/text()')
-        if bonus_html:
-            self.bonus = StringUtils.str_float(bonus_html[0].strip())
+            bonus_html = html.xpath('//span[contains(@class, "total-TL-points")]/text()')
+            if bonus_html:
+                self.bonus = StringUtils.str_float(bonus_html[0].strip())
+        finally:
+            if html is not None:
+                del html

     def _parse_user_detail_info(self, html_text: str):
         pass
@@ -67,33 +71,37 @@ class TorrentLeechSiteUserInfo(SiteParserBase):
         :return: next page URL
         """
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return None
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None

-        size_col = 2
-        seeders_col = 7
+            size_col = 2
+            seeders_col = 7

-        page_seeding = 0
-        page_seeding_size = 0
-        page_seeding_info = []
-        seeding_sizes = html.xpath(f'//tbody/tr/td[{size_col}]')
-        seeding_seeders = html.xpath(f'//tbody/tr/td[{seeders_col}]/text()')
-        if seeding_sizes and seeding_seeders:
-            page_seeding = len(seeding_sizes)
+            page_seeding = 0
+            page_seeding_size = 0
+            page_seeding_info = []
+            seeding_sizes = html.xpath(f'//tbody/tr/td[{size_col}]')
+            seeding_seeders = html.xpath(f'//tbody/tr/td[{seeders_col}]/text()')
+            if seeding_sizes and seeding_seeders:
+                page_seeding = len(seeding_sizes)

-            for i in range(0, len(seeding_sizes)):
-                size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
-                seeders = StringUtils.str_int(seeding_seeders[i])
+                for i in range(0, len(seeding_sizes)):
+                    size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
+                    seeders = StringUtils.str_int(seeding_seeders[i])

-                page_seeding_size += size
-                page_seeding_info.append([seeders, size])
+                    page_seeding_size += size
+                    page_seeding_info.append([seeders, size])

-        self.seeding += page_seeding
-        self.seeding_size += page_seeding_size
-        self.seeding_info.extend(page_seeding_info)
+            self.seeding += page_seeding
+            self.seeding_size += page_seeding_size
+            self.seeding_info.extend(page_seeding_info)

-        # check for a next page
-        next_page = None
+            # check for a next page
+            next_page = None
+        finally:
+            if html is not None:
+                del html

         return next_page


@@ -14,21 +14,24 @@ class Unit3dSiteUserInfo(SiteParserBase):
     def _parse_user_base_info(self, html_text: str):
         html_text = self._prepare_html_text(html_text)
         html = etree.HTML(html_text)
-        tmps = html.xpath('//a[contains(@href, "/users/") and contains(@href, "settings")]/@href')
-        if tmps:
-            user_name_match = re.search(r"/users/(.+)/settings", tmps[0])
-            if user_name_match and user_name_match.group().strip():
-                self.username = user_name_match.group(1)
-                self._torrent_seeding_page = f"/users/{self.username}/active?perPage=100&client=&seeding=include"
-                self._user_detail_page = f"/users/{self.username}"
+        try:
+            tmps = html.xpath('//a[contains(@href, "/users/") and contains(@href, "settings")]/@href')
+            if tmps:
+                user_name_match = re.search(r"/users/(.+)/settings", tmps[0])
+                if user_name_match and user_name_match.group().strip():
+                    self.username = user_name_match.group(1)
+                    self._torrent_seeding_page = f"/users/{self.username}/active?perPage=100&client=&seeding=include"
+                    self._user_detail_page = f"/users/{self.username}"

-        tmps = html.xpath('//a[contains(@href, "bonus/earnings")]')
-        if tmps:
-            bonus_text = tmps[0].xpath("string(.)")
-            bonus_match = re.search(r"([\d,.]+)", bonus_text)
-            if bonus_match and bonus_match.group(1).strip():
-                self.bonus = StringUtils.str_float(bonus_match.group(1))
+            tmps = html.xpath('//a[contains(@href, "bonus/earnings")]')
+            if tmps:
+                bonus_text = tmps[0].xpath("string(.)")
+                bonus_match = re.search(r"([\d,.]+)", bonus_text)
+                if bonus_match and bonus_match.group(1).strip():
+                    self.bonus = StringUtils.str_float(bonus_match.group(1))
+        finally:
+            if html is not None:
+                del html

     def _parse_site_page(self, html_text: str):
         pass
@@ -40,21 +43,25 @@ class Unit3dSiteUserInfo(SiteParserBase):
         :return:
         """
         html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return None
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None

-        # user level
-        user_levels_text = html.xpath('//div[contains(@class, "content")]//span[contains(@class, "badge-user")]/text()')
-        if user_levels_text:
-            self.user_level = user_levels_text[0].strip()
+            # user level
+            user_levels_text = html.xpath('//div[contains(@class, "content")]//span[contains(@class, "badge-user")]/text()')
+            if user_levels_text:
+                self.user_level = user_levels_text[0].strip()

-        # join date
-        join_at_text = html.xpath('//div[contains(@class, "content")]//h4[contains(text(), "注册日期") '
-                                  'or contains(text(), "註冊日期") '
-                                  'or contains(text(), "Registration date")]/text()')
-        if join_at_text:
-            self.join_at = StringUtils.unify_datetime_str(
-                join_at_text[0].replace('注册日期', '').replace('註冊日期', '').replace('Registration date', ''))
+            # join date
+            join_at_text = html.xpath('//div[contains(@class, "content")]//h4[contains(text(), "注册日期") '
+                                      'or contains(text(), "註冊日期") '
+                                      'or contains(text(), "Registration date")]/text()')
+            if join_at_text:
+                self.join_at = StringUtils.unify_datetime_str(
+                    join_at_text[0].replace('注册日期', '').replace('註冊日期', '').replace('Registration date', ''))
+        finally:
+            if html is not None:
+                del html

     def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: Optional[bool] = False) -> Optional[str]:
         """
@@ -64,44 +71,48 @@ class Unit3dSiteUserInfo(SiteParserBase):
         :return: next page URL
         """
        html = etree.HTML(html_text)
-        if not StringUtils.is_valid_html_element(html):
-            return None
+        try:
+            if not StringUtils.is_valid_html_element(html):
+                return None

-        size_col = 9
-        seeders_col = 2
-        # locate the size column
-        if html.xpath('//thead//th[contains(@class,"size")]'):
-            size_col = len(html.xpath('//thead//th[contains(@class,"size")][1]/preceding-sibling::th')) + 1
-        # locate the seeders column
-        if html.xpath('//thead//th[contains(@class,"seeders")]'):
-            seeders_col = len(html.xpath('//thead//th[contains(@class,"seeders")]/preceding-sibling::th')) + 1
+            size_col = 9
+            seeders_col = 2
+            # locate the size column
+            if html.xpath('//thead//th[contains(@class,"size")]'):
+                size_col = len(html.xpath('//thead//th[contains(@class,"size")][1]/preceding-sibling::th')) + 1
+            # locate the seeders column
+            if html.xpath('//thead//th[contains(@class,"seeders")]'):
+                seeders_col = len(html.xpath('//thead//th[contains(@class,"seeders")]/preceding-sibling::th')) + 1

-        page_seeding = 0
-        page_seeding_size = 0
-        page_seeding_info = []
-        seeding_sizes = html.xpath(f'//tr[position()]/td[{size_col}]')
-        seeding_seeders = html.xpath(f'//tr[position()]/td[{seeders_col}]')
-        if seeding_sizes and seeding_seeders:
-            page_seeding = len(seeding_sizes)
+            page_seeding = 0
+            page_seeding_size = 0
+            page_seeding_info = []
+            seeding_sizes = html.xpath(f'//tr[position()]/td[{size_col}]')
+            seeding_seeders = html.xpath(f'//tr[position()]/td[{seeders_col}]')
+            if seeding_sizes and seeding_seeders:
+                page_seeding = len(seeding_sizes)

-            for i in range(0, len(seeding_sizes)):
-                size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
-                seeders = StringUtils.str_int(seeding_seeders[i].xpath("string(.)").strip())
+                for i in range(0, len(seeding_sizes)):
+                    size = StringUtils.num_filesize(seeding_sizes[i].xpath("string(.)").strip())
+                    seeders = StringUtils.str_int(seeding_seeders[i].xpath("string(.)").strip())

-                page_seeding_size += size
-                page_seeding_info.append([seeders, size])
+                    page_seeding_size += size
+                    page_seeding_info.append([seeders, size])

-        self.seeding += page_seeding
-        self.seeding_size += page_seeding_size
-        self.seeding_info.extend(page_seeding_info)
+            self.seeding += page_seeding
+            self.seeding_size += page_seeding_size
+            self.seeding_info.extend(page_seeding_info)

-        # check for a next page
-        next_page = None
-        next_pages = html.xpath('//ul[@class="pagination"]/li[contains(@class,"active")]/following-sibling::li')
-        if next_pages and len(next_pages) > 1:
-            page_num = next_pages[0].xpath("string(.)").strip()
-            if page_num.isdigit():
-                next_page = f"{self._torrent_seeding_page}&page={page_num}"
+            # check for a next page
+            next_page = None
+            next_pages = html.xpath('//ul[@class="pagination"]/li[contains(@class,"active")]/following-sibling::li')
+            if next_pages and len(next_pages) > 1:
+                page_num = next_pages[0].xpath("string(.)").strip()
+                if page_num.isdigit():
+                    next_page = f"{self._torrent_seeding_page}&page={page_num}"
+        finally:
+            if html is not None:
+                del html

         return next_page
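
UNIT3D themes mark the size and seeders columns with th classes rather than header text, so this parser counts preceding th elements instead of td cells. The same trick in isolation (the markup is a made-up example):

    from lxml import etree

    html = etree.HTML("""
    <table>
      <thead><tr>
        <th class="name">Name</th><th class="seeders">Seeders</th><th class="size">Size</th>
      </tr></thead>
      <tbody><tr><td>demo</td><td>7</td><td>2 GiB</td></tr></tbody>
    </table>
    """)

    size_col, seeders_col = 9, 2  # defaults mirror the hunk above
    if th := html.xpath('//thead//th[contains(@class,"size")]'):
        size_col = len(th[0].xpath('./preceding-sibling::th')) + 1
    if th := html.xpath('//thead//th[contains(@class,"seeders")]'):
        seeders_col = len(th[0].xpath('./preceding-sibling::th')) + 1
    print(size_col, seeders_col)  # 3 2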

Some files were not shown because too many files have changed in this diff.