Compare commits

...

768 Commits

Author SHA1 Message Date
jxxghp
bcd235521e v2.3.9
- Optimized multiple UI details
- Fixed subscription share parameter passing and opened up the subscription share management feature
2025-04-10 08:34:16 +08:00
jxxghp
31a2eac302 fix: subscription share parameter passing 2025-04-10 08:19:59 +08:00
jxxghp
7e6b7e5dd5 Update subscribe.py 2025-04-09 17:32:07 +08:00
jxxghp
9ec9f48425 feat: add subscription administrator #4123 2025-04-09 13:26:58 +08:00
jxxghp
a3bec43eab feat: add subscription administrator #4123 2025-04-09 13:26:10 +08:00
jxxghp
f429b6397e fix RecommendMediaSource 2025-04-08 18:52:54 +08:00
jxxghp
9d6e7dc288 Merge pull request #4115 from lddsb/patch-1 2025-04-08 17:58:36 +08:00
Dee Luo
a27c09c1e8 perf: relax release group suffix matching
Supports matching suffixes of the form 制作组xxx
2025-04-08 16:35:38 +08:00
jxxghp
ceb0697c73 - Adapt to M-Team API changes 2025-04-07 21:30:41 +08:00
jxxghp
6ad6a08bf1 Merge pull request #4110 from cddjr/trimemedia
Improve compatibility of the 飞牛 (fnOS) server address
2025-04-07 21:15:38 +08:00
jxxghp
fac6ad7116 Merge pull request #4109 from cddjr/fix_mteam
Fix incorrect M-Team request parameters
2025-04-07 21:14:42 +08:00
景大侠
7d8cda0457 Fix incorrect M-Team request parameters 2025-04-07 21:04:21 +08:00
景大侠
33fc3fd63b Add an API for deleting media 2025-04-07 17:20:47 +08:00
景大侠
8d39cc87f7 Improve server address compatibility 2025-04-07 16:37:41 +08:00
景大侠
d0b1348c96 fix some warnings 2025-04-07 16:21:39 +08:00
jxxghp
0afc38f6b8 Merge pull request #4103 from wikrin/v2 2025-04-07 11:07:11 +08:00
Attente
264896ba17 fix: episode group scraping 2025-04-07 09:25:06 +08:00
jxxghp
08decf0b82 feat: add default plugin repository 2025-04-07 08:06:59 +08:00
jxxghp
98381265e6 Update u115.py 2025-04-07 07:37:00 +08:00
DDSRem
d323159719 Update requirements.in 2025-04-06 13:10:56 +08:00
jxxghp
7ef21e1d1c Merge pull request #4098 from DDS-Derek/dev 2025-04-06 12:02:01 +08:00
DDSRem
2d6b2ab7d7 bump: python environment upgrade 3.12
links https://github.com/jxxghp/MoviePilot/issues/3543
2025-04-06 11:56:00 +08:00
jxxghp
a1e6fd88a9 Update version.py 2025-04-06 07:53:29 +08:00
jxxghp
e72ff867fc fix 115 pickcode 2025-04-05 09:29:08 +08:00
jxxghp
8512641984 Update scraper.py 2025-04-04 22:13:14 +08:00
jxxghp
f1aa64d191 fix episodes group 2025-04-04 12:17:42 +08:00
jxxghp
347262538f fix episodes group 2025-04-04 08:59:12 +08:00
jxxghp
82510d60ca Update __init__.py 2025-04-03 22:48:29 +08:00
jxxghp
6104cd04c3 Update context.py 2025-04-03 20:32:56 +08:00
jxxghp
44eb58426a feat: support recognition and scraping with a specified episode group 2025-04-03 18:43:04 +08:00
jxxghp
078b60cc1e feat: support recognition and scraping with a specified episode group 2025-04-03 18:35:02 +08:00
jxxghp
21e120a4f8 refactor: remove one extra API query 2025-04-03 10:43:31 +08:00
jxxghp
439b834aa8 Update version.py 2025-04-02 18:39:50 +08:00
jxxghp
ddbe8324be Add development notes to README 2025-03-30 11:36:19 +08:00
jxxghp
8ffe93113b Add development notes to README 2025-03-30 09:53:34 +08:00
jxxghp
8b31b7cb8a v2.3.6-1
- Fixed media server library lookup issue
- Continued optimizing the search page
2025-03-30 09:23:46 +08:00
jxxghp
e09e21caa9 Merge pull request #4067 from cddjr/fix_media_exists 2025-03-30 02:48:19 +08:00
景大侠
20b145c679 Continue fixing the missing media issue 2025-03-30 02:41:24 +08:00
jxxghp
c5730cf1ad Merge pull request #4065 from cddjr/fix_v235_emby_bug 2025-03-29 23:18:34 +08:00
景大侠
f16b038463 Fix the Emby false "missing media" bug introduced in v2.3.5 2025-03-29 23:15:58 +08:00
jxxghp
c08beec232 fix: improve the error reported when the QR code has not been scanned 2025-03-29 22:02:59 +08:00
jxxghp
946361e0ae Update requirements.in 2025-03-29 20:30:57 +08:00
jxxghp
97cf65a231 Update version.py 2025-03-29 20:21:54 +08:00
jxxghp
d7eb6ac15d Update alipan.py 2025-03-29 19:30:22 +08:00
jxxghp
075afdbb77 fix alipan upload 2025-03-29 15:39:29 +08:00
jxxghp
2ac047504a fix alipan 2025-03-29 14:52:49 +08:00
jxxghp
c44aa50ef5 fix upload progress bar 2025-03-29 14:33:45 +08:00
jxxghp
7ffafb49c4 fix alipan upload 2025-03-29 10:26:59 +08:00
jxxghp
9b7d57a853 fix alipan api 2025-03-29 09:42:23 +08:00
jxxghp
ac19b3b512 fix alipan api 2025-03-28 21:22:02 +08:00
jxxghp
b030317186 fix: reduce 115 traversal 2025-03-28 20:58:35 +08:00
jxxghp
b506059874 Merge pull request #4059 from cddjr/trimemedia 2025-03-28 20:13:16 +08:00
景大侠
cf7ba6e17f Remove test code 2025-03-28 19:54:47 +08:00
jxxghp
b7ce5663a3 fix ide warnings 2025-03-28 19:43:55 +08:00
jxxghp
58fa8064ad Merge pull request #4058 from cddjr/trimemedia
Initial support for 飞牛影视 (TrimeMedia)
2025-03-28 19:28:35 +08:00
jxxghp
ed48f56526 fix alipan 2025-03-28 17:48:30 +08:00
景大侠
896eb13f7d Initial support for 飞牛影视 (TrimeMedia) 2025-03-28 16:26:40 +08:00
jxxghp
b8cd1c46c1 feat:Alipan Open Api 2025-03-28 13:40:29 +08:00
jxxghp
c5e84273c0 fix 115 directory creation 2025-03-27 19:55:01 +08:00
jxxghp
f21653ffb7 Fix 115 listing exception issue 2025-03-27 17:27:01 +08:00
jxxghp
65c8116cc9 fix 115 listing exception handling 2025-03-27 17:26:07 +08:00
jxxghp
5e442433e5 fix: raise an exception when 115 listing fails 2025-03-27 12:48:19 +08:00
jxxghp
7041347e76 Update version.py 2025-03-27 12:13:19 +08:00
jxxghp
810c205709 fix 115 2025-03-27 12:04:49 +08:00
jxxghp
ec7035990a fix 2025-03-26 20:12:08 +08:00
jxxghp
da6d9bb2bd fix 115 upload 2025-03-26 18:31:20 +08:00
jxxghp
e009043c63 fix log 2025-03-26 14:00:41 +08:00
jxxghp
79020e9338 hack fix 115 callback format error 2025-03-26 10:39:40 +08:00
jxxghp
2020244cae fix _path_to_id 2025-03-26 08:54:51 +08:00
jxxghp
43fe8f25f8 fix _path_to_id 2025-03-26 08:50:25 +08:00
jxxghp
9522888a60 fix 115 2025-03-26 08:30:30 +08:00
jxxghp
70c183ae2b try fix 115 upload 2025-03-26 07:15:31 +08:00
jxxghp
5d56eb9bef fix 115 upload 2025-03-25 21:33:29 +08:00
jxxghp
a461414a04 fix 115 callback encode 2025-03-25 20:37:46 +08:00
jxxghp
5737c3dca6 fix 115 logging frequency 2025-03-25 20:00:44 +08:00
jxxghp
57ea50e59c fix 115 callback 2025-03-25 19:38:39 +08:00
jxxghp
7f630e8460 fix 115 callback 2025-03-25 19:37:00 +08:00
jxxghp
108e8502e1 fix 115 upload progress 2025-03-25 19:27:53 +08:00
jxxghp
4aa986d122 fix 115 instant upload (rapid upload) detection 2025-03-25 18:26:45 +08:00
jxxghp
60239bbfc4 fix bug 2025-03-25 13:57:39 +08:00
jxxghp
93ef3b1f1a add debug logging 2025-03-25 13:48:00 +08:00
jxxghp
d9ed135be4 fix 115 2025-03-25 12:58:03 +08:00
jxxghp
e83fe0aabe fix storage logging 2025-03-25 08:34:36 +08:00
jxxghp
4be7426ae7 fix 115 2025-03-24 22:57:16 +08:00
jxxghp
0ce5ef7f56 fix 115 upload 2025-03-24 21:49:27 +08:00
jxxghp
c2c0946423 fix 115 upload 2025-03-24 21:39:03 +08:00
jxxghp
63049f61f7 fix typing 2025-03-24 19:14:04 +08:00
jxxghp
1918b0f192 fix 115 api 2025-03-24 19:11:18 +08:00
jxxghp
a3ad49b1fa fix 115 api 2025-03-24 19:03:57 +08:00
jxxghp
bed63d1e2b fix 115 api 2025-03-24 19:02:24 +08:00
jxxghp
4a8e739686 fix 115 api 2025-03-24 13:11:23 +08:00
jxxghp
d502f33041 fix 115 open api 2025-03-24 12:04:23 +08:00
jxxghp
4a0ecf36c7 fix typing 2025-03-24 08:40:18 +08:00
jxxghp
afb9e49755 fix typing 2025-03-24 08:11:02 +08:00
jxxghp
18f65e5597 fix year type 2025-03-23 23:16:11 +08:00
jxxghp
22b69f7dac fix blanke 2025-03-23 22:35:37 +08:00
jxxghp
15df062825 Update discover.py 2025-03-23 22:23:31 +08:00
jxxghp
ed607d3895 Update recommend.py 2025-03-23 21:57:48 +08:00
jxxghp
f9b0db623d fix cython type error 2025-03-23 21:39:37 +08:00
jxxghp
740cf12c11 fix cython errors 2025-03-23 19:09:48 +08:00
jxxghp
4c4bf698b1 Update scheduler.py 2025-03-23 18:26:36 +08:00
jxxghp
dc74e749c9 Update bulit-lite.yml 2025-03-23 18:03:30 +08:00
jxxghp
fa52c542d7 fix lite Dockerfile 2025-03-23 15:55:02 +08:00
jxxghp
850d480c7c fix:build lite 2025-03-23 14:48:20 +08:00
jxxghp
a92cc9dce9 Update bulit-lite.yml 2025-03-23 14:31:29 +08:00
jxxghp
4944a0a456 Update Dockerfile.lite 2025-03-23 14:28:45 +08:00
jxxghp
13c40058a8 fix:build lite 2025-03-23 13:00:07 +08:00
jxxghp
1410c03c26 feat:build lite 2025-03-23 12:40:14 +08:00
jxxghp
2f38b3040d fix: correct code compatibility issues 2025-03-23 12:10:21 +08:00
jxxghp
79411a7350 fix: correct code compatibility issues 2025-03-23 09:00:24 +08:00
jxxghp
ee94c2af32 Merge pull request #4034 from DDS-Derek/dev 2025-03-22 11:31:25 +08:00
DDSRem
d46e5c8d86 bump: docker version 6.1.3 to 7.1.0 2025-03-22 11:13:06 +08:00
jxxghp
95cd10bfba fix #4014 2025-03-22 08:15:58 +08:00
jxxghp
59ed08b92d fix 115 api 2025-03-21 21:08:14 +08:00
jxxghp
2b9f7bca51 fix 115 api 2025-03-21 21:01:37 +08:00
jxxghp
a860a8c02b fix 115 open api 2025-03-21 19:06:53 +08:00
jxxghp
f2cbb8d2f7 fix 115 open api 2025-03-21 18:53:26 +08:00
jxxghp
ea61599589 add 115 open api 2025-03-21 13:27:31 +08:00
jxxghp
0b59c95f63 fix #4029 2025-03-21 11:24:08 +08:00
jxxghp
66d4308810 fix https://github.com/jxxghp/MoviePilot-Frontend/issues/312 2025-03-21 11:19:29 +08:00
jxxghp
f2648df2ad add special domains 2025-03-20 13:00:53 +08:00
jxxghp
d20f68e897 remove setup.py 2025-03-20 08:53:02 +08:00
jxxghp
338021645d Update requirements.in 2025-03-19 21:50:26 +08:00
jxxghp
a0a11842cb fix workflow count 2025-03-15 10:16:25 +08:00
jxxghp
f5832d6a25 Merge pull request #4012 from fanrongbin/v2 2025-03-14 17:22:23 +08:00
Robin-PC-X1C
8fa6d9de39 20250314 Modify rss.py
Reason: when the administrator adds multiple Douban IDs in MP, subscription notifications from different Douban users are all labeled "豆瓣想看" (Douban wishlist) and cannot be distinguished
After the change: fetch the Douban nickname so that subscription notification messages can distinguish Douban usernames
2025-03-14 16:42:41 +08:00
jxxghp
e662338d6f Merge pull request #3995 from KoWming/v2 2025-03-10 12:48:31 +08:00
KoWming
2c1d6817dd Update security.py 2025-03-10 12:46:06 +08:00
jxxghp
5d4a3fec1f v2.3.4
- Added support for setting a time range for sending messages
- Explore tabs support drag-and-drop sorting
- Fixed actor avatars not displaying
- Fixed site rate limiting not taking effect
- Fixed scheduled tasks disappearing after settings were saved repeatedly within a short time
- Fixed workflow execution data being accumulated
2025-03-10 10:08:34 +08:00
jxxghp
6603a30e7e fix MessageQueueManager 2025-03-10 10:02:32 +08:00
jxxghp
81d08ca517 fix MessageQueueManager 2025-03-10 08:24:28 +08:00
jxxghp
e04506a614 fix workflow message link 2025-03-09 21:07:52 +08:00
jxxghp
39756512ae feat: support a time range for sending messages 2025-03-09 19:34:05 +08:00
jxxghp
71c29ea5e7 fix ide warnings 2025-03-09 18:35:52 +08:00
jxxghp
87ce266b14 fix warnings 2025-03-09 16:48:32 +08:00
jxxghp
ed6d856c24 Merge remote-tracking branch 'origin/v2' into v2 2025-03-09 16:33:01 +08:00
jxxghp
d3ecbef946 fix warnings 2025-03-09 08:37:05 +08:00
jxxghp
7b24f5eb21 fix: site rate limiting 2025-03-07 08:19:28 +08:00
jxxghp
e1f82e338a fix: add a lock to scheduled task initialization 2025-03-07 08:07:57 +08:00
jxxghp
a835d34a01 Merge pull request #3975 from so1ve/patch-1 2025-03-06 06:54:11 +08:00
Ray
79d70c9977 fix: torrents tagged "官组" (official group) should be recognized as official releases 2025-03-05 22:10:28 +08:00
jxxghp
aea82723cb Merge pull request #3965 from mackerel-12138/fix_s0_scrap 2025-03-05 11:56:22 +08:00
zhanglijun
d47ff0b31a Fix incorrect S0 episode info 2025-03-04 23:18:41 +08:00
jxxghp
affcb9d5c3 fix bug 2025-03-04 14:22:32 +08:00
jxxghp
9be2686733 Merge pull request #3957 from thsrite/v2 2025-03-03 14:22:06 +08:00
thsrite
7126fed2b5 fix docker container log duplicate printing 2025-03-03 13:44:38 +08:00
jxxghp
5bc4330e1c Fix HDDolby 2025-03-02 14:55:18 +08:00
jxxghp
b25ac7116e Update hddolby.py 2025-03-02 14:41:55 +08:00
jxxghp
8896867bb3 Update fetch_medias.py 2025-03-02 14:23:37 +08:00
jxxghp
ba7c9eec7b fix 2025-03-02 13:16:46 +08:00
jxxghp
9b95fde8d1 v2.3.3
- Added support for multiple indexer and authentication sites
- HDDolby switched to using its API (site settings need to be adjusted, otherwise site data cannot be refreshed)
- Adjusted the domain used for IYUU authentication
- Continued improving task workflows
2025-03-02 12:48:32 +08:00
jxxghp
2851f16395 feat: add a caching mechanism to actions 2025-03-02 12:27:36 +08:00
jxxghp
0d63dfb931 fix actions 2025-03-02 11:15:52 +08:00
jxxghp
37558e3135 Update hddolby.py 2025-03-02 10:24:17 +08:00
jxxghp
96021e42a2 fix 2025-03-02 10:08:03 +08:00
jxxghp
c32b845515 feat: add a recognition option to actions 2025-03-02 09:45:24 +08:00
jxxghp
147d980c54 fix hddolby 2025-03-02 08:51:09 +08:00
jxxghp
f91c43dde9 fix hddolby 2025-03-02 08:08:46 +08:00
jxxghp
4cf5cb06a0 fix hddolby 2025-03-02 08:06:25 +08:00
jxxghp
8e4b4c3144 add hddolby userdata api 2025-03-01 21:28:15 +08:00
jxxghp
c302013696 add hddolby api 2025-03-01 21:24:01 +08:00
jxxghp
37cb94c59d add hddolby api 2025-03-01 21:08:37 +08:00
jxxghp
01f7c6bc2b fix 2025-03-01 18:55:16 +08:00
jxxghp
8bd6ccb0de fix: improve event and message sending 2025-03-01 18:34:39 +08:00
jxxghp
ed8895dfbb v2.3.2
- Task workflows support manual stop, importing/exporting flow data, improved action components, etc.
2025-03-01 15:51:15 +08:00
jxxghp
a55632051b fix fetch_medias action 2025-03-01 13:54:29 +08:00
jxxghp
7e347a458d add ScanFileAction 2025-02-28 21:23:44 +08:00
jxxghp
cce71f23e2 add ScanFileAction 2025-02-28 21:11:51 +08:00
jxxghp
d68461a127 Update scheduler.py 2025-02-28 19:37:39 +08:00
jxxghp
1bd12a9411 feat: manual workflow abort 2025-02-28 19:02:38 +08:00
jxxghp
4086ba4763 Update version.py 2025-02-28 12:30:45 +08:00
jxxghp
6a9cdf71d7 fix AddDownloadAction 2025-02-28 12:12:52 +08:00
jxxghp
a9644c4f86 fix actions 2025-02-28 11:56:26 +08:00
jxxghp
cf62ad5e8e fix actions 2025-02-28 11:15:24 +08:00
jxxghp
f8ed16666c fix actions execute 2025-02-27 20:39:42 +08:00
jxxghp
37926b4c19 fix actions 2025-02-27 18:58:11 +08:00
jxxghp
b080a2003f fix actions 2025-02-27 17:08:38 +08:00
jxxghp
ab0008be86 fix actions 2025-02-27 13:09:01 +08:00
jxxghp
4a42b0d000 fix import 2025-02-26 21:13:41 +08:00
jxxghp
e3d4b19dac fix actionid type 2025-02-26 20:28:10 +08:00
jxxghp
403d600db4 fix workflow edit api 2025-02-26 19:06:30 +08:00
jxxghp
835e6e8891 fix workflow scheduler 2025-02-26 18:32:25 +08:00
jxxghp
eec25113b5 fix workflow scheduler 2025-02-26 18:24:27 +08:00
jxxghp
a7c4161f91 fix workflow executor 2025-02-26 12:57:57 +08:00
jxxghp
799eb9e6ef add workflow executor 2025-02-26 08:37:37 +08:00
jxxghp
88993cb67b fix workflow api 2025-02-25 17:27:21 +08:00
jxxghp
0dc9c98c06 fix workflow api 2025-02-25 13:35:32 +08:00
jxxghp
c1c91cec44 fix workflow api 2025-02-25 13:25:56 +08:00
jxxghp
19b6927320 fix workflow process 2025-02-25 12:42:15 +08:00
jxxghp
0889ebc8b8 fix workflow schema 2025-02-25 08:25:19 +08:00
jxxghp
fb249c0ea5 fix workflow execute 2025-02-25 08:22:02 +08:00
jxxghp
feb22ff0a7 Merge pull request #3922 from WingGao/v2 2025-02-22 17:51:13 +08:00
WingGao
3c95156ce1 fix: alist should not cache failed results 2025-02-22 15:05:04 +08:00
jxxghp
8b6dca6a46 fix bug 2025-02-22 11:22:21 +08:00
jxxghp
43907eea26 fix 2025-02-22 11:12:14 +08:00
jxxghp
67145a80d0 add workflow apis 2025-02-22 10:35:57 +08:00
jxxghp
0b3138fec6 fix actions 2025-02-22 09:57:32 +08:00
jxxghp
b84896b4f9 Merge pull request #3919 from InfinityPacer/feature/plugin 2025-02-22 07:46:02 +08:00
InfinityPacer
efd046d2f8 fix(plugin): handle None response for online plugins retrieval 2025-02-22 00:34:35 +08:00
jxxghp
06fcf817bb Merge pull request #3917 from gtsicko/v2 2025-02-21 07:29:23 +08:00
gtsicko
16a94d9054 fix: WECHAT_PROXY with a path not taking effect 2025-02-20 23:41:14 +08:00
jxxghp
5bf502188d fix 2025-02-20 19:32:58 +08:00
jxxghp
5269b4bc82 fix #3914
feat: search supports specifying sites
2025-02-20 13:03:12 +08:00
jxxghp
e3f8ed9886 add downloads path 2025-02-20 10:51:22 +08:00
jxxghp
74de554fb0 Merge pull request #3914 from TimoYoung/v2 2025-02-19 18:01:49 +08:00
jxxghp
b41de1a982 fix actions 2025-02-19 17:44:14 +08:00
Timo_Young
25f7d9ccdd Merge branch 'jxxghp:v2' into v2 2025-02-19 17:28:22 +08:00
yangyux
9646745181 fix: recognition error when mtype is empty and the tmdbid exists in both movie and tv 2025-02-19 17:27:38 +08:00
jxxghp
1317d9c4f0 fix actions 2025-02-19 16:43:42 +08:00
jxxghp
351029a842 fix AddDownloadAction 2025-02-19 15:24:13 +08:00
jxxghp
15e1fb61ac fix actions 2025-02-19 08:33:15 +08:00
jxxghp
1889a829b5 fix workflow process 2025-02-19 08:16:35 +08:00
jxxghp
53a14fce38 fix workflow process 2025-02-19 08:15:49 +08:00
jxxghp
d9ed7b09c7 v2.3.0
- Site resource browsing supports keyword and category search; optimized the UI and changed the interaction when clicking site cards
- Optimized the "more" menu, scrollbars and other UI details in APP mode
2025-02-18 17:05:24 +08:00
jxxghp
4dcb18f00e fix: site browse api 2025-02-18 16:32:10 +08:00
jxxghp
0a52fe0a7a refactor: site browse api 2025-02-17 19:01:05 +08:00
jxxghp
e5a4d11cf9 fix workflow 2025-02-17 15:08:24 +08:00
jxxghp
6c233f13de fix workflow chain 2025-02-17 12:38:29 +08:00
jxxghp
00aee3496c add workflow oper 2025-02-17 11:54:11 +08:00
jxxghp
77ae40e3d6 fix workflow 2025-02-17 11:40:32 +08:00
jxxghp
68cba44476 fix modules load 2025-02-16 17:24:17 +08:00
jxxghp
b86d06f632 add workflow lifecycle 2025-02-16 16:53:38 +08:00
jxxghp
0b7cf305a0 add action templates 2025-02-16 13:45:15 +08:00
jxxghp
21ae36bc3a add action templates 2025-02-16 12:52:29 +08:00
jxxghp
4e2d9e9165 Merge pull request #3899 from Mister-album/v2-sync 2025-02-15 08:10:15 +08:00
Mister-album
6cee308894 Add a feature to append a .default suffix to a specified subtitle to set it as the default subtitle 2025-02-14 19:58:29 +08:00
jxxghp
b8f4cd5fea v2.2.9
- Resource packages upgraded to improve security
- Optimized the page data refresh mechanism

Note: after this upgrade, the torrent recognition cache will be cleared once by default
2025-02-14 19:35:49 +08:00
jxxghp
aa1557ad9e fix setup 2025-02-14 17:37:10 +08:00
jxxghp
f03da6daca fix setup 2025-02-14 17:17:16 +08:00
jxxghp
30eb4385d4 fix sites 2025-02-14 13:44:18 +08:00
jxxghp
4c9afcc1a8 fix 2025-02-14 13:32:20 +08:00
jxxghp
dd47432a45 fix 2025-02-14 12:55:32 +08:00
jxxghp
0ba6974bd6 fix #3843
fix #3829
2025-02-13 08:08:13 +08:00
jxxghp
827d8f6d84 add workflow framework 2025-02-12 17:49:01 +08:00
jxxghp
943a462c69 Merge pull request #3885 from InfinityPacer/feature/security 2025-02-11 17:21:04 +08:00
InfinityPacer
a1bc773fb5 feat(security): add AVIF support 2025-02-11 17:10:50 +08:00
InfinityPacer
ac169b7d22 feat(security): add cache default extension for files without suffix 2025-02-11 17:09:43 +08:00
jxxghp
eecbbfea3a Update version.py 2025-02-10 22:28:06 +08:00
jxxghp
635ddb044e add depends for DiscoverMediaSource 2025-02-10 22:05:56 +08:00
jxxghp
1a6123489d Update config.py 2025-02-10 07:52:40 +08:00
jxxghp
4e69195a8d Merge pull request #3876 from InfinityPacer/feature/security 2025-02-10 07:11:28 +08:00
InfinityPacer
e48c8ee652 Revert "fix is_safe_url"
This reverts commit 5e2ad34864.
2025-02-10 02:22:53 +08:00
InfinityPacer
7df07b86b9 feat(security): add cmvideo image for http with port 2025-02-10 02:19:08 +08:00
jxxghp
5e2ad34864 fix is_safe_url 2025-02-09 22:08:21 +08:00
jxxghp
e9a147d43c Update config.py 2025-02-09 16:30:47 +08:00
jxxghp
a340ee045e Update config.py 2025-02-09 16:28:41 +08:00
jxxghp
12405f3c34 v2.2.8
- Recommendations can be extended via plugins, with support for showing/hiding rankings
- Improved support for discovery plugins and strengthened plugin UI configuration capabilities
- Optimized the genre filter conditions for TheMovieDB discovery
- The DOH option now defaults to off
2025-02-09 12:14:40 +08:00
jxxghp
1e465ee231 refactor: optimize API structure 2025-02-09 11:44:53 +08:00
jxxghp
f06c24c23e refactor apis 2025-02-08 21:47:43 +08:00
jxxghp
4b93ee4843 更新 version.py 2025-02-08 20:19:39 +08:00
jxxghp
c022e05ab9 DOH_ENABLE => false 2025-02-08 16:50:51 +08:00
jxxghp
c2a0d9d657 add media/seasons api 2025-02-08 12:44:47 +08:00
jxxghp
6fcf2c2f1f add SECURITY_IMAGE_DOMAINS 2025-02-08 08:18:58 +08:00
jxxghp
bc37daef58 - Added image security domains to support discovery plugins 2025-02-07 18:23:25 +08:00
jxxghp
fab5995c4e feat: add security domain thetvdb.com 2025-02-07 18:11:36 +08:00
jxxghp
0ba8aa75f5 v2.2.7 2025-02-07 08:12:01 +08:00
jxxghp
e24b3ed07a feat: use name and year as a fallback for conversion 2025-02-06 20:31:37 +08:00
jxxghp
f9bddcb406 feat: subscriptions support a generic mediaid 2025-02-06 19:19:43 +08:00
jxxghp
247b3b24a1 fix DiscoverMediaSource 2025-02-06 18:03:27 +08:00
jxxghp
759c18acda feat: add DiscoverSource and MediaRecognizeConvert events 2025-02-06 17:35:58 +08:00
jxxghp
b2462c5950 fix 2025-02-06 11:48:56 +08:00
jxxghp
3d947f712c feat: relax media library type restrictions 2025-02-06 11:45:10 +08:00
jxxghp
89d917e487 fix 2025-02-05 17:41:30 +08:00
jxxghp
28b0a20b26 Merge pull request #3852 from zouyonghao/v2 2025-02-05 15:59:29 +08:00
Yonghao Zou
6d4396f4ba fix(jellyfin): support audio event 2025-02-05 15:23:01 +08:00
jxxghp
75dd0f27cf Update version.py 2025-02-04 13:30:02 +08:00
jxxghp
cb9be86c10 Update version.py 2025-02-03 11:57:21 +08:00
jxxghp
0b8f021505 Merge pull request #3845 from InfinityPacer/feature/event 2025-02-03 07:40:58 +08:00
InfinityPacer
f2d3b1c13f feat(event): add mediainfo field for TransferInterceptEventData 2025-02-03 01:46:22 +08:00
InfinityPacer
6f24c6ba49 fix(event): reorder code execution 2025-02-03 00:14:15 +08:00
jxxghp
c5a9df88dc Merge pull request #3841 from InfinityPacer/feature/cache 2025-02-02 12:24:52 +08:00
InfinityPacer
20b2df364a chore(deps): add async_timeout~=5.0.1 for redis if Python < 3.11.3 2025-02-02 12:04:09 +08:00
jxxghp
e89103b96f Merge pull request #3840 from InfinityPacer/feature/cache 2025-02-02 11:30:04 +08:00
InfinityPacer
49f1c9c10b fix(cache): check cache existence when skip_none is False 2025-02-02 11:18:02 +08:00
InfinityPacer
b320c84c4c fix(cache): refine caching behavior for Fanart requests 2025-02-02 11:17:36 +08:00
jxxghp
e916b84ee5 Merge pull request #3839 from InfinityPacer/feature/site 2025-02-02 07:03:14 +08:00
InfinityPacer
18633a3b41 fix(site): update seeding parse for audiences 2025-02-02 01:06:43 +08:00
jxxghp
0683498497 fix #3833 2025-01-31 18:40:12 +08:00
jxxghp
7468fa4f1e Merge pull request #3833 from cddjr/fix_site_test
fix: site test falsely reports auth failure or expired Cookie on network errors
2025-01-31 18:27:03 +08:00
景大侠
ab2b33a9fd fix: site test falsely reports auth failure or expired Cookie on network errors 2025-01-31 16:53:40 +08:00
InfinityPacer
8bedac023b Merge pull request #3831 from InfinityPacer/feature/event
fix(event): update event type to TransferIntercept
2025-01-31 13:45:05 +08:00
InfinityPacer
7893b41175 fix(event): update event type to TransferIntercept 2025-01-31 13:43:57 +08:00
jxxghp
ab73dbb3cd Update version.py 2025-01-31 12:36:35 +08:00
jxxghp
cb042dbe68 Merge pull request #3830 from InfinityPacer/feature/event 2025-01-31 07:27:30 +08:00
InfinityPacer
bba0d363d7 feat(event): update comment 2025-01-31 01:40:15 +08:00
InfinityPacer
8635d8c53f feat(event): add TransferIntercept event for cancel transfer 2025-01-31 01:37:14 +08:00
jxxghp
dae6894e8b Merge pull request #3829 from cddjr/fix_missing_episodes_info 2025-01-30 21:25:05 +08:00
景大侠
b76991a027 fix: file organizing loses episode info in certain cases 2025-01-30 21:14:34 +08:00
jxxghp
de61c43db4 fix #3828 2025-01-30 20:10:15 +08:00
jxxghp
890afc2a72 fix bug 2025-01-30 20:04:33 +08:00
jxxghp
8d4e1f3af6 Update user_oper.py 2025-01-30 09:45:30 +08:00
jxxghp
85507a4fff feat: convert to the MP username when subscribing via message 2025-01-30 08:37:35 +08:00
jxxghp
6d395f9866 add UserOper list 2025-01-29 19:55:46 +08:00
jxxghp
c589f42181 fix 2025-01-29 19:02:40 +08:00
jxxghp
87bb121060 Merge pull request #3824 from cddjr/feat_tmdb_content_rating 2025-01-29 17:34:56 +08:00
景大侠
42cd35ab3c feat(TMDB): add content rating scraping 2025-01-29 16:01:44 +08:00
jxxghp
669da0d882 Merge pull request #3821 from InfinityPacer/feature/subscribe 2025-01-29 07:03:41 +08:00
InfinityPacer
9ac1346f80 fix(subscribe): support default filter group when add 2025-01-28 23:44:26 +08:00
jxxghp
f6981734d0 Update version.py 2025-01-28 16:06:03 +08:00
jxxghp
cb6aa61b6b fix apis 2025-01-27 17:56:32 +08:00
jxxghp
2ed9cfcc9a fix api 2025-01-27 17:08:22 +08:00
jxxghp
2e796f41cb fix api 2025-01-27 13:45:57 +08:00
jxxghp
7d13e43c6f fix apis 2025-01-27 11:09:05 +08:00
jxxghp
db684de6e9 Merge pull request #3815 from Akimio521/fix/transfer-background 2025-01-27 08:16:22 +08:00
Akimio521
510ef59aa0 fix: some fileitem.size values are None when calculating tasks 2025-01-27 00:35:41 +08:00
jxxghp
d56083a29e Merge pull request #3810 from Akimio521/feat/alist-token 2025-01-26 08:54:14 +08:00
Akimio521
8aed2b334e feat: support authentication with a permanent token 2025-01-25 22:31:53 +08:00
jxxghp
3bf27f224c Merge pull request #3808 from InfinityPacer/feature/plugin 2025-01-25 07:42:44 +08:00
InfinityPacer
dc9a54e74f fix(command): ensure command data isolation by using deepcopy 2025-01-25 00:32:42 +08:00
InfinityPacer
79dc194dd6 feat(plugin): add kwargs support for post_message 2025-01-25 00:18:08 +08:00
jxxghp
8e12249201 Merge pull request #3804 from InfinityPacer/feature/subscribe 2025-01-24 17:46:00 +08:00
InfinityPacer
4fa8f5b248 feat(event): use latest subscribe_info in SubscribeModified 2025-01-24 17:26:54 +08:00
InfinityPacer
3089c0c524 feat(event): add old_subscribe_info to event and update triggers 2025-01-24 17:24:29 +08:00
jxxghp
ba1ca0819e fix: check history records when following a subscription 2025-01-23 13:16:32 +08:00
jxxghp
4666b9051d Update version.py 2025-01-23 07:11:50 +08:00
jxxghp
56c524a822 Merge pull request #3792 from InfinityPacer/feature/cache 2025-01-23 06:55:31 +08:00
jxxghp
43e8df1b9f Merge pull request #3791 from InfinityPacer/feature/subscribe 2025-01-23 06:54:09 +08:00
InfinityPacer
dbc465f6e5 fix(cache): update tmdb match_web base_wait to 5 for finer control 2025-01-23 00:11:36 +08:00
InfinityPacer
bfbd3c527c fix(cache): ensure consistent parameter ordering in get_cache_key 2025-01-22 23:53:19 +08:00
InfinityPacer
412405f69b fix(subscribe): optimize site list retrieval in get_sub_sites 2025-01-22 23:22:15 +08:00
jxxghp
12b74eb04f Update subscribe.py 2025-01-22 22:50:51 +08:00
jxxghp
2305a6287a fix #3777 2025-01-22 22:23:15 +08:00
jxxghp
68245be081 fix meta 2025-01-22 22:20:17 +08:00
jxxghp
29e01294bd Merge pull request #3789 from InfinityPacer/feature/cache
fix(cache): enhance fanart image caching
2025-01-22 22:16:37 +08:00
InfinityPacer
d35bee54a6 fix(limit): log accurate wait time after triggering limit 2025-01-22 21:34:59 +08:00
InfinityPacer
bf63be18e4 fix(cache): enhance fanart image caching 2025-01-22 21:34:44 +08:00
jxxghp
3dc7adc61a Update scheduler.py 2025-01-22 19:39:02 +08:00
jxxghp
047d1e0afd Merge pull request #3788 from InfinityPacer/feature/cache
feat(cache): optimize serialization with type-based caching
2025-01-22 18:47:08 +08:00
InfinityPacer
7c017faf31 feat(cache): optimize serialization with type-based caching 2025-01-22 17:41:11 +08:00
jxxghp
7a59565761 fix: optimize the recognition volume during subscription matching 2025-01-22 16:37:49 +08:00
jxxghp
9afb904d40 Merge pull request #3785 from InfinityPacer/feature/cache
fix(cache): enhance tmdb match_web rate-limiting and caching
2025-01-22 15:27:08 +08:00
jxxghp
8189de589a fix #3775 2025-01-22 15:21:10 +08:00
jxxghp
c458d7525d fix #3778 2025-01-22 15:01:24 +08:00
InfinityPacer
5c7bd95f6b fix(cache): enhance tmdb match_web rate-limiting and caching 2025-01-22 14:58:56 +08:00
InfinityPacer
70c4509682 feat(cache): add exists to check key presence in cache backends 2025-01-22 14:25:30 +08:00
jxxghp
f34e36c571 feat: follow subscription sharers 2025-01-22 13:32:13 +08:00
jxxghp
5054ffe7e4 Merge pull request #3776 from kiri-to/patch-1 2025-01-21 19:38:30 +08:00
kiri-to
ed30933ca2 Update nexus_php.py
Fix a potential 429 issue during "site data refresh"
2025-01-21 19:10:52 +08:00
jxxghp
2a4111ecce Update version.py 2025-01-21 12:56:09 +08:00
jxxghp
5bc8709605 fix: episode count not recognized for "全x集" (complete x episodes) titles 2025-01-21 08:16:20 +08:00
jxxghp
efa2edf869 fix 2025-01-20 18:28:06 +08:00
jxxghp
5c1e972feb Update __init__.py 2025-01-20 16:02:53 +08:00
jxxghp
8c23e7a7b7 fix #3760 2025-01-20 08:30:29 +08:00
jxxghp
57183f8cdc Merge pull request #3759 from wikrin/v2 2025-01-20 07:13:26 +08:00
Attente
0481b49c04 refactor(app/helper): optimize module reloading mechanism 2025-01-19 22:40:07 +08:00
jxxghp
7eb9b5e92d Merge pull request #3755 from InfinityPacer/feature/cache 2025-01-19 12:56:56 +08:00
InfinityPacer
2a409d83d4 feat(redis): update redis maxmemory 2025-01-19 12:41:30 +08:00
jxxghp
785a3f5de8 Merge pull request #3752 from InfinityPacer/feature/cache 2025-01-19 08:06:50 +08:00
InfinityPacer
7c17c1c73b feat(redis): update comments 2025-01-19 05:18:49 +08:00
InfinityPacer
0ea429782c feat(redis): optimize serialize 2025-01-19 05:13:31 +08:00
InfinityPacer
7a8f880dbe feat(redis): optimize memory limit and cache cleanup 2025-01-19 04:28:16 +08:00
InfinityPacer
0a86b72110 feat(redis): add encoding for keys and optimize deletion 2025-01-19 04:28:16 +08:00
InfinityPacer
cb5c06ee7e feat(redis): add Redis support 2025-01-19 04:28:15 +08:00
InfinityPacer
9f22ce5cc0 feat(cache): remove maxsize from recommend_cache 2025-01-19 01:26:33 +08:00
jxxghp
86e1fbc28a Merge pull request #3751 from InfinityPacer/feature/cache 2025-01-19 01:02:54 +08:00
InfinityPacer
a5c5f7c718 feat(cache): enhance cache region and decorator 2025-01-19 00:55:45 +08:00
InfinityPacer
ff5d94782f fix(TMDB): adjust trending cache maxsize to 1024 2025-01-18 23:45:03 +08:00
jxxghp
58a1bd2c86 Merge pull request #3750 from wikrin/v2 2025-01-18 07:08:18 +08:00
jxxghp
f78ba6afb0 Merge pull request #3749 from InfinityPacer/feature/cache 2025-01-18 07:07:52 +08:00
Attente
331f3455f8 fix: specified episode numbers 2025-01-18 02:52:26 +08:00
InfinityPacer
ad0241b7f1 feat(cache): set default skip_empty to False 2025-01-18 02:44:56 +08:00
InfinityPacer
d9508533e1 feat(cache): add cache region support 2025-01-18 02:32:08 +08:00
InfinityPacer
6d2059447e feat(cache): enhance get_plugins to skip empty during network errors 2025-01-18 02:14:01 +08:00
InfinityPacer
11d4f27268 feat(cache): migrate cachetools usage to unified cache backend 2025-01-18 02:12:20 +08:00
InfinityPacer
a29f987649 feat(cache): add cache backend implementations and decorator support 2025-01-18 02:10:17 +08:00
jxxghp
3e692c790e Merge remote-tracking branch 'origin/v2' into v2 2025-01-17 20:31:40 +08:00
jxxghp
35cc214492 fix #3743 2025-01-17 20:31:23 +08:00
jxxghp
bae7bff70d fix #3744 2025-01-17 16:41:01 +08:00
jxxghp
71ef6f6a61 fix media_files Exception 2025-01-17 12:32:55 +08:00
jxxghp
a8e161661c v2.2.2
- Shared subscriptions can be deleted (only applies to newly shared subscriptions)
- Media info search supports collections
- Optimized real-time log performance
- Optimized the priority at which subscription recognition words take effect
- Optimized multiple UI details
2025-01-16 19:59:52 +08:00
jxxghp
2b07766f9a feat: support searching collections 2025-01-16 17:58:52 +08:00
jxxghp
adeb5361ab feat: support searching collections 2025-01-16 17:51:47 +08:00
jxxghp
bd6e43c41d Merge pull request #3737 from wikrin/event 2025-01-15 20:39:34 +08:00
Attente
450289c7b7 feat(event): add subscription modified event
- After `editing` subscription info, send a subscription modified event
- Add the `EventType.SubscribeModified` enum value
- The event data contains `subscribe_id: int` and the updated subscription info `subscribe_info: dict`
2025-01-15 20:16:32 +08:00
jxxghp
aa93c560e5 feat: delete shared subscriptions 2025-01-15 13:31:16 +08:00
jxxghp
22b1ebe1cf fix #3724 2025-01-15 08:14:39 +08:00
jxxghp
84bcf15e9b Merge pull request #3724 from wikrin/subscribe_words
fix: subscription recognition words not taking effect during the download stage
2025-01-15 08:10:03 +08:00
Attente
5b66803f6d fix: subscription recognition words not taking effect during the download stage
- Move `season/episode matching` from the `priority rule group matching module` into the `torrent helper class`
  - `FilterModule.__match_season_episodes()` ==> `TorrentHelper.match_season_episodes()`
  - Ensure torrents that need `subscription recognition words` to `offset season/episode` are matched correctly
2025-01-15 03:43:50 +08:00
Attente
88cbde47da fix: update torrent metadata after applying subscription recognition words; filter empty items from extra parameters 2025-01-15 03:23:05 +08:00
Attente
03b96fa88b fix: type annotations 2025-01-15 02:54:22 +08:00
jxxghp
397a8a9536 v2.2.1
- Subscription sharing supports search terms; fixed the subscription reuse count display
- Added the `VCronField` frontend component for plugins to simplify cron expression input
2025-01-13 12:52:58 +08:00
jxxghp
1da0a706a3 fix subscription matching cache issue 2025-01-13 12:41:25 +08:00
jxxghp
4f2a110b5f fix subscription matching cache issue 2025-01-13 12:11:56 +08:00
jxxghp
bb356ffcee Merge pull request #3721 from InfinityPacer/feature/site 2025-01-13 11:41:19 +08:00
jxxghp
6c986416ca fix: show the reuse count for shared subscriptions 2025-01-13 08:55:06 +08:00
jxxghp
951ec138ef Merge pull request #3720 from InfinityPacer/feature/site 2025-01-13 07:09:23 +08:00
InfinityPacer
23e779ed94 fix(site): handle NoneType for userdata.user_level in regex search 2025-01-13 02:02:08 +08:00
InfinityPacer
29fccd3887 fix(site): update regex for unread message matching 2025-01-13 01:30:59 +08:00
jxxghp
1bef723332 Merge pull request #3717 from cddjr/fix_mteam_test 2025-01-12 21:25:16 +08:00
景大侠
3c41fed0ef fix M-Team connectivity test failure 2025-01-12 20:14:30 +08:00
jxxghp
5947d0e6d0 fix transfer 2025-01-09 22:24:01 +08:00
jxxghp
0e4fa86372 Update transfer.py 2025-01-09 21:34:37 +08:00
jxxghp
f32405b646 fix downloader organizing 2025-01-09 21:06:31 +08:00
jxxghp
13955dafe3 v2.2.0
- Shared subscriptions are refreshed and take effect immediately after sharing
- Added `YemaPT` to supported authentication sites
- Bug fixes and detail improvements
2025-01-09 19:22:20 +08:00
jxxghp
eaca396a9f add rsa 2025-01-09 18:53:55 +08:00
jxxghp
fabd9f2f75 feat: clear the cache after sharing a subscription 2025-01-09 16:01:52 +08:00
jxxghp
0d8480769f feat: do not send messages during real-time manual organizing 2025-01-09 12:58:09 +08:00
jxxghp
dc850f1c48 fix version 2025-01-09 12:32:46 +08:00
jxxghp
fb311f3d8a fix #3583 2025-01-09 07:59:17 +08:00
jxxghp
293d89510a fix bug 2025-01-08 12:28:53 +08:00
jxxghp
9446e88012 fix #3689 2025-01-08 11:37:58 +08:00
jxxghp
6f593beeed fix #3687 2025-01-07 20:58:27 +08:00
jxxghp
0dc20cd9b4 Merge pull request #3689 from InfinityPacer/feature/transfer 2025-01-07 20:40:47 +08:00
InfinityPacer
a0543e914e fix(transfer): switch downloader monitor to foreground 2025-01-07 19:54:53 +08:00
jxxghp
1435cd6526 Merge pull request #3686 from InfinityPacer/feature/recommend 2025-01-07 16:30:42 +08:00
jxxghp
7e24181c37 fix noqa 2025-01-07 14:44:44 +08:00
jxxghp
922c391ffc fix 2025-01-07 14:39:15 +08:00
jxxghp
39169e8faa fix 2025-01-07 14:38:26 +08:00
jxxghp
433712aa80 fix tvdbapi 2025-01-07 14:36:37 +08:00
jxxghp
23650657cd add noqa
fix #3670
2025-01-07 14:20:31 +08:00
jxxghp
b5d58b8a9e Update __init__.py 2025-01-07 07:19:04 +08:00
jxxghp
0514ff0189 Update __init__.py 2025-01-07 07:06:40 +08:00
jxxghp
9a15e3f9b3 Merge pull request #3683 from InfinityPacer/feature/module 2025-01-07 06:56:43 +08:00
InfinityPacer
104113852a fix(recommend): add global exit handling 2025-01-07 02:04:02 +08:00
InfinityPacer
430702abd3 feat(transmission): add protocol support 2025-01-07 00:52:58 +08:00
jxxghp
d7300777cb Update version.py 2025-01-06 18:03:14 +08:00
jxxghp
4fd61a9c8d Merge pull request #3680 from InfinityPacer/feature/module 2025-01-06 17:58:33 +08:00
InfinityPacer
af2b4aa867 perf(log): optimize get_caller for improved performance 2025-01-06 17:46:35 +08:00
jxxghp
7e252f1692 fix bug 2025-01-06 13:34:51 +08:00
jxxghp
a7e7174cb2 v2.1.9
- Added an "operating user and administrators" option to the message sending scope; fixed library-add messages not being sent according to the rules
- Fixed popups causing the bottom bar UI to misalign in iOS desktop icon mode
- Optimized scraping logic
2025-01-06 12:00:38 +08:00
jxxghp
6e2d0c2aad fix #3674 2025-01-06 11:47:05 +08:00
jxxghp
aeb65d7cac fix #3618 2025-01-06 10:56:30 +08:00
jxxghp
e7c580d375 fix #3646 2025-01-06 10:28:26 +08:00
jxxghp
90fedade76 fix #3673 2025-01-06 10:08:46 +08:00
jxxghp
49d9715106 Merge pull request #3673 from Aqr-K/refactor/stringUtils
refactor(string): optimize the `compare_version` method
2025-01-06 10:04:41 +08:00
jxxghp
c194e8c59a fix scraping 2025-01-06 08:22:04 +08:00
jxxghp
b6f9315e2b Merge pull request #3675 from InfinityPacer/feature/recommend 2025-01-06 06:57:07 +08:00
InfinityPacer
f91f99de52 fix(log): update logger handlers without reset 2025-01-06 01:53:47 +08:00
InfinityPacer
3ad3a769ab fix(recommend): add global exit handling 2025-01-06 00:37:22 +08:00
Aqr-K
261bb5fa81 fix: adjust variable order to be more intuitive 2025-01-05 17:07:11 +08:00
Aqr-K
704dcf46d3 refactor(string): adjust preprocess_versionconversion_version 2025-01-05 16:54:02 +08:00
Aqr-K
9fab50edb0 refactor(string): optimize the version comparison method 2025-01-05 16:22:28 +08:00
jxxghp
5d2a911849 feat: force overwrite during manual scraping 2025-01-05 15:38:13 +08:00
jxxghp
89e96ee27a feat: messages can be sent to administrators and the operating user at the same time 2025-01-05 13:21:41 +08:00
jxxghp
41636395ff fix: user isolation for organize/library-add messages 2025-01-05 12:35:21 +08:00
jxxghp
6f1f89ac26 Merge pull request #3669 from Aqr-K/feature/plugin 2025-01-05 09:47:46 +08:00
jxxghp
607eb4b4aa v2.1.8
- Fixed known issues and optimized UI details
2025-01-04 14:20:57 +08:00
Aqr-K
3078c076dc fix(plugin): adjust check order 2025-01-04 14:20:03 +08:00
Aqr-K
a7794fa2ad feat(plugin): feat(log): plugin monitor supports hot update. 2025-01-04 05:42:51 +08:00
jxxghp
846b4e645c Merge pull request #3664 from Aqr-K/feature/log 2025-01-03 13:38:18 +08:00
Aqr-K
3775e99b02 Remove: del Todo 2025-01-03 13:17:46 +08:00
Aqr-K
cea77bddee feat(log): log supports hot update. 2025-01-03 06:08:29 +08:00
jxxghp
8ac0d169d2 fix: directory monitoring of Blu-ray original discs 2025-01-02 13:30:59 +08:00
jxxghp
d5ac9f65f6 fix bug 2025-01-01 10:50:14 +08:00
jxxghp
4b3f04c73f fix: duplicate handling in directory monitoring 2024-12-31 12:42:28 +08:00
jxxghp
bb478c949a Update version.py 2024-12-31 07:15:18 +08:00
jxxghp
11b1003d4d fix: queue entry wait time so that messages can be aggregated before sending 2024-12-30 19:25:29 +08:00
jxxghp
c0ad5f2970 fix organize queue lock 2024-12-30 19:02:16 +08:00
jxxghp
54c98cf3a1 fix duplicate messages from directory monitoring 2024-12-30 18:59:20 +08:00
jxxghp
dfbe8a2c0e fix duplicate messages from directory monitoring 2024-12-30 18:57:45 +08:00
jxxghp
873f80d534 fix duplicate queue tasks being added 2024-12-30 18:42:36 +08:00
jxxghp
089992db74 Merge pull request #3640 from InfinityPacer/feature/transfer 2024-12-30 07:00:53 +08:00
jxxghp
f07ab73fde Merge pull request #3639 from InfinityPacer/feature/site 2024-12-30 06:59:02 +08:00
InfinityPacer
166674bfe7 feat(transfer): match source dir in subdirs or prioritize same drive 2024-12-30 02:11:48 +08:00
InfinityPacer
adb4a8fe01 feat(site): add proxy support for site sync 2024-12-30 00:37:54 +08:00
jxxghp
c49e79dda3 rollback #3584 2024-12-29 14:41:55 +08:00
jxxghp
a3b5e51356 fix encoding 2024-12-29 12:54:36 +08:00
jxxghp
8f91e23208 Merge pull request #3634 from InfinityPacer/feature/subscribe 2024-12-29 07:54:12 +08:00
InfinityPacer
b768929cd8 fix(transfer): handle task removal on media info failure 2024-12-29 02:26:30 +08:00
jxxghp
49d5e5b953 v2.1.6 2024-12-28 20:10:34 +08:00
jxxghp
ce4792e87b Merge pull request #3632 from wikrin/v2 2024-12-28 20:07:10 +08:00
Attente
3ea0b1f36b refactor(app): improve code readability and consistency in FileMonitorHandler
- Rename 'size' parameter to 'file_size' in on_created and on_moved methods
- This change enhances code clarity and maintains consistency with other parts of the codebase
2024-12-28 20:05:47 +08:00
jxxghp
51c7852b77 Update transfer.py 2024-12-28 15:58:07 +08:00
jxxghp
7947f10579 fix size 2024-12-28 14:37:21 +08:00
DDSRem
fca9297fa7 Revert "Merge branch 'rfc-python-bump-312' into v2"
This reverts commit 0ec5e3b365, reversing
changes made to c18937ecc7.
2024-12-28 11:56:54 +08:00
DDSRem
0ec5e3b365 Merge branch 'rfc-python-bump-312' into v2 2024-12-28 11:55:39 +08:00
jxxghp
c18937ecc7 fix bug 2024-12-28 11:00:12 +08:00
jxxghp
8b962757b7 fix bug 2024-12-28 10:57:40 +08:00
jxxghp
2b40e42965 fix bug 2024-12-27 21:16:38 +08:00
jxxghp
0eac7816bc fix bug 2024-12-27 18:36:49 +08:00
jxxghp
e3552d4086 feat: recognition supports background processing 2024-12-27 17:45:04 +08:00
jxxghp
75bb52ccca fix: unify organize record names 2024-12-27 07:58:58 +08:00
jxxghp
22c485d177 fix 2024-12-26 21:19:18 +08:00
jxxghp
78dab5038c fix transfer apis 2024-12-26 19:58:23 +08:00
jxxghp
15cc02b083 fix transfer count 2024-12-26 19:25:23 +08:00
jxxghp
419f2e90ce Merge pull request #3621 from InfinityPacer/feature/subscribe 2024-12-26 17:25:05 +08:00
jxxghp
a29e3c23fe Merge pull request #3619 from InfinityPacer/feature/module 2024-12-26 17:24:49 +08:00
InfinityPacer
aa9ae4dd09 feat(TMDB): add episode_type field to TmdbEpisode 2024-12-26 16:39:01 +08:00
InfinityPacer
d02bf33345 feat(config): add TOKENIZED_SEARCH 2024-12-26 13:56:08 +08:00
InfinityPacer
0a1dc1724c chore(deps): add jieba~=0.42.1 for tokenization 2024-12-26 13:55:04 +08:00
jxxghp
80b866e135 Merge remote-tracking branch 'origin/v2' into v2 2024-12-26 13:29:48 +08:00
jxxghp
e7030c734e add remove queue api 2024-12-26 13:29:34 +08:00
jxxghp
e5458ee127 Merge pull request #3615 from wikrin/del_bdmv 2024-12-26 09:25:28 +08:00
Attente
3f60cb3f7d fix(storage): delete Blu-ray directory when removing movie file
- Add logic to delete `BDMV` and `CERTIFICATE` directories when a movie file is removed
- This ensures that empty Blu-ray folders are also cleaned up during the deletion process
2024-12-26 09:00:04 +08:00
jxxghp
8c800836d5 add remove queue api 2024-12-26 08:12:59 +08:00
jxxghp
abfc146335 Update transfer.py 2024-12-26 07:13:37 +08:00
jxxghp
dd4ff03b08 Merge pull request #3614 from wikrin/v2 2024-12-26 06:59:52 +08:00
jxxghp
be792cb40a Merge pull request #3613 from InfinityPacer/feature/recommend 2024-12-26 06:59:11 +08:00
Attente
cec5cf22de feat(transfer): Update file_items filtering logic to allow bluray directories 2024-12-26 02:41:49 +08:00
InfinityPacer
6ec5f3b98b feat(recommend): support caching by page 2024-12-25 23:07:56 +08:00
jxxghp
0ac43fd3c7 feat: manual organize API supports background processing 2024-12-25 20:38:00 +08:00
jxxghp
a600f2f05b Merge pull request #3611 from InfinityPacer/feature/recommend 2024-12-25 19:31:20 +08:00
InfinityPacer
0c0a1c1dad feat(recommend): support caching poster images 2024-12-25 19:24:32 +08:00
jxxghp
c69df36b98 add transfer queue api 2024-12-25 18:11:57 +08:00
jxxghp
20ac9fbfbe fix transfer log 2024-12-25 12:59:43 +08:00
jxxghp
b9756db115 fix jobview 2024-12-25 08:24:57 +08:00
jxxghp
5bfa36418b Merge pull request #3608 from wikrin/split_episode 2024-12-25 07:01:24 +08:00
Attente
30c696adfe fix(format): evaluate offset for start and end episodes 2024-12-25 05:07:54 +08:00
Attente
31887ab4b1 fix(format): improve episode parsing logic 2024-12-25 04:50:23 +08:00
jxxghp
3678de09bf Update transfer.py 2024-12-24 21:51:48 +08:00
jxxghp
3f9172146d fix MediaServerSeasonInfo 2024-12-24 21:16:56 +08:00
jxxghp
fc4480644a fix bug 2024-12-24 21:07:12 +08:00
jxxghp
2062214a3b fix bug 2024-12-24 14:17:35 +08:00
jxxghp
01487cfdf6 fix transfer 2024-12-24 14:08:47 +08:00
jxxghp
a2c913a5b2 fix transfer 2024-12-24 14:06:45 +08:00
jxxghp
84f5d1c879 fix bug 2024-12-24 13:31:58 +08:00
jxxghp
48c289edf2 feat: background organize queue 2024-12-24 13:14:17 +08:00
jxxghp
c9949581ef Merge pull request #3604 from InfinityPacer/feature/module 2024-12-24 10:49:43 +08:00
InfinityPacer
b4e3dc275d fix(proxy): add proxy for MP_SERVER_HOST 2024-12-24 10:10:19 +08:00
jxxghp
00f85836fa Update transfer.py 2024-12-23 22:02:45 +08:00
jxxghp
c4300332c9 TODO: background organize queue 2024-12-23 21:46:59 +08:00
jxxghp
10f8efc457 TODO: background organize queue 2024-12-23 18:59:36 +08:00
jxxghp
1b48eb8959 fix ide warnings 2024-12-23 16:58:49 +08:00
jxxghp
61d7374d95 fix ide warnings 2024-12-23 16:58:04 +08:00
jxxghp
baa48610ea refactor: move Command to an upper layer 2024-12-23 13:38:02 +08:00
jxxghp
ece8d0368b Merge remote-tracking branch 'origin/v2' into v2 2024-12-23 12:40:42 +08:00
jxxghp
a9ffebb3ea fix schemas 2024-12-23 12:40:32 +08:00
jxxghp
b6c043aae9 Merge pull request #3598 from InfinityPacer/feature/recommend 2024-12-23 12:09:59 +08:00
jxxghp
d45d49edbd fix schemas default_factory 2024-12-23 11:35:38 +08:00
jxxghp
27f474b192 fix setup 2024-12-23 11:10:08 +08:00
InfinityPacer
544119c49f Revert "feat(recommend): add semaphore to limit concurrent requests"
This reverts commit 33de1c3618.
2024-12-23 10:29:37 +08:00
jxxghp
800a66dc99 Merge pull request #3596 from InfinityPacer/feature/module 2024-12-23 06:54:38 +08:00
InfinityPacer
33de1c3618 feat(recommend): add semaphore to limit concurrent requests 2024-12-23 02:51:23 +08:00
InfinityPacer
6fec16d78a fix(cache): include method name and default parameters in cache key 2024-12-23 01:39:34 +08:00
InfinityPacer
a5d6062aa8 feat(recommend): add job to refresh recommend cache 2024-12-23 01:32:17 +08:00
InfinityPacer
de532f47fb feat(auth): add logging for site auth 2024-12-23 00:20:03 +08:00
jxxghp
60bcc802cf Merge pull request #3593 from wikrin/v2 2024-12-22 10:40:23 +08:00
jxxghp
c143545ef9 Merge pull request #3591 from InfinityPacer/feature/module 2024-12-22 10:28:15 +08:00
Attente
0e8fdac6d6 fix(filemanager): correct season_episode metadata mapping
- Update season_episode field in FileManagerModule to use meta.episode instead of meta.episodes
- This change ensures accurate season and episode information is displayed
2024-12-22 10:24:40 +08:00
jxxghp
45e6dd1561 Merge pull request #3590 from InfinityPacer/feature/recommend 2024-12-22 09:11:51 +08:00
jxxghp
23c37c9a81 Merge pull request #3588 from wikrin/v2 2024-12-22 09:08:11 +08:00
InfinityPacer
098279ceb6 fix #3565 2024-12-22 02:04:36 +08:00
InfinityPacer
1fb791455e chore(recommend): update comment 2024-12-22 01:37:25 +08:00
InfinityPacer
3339bbca50 feat(recommend): switch API calls to use RecommendChain 2024-12-22 01:27:11 +08:00
InfinityPacer
ec77213ca6 feat(recommend): add cached_with_empty_check decorator 2024-12-22 01:09:06 +08:00
InfinityPacer
de1c2c98d2 feat(recommend): add log_execution_time decorator to RecommendChain methods 2024-12-22 01:03:44 +08:00
InfinityPacer
98247fa47a feat: add log_execution_time decorator 2024-12-22 01:02:07 +08:00
InfinityPacer
1eef95421a feat(recommend): add RecommendChain 2024-12-22 01:00:47 +08:00
Attente
b8de563a45 refactor(app): optimize download path logic
- Simplify download path determination logic
- Remove redundant code for save path calculation
2024-12-21 23:56:44 +08:00
jxxghp
fd5fbd779b Merge pull request #3584 from zhzero-hub/v2 2024-12-21 20:15:39 +08:00
zhzero
cb07550388 Add encoding key to TorrentSpider 2024-12-21 14:51:55 +08:00
jxxghp
a51632c0a3 Merge pull request #3583 from wikrin/torrent_layout 2024-12-21 07:58:46 +08:00
Attente
9756bf6ac8 refactor(downloader): add support for torrent file layout handling
- Build the `savepath` in `DownloadChain` according to the `torrent file layout`
- Add torrent file layout information to `QbittorrentModule` and `TransmissionModule`
- Change the return value of the `download` method to include a torrent file layout parameter
2024-12-21 04:50:10 +08:00
DDSRem
aaa96cff87 Merge pull request #3582 from Aqr-K/patch-1
revert
2024-12-20 23:27:32 +08:00
Aqr-K
a50959d254 revert 2024-12-20 23:26:55 +08:00
DDSRem
b1bd858df1 chore(deps): update dependency python-115 to v0.0.9.8.8.4 2024-12-20 23:21:59 +08:00
DDSRem
c2d6d9b1ac chore(deps): update dependency python-115 to v0.0.9.8.8.4 2024-12-20 23:18:04 +08:00
DDSRem
7288dd24e0 Merge pull request #3580 from jxxghp/v2
Sync
2024-12-20 23:16:30 +08:00
jxxghp
8f05ea581c v2.1.5 2024-12-20 15:40:36 +08:00
jxxghp
03a0bc907b Merge pull request #3569 from yubanmeiqin9048/patch-1 2024-12-19 22:16:27 +08:00
yubanmeiqin9048
5ce4c8a055 feat(filemanager): add subtitle regex 2024-12-19 22:01:06 +08:00
jxxghp
b04181fed9 Update version.py 2024-12-19 20:24:11 +08:00
jxxghp
eee843bafd Merge pull request #3567 from InfinityPacer/feature/cache 2024-12-19 20:21:00 +08:00
InfinityPacer
134fd0761d refactor(cache): split douban cache into recommend and search 2024-12-19 20:00:29 +08:00
InfinityPacer
669481af06 feat(cache): unify bangumi cache strategy 2024-12-19 19:42:17 +08:00
jxxghp
b5640b3179 Merge pull request #3564 from InfinityPacer/feature/subscribe 2024-12-19 16:17:14 +08:00
InfinityPacer
9abb305dbb fix(subscribe): ensure best version is empty set 2024-12-19 15:41:51 +08:00
InfinityPacer
0fd4791479 fix(event): align field names with SubscribeComplete 2024-12-19 10:58:11 +08:00
jxxghp
ce2ecdf44c Merge pull request #3562 from InfinityPacer/feature/subscribe 2024-12-19 07:02:26 +08:00
InfinityPacer
949c0d3b76 feat(subscribe): optimize best version to support multiple states 2024-12-19 00:51:53 +08:00
jxxghp
316915842a Merge pull request #3559 from InfinityPacer/feature/site 2024-12-18 19:24:34 +08:00
jxxghp
1dd7dc36c3 Merge pull request #3557 from InfinityPacer/feature/subscribe 2024-12-18 19:24:00 +08:00
InfinityPacer
fca763b814 fix(site): avoid err_msg cannot be updated when it's None 2024-12-18 16:39:14 +08:00
InfinityPacer
9311125c72 fix(subscribe): avoid reinitializing the dictionary 2024-12-18 15:49:21 +08:00
InfinityPacer
3f1d4933c1 Merge pull request #3553 from InfinityPacer/feature/subscribe
fix(dependencies): pin python-115 version
2024-12-18 12:47:51 +08:00
InfinityPacer
7fb23b5069 fix(dependencies): pin python-115 version 2024-12-18 12:46:28 +08:00
DDSRem
d74ad343f1 Merge pull request #3551 from InfinityPacer/feature/subscribe
Revert "chore(deps): update dependency python-115 to v0.0.9.8.8.3"
2024-12-18 10:42:17 +08:00
InfinityPacer
c0a8351e58 Revert "chore(deps): update dependency python-115 to v0.0.9.8.8.3"
This reverts commit d182a7079d.
2024-12-18 10:39:37 +08:00
jxxghp
8e309e8658 Update version.py 2024-12-17 22:19:32 +08:00
jxxghp
3400a9f87a fix #3548 2024-12-17 12:44:37 +08:00
jxxghp
c6830059b2 Merge pull request #3548 from 0honus0/v2 2024-12-17 11:54:36 +08:00
honus
7e4a18b365 fix rclone __get_fileitem err 2024-12-17 00:18:52 +08:00
honus
9ecc8c14d8 fix rclone bug 2024-12-16 23:20:49 +08:00
DDSRem
a3c048b9c8 chore(deps): upgrade beautifulsoup4 4.12.2 to 4.12.3 2024-12-16 21:40:27 +08:00
DDSRem
3c08054234 chore(ci): beta image only provides amd64 architecture 2024-12-16 21:30:41 +08:00
DDSRem
07e91d4eb1 chore(deps): playwright 1.37.0 to 1.49.1
fix `greenlet==2.0.2` build error
2024-12-16 21:29:44 +08:00
DDSRem
c104498b43 chore(deps): environment and dependency upgrades 2024-12-16 21:11:14 +08:00
jxxghp
91ba71ad23 Merge pull request #3546 from InfinityPacer/feature/subscribe 2024-12-16 19:47:30 +08:00
jxxghp
5ae8914060 Merge pull request #3545 from xianghuawe/v2 2024-12-16 19:46:18 +08:00
InfinityPacer
77c8f1244f Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/subscribe 2024-12-16 19:09:14 +08:00
InfinityPacer
5d5c8a0af7 feat(event): add SubscribeDeleted event 2024-12-16 19:09:00 +08:00
coder_wen
dcaf3e6678 fix: change alist.py upload api to put, fix big file upload over memory limit #3265 2024-12-16 15:14:16 +08:00
jxxghp
c0170a173c Merge pull request #3542 from DDS-Derek/dev 2024-12-16 12:56:19 +08:00
DDSRem
d182a7079d chore(deps): update dependency python-115 to v0.0.9.8.8.3 2024-12-16 12:28:50 +08:00
jxxghp
b5cc5653b2 Merge pull request #3536 from InfinityPacer/feature/subscribe 2024-12-15 07:56:06 +08:00
jxxghp
bdbd908b3a Merge pull request #3535 from InfinityPacer/feature/event 2024-12-15 07:55:15 +08:00
InfinityPacer
11fedb1ffc fix(download): optimize performance by checking binary content 2024-12-15 01:27:30 +08:00
InfinityPacer
7de82f6c0d fix(event): remove unnecessary code 2024-12-15 00:17:53 +08:00
jxxghp
782829c992 Merge pull request #3531 from InfinityPacer/feature/subscribe 2024-12-13 20:18:58 +08:00
InfinityPacer
6ab76453d4 feat(events): update episodes field to Download event 2024-12-13 20:05:40 +08:00
jxxghp
56767b92d7 Merge pull request #3524 from InfinityPacer/feature/subscribe 2024-12-12 17:29:17 +08:00
InfinityPacer
621df40c66 feat(event): add support for priority in event registration 2024-12-12 15:38:28 +08:00
jxxghp
ba7cb76640 Merge pull request #3519 from InfinityPacer/feature/subscribe 2024-12-11 22:27:24 +08:00
InfinityPacer
d353853472 feat(subscribe): add support for update movie downloaded note 2024-12-11 20:19:47 +08:00
InfinityPacer
1fcf5f4709 feat(subscribe): add state reset to 'R' on subscription reset 2024-12-11 20:01:10 +08:00
InfinityPacer
0ec4630461 fix(subscribe): avoid redundant updates for remaining episodes 2024-12-11 16:31:11 +08:00
InfinityPacer
fa45dea1aa fix(subscribe): prioritize state update when finishing a subscription 2024-12-11 16:18:03 +08:00
InfinityPacer
2217583052 fix(subscribe): update missing episode logic and return status 2024-12-11 15:51:04 +08:00
InfinityPacer
f4dc7a133e fix(subscribe): update subscription state after download 2024-12-11 15:47:45 +08:00
jxxghp
26b1e64bad Merge pull request #3518 from InfinityPacer/feature/subscribe 2024-12-11 13:32:17 +08:00
InfinityPacer
a1d8af6521 fix(subscribe): update remove_site to set sites as an empty list 2024-12-11 12:39:13 +08:00
jxxghp
9fb3d093ff Merge pull request #3517 from wikrin/match_rule 2024-12-11 06:54:58 +08:00
jxxghp
8c9b37a12f Merge pull request #3516 from InfinityPacer/feature/subscribe 2024-12-11 06:53:42 +08:00
Attente
73e4596d1a feat(filter): add publish time filter for torrents
- Add a `pub_minutes` method to the `TorrentInfo` class to calculate the number of `minutes` since publication
- Implement publish time filtering in FilterModule
- Support single-value and range comparisons for publish time
2024-12-10 23:36:54 +08:00
InfinityPacer
83798e6823 feat(event): add multiple IDs to source with json 2024-12-10 21:23:52 +08:00
InfinityPacer
6d9595b643 feat(event): add source tracking in download event 2024-12-10 18:50:50 +08:00
jxxghp
dc047d949d Merge pull request #3511 from wikrin/offset 2024-12-10 07:13:10 +08:00
Attente
a31b4bc0a1 refactor(app): improve episode offset calculation
- Remove unnecessary try-except block
2024-12-10 00:37:50 +08:00
Attente
94b8633803 Episode offset in manual organizing can work without episode-based positioning 2024-12-10 00:32:01 +08:00
jxxghp
107e85033f Merge pull request #3507 from InfinityPacer/feature/subscribe 2024-12-09 19:38:48 +08:00
InfinityPacer
eea8060182 feat(plugin): add username support for post_message 2024-12-09 19:27:25 +08:00
jxxghp
83f7869de4 Merge pull request #3506 from thsrite/v2 2024-12-09 17:32:49 +08:00
thsrite
4f0eff8b88 fix site vip level ignores ratio warning 2024-12-09 16:43:05 +08:00
jxxghp
58b438c345 fix #3343 2024-12-08 08:51:58 +08:00
jxxghp
bc57bb1a78 Update version.py 2024-12-07 07:41:14 +08:00
jxxghp
e08ab0dd33 Merge pull request #3341 from InfinityPacer/feature/subscribe 2024-12-07 07:39:28 +08:00
InfinityPacer
64bfa246ae fix: replace is None with is_(None) for proper SQLAlchemy filter 2024-12-07 01:09:03 +08:00
jxxghp
cde4db1a56 v2.1.2 2024-12-06 15:55:56 +08:00
jxxghp
29ae910953 fix build 2024-12-06 12:31:29 +08:00
jxxghp
314f90cc40 upgrade python-115 2024-12-06 12:30:13 +08:00
jxxghp
1c22e3d024 Merge pull request #3337 from InfinityPacer/feature/subscribe
feat(event): add ResourceDownload event for cancel download
2024-12-06 11:17:34 +08:00
InfinityPacer
233d62479f feat(event): add options to ResourceDownloadEventData 2024-12-06 10:47:56 +08:00
jxxghp
6974f2ebd7 Merge pull request #3335 from mackerel-12138/fix_scraper 2024-12-06 06:53:24 +08:00
InfinityPacer
c030166cf5 feat(event): send events for resource download based on source 2024-12-06 02:08:36 +08:00
InfinityPacer
4c511eaea6 chore(event): update ResourceDownloadEventData comment 2024-12-06 02:06:00 +08:00
InfinityPacer
6e443a1127 feat(event): add ResourceDownload event for cancel download 2024-12-06 01:55:44 +08:00
InfinityPacer
896e473c41 fix(event): filter and handle only enabled event handlers 2024-12-06 01:54:51 +08:00
zhanglijun
12f10ebedf fix: renaming and organizing of audio track files 2024-12-06 00:40:38 +08:00
jxxghp
ba9f85747c Merge pull request #3330 from InfinityPacer/feature/subscribe 2024-12-05 17:10:47 +08:00
InfinityPacer
2954c02a7c feat(subscribe): add subscription status update API 2024-12-05 16:24:05 +08:00
InfinityPacer
312e602f12 feat(subscribe): add Pending and Suspended subscription states 2024-12-05 16:22:09 +08:00
InfinityPacer
ed37fcbb07 feat(subscribe): update get_by_state to handle multiple states 2024-12-05 16:20:14 +08:00
jxxghp
6acf8fbf00 Merge pull request #3324 from InfinityPacer/feature/subscribe 2024-12-05 06:54:45 +08:00
InfinityPacer
a1e178c805 feat(event): add ResourceSelection event for update resource contexts 2024-12-04 20:21:57 +08:00
jxxghp
922e2fc446 Merge pull request #3323 from Aqr-K/feat-module 2024-12-04 18:19:15 +08:00
jxxghp
db4c8cb3f2 Merge pull request #3322 from InfinityPacer/feature/subscribe 2024-12-04 18:18:32 +08:00
Aqr-K
1c578746fe fix(module): add the missing get_subtype method to indexer
- Add the missing `get_subtype` method to `indexer`.
- Add a `get_running_subtype_module` method that can be combined with `types` to quickly get a single running `module`.
2024-12-04 18:14:56 +08:00
InfinityPacer
68f88117b6 feat(events): add episodes field to DownloadAdded event for unpack 2024-12-04 16:11:35 +08:00
jxxghp
108c0a89f6 Merge pull request #3320 from InfinityPacer/feature/subscribe 2024-12-04 12:18:19 +08:00
InfinityPacer
92dacdf6a2 fix(subscribe): add RLock to prevent duplicate subscription downloads 2024-12-04 11:07:45 +08:00
InfinityPacer
6aa684d6a5 fix(subscribe): handle case when no subscriptions are found 2024-12-04 11:03:32 +08:00
InfinityPacer
efece8cc56 fix(subscribe): add check for None before updating subscription 2024-12-04 10:27:33 +08:00
jxxghp
383c8ca19a Merge pull request #3313 from Aqr-K/feat-module 2024-12-03 18:09:49 +08:00
jxxghp
0a73681280 Merge pull request #3315 from InfinityPacer/feature/scheduler 2024-12-03 18:09:23 +08:00
InfinityPacer
c1ecda280c fix #3312 2024-12-03 17:33:00 +08:00
Aqr-K
825fc35134 feat(modules): add sub-level type classification.
- Add sub-level classification for each module's type in `types`.
- Add a unified `get_subtype` method to every module, so that each submodule can be distinguished and invoked more precisely and quickly, and the classification defined by ModuleType plus the list of supported submodule types can be obtained; this avoids the current need to tediously iterate over every module to call get_name or _channel.
- Resolve potential issues caused by the message channel type saved by the frontend not matching the value defined by the backend, which required frequently calling the private _channel method to obtain the classification.
2024-12-03 14:57:19 +08:00
jxxghp
8f543ca602 Merge pull request #3309 from yxlimo/tmdbid-for-downloader 2024-12-03 06:55:36 +08:00
yxlimo
f0ecc1a497 fix: return last record when get downloadhistory by hash 2024-12-02 22:55:57 +08:00
jxxghp
71f170a1ad Merge pull request #3293 from wikrin/v2 2024-12-01 10:23:51 +08:00
Attente
3709b65b0e fix(api): correct variable reference in media scraping logic
- Change incorrect reference from media_info to mediainfo
2024-12-01 03:40:30 +08:00
jxxghp
9d6eb0f1e1 Merge pull request #3291 from mackerel-12138/fix_scraper 2024-11-30 16:06:04 +08:00
jxxghp
c93306147b Merge pull request #3290 from mackerel-12138/fix_poster 2024-11-30 16:05:11 +08:00
zhanglijun
5e8f924a2f fix: tmdbid lost when scraping with a specified tmdbid 2024-11-30 15:57:47 +08:00
zhanglijun
54988d6397 fix: missing/incorrect fanart season image downloads 2024-11-30 13:51:30 +08:00
jxxghp
112761dc4c Merge pull request #3287 from InfinityPacer/feature/security 2024-11-30 07:15:52 +08:00
InfinityPacer
ef20508840 feat(auth): handle service instance retrieval with proper null check 2024-11-30 01:14:36 +08:00
InfinityPacer
589a1765ed feat(auth): support specifying service for authentication 2024-11-30 01:04:48 +08:00
jxxghp
2c666e24f3 Merge pull request #3283 from InfinityPacer/feature/subscribe 2024-11-29 21:12:25 +08:00
InfinityPacer
168e3c5533 fix(subscribe): move state update to finally to prevent duplicates 2024-11-29 18:56:19 +08:00
jxxghp
cda8b2573a Merge pull request #3282 from InfinityPacer/feature/subscribe 2024-11-29 16:47:56 +08:00
InfinityPacer
4cb4eb23b8 fix(subscribe): prevent fallback to search rules if not defined 2024-11-29 16:15:37 +08:00
jxxghp
f208b65570 Update version.py 2024-11-29 08:59:55 +08:00
jxxghp
8a0a530036 Merge pull request #3279 from wikrin/v2 2024-11-29 07:36:34 +08:00
Attente
76643f13ed Update system.py 2024-11-29 07:33:02 +08:00
Attente
6992284a77 fix(api): filtering failed in the rule test when media info was not obtained 2024-11-29 07:25:08 +08:00
jxxghp
9a142799cd Merge pull request #3274 from InfinityPacer/feature/encoding 2024-11-28 17:29:22 +08:00
InfinityPacer
027d1567c3 feat(encoding): set PERFORMANCE_MODE to enabled by default 2024-11-28 17:07:14 +08:00
jxxghp
65af737dfd Merge pull request #3272 from wikrin/transfer 2024-11-28 07:23:25 +08:00
jxxghp
48aa0e3d0b Merge pull request #3271 from wikrin/v2 2024-11-28 07:22:16 +08:00
jxxghp
b4e31893ff Merge pull request #3268 from mackerel-12138/fix_scraper 2024-11-28 07:21:43 +08:00
Attente
4f1b95352a Improve manual organizing logic; backend change related to jxxghp/MoviePilot-Frontend#255 2024-11-28 05:39:26 +08:00
Attente
ca664cb569 fix: incorrect media library directory matching during batch organizing 2024-11-28 05:19:09 +08:00
zhanglijun
fe4ea73286 Fix season nfo scraping error; optimize how the season title is obtained 2024-11-27 23:27:08 +08:00
jxxghp
9e9cca6de4 Merge pull request #3262 from InfinityPacer/feature/encoding 2024-11-27 16:25:46 +08:00
InfinityPacer
2e7e74c803 feat(encoding): update configuration to performance mode 2024-11-27 13:52:17 +08:00
InfinityPacer
916597047d Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/encoding 2024-11-27 12:52:01 +08:00
InfinityPacer
83fc474dbe feat(encoding): enhance encoding detection with confidence threshold 2024-11-27 12:33:57 +08:00
jxxghp
f67bf49e69 Merge pull request #3255 from InfinityPacer/feature/event 2024-11-27 06:59:54 +08:00
jxxghp
bf9043f526 Merge pull request #3254 from mackerel-12138/v2 2024-11-27 06:58:47 +08:00
InfinityPacer
a98de604a1 refactor(event): rename SmartRename to TransferRename 2024-11-27 00:50:34 +08:00
InfinityPacer
e160a745a7 fix(event): correct visualize_handlers 2024-11-27 00:49:37 +08:00
zhanglijun
7f2c6ef167 fix: add input parameter checks 2024-11-26 22:25:42 +08:00
jxxghp
2086651dbe Merge pull request #3235 from wikrin/fix 2024-11-26 22:17:32 +08:00
zhanglijun
132fde2308 Fix season poster download path and Season 0 poster naming 2024-11-26 22:01:00 +08:00
jxxghp
4e27a1e623 fix #3247 2024-11-26 08:25:01 +08:00
Attente
a453831deb Add a parameter to get_dir
- `include_unsorted` indicates whether directory configurations whose organize mode is "do not organize" may be included
2024-11-26 03:11:25 +08:00
jxxghp
1035ceb4ac Merge pull request #3245 from wikrin/v2 2024-11-25 23:04:15 +08:00
Attente
b7cb917347 fix(transfer): add library type and category folder support
- Add library_type_folder and library_category_folder parameters to the transfer function
- This enhances the transfer functionality by allowing sorting files into folders based on library type and category
2024-11-25 23:02:17 +08:00
jxxghp
680ad164dc Merge pull request #3236 from InfinityPacer/feature/scheduler 2024-11-25 17:54:05 +08:00
InfinityPacer
aed68253e9 feat(scheduler): expose internal methods for external invocation 2024-11-25 16:33:17 +08:00
InfinityPacer
b83c7a5656 feat(scheduler): support plugin method arguments via func_kwargs 2024-11-25 16:31:30 +08:00
InfinityPacer
491456b0a2 feat(scheduler): support plugin replacement for system services 2024-11-25 16:30:11 +08:00
Attente
84465a6536 不整理目录的下载路径可以被下载器获取
修改自动匹配源存储器类型入参
2024-11-25 13:51:04 +08:00
jxxghp
9acbcf4922 v2.1.0 2024-11-25 08:05:07 +08:00
jxxghp
8dc4290695 fix scrape bug 2024-11-25 07:58:17 +08:00
jxxghp
5c95945691 Update README.md 2024-11-24 18:16:37 +08:00
jxxghp
11115d50fb fix dockerfile 2024-11-24 18:14:09 +08:00
jxxghp
7f83d56a7e fix alipan 2024-11-24 17:55:08 +08:00
jxxghp
28805e9e17 fix alipan 2024-11-24 17:45:12 +08:00
jxxghp
88a098abc1 fix log 2024-11-24 17:35:04 +08:00
jxxghp
a3cc9830de fix scraping upload 2024-11-24 17:25:42 +08:00
jxxghp
43623efa99 fix log 2024-11-24 17:19:24 +08:00
jxxghp
ff73b2cb5d fix #3203 2024-11-24 17:11:19 +08:00
jxxghp
6cab14366c Merge pull request #3228 from YemaPT/fix-yemapt-taglist-none 2024-11-24 16:24:38 +08:00
yemapt
576d215d8c fix(yemapt): judge tag list none 2024-11-24 16:22:54 +08:00
jxxghp
a2c10c86bf Merge pull request #3226 from YemaPT/feature-yemapt-optimize 2024-11-24 14:08:04 +08:00
yemapt
21bede3f00 feat(yemapt): update search api and enrich torrent content 2024-11-24 13:45:31 +08:00
jxxghp
0a39322281 Merge pull request #3224 from wikrin/v2 2024-11-24 10:32:47 +08:00
Attente
be323d3da1 fix: 减少入参扩大适用范围 2024-11-24 10:22:29 +08:00
jxxghp
fa8860bf62 Merge pull request #3223 from wikrin/v2
fix: 入参错误
2024-11-24 08:56:58 +08:00
Attente
a700958edb fix: 入参错误 2024-11-24 08:54:59 +08:00
jxxghp
9349973d16 Merge pull request #3221 from wikrin/v2 2024-11-24 07:34:42 +08:00
Attente
c0d3637d12 refactor: change library type and category folder parameters to optional 2024-11-24 00:04:08 +08:00
jxxghp
79473ca229 Merge pull request #3196 from wikrin/fix 2024-11-23 23:01:09 +08:00
Attente
fccbe39547 修改target_directory获取逻辑 2024-11-23 22:41:55 +08:00
Attente
85324acacc 下载流程中get_dir()添加storage="local"入参 2024-11-23 22:41:55 +08:00
Attente
9dec4d704b get_dir去除fileitem参数
- 和`src_path & storage`重复, 需要的话直接传入这两项
2024-11-23 22:41:55 +08:00
jxxghp
72732277a1 fix alipan 2024-11-23 21:54:03 +08:00
jxxghp
8d737f9e37 fix alipan && rclone get_folder 2024-11-23 21:43:53 +08:00
jxxghp
96b3746caa fix alist delete 2024-11-23 21:29:08 +08:00
jxxghp
c690ea3c39 fix #3214
fix #3199
2024-11-23 21:26:22 +08:00
jxxghp
3282fb88e0 Merge pull request #3219 from mackerel-12138/s0_fix 2024-11-23 20:25:08 +08:00
zhanglijun
b9c2b9a044 重命名格式支持S0重命名为Specials,SPs 2024-11-23 20:22:37 +08:00
zhanglijun
24b58dc002 修复S0刮削问题
修复某些情况下剧集根目录判断错误的问题
2024-11-23 20:13:01 +08:00
jxxghp
42c56497c6 Merge pull request #3218 from DDS-Derek/issue_rfc 2024-11-23 12:34:52 +08:00
jxxghp
c7512d1580 Merge pull request #3217 from DDS-Derek/fix_tmp 2024-11-23 12:34:39 +08:00
jxxghp
7d25bf7b48 Merge pull request #3215 from mackerel-12138/v2 2024-11-23 12:34:04 +08:00
DDSRem
99daa3a95e chore(issue): add rfc template 2024-11-23 12:31:28 +08:00
jxxghp
0a923bced9 fix storage 2024-11-23 12:29:34 +08:00
DDSRem
06e3b0def2 fix(update): useless tmp directory when not updated 2024-11-23 12:25:46 +08:00
jxxghp
0feecc3eca fix #3204 2024-11-23 11:48:23 +08:00
jxxghp
0afbc58263 fix #3191 自动整理时,优先同盘 2024-11-23 11:31:56 +08:00
jxxghp
7c7561029a fix #3178 手动整理时支持选择一二级分类 2024-11-23 11:19:25 +08:00
zhanglijun
65683999e1 change comment 2024-11-23 11:00:37 +08:00
zhanglijun
f72e26015f delete unused code 2024-11-23 10:58:32 +08:00
zhanglijun
b4e5c50655 修复重命名时S0年份为None的问题
增加重命名配置 剧集日期
2024-11-23 10:55:21 +08:00
jxxghp
f395dc68c3 fix #3209 刮削加锁 2024-11-23 10:48:54 +08:00
jxxghp
27cf5bb7e6 feat:远程交互刷新数据时发送统计消息 2024-11-23 10:36:48 +08:00
jxxghp
9b573535cd Merge pull request #3201 from InfinityPacer/feature/event 2024-11-22 16:25:52 +08:00
jxxghp
cb32305b86 Merge pull request #3200 from cddjr/fix_subscribe_search_filter 2024-11-22 14:04:08 +08:00
景大侠
f7164450d0 fix: 将订阅规则过滤前置,避免因imdbid匹配而跳过 2024-11-22 13:47:18 +08:00
InfinityPacer
344862dbd4 feat(event): support smart rename event 2024-11-22 13:41:14 +08:00
InfinityPacer
f1d0e9d50a Revert "fix #3154 相同事件避免并发处理"
This reverts commit 79c637e003.
2024-11-22 12:41:14 +08:00
jxxghp
9ba9e8f41c v2.0.9 2024-11-22 08:11:07 +08:00
jxxghp
78fc5b7017 Merge pull request #3193 from wikrin/fix_any_files 2024-11-22 08:10:12 +08:00
Attente
fe07830b71 fix: 某些情况下误删媒体文件的问题 2024-11-22 07:45:01 +08:00
jxxghp
350f1faf2a Merge pull request #3189 from InfinityPacer/feature/module 2024-11-21 20:16:06 +08:00
InfinityPacer
103cfe0b47 fix(config): ensure accurate handling of env config updates 2024-11-21 20:08:18 +08:00
jxxghp
0953c1be16 Merge pull request #3187 from InfinityPacer/feature/scheduler 2024-11-21 17:43:29 +08:00
InfinityPacer
c299bf6f7c fix(auth): adjust auth to occur before module init 2024-11-21 17:37:48 +08:00
InfinityPacer
c0eb9d824c Revert "fix(auth): initialize plugin service only during retry auth"
This reverts commit 9f4cf530f8.
2024-11-21 16:41:56 +08:00
jxxghp
ebffdebdb2 refactor: 优化缓存策略 2024-11-21 15:52:08 +08:00
jxxghp
acd9e38477 Merge pull request #3186 from InfinityPacer/feature/scheduler 2024-11-21 14:54:01 +08:00
InfinityPacer
9f4cf530f8 fix(auth): initialize plugin service only during retry auth 2024-11-21 14:49:42 +08:00
jxxghp
84897aa592 fix #3162 2024-11-21 13:50:49 +08:00
jxxghp
23c5982f5a Merge pull request #3185 from InfinityPacer/feature/module 2024-11-21 12:42:05 +08:00
InfinityPacer
1849930b72 feat(qb): add support for ignoring category check via kwargs 2024-11-21 12:35:15 +08:00
jxxghp
4f1d3a7572 fix #3180 2024-11-21 12:13:44 +08:00
jxxghp
824c3ac5d6 fix #3176 2024-11-21 10:25:46 +08:00
jxxghp
1cec6ed6d1 v2.0.8
- 修复云盘扫码问题
2024-11-20 20:43:44 +08:00
jxxghp
fff75c7fe2 fix 115 2024-11-20 20:40:32 +08:00
jxxghp
81fecf1e07 fix alipan 2024-11-20 20:39:48 +08:00
jxxghp
ad8f687f8e fix alipan 2024-11-20 20:36:50 +08:00
jxxghp
a3172d7503 fix 扫码逻辑与底层模块解耦 2024-11-20 20:17:18 +08:00
jxxghp
8d5e0b26d5 fix:115支持Cookie 2024-11-20 13:14:37 +08:00
jxxghp
b1b980f550 Merge pull request #3171 from Sowevo/v2 2024-11-20 07:07:08 +08:00
Sowevo
8196589cff Merge branch 'jxxghp:v2' into v2 2024-11-19 22:43:31 +08:00
sowevo
cb9f41cb65 plex的item_id统一使用全路径
获取图片时兼容外网地址为Plex的官方转发地址https://app.plex.tv的情况
2024-11-19 22:41:55 +08:00
238 changed files with 15040 additions and 5865 deletions

45
.github/ISSUE_TEMPLATE/rfc.yml vendored Normal file
View File

@@ -0,0 +1,45 @@
name: 功能提案
description: Request for Comments
title: "[RFC]"
labels: ["RFC"]
body:
- type: markdown
attributes:
value: |
一份提案(RFC)定位为 **「在某功能/重构的具体开发前,用于开发者间 review 技术设计/方案的文档」**
目的是让协作的开发者间清晰的知道「要做什么」和「具体会怎么做」,以及所有的开发者都能公开透明的参与讨论;
以便评估和讨论产生的影响 (遗漏的考虑、向后兼容性、与现有功能的冲突)
因此提案侧重在对解决问题的 **方案、设计、步骤** 的描述上。
如果仅希望讨论是否添加或改进某功能本身,请使用 -> [Issue: 功能改进](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
- type: textarea
id: background
attributes:
label: 背景 or 问题
description: 简单描述遇到的什么问题或需要改动什么。可以引用其他 issue、讨论、文档等。
validations:
required: true
- type: textarea
id: goal
attributes:
label: "目标 & 方案简述"
description: 简单描述此提案实现后**预期的目标效果**,以及大致描述会采取的方案/步骤,可能会/不会产生什么影响。
validations:
required: true
- type: textarea
id: design
attributes:
label: "方案设计 & 实现步骤"
description: |
详细描述你设计的具体方案,可以考虑拆分列表或要点,一步步描述具体打算如何实现的步骤和相关细节。
这部份不需要一次性写完整,即使在创建完此提案 issue 后,依旧可以再次编辑修改。
validations:
required: false
- type: textarea
id: alternative
attributes:
label: "替代方案 & 对比"
description: |
[可选] 为了实现目标效果,还考虑过什么其他方案,有什么对比?
validations:
required: false

View File

@@ -5,10 +5,7 @@ on:
branches:
- v2
paths:
- '.github/workflows/build.yml'
- 'Dockerfile'
- 'version.py'
- 'requirements.in'
jobs:
Docker-build:

55
.github/workflows/bulit-lite.yml vendored Normal file
View File

@@ -0,0 +1,55 @@
name: MoviePilot Builder v2 Lite
on:
workflow_dispatch:
push:
branches:
- v2
paths:
- 'version.py'
jobs:
Docker-build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Release version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
tags: |
type=raw,value=lite-latest
- name: Set Up QEMU
uses: docker/setup-qemu-action@v3
- name: Set Up Buildx
uses: docker/setup-buildx-action@v3
- name: Login DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build Image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile.lite
platforms: |
linux/amd64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}-docker
cache-to: type=gha, scope=${{ github.workflow }}-docker

3
.gitignore vendored
View File

@@ -1,6 +1,9 @@
.idea/
*.c
*.so
*.pyd
build/
cython_cache/
dist/
nginx/
test.py

View File

@@ -1,4 +1,4 @@
FROM python:3.11.4-slim-bookworm
FROM python:3.12.8-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
@@ -10,10 +10,7 @@ ENV LANG="C.UTF-8" \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
PROXY_HOST="" \
MOVIEPILOT_AUTO_UPDATE=false \
AUTH_SITE="iyuu" \
IYUU_SIGN=""
MOVIEPILOT_AUTO_UPDATE=release
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \

93
Dockerfile.lite Normal file
View File

@@ -0,0 +1,93 @@
FROM python:3.12.8-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
CONFIG_DIR="/config" \
TERM="xterm" \
DISPLAY=:987 \
PUID=0 \
PGID=0 \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
MOVIEPILOT_AUTO_UPDATE=release
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install \
musl-dev \
nginx \
gettext-base \
locales \
procps \
gosu \
bash \
wget \
curl \
busybox \
dumb-init \
jq \
fuse3 \
rsync \
ffmpeg \
nano \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
elif [ "$(uname -m)" = "aarch64" ]; \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& curl https://rclone.org/install.sh | bash \
&& curl --insecure -fsSL https://raw.githubusercontent.com/DDS-Derek/Aria2-Pro-Core/master/aria2-install.sh | bash \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.in requirements.in
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython pip-tools \
&& pip-compile requirements.in \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& apt-get remove -y build-essential \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& cp -f /app/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} \
&& groupadd -r moviepilot -g 918 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 918 \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& python3 /app/setup.py \
&& find /app/app -type f -name "*.py" ! -path "/app/app/main.py" -exec rm -f {} \; \
&& FRONTEND_VERSION=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /app/version.py) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Plugins-main/plugins.v2/* /app/app/plugins/ \
&& cat /tmp/MoviePilot-Plugins-main/package.json | jq -r 'to_entries[] | select(.value.v2 == true) | .key' | awk '{print tolower($0)}' | \
while read -r i; do if [ ! -d "/app/app/plugins/$i" ]; then mv "/tmp/MoviePilot-Plugins-main/plugins/$i" "/app/app/plugins/"; else echo "跳过 $i"; fi; done \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Resources-main/resources/* /app/app/helper/ \
&& rm -rf /tmp/* /app/build
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]

View File

@@ -6,6 +6,7 @@
![GitHub repo size](https://img.shields.io/github/repo-size/jxxghp/MoviePilot?style=for-the-badge)
![GitHub issues](https://img.shields.io/github/issues/jxxghp/MoviePilot?style=for-the-badge)
![Docker Pulls](https://img.shields.io/docker/pulls/jxxghp/moviepilot?style=for-the-badge)
![Docker Pulls V2](https://img.shields.io/docker/pulls/jxxghp/moviepilot-v2?style=for-the-badge)
![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20Synology-blue?style=for-the-badge)
@@ -25,6 +26,34 @@
访问官方Wiki:https://wiki.movie-pilot.org
## 参与开发
需要 `Python 3.12` 和 `Node JS v20.12.1`
- 克隆主项目 [MoviePilot](https://github.com/jxxghp/MoviePilot)
```shell
git clone https://github.com/jxxghp/MoviePilot
```
- 克隆资源项目 [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources) ,将 `resources` 目录下对应平台及版本的库 `.so`/`.pyd`/`.bin` 文件复制到 `app/helper` 目录
```shell
git clone https://github.com/jxxghp/MoviePilot-Resources
```
- 安装后端依赖,设置`app`为源代码根目录,运行 `main.py` 启动后端服务,默认监听端口:`3001`,API文档地址:`http://localhost:3001/docs`
```shell
pip install -r requirements.txt
python3 main.py
```
- 克隆前端项目 [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
```shell
git clone https://github.com/jxxghp/MoviePilot-Frontend
```
- 安装前端依赖,运行前端项目,访问:`http://localhost:5173`
```shell
yarn
yarn dev
```
- 参考 [插件开发指引](https://wiki.movie-pilot.org/zh/plugindev) 在 `app/plugins` 目录下开发插件代码
## 贡献者
<a href="https://github.com/jxxghp/MoviePilot/graphs/contributors">

106
app/actions/__init__.py Normal file
View File

@@ -0,0 +1,106 @@
from abc import ABC, abstractmethod
from typing import Union
from app.chain import ChainBase
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas import ActionContext, ActionParams
class ActionChain(ChainBase):
pass
class BaseAction(ABC):
"""
工作流动作基类
"""
# 动作ID
_action_id = None
# 完成标志
_done_flag = False
# 执行信息
_message = ""
# 缓存键值
_cache_key = "WorkflowCache-%s"
def __init__(self, action_id: str):
self._action_id = action_id
self.systemconfigoper = SystemConfigOper()
@classmethod
@property
@abstractmethod
def name(cls) -> str: # noqa
pass
@classmethod
@property
@abstractmethod
def description(cls) -> str: # noqa
pass
@classmethod
@property
@abstractmethod
def data(cls) -> dict: # noqa
pass
@property
def done(self) -> bool:
"""
判断动作是否完成
"""
return self._done_flag
@property
@abstractmethod
def success(self) -> bool:
"""
判断动作是否成功
"""
pass
@property
def message(self) -> str:
"""
执行信息
"""
return self._message
def job_done(self, message: str = None):
"""
标记动作完成
"""
self._message = message
self._done_flag = True
def check_cache(self, workflow_id: int, key: str) -> bool:
"""
检查是否处理过
"""
workflow_key = self._cache_key % workflow_id
workflow_cache = self.systemconfigoper.get(workflow_key) or {}
action_cache = workflow_cache.get(self._action_id) or []
return key in action_cache
def save_cache(self, workflow_id: int, data: Union[list, str]):
"""
保存缓存
"""
workflow_key = self._cache_key % workflow_id
workflow_cache = self.systemconfigoper.get(workflow_key) or {}
action_cache = workflow_cache.get(self._action_id) or []
if isinstance(data, list):
action_cache.extend(data)
else:
action_cache.append(data)
workflow_cache[self._action_id] = action_cache
self.systemconfigoper.set(workflow_key, workflow_cache)
@abstractmethod
def execute(self, workflow_id: int, params: ActionParams, context: ActionContext) -> ActionContext:
"""
执行动作
"""
raise NotImplementedError
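
As a quick orientation for the workflow-action framework added above, here is a minimal sketch of a concrete action built on the BaseAction contract. The class, its parameter and the log strings are hypothetical; the sketch only relies on what this diff shows (the name/description/data class properties, success, execute, job_done and the workflow-cache helpers). A workflow engine would instantiate it with an action_id and call execute with the shared ActionContext.

```python
from typing import Optional

from pydantic import Field

from app.actions import BaseAction
from app.schemas import ActionParams, ActionContext


class EchoParams(ActionParams):
    """Parameters for this hypothetical demo action"""
    text: Optional[str] = Field(default="hello", description="Text to record")


class EchoAction(BaseAction):
    """Hypothetical action: records a short message and marks itself done"""

    @classmethod
    @property
    def name(cls) -> str:  # noqa
        return "Echo"

    @classmethod
    @property
    def description(cls) -> str:  # noqa
        return "Demo action that records a text message"

    @classmethod
    @property
    def data(cls) -> dict:  # noqa
        return EchoParams().dict()

    @property
    def success(self) -> bool:
        return self.done

    def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
        params = EchoParams(**params)
        # Skip work already handled in a previous run of this workflow, then remember it
        if not self.check_cache(workflow_id, params.text):
            self.save_cache(workflow_id, params.text)
        self.job_done(params.text)
        return context
```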

121
app/actions/add_download.py Normal file
View File

@@ -0,0 +1,121 @@
from typing import Optional
from pydantic import Field
from app.actions import BaseAction
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.core.config import global_vars
from app.core.metainfo import MetaInfo
from app.log import logger
from app.schemas import ActionParams, ActionContext, DownloadTask, MediaType
class AddDownloadParams(ActionParams):
"""
添加下载资源参数
"""
downloader: Optional[str] = Field(default=None, description="下载器")
save_path: Optional[str] = Field(default=None, description="保存路径")
labels: Optional[str] = Field(default=None, description="标签(,分隔)")
only_lack: Optional[bool] = Field(default=False, description="仅下载缺失的资源")
class AddDownloadAction(BaseAction):
"""
添加下载资源
"""
# 已添加的下载
_added_downloads = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.downloadchain = DownloadChain()
self.mediachain = MediaChain()
self._added_downloads = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
return "添加下载"
@classmethod
@property
def description(cls) -> str: # noqa
return "根据资源列表添加下载任务"
@classmethod
@property
def data(cls) -> dict: # noqa
return AddDownloadParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
将上下文中的torrents添加到下载任务中
"""
params = AddDownloadParams(**params)
_started = False
for t in context.torrents:
if global_vars.is_workflow_stopped(workflow_id):
break
# 检查缓存
cache_key = f"{t.torrent_info.site}-{t.torrent_info.title}"
if self.check_cache(workflow_id, cache_key):
logger.info(f"{t.torrent_info.title} 已添加过下载,跳过")
continue
if not t.meta_info:
t.meta_info = MetaInfo(title=t.torrent_info.title, subtitle=t.torrent_info.description)
if not t.media_info:
t.media_info = self.mediachain.recognize_media(meta=t.meta_info)
if not t.media_info:
self._has_error = True
logger.warning(f"{t.torrent_info.title} 未识别到媒体信息,无法下载")
continue
if params.only_lack:
exists_info = self.downloadchain.media_exists(t.media_info)
if exists_info:
if t.media_info.type == MediaType.MOVIE:
# 电影
logger.warning(f"{t.torrent_info.title} 媒体库中已存在,跳过")
continue
else:
# 电视剧
exists_seasons = exists_info.seasons or {}
if len(t.meta_info.season_list) > 1:
# 多季不下载
logger.warning(f"{t.meta_info.title} 有多季,跳过")
continue
else:
exists_episodes = exists_seasons.get(t.meta_info.begin_season)
if exists_episodes:
if set(t.meta_info.episode_list).issubset(exists_episodes):
logger.warning(f"{t.meta_info.title}{t.meta_info.begin_season} 季第 {t.meta_info.episode_list} 集已存在,跳过")
continue
_started = True
did = self.downloadchain.download_single(context=t,
downloader=params.downloader,
save_path=params.save_path,
label=params.labels)
if did:
self._added_downloads.append(did)
# 保存缓存
self.save_cache(workflow_id, cache_key)
if self._added_downloads:
logger.info(f"已添加 {len(self._added_downloads)} 个下载任务")
context.downloads.extend(
[DownloadTask(download_id=did, downloader=params.downloader) for did in self._added_downloads]
)
elif _started:
self._has_error = True
self.job_done(f"已添加 {len(self._added_downloads)} 个下载任务")
return context
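
The duplicate protection above hinges on the cache layout that BaseAction.save_cache persists through SystemConfigOper. Below is a plain-dict sketch of that layout and of the check_cache lookup; the workflow id, action id, site and title values are illustrative, not taken from a real config.

```python
# Shape of the persisted workflow cache, as written by BaseAction.save_cache:
# one entry per workflow ("WorkflowCache-<id>"), mapping action_id -> list of handled keys.
workflow_cache = {
    "WorkflowCache-1": {
        "add_download_1": ["SiteA-Some.Movie.2024.1080p", "SiteB-Some.Show.S01E02"],
    }
}


def check_cache(workflow_id: int, action_id: str, key: str) -> bool:
    """Mirror of BaseAction.check_cache against the in-memory dict above"""
    per_workflow = workflow_cache.get(f"WorkflowCache-{workflow_id}") or {}
    return key in (per_workflow.get(action_id) or [])


# AddDownloadAction builds its key from site + title, so a torrent already handled
# in an earlier run of the same workflow is skipped on the next execution.
print(check_cache(1, "add_download_1", "SiteA-Some.Movie.2024.1080p"))  # True
print(check_cache(1, "add_download_1", "SiteA-Other.Movie.2023"))       # False
```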

View File

@@ -0,0 +1,92 @@
from app.actions import BaseAction
from app.chain.subscribe import SubscribeChain
from app.core.config import settings, global_vars
from app.core.context import MediaInfo
from app.db.subscribe_oper import SubscribeOper
from app.log import logger
from app.schemas import ActionParams, ActionContext
class AddSubscribeParams(ActionParams):
"""
添加订阅参数
"""
pass
class AddSubscribeAction(BaseAction):
"""
添加订阅
"""
_added_subscribes = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.subscribechain = SubscribeChain()
self.subscribeoper = SubscribeOper()
self._added_subscribes = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
return "添加订阅"
@classmethod
@property
def description(cls) -> str: # noqa
return "根据媒体列表添加订阅"
@classmethod
@property
def data(cls) -> dict: # noqa
return AddSubscribeParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
将medias中的信息添加订阅(如果订阅不存在的话)
"""
_started = False
for media in context.medias:
if global_vars.is_workflow_stopped(workflow_id):
break
# 检查缓存
cache_key = f"{media.type}-{media.title}-{media.year}-{media.season}"
if self.check_cache(workflow_id, cache_key):
logger.info(f"{media.title} {media.year} 已添加过订阅,跳过")
continue
mediainfo = MediaInfo()
mediainfo.from_dict(media.dict())
if self.subscribechain.exists(mediainfo):
logger.info(f"{media.title} 已存在订阅")
continue
# 添加订阅
_started = True
sid, message = self.subscribechain.add(mtype=mediainfo.type,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
username=settings.SUPERUSER)
if sid:
self._added_subscribes.append(sid)
# 保存缓存
self.save_cache(workflow_id, cache_key)
if self._added_subscribes:
logger.info(f"已添加 {len(self._added_subscribes)} 个订阅")
for sid in self._added_subscribes:
context.subscribes.append(self.subscribeoper.get(sid))
elif _started:
self._has_error = True
self.job_done(f"已添加 {len(self._added_subscribes)} 个订阅")
return context

View File

@@ -0,0 +1,68 @@
from app.actions import BaseAction, ActionChain
from app.core.config import global_vars
from app.schemas import ActionParams, ActionContext
from app.log import logger
class FetchDownloadsParams(ActionParams):
"""
获取下载任务参数
"""
pass
class FetchDownloadsAction(BaseAction):
"""
获取下载任务
"""
_downloads = []
def __init__(self, action_id: str):
super().__init__(action_id)
self.chain = ActionChain()
self._downloads = []
@classmethod
@property
def name(cls) -> str: # noqa
return "获取下载任务"
@classmethod
@property
def description(cls) -> str: # noqa
return "获取下载队列中的任务状态"
@classmethod
@property
def data(cls) -> dict: # noqa
return FetchDownloadsParams().dict()
@property
def success(self) -> bool:
return self.done
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
更新downloads中的下载任务状态
"""
# 遍历上下文中的下载任务,逐一查询其在下载器中的状态
self._downloads = context.downloads or []
for download in self._downloads:
if global_vars.is_workflow_stopped(workflow_id):
break
logger.info(f"获取下载任务 {download.download_id} 状态 ...")
torrents = self.chain.list_torrents(hashs=[download.download_id])
if not torrents:
download.completed = True
continue
for t in torrents:
download.path = t.path
if t.progress >= 100:
logger.info(f"下载任务 {download.download_id} 已完成")
download.completed = True
else:
logger.info(f"下载任务 {download.download_id} 未完成")
download.completed = False
if all([d.completed for d in self._downloads]):
self.job_done()
return context

176
app/actions/fetch_medias.py Normal file
View File

@@ -0,0 +1,176 @@
from typing import List, Optional
from pydantic import Field
from app.actions import BaseAction
from app.chain.recommend import RecommendChain
from app.schemas import ActionParams, ActionContext
from app.core.config import settings, global_vars
from app.core.event import eventmanager
from app.log import logger
from app.schemas import RecommendSourceEventData, MediaInfo
from app.schemas.types import ChainEventType
from app.utils.http import RequestUtils
class FetchMediasParams(ActionParams):
"""
获取媒体数据参数
"""
source_type: Optional[str] = Field(default="ranking", description="来源")
sources: Optional[List[str]] = Field(default=[], description="榜单")
api_path: Optional[str] = Field(default=None, description="API路径")
class FetchMediasAction(BaseAction):
"""
获取媒体数据
"""
_inner_sources = []
_medias = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self._medias = []
self._has_error = False
self.__inner_sources = [
{
"func": RecommendChain().tmdb_trending,
"name": '流行趋势',
},
{
"func": RecommendChain().douban_movie_showing,
"name": '正在热映',
},
{
"func": RecommendChain().bangumi_calendar,
"name": 'Bangumi每日放送',
},
{
"func": RecommendChain().tmdb_movies,
"name": 'TMDB热门电影',
},
{
"func": RecommendChain().tmdb_tvs,
"name": 'TMDB热门电视剧',
},
{
"func": RecommendChain().douban_movie_hot,
"name": '豆瓣热门电影',
},
{
"func": RecommendChain().douban_tv_hot,
"name": '豆瓣热门电视剧',
},
{
"func": RecommendChain().douban_tv_animation,
"name": '豆瓣热门动漫',
},
{
"func": RecommendChain().douban_movies,
"name": '豆瓣最新电影',
},
{
"func": RecommendChain().douban_tvs,
"name": '豆瓣最新电视剧',
},
{
"func": RecommendChain().douban_movie_top250,
"name": '豆瓣电影TOP250',
},
{
"func": RecommendChain().douban_tv_weekly_chinese,
"name": '豆瓣国产剧集榜',
},
{
"func": RecommendChain().douban_tv_weekly_global,
"name": '豆瓣全球剧集榜',
}
]
# 广播事件,请示额外的推荐数据源支持
event_data = RecommendSourceEventData()
event = eventmanager.send_event(ChainEventType.RecommendSource, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: RecommendSourceEventData = event.event_data
if event_data.extra_sources:
self.__inner_sources.extend([s.dict() for s in event_data.extra_sources])
@classmethod
@property
def name(cls) -> str: # noqa
return "获取媒体数据"
@classmethod
@property
def description(cls) -> str: # noqa
return "获取榜单等媒体数据列表"
@classmethod
@property
def data(cls) -> dict: # noqa
return FetchMediasParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def __get_source(self, source: str):
"""
获取数据源
"""
for s in self.__inner_sources:
if s['name'] == source:
return s
return None
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
获取媒体数据填充到medias
"""
params = FetchMediasParams(**params)
try:
if params.source_type == "ranking":
for name in params.sources:
if global_vars.is_workflow_stopped(workflow_id):
break
source = self.__get_source(name)
if not source:
continue
logger.info(f"获取媒体数据 {source} ...")
results = []
if source.get("func"):
results = source['func']()
else:
# 调用内部API获取数据
api_url = f"http://127.0.0.1:{settings.PORT}/api/v1/{source['api_path']}?token={settings.API_TOKEN}"
res = RequestUtils(timeout=15).post_res(api_url)
if res:
results = res.json()
if results:
logger.info(f"{name} 获取到 {len(results)} 条数据")
self._medias.extend([MediaInfo(**r) for r in results])
else:
logger.error(f"{name} 获取数据失败")
else:
# 调用内部API获取数据
api_url = f"http://127.0.0.1:{settings.PORT}{params.api_path}?token={settings.API_TOKEN}"
res = RequestUtils(timeout=15).post_res(api_url)
if res:
results = res.json()
if results:
logger.info(f"{params.api_path} 获取到 {len(results)} 条数据")
self._medias.extend([MediaInfo(**r) for r in results])
except Exception as e:
logger.error(f"获取媒体数据失败: {e}")
self._has_error = True
if self._medias:
context.medias.extend(self._medias)
self.job_done(f"获取到 {len(self._medias)} 条媒数据")
return context
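
FetchMediasAction also accepts extra ranking sources broadcast back on the ChainEventType.RecommendSource event; each entry ends up as a dict that needs at least a name plus either a callable func or an api_path routed through the internal API. A sketch of the two shapes follows — the names, the placeholder callable and the plugin path are invented for illustration.

```python
# Built-in style: the source exposes a callable that returns a list of media dicts.
builtin_like_source = {
    "name": "豆瓣热门电影",
    "func": lambda: [],  # placeholder; the real entries call RecommendChain methods
}

# Plugin style: no callable, only an API path; FetchMediasAction will POST to
# http://127.0.0.1:<PORT>/api/v1/<api_path>?token=<API_TOKEN> and read the JSON list.
plugin_like_source = {
    "name": "My Plugin Ranking",            # hypothetical source name
    "api_path": "plugin/MyPlugin/ranking",  # hypothetical path registered by a plugin
}
```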

117
app/actions/fetch_rss.py Normal file
View File

@@ -0,0 +1,117 @@
from typing import Optional
from pydantic import Field
from app.actions import BaseAction, ActionChain
from app.core.config import settings, global_vars
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.helper.rss import RssHelper
from app.log import logger
from app.schemas import ActionParams, ActionContext, TorrentInfo
class FetchRssParams(ActionParams):
"""
获取RSS资源列表参数
"""
url: str = Field(default=None, description="RSS地址")
proxy: Optional[bool] = Field(default=False, description="是否使用代理")
timeout: Optional[int] = Field(default=15, description="超时时间")
content_type: Optional[str] = Field(default=None, description="Content-Type")
referer: Optional[str] = Field(default=None, description="Referer")
ua: Optional[str] = Field(default=None, description="User-Agent")
match_media: Optional[str] = Field(default=None, description="匹配媒体信息")
class FetchRssAction(BaseAction):
"""
获取RSS资源列表
"""
_rss_torrents = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.rsshelper = RssHelper()
self.chain = ActionChain()
self._rss_torrents = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
return "获取RSS资源"
@classmethod
@property
def description(cls) -> str: # noqa
return "订阅RSS地址获取资源"
@classmethod
@property
def data(cls) -> dict: # noqa
return FetchRssParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
请求RSS地址获取数据并解析为资源列表
"""
params = FetchRssParams(**params)
if not params.url:
return context
headers = {}
if params.content_type:
headers["Content-Type"] = params.content_type
if params.referer:
headers["Referer"] = params.referer
if params.ua:
headers["User-Agent"] = params.ua
rss_items = self.rsshelper.parse(url=params.url,
proxy=settings.PROXY if params.proxy else None,
timeout=params.timeout,
headers=headers)
if rss_items is None or rss_items is False:
logger.error(f'RSS地址 {params.url} 请求失败!')
self._has_error = True
return context
if not rss_items:
logger.error(f'RSS地址 {params.url} 未获取到RSS数据')
return context
# 组装种子
for item in rss_items:
if global_vars.is_workflow_stopped(workflow_id):
break
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
meta = MetaInfo(title=torrentinfo.title, subtitle=torrentinfo.description)
mediainfo = None
if params.match_media:
mediainfo = self.chain.recognize_media(meta)
if not mediainfo:
logger.warning(f"{torrentinfo.title} 未识别到媒体信息")
continue
self._rss_torrents.append(Context(meta_info=meta, media_info=mediainfo, torrent_info=torrentinfo))
if self._rss_torrents:
logger.info(f"获取到 {len(self._rss_torrents)} 个RSS资源")
context.torrents.extend(self._rss_torrents)
self.job_done(f"获取到 {len(self._rss_torrents)} 个资源")
return context
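
The assembly loop above only relies on a handful of keys from RssHelper.parse. A sketch of one parsed item with just the fields the action actually reads (all values are invented):

```python
from datetime import datetime

# Minimal item shape consumed by FetchRssAction; other keys from the feed are ignored here.
rss_item = {
    "title": "Some.Show.S01E03.1080p.WEB-DL",         # becomes the torrent title and MetaInfo input
    "enclosure": "https://example.org/torrents/123",  # .torrent / magnet link
    "link": "https://example.org/details/123",        # stored as page_url
    "size": 2_147_483_648,                            # bytes
    "pubdate": datetime(2024, 11, 20, 12, 0, 0),      # formatted with strftime before storing
}

print(rss_item["pubdate"].strftime("%Y-%m-%d %H:%M:%S"))  # 2024-11-20 12:00:00
```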

View File

@@ -0,0 +1,104 @@
import random
import time
from typing import Optional, List
from pydantic import Field
from app.actions import BaseAction
from app.chain.search import SearchChain
from app.core.config import global_vars
from app.log import logger
from app.schemas import ActionParams, ActionContext, MediaType
class FetchTorrentsParams(ActionParams):
"""
获取站点资源参数
"""
search_type: Optional[str] = Field(default="keyword", description="搜索类型")
name: Optional[str] = Field(default=None, description="资源名称")
year: Optional[str] = Field(default=None, description="年份")
type: Optional[str] = Field(default=None, description="资源类型 (电影/电视剧)")
season: Optional[int] = Field(default=None, description="季度")
sites: Optional[List[int]] = Field(default=[], description="站点列表")
match_media: Optional[bool] = Field(default=False, description="匹配媒体信息")
class FetchTorrentsAction(BaseAction):
"""
搜索站点资源
"""
_torrents = []
def __init__(self, action_id: str):
super().__init__(action_id)
self.searchchain = SearchChain()
self._torrents = []
@classmethod
@property
def name(cls) -> str: # noqa
return "搜索站点资源"
@classmethod
@property
def description(cls) -> str: # noqa
return "搜索站点种子资源列表"
@classmethod
@property
def data(cls) -> dict: # noqa
return FetchTorrentsParams().dict()
@property
def success(self) -> bool:
return self.done
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
搜索站点,获取资源列表
"""
params = FetchTorrentsParams(**params)
if params.search_type == "keyword":
# 按关键字搜索
torrents = self.searchchain.search_by_title(title=params.name, sites=params.sites, cache_local=False)
for torrent in torrents:
if global_vars.is_workflow_stopped(workflow_id):
break
if params.year and torrent.meta_info.year != params.year:
continue
if params.type and torrent.media_info and torrent.media_info.type != MediaType(params.type):
continue
if params.season and torrent.meta_info.begin_season != params.season:
continue
# 识别媒体信息
if params.match_media:
torrent.media_info = self.searchchain.recognize_media(torrent.meta_info)
if not torrent.media_info:
logger.warning(f"{torrent.torrent_info.title} 未识别到媒体信息")
continue
self._torrents.append(torrent)
else:
# 搜索媒体列表
for media in context.medias:
if global_vars.is_workflow_stopped(workflow_id):
break
torrents = self.searchchain.search_by_id(tmdbid=media.tmdb_id,
doubanid=media.douban_id,
mtype=MediaType(media.type),
sites=params.sites)
for torrent in torrents:
self._torrents.append(torrent)
# 随机休眠 5-30秒
sleep_time = random.randint(5, 30)
logger.info(f"随机休眠 {sleep_time} 秒 ...")
time.sleep(sleep_time)
if self._torrents:
context.torrents.extend(self._torrents)
logger.info(f"共搜索到 {len(self._torrents)} 条资源")
self.job_done(f"搜索到 {len(self._torrents)} 个资源")
return context

View File

@@ -0,0 +1,71 @@
from typing import Optional
from pydantic import Field
from app.actions import BaseAction
from app.core.config import global_vars
from app.log import logger
from app.schemas import ActionParams, ActionContext
class FilterMediasParams(ActionParams):
"""
过滤媒体数据参数
"""
type: Optional[str] = Field(default=None, description="媒体类型 (电影/电视剧)")
vote: Optional[int] = Field(default=0, description="评分")
year: Optional[str] = Field(default=None, description="年份")
class FilterMediasAction(BaseAction):
"""
过滤媒体数据
"""
_medias = []
def __init__(self, action_id: str):
super().__init__(action_id)
self._medias = []
@classmethod
@property
def name(cls) -> str: # noqa
return "过滤媒体数据"
@classmethod
@property
def description(cls) -> str: # noqa
return "对媒体数据列表进行过滤"
@classmethod
@property
def data(cls) -> dict: # noqa
return FilterMediasParams().dict()
@property
def success(self) -> bool:
return self.done
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
过滤medias中媒体数据
"""
params = FilterMediasParams(**params)
for media in context.medias:
if global_vars.is_workflow_stopped(workflow_id):
break
if params.type and media.type != params.type:
continue
if params.vote and media.vote_average < params.vote:
continue
if params.year and media.year != params.year:
continue
self._medias.append(media)
logger.info(f"过滤后剩余 {len(self._medias)} 条媒体数据")
context.medias = self._medias
self.job_done(f"过滤后剩余 {len(self._medias)} 条媒体数据")
return context
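
The filtering itself is three guard clauses. Below is a standalone rendering of the same predicate with stand-in media objects; the DemoMedia dataclass is only a placeholder for the fields FilterMediasAction reads from each entry.

```python
from dataclasses import dataclass


@dataclass
class DemoMedia:
    # stand-in for the fields the action reads from each media entry
    type: str
    vote_average: float
    year: str


medias = [
    DemoMedia("电影", 8.1, "2024"),
    DemoMedia("电视剧", 7.9, "2024"),
    DemoMedia("电影", 6.2, "2024"),
]
params = {"type": "电影", "vote": 7, "year": "2024"}

kept = [
    m for m in medias
    if (not params["type"] or m.type == params["type"])
    and (not params["vote"] or m.vote_average >= params["vote"])
    and (not params["year"] or m.year == params["year"])
]
print(len(kept))  # 1 -> only the 2024 movie above the vote threshold survives
```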

View File

@@ -0,0 +1,88 @@
from typing import Optional, List
from pydantic import Field
from app.actions import BaseAction, ActionChain
from app.core.config import global_vars
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ActionParams, ActionContext
class FilterTorrentsParams(ActionParams):
"""
过滤资源数据参数
"""
rule_groups: Optional[List[str]] = Field(default=[], description="规则组")
quality: Optional[str] = Field(default=None, description="资源质量")
resolution: Optional[str] = Field(default=None, description="资源分辨率")
effect: Optional[str] = Field(default=None, description="特效")
include: Optional[str] = Field(default=None, description="包含规则")
exclude: Optional[str] = Field(default=None, description="排除规则")
size: Optional[str] = Field(default=None, description="资源大小范围MB")
class FilterTorrentsAction(BaseAction):
"""
过滤资源数据
"""
_torrents = []
def __init__(self, action_id: str):
super().__init__(action_id)
self.torrenthelper = TorrentHelper()
self.chain = ActionChain()
self._torrents = []
@classmethod
@property
def name(cls) -> str: # noqa
return "过滤资源"
@classmethod
@property
def description(cls) -> str: # noqa
return "对资源列表数据进行过滤"
@classmethod
@property
def data(cls) -> dict: # noqa
return FilterTorrentsParams().dict()
@property
def success(self) -> bool:
return self.done
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
过滤torrents中的资源
"""
params = FilterTorrentsParams(**params)
for torrent in context.torrents:
if global_vars.is_workflow_stopped(workflow_id):
break
if self.torrenthelper.filter_torrent(
torrent_info=torrent.torrent_info,
filter_params={
"quality": params.quality,
"resolution": params.resolution,
"effect": params.effect,
"include": params.include,
"exclude": params.exclude,
"size": params.size
}
):
if self.chain.filter_torrents(
rule_groups=params.rule_groups,
torrent_list=[torrent.torrent_info],
mediainfo=torrent.media_info
):
self._torrents.append(torrent)
logger.info(f"过滤后剩余 {len(self._torrents)} 个资源")
context.torrents = self._torrents
self.job_done(f"过滤后剩余 {len(self._torrents)} 个资源")
return context

86
app/actions/scan_file.py Normal file
View File

@@ -0,0 +1,86 @@
from pathlib import Path
from typing import Optional
from pydantic import Field
from app.actions import BaseAction
from app.chain.storage import StorageChain
from app.core.config import global_vars, settings
from app.log import logger
from app.schemas import ActionParams, ActionContext
class ScanFileParams(ActionParams):
"""
扫描目录参数
"""
# 存储
storage: Optional[str] = Field(default="local", description="存储")
directory: Optional[str] = Field(default=None, description="目录")
class ScanFileAction(BaseAction):
"""
扫描目录
"""
_fileitems = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.storagechain = StorageChain()
self._fileitems = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
return "扫描目录"
@classmethod
@property
def description(cls) -> str: # noqa
return "扫描目录文件到队列"
@classmethod
@property
def data(cls) -> dict: # noqa
return ScanFileParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
扫描目录中的所有文件记录到fileitems
"""
params = ScanFileParams(**params)
if not params.storage or not params.directory:
return context
fileitem = self.storagechain.get_file_item(params.storage, Path(params.directory))
if not fileitem:
logger.error(f"目录不存在: 【{params.storage}{params.directory}")
self._has_error = True
return context
files = self.storagechain.list_files(fileitem, recursion=True)
for file in files:
if global_vars.is_workflow_stopped(workflow_id):
break
if not file.extension or f".{file.extension.lower()}" not in settings.RMT_MEDIAEXT:
continue
# 检查缓存
cache_key = f"{file.path}"
if self.check_cache(workflow_id, cache_key):
logger.info(f"{file.path} 已处理过,跳过")
continue
self._fileitems.append(file)
# 保存缓存
self.save_cache(workflow_id, cache_key)
if self._fileitems:
context.fileitems.extend(self._fileitems)
self.job_done(f"扫描到 {len(self._fileitems)} 个文件")
return context

View File

@@ -0,0 +1,86 @@
from pathlib import Path
from app.actions import BaseAction
from app.core.config import global_vars
from app.schemas import ActionParams, ActionContext
from app.chain.media import MediaChain
from app.chain.storage import StorageChain
from app.core.metainfo import MetaInfoPath
from app.log import logger
class ScrapeFileParams(ActionParams):
"""
刮削文件参数
"""
pass
class ScrapeFileAction(BaseAction):
"""
刮削文件
"""
_scraped_files = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.storagechain = StorageChain()
self.mediachain = MediaChain()
self._scraped_files = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
return "刮削文件"
@classmethod
@property
def description(cls) -> str: # noqa
return "刮削媒体信息和图片"
@classmethod
@property
def data(cls) -> dict: # noqa
return ScrapeFileParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
刮削fileitems中的所有文件
"""
# 失败次数
_failed_count = 0
for fileitem in context.fileitems:
if global_vars.is_workflow_stopped(workflow_id):
break
if fileitem in self._scraped_files:
continue
if not self.storagechain.exists(fileitem):
continue
# 检查缓存
cache_key = f"{fileitem.path}"
if self.check_cache(workflow_id, cache_key):
logger.info(f"{fileitem.path} 已刮削过,跳过")
continue
meta = MetaInfoPath(Path(fileitem.path))
mediainfo = self.mediachain.recognize_media(meta)
if not mediainfo:
_failed_count += 1
logger.info(f"{fileitem.path} 未识别到媒体信息,无法刮削")
continue
self.mediachain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
self._scraped_files.append(fileitem)
# 保存缓存
self.save_cache(workflow_id, cache_key)
if not self._scraped_files and _failed_count:
self._has_error = True
self.job_done(f"成功刮削 {len(self._scraped_files)} 个文件,失败 {_failed_count}")
return context

48
app/actions/send_event.py Normal file
View File

@@ -0,0 +1,48 @@
from app.actions import BaseAction
from app.core.event import eventmanager
from app.schemas import ActionParams, ActionContext
from app.schemas.types import ChainEventType
class SendEventParams(ActionParams):
"""
发送事件参数
"""
pass
class SendEventAction(BaseAction):
"""
发送事件
"""
@classmethod
@property
def name(cls) -> str: # noqa
return "发送事件"
@classmethod
@property
def description(cls) -> str: # noqa
return "发送任务执行事件"
@classmethod
@property
def data(cls) -> dict: # noqa
return SendEventParams().dict()
@property
def success(self) -> bool:
return self.done
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
发送工作流事件,以便插件干预工作流执行
"""
# 触发工作流执行事件,由插件更新执行上下文
event = eventmanager.send_event(ChainEventType.WorkflowExecution, context)
if event and event.event_data:
context = event.event_data
self.job_done()
return context
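
On the receiving side, a plugin can reshape the workflow context that SendEventAction broadcasts. The sketch below assumes chain events are subscribed with the same eventmanager.register decorator used elsewhere in the project, and that event_data is the ActionContext itself; the handler name and the excluded site are hypothetical.

```python
from app.core.event import eventmanager, Event
from app.schemas.types import ChainEventType


@eventmanager.register(ChainEventType.WorkflowExecution)
def on_workflow_execution(event: Event):
    """Hypothetical plugin-side handler: mutating event_data changes what the workflow gets back"""
    context = event.event_data
    if not context:
        return
    # e.g. drop torrents from a site the plugin wants to exclude
    context.torrents = [t for t in context.torrents if t.torrent_info.site != "SiteToSkip"]
```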

View File

@@ -0,0 +1,74 @@
from typing import List, Optional, Union
from pydantic import Field
from app.actions import BaseAction, ActionChain
from app.schemas import ActionParams, ActionContext, Notification
from app.core.config import settings
class SendMessageParams(ActionParams):
"""
发送消息参数
"""
client: Optional[List[str]] = Field(default=[], description="消息渠道")
userid: Optional[Union[str, int]] = Field(default=None, description="用户ID")
class SendMessageAction(BaseAction):
"""
发送消息
"""
def __init__(self, action_id: str):
super().__init__(action_id)
self.chain = ActionChain()
@classmethod
@property
def name(cls) -> str: # noqa
return "发送消息"
@classmethod
@property
def description(cls) -> str: # noqa
return "发送任务执行消息"
@classmethod
@property
def data(cls) -> dict: # noqa
return SendMessageParams().dict()
@property
def success(self) -> bool:
return self.done
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
发送messages中的消息
"""
params = SendMessageParams(**params)
msg_text = f"当前进度:{context.progress}%"
index = 1
if context.execute_history:
for history in context.execute_history:
if not history.message:
continue
msg_text += f"\n{index}. {history.action}{history.message}"
index += 1
# 发送消息
if not params.client:
params.client = [""]
for client in params.client:
self.chain.post_message(
Notification(
source=client,
userid=params.userid,
title="【工作流执行结果】",
text=msg_text,
link=settings.MP_DOMAIN("#/workflow")
)
)
self.job_done()
return context
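
The notification body is assembled from the per-action execute history. A standalone sketch of the same string building, with invented history entries and SimpleNamespace standing in for the real schema objects:

```python
from types import SimpleNamespace

progress = 100
execute_history = [
    SimpleNamespace(action="获取RSS资源", message="获取到 12 个资源"),
    SimpleNamespace(action="过滤资源", message="过滤后剩余 3 个资源"),
    SimpleNamespace(action="添加下载", message="已添加 3 个下载任务"),
]

msg_text = f"当前进度:{progress}%"
# Action name and message are concatenated directly, as in SendMessageAction
for index, history in enumerate((h for h in execute_history if h.message), start=1):
    msg_text += f"\n{index}. {history.action}{history.message}"
print(msg_text)
```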

View File

@@ -0,0 +1,139 @@
import copy
from pathlib import Path
from typing import Optional
from pydantic import Field
from app.actions import BaseAction
from app.core.config import global_vars
from app.db.transferhistory_oper import TransferHistoryOper
from app.schemas import ActionParams, ActionContext
from app.chain.storage import StorageChain
from app.chain.transfer import TransferChain
from app.log import logger
class TransferFileParams(ActionParams):
"""
整理文件参数
"""
# 来源
source: Optional[str] = Field(default="downloads", description="来源")
class TransferFileAction(BaseAction):
"""
整理文件
"""
_fileitems = []
_has_error = False
def __init__(self, action_id: str):
super().__init__(action_id)
self.transferchain = TransferChain()
self.storagechain = StorageChain()
self.transferhis = TransferHistoryOper()
self._fileitems = []
self._has_error = False
@classmethod
@property
def name(cls) -> str: # noqa
return "整理文件"
@classmethod
@property
def description(cls) -> str: # noqa
return "整理队列中的文件"
@classmethod
@property
def data(cls) -> dict: # noqa
return TransferFileParams().dict()
@property
def success(self) -> bool:
return not self._has_error
def execute(self, workflow_id: int, params: dict, context: ActionContext) -> ActionContext:
"""
从 downloads / fileitems 中整理文件记录到fileitems
"""
def check_continue():
"""
检查是否继续整理文件
"""
if global_vars.is_workflow_stopped(workflow_id):
return False
return True
params = TransferFileParams(**params)
# 失败次数
_failed_count = 0
if params.source == "downloads":
# 从下载任务中整理文件
for download in context.downloads:
if global_vars.is_workflow_stopped(workflow_id):
break
if not download.completed:
logger.info(f"下载任务 {download.download_id} 未完成")
continue
# 检查缓存
cache_key = f"{download.download_id}"
if self.check_cache(workflow_id, cache_key):
logger.info(f"{download.path} 已整理过,跳过")
continue
fileitem = self.storagechain.get_file_item(storage="local", path=Path(download.path))
if not fileitem:
logger.info(f"文件 {download.path} 不存在")
continue
transferd = self.transferhis.get_by_src(fileitem.path, storage=fileitem.storage)
if transferd:
# 已经整理过的文件不再整理
continue
logger.info(f"开始整理文件 {download.path} ...")
state, errmsg = self.transferchain.do_transfer(fileitem, background=False)
if not state:
_failed_count += 1
logger.error(f"整理文件 {download.path} 失败: {errmsg}")
continue
logger.info(f"整理文件 {download.path} 完成")
self._fileitems.append(fileitem)
self.save_cache(workflow_id, cache_key)
else:
# 从 fileitems 中整理文件
for fileitem in copy.deepcopy(context.fileitems):
if not check_continue():
break
# 检查缓存
cache_key = f"{fileitem.path}"
if self.check_cache(workflow_id, cache_key):
logger.info(f"{fileitem.path} 已整理过,跳过")
continue
transferd = self.transferhis.get_by_src(fileitem.path, storage=fileitem.storage)
if transferd:
# 已经整理过的文件不再整理
continue
logger.info(f"开始整理文件 {fileitem.path} ...")
state, errmsg = self.transferchain.do_transfer(fileitem, background=False,
continue_callback=check_continue)
if not state:
_failed_count += 1
logger.error(f"整理文件 {fileitem.path} 失败: {errmsg}")
continue
logger.info(f"整理文件 {fileitem.path} 完成")
# 从 fileitems 中移除已整理的文件
context.fileitems.remove(fileitem)
self._fileitems.append(fileitem)
# 记录已整理的文件
self.save_cache(workflow_id, cache_key)
if self._fileitems:
context.fileitems.extend(self._fileitems)
elif _failed_count:
self._has_error = True
self.job_done(f"整理成功 {len(self._fileitems)} 个文件,失败 {_failed_count}")
return context

View File

@@ -2,7 +2,7 @@ from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage
transfer, mediaserver, bangumi, storage, discover, recommend, workflow
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -24,3 +24,6 @@ api_router.include_router(storage.router, prefix="/storage", tags=["storage"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])
api_router.include_router(mediaserver.router, prefix="/mediaserver", tags=["mediaserver"])
api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])
api_router.include_router(discover.router, prefix="/discover", tags=["discover"])
api_router.include_router(recommend.router, prefix="/recommend", tags=["recommend"])
api_router.include_router(workflow.router, prefix="/workflow", tags=["workflow"])

View File

@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
@@ -10,23 +10,10 @@ from app.core.security import verify_token
router = APIRouter()
@router.get("/calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def calendar(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
medias = BangumiChain().calendar()
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.MediaPerson])
def bangumi_credits(bangumiid: int,
page: int = 1,
count: int = 20,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi演职员表
@@ -39,8 +26,8 @@ def bangumi_credits(bangumiid: int,
@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
page: int = 1,
count: int = 20,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi推荐
@@ -62,14 +49,15 @@ def bangumi_person(person_id: int,
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def bangumi_person_credits(person_id: int,
page: int = 1,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = BangumiChain().person_credits(person_id=person_id)
if medias:
return [media.to_dict() for media in medias[(page - 1) * 20: page * 20]]
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Any, List, Optional
from typing import Any, List, Optional, Annotated
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
@@ -18,7 +18,7 @@ router = APIRouter()
@router.get("/statistic", summary="媒体数量统计", response_model=schemas.Statistic)
def statistic(name: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def statistic(name: Optional[str] = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体数量统计信息
"""
@@ -37,7 +37,7 @@ def statistic(name: str = None, _: schemas.TokenPayload = Depends(verify_token))
@router.get("/statistic2", summary="媒体数量统计API_TOKEN", response_model=schemas.Statistic)
def statistic2(_: str = Depends(verify_apitoken)) -> Any:
def statistic2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询媒体数量统计信息 API_TOKEN认证?token=xxx
"""
@@ -66,7 +66,7 @@ def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/storage2", summary="本地存储空间API_TOKEN", response_model=schemas.Storage)
def storage2(_: str = Depends(verify_apitoken)) -> Any:
def storage2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询本地存储空间信息 API_TOKEN认证?token=xxx
"""
@@ -82,7 +82,7 @@ def processes(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/downloader", summary="下载器信息", response_model=schemas.DownloaderInfo)
def downloader(name: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def downloader(name: Optional[str] = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询下载器信息
"""
@@ -103,7 +103,7 @@ def downloader(name: str = None, _: schemas.TokenPayload = Depends(verify_token)
@router.get("/downloader2", summary="下载器信息API_TOKEN", response_model=schemas.DownloaderInfo)
def downloader2(_: str = Depends(verify_apitoken)) -> Any:
def downloader2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询下载器信息 API_TOKEN认证?token=xxx
"""
@@ -119,7 +119,7 @@ def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/schedule2", summary="后台服务API_TOKEN", response_model=List[schemas.ScheduleInfo])
def schedule2(_: str = Depends(verify_apitoken)) -> Any:
def schedule2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询下载器信息 API_TOKEN认证?token=xxx
"""
@@ -127,7 +127,7 @@ def schedule2(_: str = Depends(verify_apitoken)) -> Any:
@router.get("/transfer", summary="文件整理统计", response_model=List[int])
def transfer(days: int = 7, db: Session = Depends(get_db),
def transfer(days: Optional[int] = 7, db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询文件整理统计信息
@@ -145,7 +145,7 @@ def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/cpu2", summary="获取当前CPU使用率API_TOKEN", response_model=int)
def cpu2(_: str = Depends(verify_apitoken)) -> Any:
def cpu2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前CPU使用率 API_TOKEN认证?token=xxx
"""
@@ -161,7 +161,7 @@ def memory(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/memory2", summary="获取当前内存使用量和使用率API_TOKEN", response_model=List[int])
def memory2(_: str = Depends(verify_apitoken)) -> Any:
def memory2(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
获取当前内存使用率 API_TOKEN认证?token=xxx
"""

View File

@@ -0,0 +1,130 @@
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas import DiscoverSourceEventData
from app.schemas.types import ChainEventType, MediaType
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
router = APIRouter()
@router.get("/source", summary="获取探索数据源", response_model=List[schemas.DiscoverMediaSource])
def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取探索数据源
"""
# 广播事件,请示额外的探索数据源支持
event_data = DiscoverSourceEventData()
event = eventmanager.send_event(ChainEventType.DiscoverSource, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: DiscoverSourceEventData = event.event_data
if event_data.extra_sources:
return event_data.extra_sources
return []
@router.get("/bangumi", summary="探索Bangumi", response_model=List[schemas.MediaInfo])
def bangumi(type: Optional[int] = 2,
cat: Optional[int] = None,
sort: Optional[str] = 'rank',
year: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
探索Bangumi
"""
medias = BangumiChain().discover(type=type, cat=cat, sort=sort, year=year,
limit=count, offset=(page - 1) * count)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/douban_movies", summary="探索豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@router.get("/douban_tvs", summary="探索豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@router.get("/tmdb_movies", summary="探索TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@router.get("/tmdb_tvs", summary="探索TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []

View File

@@ -1,4 +1,4 @@
from typing import Any, List
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
@@ -22,7 +22,7 @@ def douban_person(person_id: int,
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def douban_person_credits(person_id: int,
page: int = 1,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
@@ -33,129 +33,6 @@ def douban_person_credits(person_id: int,
return []
@router.get("/showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def movie_showing(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣正在热映
"""
movies = DoubanChain().movie_showing(page=page, count=count)
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
def movie_top250(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
movies = DoubanChain().movie_top250(page=page, count=count)
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
def tv_weekly_chinese(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
中国每周剧集口碑榜
"""
tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
def tv_weekly_global(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
全球每周剧集口碑榜
"""
tvs = DoubanChain().tv_weekly_global(page=page, count=count)
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
def tv_animation(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
tvs = DoubanChain().tv_animation(page=page, count=count)
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
def movie_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电影
"""
movies = DoubanChain().movie_hot(page=page, count=count)
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
def tv_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电视剧
"""
tvs = DoubanChain().tv_hot(page=page, count=count)
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.MediaPerson])
def douban_credits(doubanid: str,
type_name: str,

View File

@@ -1,4 +1,4 @@
from typing import Any, List
from typing import Any, List, Annotated, Optional
from fastapi import APIRouter, Depends, Body
@@ -9,14 +9,16 @@ from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.schemas.types import SystemConfigKey
router = APIRouter()
@router.get("/", summary="正在下载", response_model=List[schemas.DownloadingTorrent])
def list(
name: str = None,
def current(
name: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询正在下载的任务
@@ -28,8 +30,8 @@ def list(
def download(
media_in: schemas.MediaInfo,
torrent_in: schemas.TorrentInfo,
downloader: str = Body(None),
save_path: str = Body(None),
downloader: Annotated[str | None, Body()] = None,
save_path: Annotated[str | None, Body()] = None,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务(含媒体信息)
@@ -49,7 +51,7 @@ def download(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path)
downloader=downloader, save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
@@ -60,8 +62,8 @@ def download(
@router.post("/add", summary="添加下载(不含媒体信息)", response_model=schemas.Response)
def add(
torrent_in: schemas.TorrentInfo,
downloader: str = Body(None),
save_path: str = Body(None),
downloader: Annotated[str | None, Body()] = None,
save_path: Annotated[str | None, Body()] = None,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务(不含媒体信息)
@@ -82,7 +84,7 @@ def add(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path)
downloader=downloader, save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
@@ -111,6 +113,17 @@ def stop(hashString: str,
return schemas.Response(success=True if ret else False)
@router.get("/clients", summary="查询可用下载器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询可用下载器
"""
downloaders: List[dict] = SystemConfigOper().get(SystemConfigKey.Downloaders)
if downloaders:
return [{"name": d.get("name"), "type": d.get("type")} for d in downloaders if d.get("enabled")]
return []
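Several endpoints in this file (and further below) move from the `param: str = Body(None)` style to FastAPI's `Annotated` form. A minimal standalone sketch of the two equivalent declarations; the endpoint here is illustrative only and not part of this project:

from typing import Annotated, Optional
from fastapi import Body, FastAPI

app = FastAPI()

@app.post("/demo")
def demo(downloader: Annotated[Optional[str], Body()] = None,   # new style used in this change set
         save_path: Annotated[Optional[str], Body()] = None):
    # old equivalent: downloader: str = Body(None), save_path: str = Body(None)
    return {"downloader": downloader, "save_path": save_path}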
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def delete(hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:

View File

@@ -1,10 +1,12 @@
from typing import List, Any
from typing import List, Any, Optional
import jieba
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.storage import StorageChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
@@ -18,8 +20,8 @@ router = APIRouter()
@router.get("/download", summary="查询下载历史记录", response_model=List[schemas.DownloadHistory])
def download_history(page: int = 1,
count: int = 30,
def download_history(page: Optional[int] = 1,
count: Optional[int] = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -39,15 +41,15 @@ def delete_download_history(history_in: schemas.DownloadHistory,
return schemas.Response(success=True)
@router.get("/transfer", summary="查询转移历史记录", response_model=schemas.Response)
def transfer_history(title: str = None,
page: int = 1,
count: int = 30,
status: bool = None,
@router.get("/transfer", summary="查询整理记录", response_model=schemas.Response)
def transfer_history(title: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
status: Optional[bool] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询转移历史记录
查询整理记录
"""
if title == "失败":
title = None
@@ -57,6 +59,9 @@ def transfer_history(title: str = None,
status = True
if title:
if settings.TOKENIZED_SEARCH:
words = jieba.cut(title, HMM=False)
title = "%".join(words)
total = TransferHistory.count_by_title(db, title=title, status=status)
result = TransferHistory.list_by_title(db, title=title, page=page,
count=count, status=status)
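With TOKENIZED_SEARCH enabled, the title is segmented by jieba and re-joined with "%" so the LIKE-style title match still hits records regardless of how words were split. A small standalone sketch of that transform (the sample title is illustrative):

import jieba

title = "流浪地球2"
pattern = "%".join(jieba.cut(title, HMM=False))
# roughly "流浪%地球%2", usable as a fuzzy LIKE pattern for the history query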
@@ -71,14 +76,14 @@ def transfer_history(title: str = None,
})
@router.delete("/transfer", summary="删除转移历史记录", response_model=schemas.Response)
@router.delete("/transfer", summary="删除整理记录", response_model=schemas.Response)
def delete_transfer_history(history_in: schemas.TransferHistory,
deletesrc: bool = False,
deletedest: bool = False,
deletesrc: Optional[bool] = False,
deletedest: Optional[bool] = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
删除转移历史记录
删除整理记录
"""
history: TransferHistory = TransferHistory.get(db, history_in.id)
if not history:
@@ -86,9 +91,7 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
# 删除媒体库文件
if deletedest and history.dest_fileitem:
dest_fileitem = schemas.FileItem(**history.dest_fileitem)
state = StorageChain().delete_media_file(fileitem=dest_fileitem, mtype=MediaType(history.type))
if not state:
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")
StorageChain().delete_media_file(fileitem=dest_fileitem, mtype=MediaType(history.type))
# 删除源文件
if deletesrc and history.src_fileitem:
@@ -109,11 +112,11 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
return schemas.Response(success=True)
@router.get("/empty/transfer", summary="清空转移历史记录", response_model=schemas.Response)
@router.get("/empty/transfer", summary="清空整理记录", response_model=schemas.Response)
def delete_transfer_history(db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
"""
清空转移历史记录
清空整理记录
"""
TransferHistory.truncate(db)
return schemas.Response(success=True)

View File

@@ -1,5 +1,5 @@
from datetime import timedelta
from typing import Any, List
from typing import Any, List, Annotated
from fastapi import APIRouter, Depends, Form, HTTPException
from fastapi.security import OAuth2PasswordRequestForm
@@ -18,8 +18,8 @@ router = APIRouter()
@router.post("/access-token", summary="获取token", response_model=schemas.Token)
def login_access_token(
form_data: OAuth2PasswordRequestForm = Depends(),
otp_password: str = Form(None)
form_data: Annotated[OAuth2PasswordRequestForm, Depends()],
otp_password: Annotated[str | None, Form()] = None
) -> Any:
"""
获取认证Token

View File

@@ -1,22 +1,25 @@
from pathlib import Path
from typing import List, Any, Union
from typing import List, Any, Union, Annotated, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.media import MediaChain
from app.chain.tmdb import TmdbChain
from app.core.config import settings
from app.core.context import Context
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.schemas import MediaType
from app.schemas import MediaType, MediaRecognizeConvertEventData
from app.schemas.types import ChainEventType
router = APIRouter()
@router.get("/recognize", summary="识别媒体信息(种子)", response_model=schemas.Context)
def recognize(title: str,
subtitle: str = None,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据标题、副标题识别媒体信息
@@ -30,9 +33,10 @@ def recognize(title: str,
@router.get("/recognize2", summary="识别种子媒体信息API_TOKEN", response_model=schemas.Context)
def recognize2(title: str,
subtitle: str = None,
_: str = Depends(verify_apitoken)) -> Any:
def recognize2(_: Annotated[str, Depends(verify_apitoken)],
title: str,
subtitle: Optional[str] = None
) -> Any:
"""
根据标题、副标题识别媒体信息 API_TOKEN认证?token=xxx
"""
@@ -55,7 +59,7 @@ def recognize_file(path: str,
@router.get("/recognize_file2", summary="识别文件媒体信息API_TOKEN", response_model=schemas.Context)
def recognize_file2(path: str,
_: str = Depends(verify_apitoken)) -> Any:
_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
根据文件路径识别媒体信息 API_TOKEN认证?token=xxx
"""
@@ -65,14 +69,15 @@ def recognize_file2(path: str,
@router.get("/search", summary="搜索媒体/人物信息", response_model=List[dict])
def search(title: str,
type: str = "media",
type: Optional[str] = "media",
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
模糊搜索媒体/人物信息列表,media:媒体信息,person:人物信息
"""
def __get_source(obj: Union[dict, schemas.MediaPerson]):
def __get_source(obj: Union[schemas.MediaInfo, schemas.MediaPerson, dict]):
"""
获取对象属性
"""
@@ -85,6 +90,8 @@ def search(title: str,
_, medias = MediaChain().search(title=title)
if medias:
result = [media.to_dict() for media in medias]
elif type == "collection":
result = MediaChain().search_collections(name=title)
else:
result = MediaChain().search_persons(name=title)
if result:
@@ -99,7 +106,7 @@ def search(title: str,
@router.post("/scrape/{storage}", summary="刮削媒体信息", response_model=schemas.Response)
def scrape(fileitem: schemas.FileItem,
storage: str = "local",
storage: Optional[str] = "local",
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
刮削媒体信息
@@ -111,13 +118,13 @@ def scrape(fileitem: schemas.FileItem,
scrape_path = Path(fileitem.path)
meta = MetaInfoPath(scrape_path)
mediainfo = chain.recognize_by_meta(meta)
if not media_info:
if not mediainfo:
return schemas.Response(success=False, message="刮削失败,无法识别媒体信息")
if storage == "local":
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
# 手动刮削
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo, overwrite=True)
return schemas.Response(success=True, message=f"{fileitem.path} 刮削完成")
@@ -129,25 +136,108 @@ def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return MediaChain().media_category() or {}
@router.get("/group/seasons/{episode_group}", summary="查询剧集组季信息", response_model=List[schemas.MediaSeason])
def group_seasons(episode_group: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询剧集组季信息themoviedb
"""
return TmdbChain().tmdb_group_seasons(group_id=episode_group)
@router.get("/groups/{tmdbid}", summary="查询媒体剧集组", response_model=List[dict])
def seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体剧集组列表themoviedb
"""
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
if not mediainfo:
return []
return mediainfo.episode_groups
@router.get("/seasons", summary="查询媒体季信息", response_model=List[schemas.MediaSeason])
def seasons(mediaid: Optional[str] = None,
title: Optional[str] = None,
year: str = None,
season: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体季信息
"""
if mediaid:
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
return seasons_info
if title:
meta = MetaInfo(title)
if year:
meta.year = year
mediainfo = MediaChain().recognize_media(meta, mtype=MediaType.TV)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
seasons_info = TmdbChain().tmdb_seasons(tmdbid=mediainfo.tmdb_id)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
return seasons_info
else:
sea = season or 1
return schemas.MediaSeason(
season_number=sea,
poster_path=mediainfo.poster_path,
name=f"{sea}",
air_date=mediainfo.release_date,
overview=mediainfo.overview,
vote_average=mediainfo.vote_average,
episode_count=mediainfo.number_of_episodes
)
return []
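The new /seasons endpoint accepts either a prefixed media ID (only "tmdb:<id>" is handled directly here) or a title/year pair that is recognized first. A client-side sketch, assuming the /api/v1/media prefix, host and token handling:

import requests

resp = requests.get("http://localhost:3001/api/v1/media/seasons",
                    params={"mediaid": "tmdb:1399", "season": 1},
                    headers={"Authorization": "Bearer <access_token>"})
print(resp.json())  # [] or a list containing the matching season(s)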
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
def media_info(mediaid: str, type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def detail(mediaid: str, type_name: str, title: Optional[str] = None, year: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体ID查询themoviedb或豆瓣媒体信息,type_name: 电影/电视剧
"""
mtype = MediaType(type_name)
tmdbid, doubanid, bangumiid = None, None, None
mediainfo = None
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
mediainfo = MediaChain().recognize_media(tmdbid=int(mediaid[5:]), mtype=mtype)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
mediainfo = MediaChain().recognize_media(doubanid=mediaid[7:], mtype=mtype)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid[8:])
if not tmdbid and not doubanid and not bangumiid:
return schemas.MediaInfo()
mediainfo = MediaChain().recognize_media(bangumiid=int(mediaid[8:]), mtype=mtype)
else:
# 广播事件解析媒体信息
event_data = MediaRecognizeConvertEventData(
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = MediaChain().recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = MediaChain().recognize_media(doubanid=new_id, mtype=mtype)
elif title:
# 使用名称识别兜底
meta = MetaInfo(title)
if year:
meta.year = year
if mtype:
meta.type = mtype
mediainfo = MediaChain().recognize_media(meta=meta)
# 识别
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, mtype=mtype)
if mediainfo:
MediaChain().obtain_images(mediainfo)
return mediainfo.to_dict()
return schemas.MediaInfo()

View File

@@ -1,4 +1,4 @@
from typing import Any, List, Dict
from typing import Any, List, Dict, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
@@ -12,8 +12,10 @@ from app.core.security import verify_token
from app.db import get_db
from app.db.mediaserver_oper import MediaServerOper
from app.db.models import MediaServerItem
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.mediaserver import MediaServerHelper
from app.schemas import MediaType, NotExistMediaInfo
from app.schemas.types import SystemConfigKey
router = APIRouter()
@@ -41,11 +43,11 @@ def play_item(itemid: str, _: schemas.TokenPayload = Depends(verify_token)) -> s
@router.get("/exists", summary="查询本地是否存在(数据库)", response_model=schemas.Response)
def exists_local(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
def exists_local(title: Optional[str] = None,
year: Optional[str] = None,
mtype: Optional[str] = None,
tmdbid: Optional[int] = None,
season: Optional[int] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -119,7 +121,7 @@ def not_exists(media_in: schemas.MediaInfo,
@router.get("/latest", summary="最新入库条目", response_model=List[schemas.MediaServerPlayItem])
def latest(server: str, count: int = 18,
def latest(server: str, count: Optional[int] = 18,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器最新入库条目
@@ -128,7 +130,7 @@ def latest(server: str, count: int = 18,
@router.get("/playing", summary="正在播放条目", response_model=List[schemas.MediaServerPlayItem])
def playing(server: str, count: int = 12,
def playing(server: str, count: Optional[int] = 12,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器正在播放条目
@@ -137,9 +139,20 @@ def playing(server: str, count: int = 12,
@router.get("/library", summary="媒体库列表", response_model=List[schemas.MediaServerLibrary])
def library(server: str, hidden: bool = False,
def library(server: str, hidden: Optional[bool] = False,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器媒体库列表
"""
return MediaServerChain().librarys(server=server, username=userinfo.username, hidden=hidden) or []
@router.get("/clients", summary="查询可用媒体服务器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询可用媒体服务器
"""
mediaservers: List[dict] = SystemConfigOper().get(SystemConfigKey.MediaServers)
if mediaservers:
return [{"name": d.get("name"), "type": d.get("type")} for d in mediaservers if d.get("enabled")]
return []

View File

@@ -1,5 +1,5 @@
import json
from typing import Union, Any, List
from typing import Union, Any, List, Optional
from fastapi import APIRouter, BackgroundTasks, Depends, Request
from pywebpush import WebPushException, webpush
@@ -60,8 +60,8 @@ def web_message(text: str, current_user: User = Depends(get_current_active_super
@router.get("/web", summary="获取WEB消息", response_model=List[dict])
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: Session = Depends(get_db),
page: int = 1,
count: int = 20):
page: Optional[int] = 1,
count: Optional[int] = 20):
"""
获取WEB消息列表
"""
@@ -77,7 +77,7 @@ def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
def wechat_verify(echostr: str, msg_signature: str, timestamp: Union[str, int], nonce: str,
source: str = None) -> Any:
source: Optional[str] = None) -> Any:
"""
微信验证响应
"""
@@ -114,8 +114,8 @@ def vocechat_verify() -> Any:
@router.get("/", summary="回调请求验证")
def incoming_verify(token: str = None, echostr: str = None, msg_signature: str = None,
timestamp: Union[str, int] = None, nonce: str = None, source: str = None,
def incoming_verify(token: Optional[str] = None, echostr: Optional[str] = None, msg_signature: Optional[str] = None,
timestamp: Union[str, int] = None, nonce: Optional[str] = None, source: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_apitoken)) -> Any:
"""
微信/VoceChat等验证响应

View File

@@ -3,7 +3,7 @@ from typing import Annotated, Any, List, Optional
from fastapi import APIRouter, Depends, Header
from app import schemas
from app.chain.command import CommandChain
from app.command import Command
from app.core.config import settings
from app.core.plugin import PluginManager
from app.core.security import verify_apikey, verify_token
@@ -118,7 +118,7 @@ def _clean_protected_routes(existing_paths: dict):
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
state: str = "all") -> List[schemas.Plugin]:
state: Optional[str] = "all") -> List[schemas.Plugin]:
"""
查询所有插件清单,包括本地插件和在线插件,插件状态:installed, market, all
"""
@@ -181,8 +181,8 @@ def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install(plugin_id: str,
repo_url: str = "",
force: bool = False,
repo_url: Optional[str] = "",
force: Optional[bool] = False,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
安装插件
@@ -212,7 +212,7 @@ def install(plugin_id: str,
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
CommandChain().init_commands(plugin_id)
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@@ -280,7 +280,7 @@ def reset_plugin(plugin_id: str,
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
CommandChain().init_commands(plugin_id)
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@@ -308,7 +308,7 @@ def set_plugin_config(plugin_id: str, conf: dict,
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
CommandChain().init_commands(plugin_id)
Command().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)

View File

@@ -0,0 +1,191 @@
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from app import schemas
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas.types import ChainEventType
from app.chain.recommend import RecommendChain
from app.schemas import RecommendSourceEventData
router = APIRouter()
@router.get("/source", summary="获取推荐数据源", response_model=List[schemas.RecommendMediaSource])
def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取推荐数据源
"""
# 广播事件,请示额外的推荐数据源支持
event_data = RecommendSourceEventData()
event = eventmanager.send_event(ChainEventType.RecommendSource, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: RecommendSourceEventData = event.event_data
if event_data.extra_sources:
return event_data.extra_sources
return []
@router.get("/bangumi_calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def bangumi_calendar(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
return RecommendChain().bangumi_calendar(page=page, count=count)
@router.get("/douban_showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def douban_showing(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣正在热映
"""
return RecommendChain().douban_movie_showing(page=page, count=count)
@router.get("/douban_movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
return RecommendChain().douban_movies(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: Optional[str] = "R",
tags: Optional[str] = "",
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_tvs(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
def douban_movie_top250(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影TOP250
"""
return RecommendChain().douban_movie_top250(page=page, count=count)
@router.get("/douban_tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_chinese(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
中国每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_chinese(page=page, count=count)
@router.get("/douban_tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_global(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
全球每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_global(page=page, count=count)
@router.get("/douban_tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
def douban_tv_animation(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
return RecommendChain().douban_tv_animation(page=page, count=count)
@router.get("/douban_movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
def douban_movie_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电影
"""
return RecommendChain().douban_movie_hot(page=page, count=count)
@router.get("/douban_tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
def douban_tv_hot(page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电视剧
"""
return RecommendChain().douban_tv_hot(page=page, count=count)
@router.get("/tmdb_movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
return RecommendChain().tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
return RecommendChain().tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
def tmdb_trending(page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
TMDB流行趋势
"""
return RecommendChain().tmdb_trending(page=page)

View File

@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
@@ -6,8 +6,11 @@ from app import schemas
from app.chain.media import MediaChain
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.schemas.types import MediaType
from app.schemas import MediaRecognizeConvertEventData
from app.schemas.types import MediaType, ChainEventType
router = APIRouter()
@@ -23,43 +26,60 @@ def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/media/{mediaid}", summary="精确搜索资源", response_model=schemas.Response)
def search_by_id(mediaid: str,
mtype: str = None,
area: str = "title",
season: str = None,
mtype: Optional[str] = None,
area: Optional[str] = "title",
title: Optional[str] = None,
year: Optional[str] = None,
season: Optional[str] = None,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID/豆瓣ID精确搜索站点资源 tmdb:/douban:/bangumi:
"""
if mtype:
mtype = MediaType(mtype)
media_type = MediaType(mtype)
else:
media_type = None
if season:
season = int(season)
media_season = int(season)
else:
media_season = None
if sites:
site_list = [int(site) for site in sites.split(",") if site]
else:
site_list = None
torrents = None
# 根据前缀识别媒体ID
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid.replace("tmdb:", ""))
if settings.RECOGNIZE_SOURCE == "douban":
# 通过TMDBID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=media_type)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area, season=season)
mtype=media_type, area=area, season=media_season,
sites=site_list)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=mtype, area=area, season=season)
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=media_type, area=area, season=media_season,
sites=site_list)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过豆瓣ID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=media_type)
if tmdbinfo:
if tmdbinfo.get('season') and not season:
season = tmdbinfo.get('season')
if tmdbinfo.get('season') and not media_season:
media_season = tmdbinfo.get('season')
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area, season=season)
mtype=media_type, area=area, season=media_season,
sites=site_list)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area, season=season)
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=media_type, area=area, season=media_season,
sites=site_list)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
@@ -67,7 +87,8 @@ def search_by_id(mediaid: str,
tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area, season=season)
mtype=media_type, area=area, season=media_season,
sites=site_list)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
@@ -75,12 +96,49 @@ def search_by_id(mediaid: str,
doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area, season=season)
mtype=media_type, area=area, season=media_season,
sites=site_list)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
return schemas.Response(success=False, message="未知的媒体ID")
# 未知前缀,广播事件解析媒体信息
event_data = MediaRecognizeConvertEventData(
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
search_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=search_id,
mtype=media_type, area=area, season=media_season)
elif event_data.convert_type == "douban":
torrents = SearchChain().search_by_id(doubanid=search_id,
mtype=media_type, area=area, season=media_season)
else:
if not title:
return schemas.Response(success=False, message="未知的媒体ID")
# 使用名称识别兜底
meta = MetaInfo(title)
if year:
meta.year = year
if media_type:
meta.type = media_type
if media_season:
meta.type = MediaType.TV
meta.begin_season = media_season
mediainfo = MediaChain().recognize_media(meta=meta)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id,
mtype=media_type, area=area, season=media_season)
else:
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id,
mtype=media_type, area=area, season=media_season)
# 返回搜索结果
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
else:
@@ -88,14 +146,15 @@ def search_by_id(mediaid: str,
@router.get("/title", summary="模糊搜索资源", response_model=schemas.Response)
def search_by_title(keyword: str = None,
page: int = 0,
site: int = None,
def search_by_title(keyword: Optional[str] = None,
page: Optional[int] = 0,
sites: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据名称模糊搜索站点资源,支持分页,关键词为空时返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page, site=site)
torrents = SearchChain().search_by_title(title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None)
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
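Both search endpoints now take a comma-separated `sites` string instead of a single `site` ID, parsed the same way in each. A tiny standalone sketch (the IDs are illustrative):

sites = "1,3,7"
site_list = [int(site) for site in sites.split(",") if site] if sites else None
# "1,3,7" -> [1, 3, 7]; "" or None -> None, presumably meaning no site restriction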

View File

@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Dict, Optional
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
@@ -8,6 +8,7 @@ from app import schemas
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
@@ -144,7 +145,7 @@ def update_cookie(
site_id: int,
username: str,
password: str,
code: str = None,
code: Optional[str] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
@@ -202,7 +203,7 @@ def read_userdata_latest(
@router.get("/userdata/{site_id}", summary="查询某站点用户数据", response_model=schemas.Response)
def read_userdata(
site_id: int,
workdate: str = None,
workdate: Optional[str] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
@@ -258,8 +259,41 @@ def site_icon(site_id: int,
})
@router.get("/category/{site_id}", summary="站点分类", response_model=List[schemas.SiteCategory])
def site_category(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取站点分类
"""
site = Site.get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
indexer = SitesHelper().get_indexer(site.domain)
if not indexer:
raise HTTPException(
status_code=404,
detail=f"站点 {site.domain} 不支持",
)
category: Dict[str, List[dict]] = indexer.get('category') or []
if not category:
return []
result = []
for cats in category.values():
for cat in cats:
if cat not in result:
result.append(cat)
return result
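The category endpoint flattens the per-group category lists and drops entries that appear under more than one group. A standalone sketch with an assumed data shape (the real structure comes from the indexer definition, not shown here):

category = {
    "movie": [{"id": 401, "desc": "Movies"}, {"id": 404, "desc": "Documentary"}],
    "tv":    [{"id": 404, "desc": "Documentary"}, {"id": 405, "desc": "TV Series"}],
}
result = []
for cats in category.values():
    for cat in cats:
        if cat not in result:   # dict equality, so the duplicate 404 entry is kept only once
            result.append(cat)
print(result)  # three unique category dicts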
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int,
keyword: Optional[str] = None,
cat: Optional[str] = None,
page: Optional[int] = 0,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
@@ -271,7 +305,7 @@ def site_resource(site_id: int,
status_code=404,
detail=f"站点 {site_id} 不存在",
)
torrents = TorrentsChain().browse(domain=site.domain)
torrents = TorrentsChain().browse(domain=site.domain, keyword=keyword, cat=cat, page=page)
if not torrents:
return []
return [torrent.to_dict() for torrent in torrents]
@@ -351,6 +385,8 @@ def auth_site(
return schemas.Response(success=False, message="请输入认证站点和认证参数")
status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
PluginManager().init_config()
Scheduler().init_plugin_jobs()
return schemas.Response(success=status, message=msg)

View File

@@ -1,6 +1,6 @@
from datetime import datetime
from pathlib import Path
from typing import Any, List
from typing import Any, List, Optional
from fastapi import APIRouter, Depends, HTTPException
from starlette.responses import FileResponse, Response
@@ -27,11 +27,12 @@ def qrcode(name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
qrcode_data, errmsg = StorageChain().generate_qrcode(name)
if qrcode_data:
return schemas.Response(success=True, data=qrcode_data, message=errmsg)
return schemas.Response(success=False)
return schemas.Response(success=False, message=errmsg)
@router.get("/check/{name}", summary="二维码登录确认", response_model=schemas.Response)
def check(name: str, ck: str = None, t: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def check(name: str, ck: Optional[str] = None, t: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
二维码登录确认
"""
@@ -57,7 +58,7 @@ def save(name: str,
@router.post("/list", summary="所有目录和文件", response_model=List[schemas.FileItem])
def list_files(fileitem: schemas.FileItem,
sort: str = 'updated_at',
sort: Optional[str] = 'updated_at',
_: User = Depends(get_current_active_superuser)) -> Any:
"""
查询当前目录下所有目录和文件
@@ -140,7 +141,7 @@ def image(fileitem: schemas.FileItem,
@router.post("/rename", summary="重命名文件或目录", response_model=schemas.Response)
def rename(fileitem: schemas.FileItem,
new_name: str,
recursive: bool = False,
recursive: Optional[bool] = False,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
重命名文件或目录

View File

@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Annotated, Optional
import cn2an
from fastapi import APIRouter, Request, BackgroundTasks, Depends, HTTPException, Header
@@ -8,16 +8,18 @@ from app import schemas
from app.chain.subscribe import SubscribeChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.helper.subscribe import SubscribeHelper
from app.scheduler import Scheduler
from app.schemas.types import MediaType
from app.schemas.types import MediaType, EventType, SystemConfigKey
router = APIRouter()
@@ -42,7 +44,7 @@ def read_subscribes(
@router.get("/list", summary="查询所有订阅API_TOKEN", response_model=List[schemas.Subscribe])
def list_subscribes(_: str = Depends(verify_apitoken)) -> Any:
def list_subscribes(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
查询所有订阅 API_TOKEN认证?token=xxx
"""
@@ -73,21 +75,12 @@ def create_subscribe(
title = subscribe_in.name
else:
title = None
# 订阅用户
subscribe_in.username = current_user.name
sid, message = SubscribeChain().add(mtype=mtype,
title=title,
year=subscribe_in.year,
tmdbid=subscribe_in.tmdbid,
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
username=current_user.name,
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
search_imdbid=subscribe_in.search_imdbid,
custom_words=subscribe_in.custom_words,
media_category=subscribe_in.media_category,
filter_groups=subscribe_in.filter_groups,
exist_ok=True)
exist_ok=True,
**subscribe_in.dict())
return schemas.Response(
success=bool(sid), message=message, data={"id": sid}
)
@@ -107,6 +100,7 @@ def update_subscribe(
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
# 避免更新缺失集数
old_subscribe_dict = subscribe.to_dict()
subscribe_dict = subscribe_in.dict()
if not subscribe_in.lack_episode:
# 没有缺失集数时,缺失集数清空,避免更新为0
@@ -121,20 +115,53 @@ def update_subscribe(
if subscribe_in.total_episode != subscribe.total_episode:
subscribe_dict["manual_total_episode"] = 1
subscribe.update(db, subscribe_dict)
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
})
return schemas.Response(success=True)
@router.put("/status/{subid}", summary="更新订阅状态", response_model=schemas.Response)
def update_subscribe_status(
subid: int,
state: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
更新订阅状态
"""
subscribe = Subscribe.get(db, subid)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
valid_states = ["R", "P", "S"]
if state not in valid_states:
return schemas.Response(success=False, message="无效的订阅状态")
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
"state": state
})
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
})
return schemas.Response(success=True)
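A client-side sketch for the new status endpoint; only "R", "P" and "S" pass the validation above (the URL prefix, host and the exact meaning of each state code are assumptions, not stated in this diff):

import requests

resp = requests.put("http://localhost:3001/api/v1/subscribe/status/12",
                    params={"state": "S"},
                    headers={"Authorization": "Bearer <access_token>"})
print(resp.json())  # {"success": true, ...} on success, or a failure message such as "无效的订阅状态"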
@router.get("/media/{mediaid}", summary="查询订阅", response_model=schemas.Subscribe)
def subscribe_mediaid(
mediaid: str,
season: int = None,
title: str = None,
season: Optional[int] = None,
title: Optional[str] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据 TMDBID/豆瓣ID/BangumiId 查询订阅 tmdb:/douban:
"""
result = None
title_check = False
if mediaid.startswith("tmdb:"):
tmdbid = mediaid[5:]
@@ -155,6 +182,10 @@ def subscribe_mediaid(
result = Subscribe.get_by_bangumiid(db, int(bangumiid))
if not result and title:
title_check = True
else:
result = Subscribe.get_by_mediaid(db, mediaid)
if not result and title:
title_check = True
# 使用名称检查订阅
if title_check and title:
meta = MetaInfo(title)
@@ -185,9 +216,17 @@ def reset_subscribes(
"""
subscribe = Subscribe.get(db, subid)
if subscribe:
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
"note": "",
"lack_episode": subscribe.total_episode
"note": [],
"lack_episode": subscribe.total_episode,
"state": "R"
})
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
})
return schemas.Response(success=True)
return schemas.Response(success=False, message="订阅不存在")
@@ -245,30 +284,44 @@ def search_subscribe(
@router.delete("/media/{mediaid}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe_by_mediaid(
mediaid: str,
season: int = None,
season: Optional[int] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
根据TMDBID或豆瓣ID删除订阅 tmdb:/douban:
"""
delete_subscribes = []
if mediaid.startswith("tmdb:"):
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return schemas.Response(success=False)
Subscribe().delete_by_tmdbid(db, int(tmdbid), season)
subscribes = Subscribe().get_by_tmdbid(db, int(tmdbid), season)
delete_subscribes.extend(subscribes)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not doubanid:
return schemas.Response(success=False)
Subscribe().delete_by_doubanid(db, doubanid)
subscribe = Subscribe().get_by_doubanid(db, doubanid)
if subscribe:
delete_subscribes.append(subscribe)
else:
subscribe = Subscribe().get_by_mediaid(db, mediaid)
if subscribe:
delete_subscribes.append(subscribe)
for subscribe in delete_subscribes:
Subscribe().delete(db, subscribe.id)
# 发送事件
eventmanager.send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe.id,
"subscribe_info": subscribe.to_dict()
})
return schemas.Response(success=True)
@router.post("/seerr", summary="OverSeerr/JellySeerr通知订阅", response_model=schemas.Response)
async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
authorization: str = Header(None)) -> Any:
authorization: Annotated[str | None, Header()] = None) -> Any:
"""
Jellyseerr/Overseerr网络勾子通知订阅
"""
@@ -322,8 +375,8 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
@router.get("/history/{mtype}", summary="查询订阅历史", response_model=List[schemas.Subscribe])
def subscribe_history(
mtype: str,
page: int = 1,
count: int = 30,
page: Optional[int] = 1,
count: Optional[int] = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -348,9 +401,9 @@ def delete_subscribe(
@router.get("/popular", summary="热门订阅(基于用户共享数据)", response_model=List[schemas.MediaInfo])
def popular_subscribes(
stype: str,
page: int = 1,
count: int = 30,
min_sub: int = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
min_sub: Optional[int] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询热门订阅
@@ -429,6 +482,17 @@ def subscribe_share(
return schemas.Response(success=state, message=errmsg)
@router.delete("/share/{share_id}", summary="删除分享", response_model=schemas.Response)
def subscribe_share_delete(
share_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除分享
"""
state, errmsg = SubscribeHelper().share_delete(share_id=share_id)
return schemas.Response(success=state, message=errmsg)
@router.post("/fork", summary="复用订阅", response_model=schemas.Response)
def subscribe_fork(
sub: schemas.SubscribeShare,
@@ -448,11 +512,47 @@ def subscribe_fork(
return result
@router.get("/follow", summary="查询已Follow的订阅分享人", response_model=List[str])
def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询已Follow的订阅分享人
"""
return SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
@router.post("/follow", summary="Follow订阅分享人", response_model=schemas.Response)
def follow_subscriber(
share_uid: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Follow订阅分享人
"""
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid not in subscribers:
subscribers.append(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.delete("/follow", summary="取消Follow订阅分享人", response_model=schemas.Response)
def unfollow_subscriber(
share_uid: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
取消Follow订阅分享人
"""
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid in subscribers:
subscribers.remove(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.get("/shares", summary="查询分享的订阅", response_model=List[schemas.SubscribeShare])
def popular_subscribes(
name: str = None,
page: int = 1,
count: int = 30,
name: Optional[str] = None,
page: Optional[int] = 1,
count: Optional[int] = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询分享的订阅
@@ -485,9 +585,14 @@ def delete_subscribe(
subscribe = Subscribe.get(db, subscribe_id)
if subscribe:
subscribe.delete(db, subscribe_id)
# 统计订阅
SubscribeHelper().sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
# 发送事件
eventmanager.send_event(EventType.SubscribeDeleted, {
"subscribe_id": subscribe_id,
"subscribe_info": subscribe.to_dict()
})
# 统计订阅
SubscribeHelper().sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
return schemas.Response(success=True)

View File

@@ -5,9 +5,10 @@ import tempfile
from collections import deque
from datetime import datetime
from pathlib import Path
from typing import Optional, Union
from typing import Optional, Union, Annotated
import aiofiles
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from fastapi import APIRouter, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
@@ -16,16 +17,18 @@ from app import schemas
from app.chain.search import SearchChain
from app.chain.system import SystemChain
from app.core.config import global_vars, settings
from app.core.metainfo import MetaInfo
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper
from app.helper.message import MessageHelper, MessageQueueManager
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.helper.subscribe import SubscribeHelper
from app.log import logger
from app.monitor import Monitor
from app.scheduler import Scheduler
@@ -49,7 +52,6 @@ def fetch_image(
"""
处理图片缓存逻辑,支持HTTP缓存和磁盘缓存
"""
if not url:
raise HTTPException(status_code=404, detail="URL not provided")
@@ -67,6 +69,10 @@ def fetch_image(
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
raise HTTPException(status_code=400, detail="Invalid cache path or file type")
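The new suffix fallback keeps extension-less image URLs cacheable while still letting the suffix whitelist check run. A standalone sketch of just that step (paths are illustrative):

from pathlib import Path

cache_path = Path("cache/images/poster/abc123")   # sanitized URL path without a file extension
if not cache_path.suffix:
    cache_path = cache_path.with_suffix(".jpg")
print(cache_path)  # cache/images/poster/abc123.jpg, which the SECURITY_IMAGE_SUFFIXES check can then validate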
@@ -87,7 +93,8 @@ def fetch_image(
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if proxy else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer,
accept_type="image/avif,image/webp,image/apng,*/*").get_res(url=url)
if not response:
raise HTTPException(status_code=502, detail="Failed to fetch the image from the remote server")
@@ -135,7 +142,7 @@ def fetch_image(
def proxy_img(
imgurl: str,
proxy: bool = False,
if_none_match: Optional[str] = Header(None),
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
"""
@@ -152,7 +159,7 @@ def proxy_img(
@router.get("/cache/image", summary="图片缓存")
def cache_img(
url: str,
if_none_match: Optional[str] = Header(None),
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
"""
@@ -173,6 +180,11 @@ def get_global_setting():
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "TMDB_API_KEY", "TVDB_API_KEY", "FANART_API_KEY",
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
)
# 追加用户唯一ID和订阅分享管理权限
info.update({
"USER_UNIQUE_ID": SubscribeHelper().get_user_uuid(),
"SUBSCRIBE_SHARE_MANAGE": SubscribeHelper().is_admin_user(),
})
return schemas.Response(success=True,
data=info)
@@ -278,7 +290,8 @@ def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
@router.get("/message", summary="实时消息")
async def get_message(request: Request, role: str = "system", _: schemas.TokenPayload = Depends(verify_resource_token)):
async def get_message(request: Request, role: Optional[str] = "system",
_: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取系统消息,返回格式为SSE
"""
@@ -299,7 +312,7 @@ async def get_message(request: Request, role: str = "system", _: schemas.TokenPa
@router.get("/logging", summary="实时日志")
async def get_logging(request: Request, length: int = 50, logfile: str = "moviepilot.log",
async def get_logging(request: Request, length: Optional[int] = 50, logfile: Optional[str] = "moviepilot.log",
_: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取系统日志
@@ -371,7 +384,7 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(title: str,
rulegroup_name: str,
subtitle: str = None,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)):
"""
过滤规则测试,规则类型:1-订阅,2-洗版,3-搜索
@@ -385,9 +398,14 @@ def ruletest(title: str,
if not rulegroup:
return schemas.Response(success=False, message=f"过滤规则组 {rulegroup_name} 不存在!")
# 根据标题查询媒体信息
media_info = SearchChain().recognize_media(MetaInfo(title=title, subtitle=subtitle))
if not media_info:
return schemas.Response(success=False, message="未识别到媒体信息!")
# 过滤
result = SearchChain().filter_torrents(rule_groups=[rulegroup.name],
torrent_list=[torrent])
torrent_list=[torrent], mediainfo=media_info)
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=True, data={
@@ -464,6 +482,7 @@ def reload_module(_: User = Depends(get_current_active_superuser)):
"""
重新加载模块(仅管理员)
"""
MessageQueueManager().init_config()
ModuleManager().reload()
Scheduler().init()
Monitor().init()
@@ -484,7 +503,7 @@ def run_scheduler(jobid: str,
@router.get("/runscheduler2", summary="运行服务API_TOKEN", response_model=schemas.Response)
def run_scheduler2(jobid: str,
_: str = Depends(verify_apitoken)):
_: Annotated[str, Depends(verify_apitoken)]):
"""
运行服务,API_TOKEN认证
"""

View File

@@ -1,4 +1,4 @@
from typing import List, Any
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
@@ -59,10 +59,24 @@ def tmdb_recommend(tmdbid: int,
return []
@router.get("/collection/{collection_id}", summary="系列合集详情", response_model=List[schemas.MediaInfo])
def tmdb_collection(collection_id: int,
page: Optional[int] = 1,
count: Optional[int] = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据合集ID查询合集详情
"""
medias = TmdbChain().tmdb_collection(collection_id=collection_id)
if medias:
return [media.to_dict() for media in medias][(page - 1) * count:page * count]
return []
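The collection endpoint pages the already-fetched list in memory with a plain slice. A quick sketch of the arithmetic (the 45-item list stands in for MediaInfo objects):

medias = list(range(45))
page, count = 2, 20
print(medias[(page - 1) * count:page * count])  # items 20..39; page 3 would return the remaining 5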
@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.MediaPerson])
def tmdb_credits(tmdbid: int,
type_name: str,
page: int = 1,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询演员阵容,type_name: 电影/电视剧
@@ -88,7 +102,7 @@ def tmdb_person(person_id: int,
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def tmdb_person_credits(person_id: int,
page: int = 1,
page: Optional[int] = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
@@ -99,60 +113,10 @@ def tmdb_person_credits(person_id: int,
return []
@router.get("/movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
page=page)
if not movies:
return []
return [movie.to_dict() for movie in movies]
@router.get("/tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
page=page)
if not tvs:
return []
return [tv.to_dict() for tv in tvs]
@router.get("/trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
def tmdb_trending(page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
infos = TmdbChain().tmdb_trending(page=page)
if not infos:
return []
return [info.to_dict() for info in infos]
@router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
def tmdb_season_episodes(tmdbid: int, season: int,
def tmdb_season_episodes(tmdbid: int, season: int, episode_group: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询某季的所有集信息
"""
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season, episode_group=episode_group)
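The season-episodes endpoint now accepts an optional `episode_group` query parameter that is passed straight through to TmdbChain().tmdb_episodes. A hedged client sketch, assuming a local instance; the base URL, bearer-token auth, TMDB id and episode group id below are illustrative only:
import requests

resp = requests.get(
    "http://localhost:3001/api/v1/tmdb/1399/1",              # hypothetical host/route: tmdbid=1399, season=1
    params={"episode_group": "5acf93e60e0a26346d0000de"},    # hypothetical TMDB episode group id
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json())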

View File

@@ -1,8 +1,7 @@
from pathlib import Path
from typing import Any, Optional
from typing import Any, List, Annotated, Optional
from fastapi import APIRouter, Depends
from pydantic import BaseModel
from sqlalchemy.orm import Session
from app import schemas
@@ -14,30 +13,11 @@ from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser
from app.schemas import MediaType, FileItem
from app.schemas import MediaType, FileItem, ManualTransferItem
router = APIRouter()
class ManualTransferItem(BaseModel):
fileitem: FileItem = None,
logid: Optional[int] = None,
target_storage: Optional[str] = None,
target_path: Optional[str] = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
type_name: Optional[str] = None,
season: Optional[int] = None,
transfer_type: Optional[str] = None,
episode_format: Optional[str] = None,
episode_detail: Optional[str] = None,
episode_part: Optional[str] = None,
episode_offset: Optional[str] = None,
min_filesize: Optional[int] = 0,
scrape: bool = False,
from_history: bool = False
@router.get("/name", summary="查询整理后的名称", response_model=schemas.Response)
def query_name(path: str, filetype: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@@ -67,13 +47,35 @@ def query_name(path: str, filetype: str,
})
@router.get("/queue", summary="查询整理队列", response_model=List[schemas.TransferJob])
def query_queue(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询整理队列
:param _: Token校验
"""
return TransferChain().get_queue_tasks()
@router.delete("/queue", summary="从整理队列中删除任务", response_model=schemas.Response)
def remove_queue(fileitem: schemas.FileItem, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
从整理队列中删除任务
:param fileitem: 文件项
:param _: Token校验
"""
TransferChain().remove_from_queue(fileitem)
return schemas.Response(success=True)
@router.post("/manual", summary="手动转移", response_model=schemas.Response)
def manual_transfer(transer_item: ManualTransferItem,
background: Optional[bool] = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
手动转移,文件或历史记录,支持自定义剧集识别格式
:param transer_item: 手工整理项
:param background: 后台运行
:param db: 数据库
:param _: Token校验
"""
@@ -83,7 +85,7 @@ def manual_transfer(transer_item: ManualTransferItem,
# 查询历史记录
history: TransferHistory = TransferHistory.get(db, transer_item.logid)
if not history:
return schemas.Response(success=False, message=f"历史记录不存在ID{transer_item.logid}")
return schemas.Response(success=False, message=f"整理记录不存在ID{transer_item.logid}")
# 强制转移
force = True
if history.status and ("move" in history.mode):
@@ -144,11 +146,15 @@ def manual_transfer(transer_item: ManualTransferItem,
doubanid=transer_item.doubanid,
mtype=mtype,
season=transer_item.season,
episode_group=transer_item.episode_group,
transfer_type=transer_item.transfer_type,
epformat=epformat,
min_filesize=transer_item.min_filesize,
scrape=transer_item.scrape,
force=force
library_type_folder=transer_item.library_type_folder,
library_category_folder=transer_item.library_category_folder,
force=force,
background=background
)
# 失败
if not state:
@@ -160,7 +166,7 @@ def manual_transfer(transer_item: ManualTransferItem,
@router.get("/now", summary="立即执行下载器文件整理", response_model=schemas.Response)
def now(_: str = Depends(verify_apitoken)) -> Any:
def now(_: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
立即执行下载器文件整理 API_TOKEN认证?token=xxx
"""

View File

@@ -1,4 +1,4 @@
from typing import Any
from typing import Any, Annotated
from fastapi import APIRouter, BackgroundTasks, Request, Depends
@@ -19,7 +19,7 @@ def start_webhook_chain(body: Any, form: Any, args: Any):
@router.post("/", summary="Webhook消息响应", response_model=schemas.Response)
async def webhook_message(background_tasks: BackgroundTasks,
request: Request,
_: str = Depends(verify_apitoken)
_: Annotated[str, Depends(verify_apitoken)]
) -> Any:
"""
Webhook响应,配置请求中需要添加参数token=API_TOKEN&source=媒体服务器名
@@ -33,7 +33,7 @@ async def webhook_message(background_tasks: BackgroundTasks,
@router.get("/", summary="Webhook消息响应", response_model=schemas.Response)
def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: str = Depends(verify_apitoken)) -> Any:
request: Request, _: Annotated[str, Depends(verify_apitoken)]) -> Any:
"""
Webhook响应,配置请求中需要添加参数token=API_TOKEN&source=媒体服务器名
"""

View File

@@ -0,0 +1,162 @@
from datetime import datetime
from typing import List, Any, Optional
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.core.config import global_vars
from app.core.workflow import WorkFlowManager
from app.db import get_db
from app.db.models.workflow import Workflow
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.chain.workflow import WorkflowChain
from app.scheduler import Scheduler
router = APIRouter()
@router.get("/", summary="所有工作流", response_model=List[schemas.Workflow])
def list_workflows(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
获取工作流列表
"""
return Workflow.list(db)
@router.post("/", summary="创建工作流", response_model=schemas.Response)
def create_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
创建工作流
"""
if Workflow.get_by_name(db, workflow.name):
return schemas.Response(success=False, message="已存在相同名称的工作流")
if not workflow.add_time:
workflow.add_time = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
if not workflow.state:
workflow.state = "P"
Workflow(**workflow.dict()).create(db)
return schemas.Response(success=True, message="创建工作流成功")
@router.get("/actions", summary="所有动作", response_model=List[dict])
def list_actions(_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
获取所有动作
"""
return WorkFlowManager().list_actions()
@router.get("/{workflow_id}", summary="工作流详情", response_model=schemas.Workflow)
def get_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
获取工作流详情
"""
return Workflow.get(db, workflow_id)
@router.put("/{workflow_id}", summary="更新工作流", response_model=schemas.Response)
def update_workflow(workflow: schemas.Workflow,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
更新工作流
"""
wf = Workflow.get(db, workflow.id)
if not wf:
return schemas.Response(success=False, message="工作流不存在")
wf.update(db, workflow.dict())
return schemas.Response(success=True, message="更新成功")
@router.delete("/{workflow_id}", summary="删除工作流", response_model=schemas.Response)
def delete_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
删除工作流
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
Scheduler().remove_workflow_job(workflow)
# 删除工作流
Workflow.delete(db, workflow_id)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True, message="删除成功")
@router.post("/{workflow_id}/run", summary="执行工作流", response_model=schemas.Response)
def run_workflow(workflow_id: int,
from_begin: Optional[bool] = True,
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
执行工作流
"""
state, errmsg = WorkflowChain().process(workflow_id, from_begin=from_begin)
if not state:
return schemas.Response(success=False, message=errmsg)
return schemas.Response(success=True)
@router.post("/{workflow_id}/start", summary="启用工作流", response_model=schemas.Response)
def start_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
启用工作流
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 添加定时任务
Scheduler().update_workflow_job(workflow)
# 更新状态
workflow.update_state(db, workflow_id, "W")
return schemas.Response(success=True)
@router.post("/{workflow_id}/pause", summary="停用工作流", response_model=schemas.Response)
def pause_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
停用工作流
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 删除定时任务
Scheduler().remove_workflow_job(workflow)
# 停止工作流
global_vars.stop_workflow(workflow_id)
# 更新状态
workflow.update_state(db, workflow_id, "P")
return schemas.Response(success=True)
@router.post("/{workflow_id}/reset", summary="重置工作流", response_model=schemas.Response)
def reset_workflow(workflow_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_user)) -> Any:
"""
重置工作流
"""
workflow = Workflow.get(db, workflow_id)
if not workflow:
return schemas.Response(success=False, message="工作流不存在")
# 停止工作流
global_vars.stop_workflow(workflow_id)
# 重置工作流
workflow.reset(db, workflow_id, reset_count=True)
# 删除缓存
SystemConfigOper().delete(f"WorkflowCache-{workflow_id}")
return schemas.Response(success=True)
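The new workflow router above implements a full lifecycle: create (state defaults to "P"), start (schedules the job and sets state "W"), run, pause (removes the job, stops execution, back to "P"), reset, and delete. A hedged client sketch of that sequence, assuming a local instance and a logged-in user token; the route prefix, request body fields and the workflow id are illustrative assumptions:
import requests

base = "http://localhost:3001/api/v1/workflow"   # hypothetical prefix
auth = {"Authorization": "Bearer <token>"}

requests.post(base + "/", json={"name": "demo"}, headers=auth)                 # create, state "P"
requests.post(base + "/1/start", headers=auth)                                 # schedule, state "W"
requests.post(base + "/1/run", params={"from_begin": True}, headers=auth)      # run immediately
requests.post(base + "/1/pause", headers=auth)                                 # unschedule and stop, state "P"
requests.post(base + "/1/reset", headers=auth)                                 # clear progress and the WorkflowCache entry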

View File

@@ -1,4 +1,4 @@
from typing import Any, List
from typing import Any, List, Annotated
from fastapi import APIRouter, HTTPException, Depends
from sqlalchemy.orm import Session
@@ -18,7 +18,7 @@ arr_router = APIRouter(tags=['servarr'])
@arr_router.get("/system/status", summary="系统状态")
def arr_system_status(_: str = Depends(verify_apikey)) -> Any:
def arr_system_status(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr系统状态
"""
@@ -72,7 +72,7 @@ def arr_system_status(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/qualityProfile", summary="质量配置")
def arr_qualityProfile(_: str = Depends(verify_apikey)) -> Any:
def arr_qualityProfile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr质量配置
"""
@@ -113,7 +113,7 @@ def arr_qualityProfile(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/rootfolder", summary="根目录")
def arr_rootfolder(_: str = Depends(verify_apikey)) -> Any:
def arr_rootfolder(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr根目录
"""
@@ -129,7 +129,7 @@ def arr_rootfolder(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/tag", summary="标签")
def arr_tag(_: str = Depends(verify_apikey)) -> Any:
def arr_tag(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr标签
"""
@@ -142,7 +142,7 @@ def arr_tag(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/languageprofile", summary="语言")
def arr_languageprofile(_: str = Depends(verify_apikey)) -> Any:
def arr_languageprofile(_: Annotated[str, Depends(verify_apikey)]) -> Any:
"""
模拟Radarr、Sonarr语言
"""
@@ -168,7 +168,7 @@ def arr_languageprofile(_: str = Depends(verify_apikey)) -> Any:
@arr_router.get("/movie", summary="所有订阅电影", response_model=List[schemas.RadarrMovie])
def arr_movies(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -> Any:
def arr_movies(_: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Radarr电影
"""
@@ -259,7 +259,7 @@ def arr_movies(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -
@arr_router.get("/movie/lookup", summary="查询电影", response_model=List[schemas.RadarrMovie])
def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_movie_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Radarr电影 term: `tmdb:${id}`
存在和不存在均不能返回错误
@@ -305,7 +305,7 @@ def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(
@arr_router.get("/movie/{mid}", summary="电影订阅详情", response_model=schemas.RadarrMovie)
def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_movie(mid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Radarr电影订阅
"""
@@ -331,9 +331,9 @@ def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_a
@arr_router.post("/movie", summary="新增电影订阅")
def arr_add_movie(movie: RadarrMovie,
db: Session = Depends(get_db),
_: str = Depends(verify_apikey)
def arr_add_movie(_: Annotated[str, Depends(verify_apikey)],
movie: RadarrMovie,
db: Session = Depends(get_db)
) -> Any:
"""
新增Radarr电影订阅
@@ -362,7 +362,7 @@ def arr_add_movie(movie: RadarrMovie,
@arr_router.delete("/movie/{mid}", summary="删除电影订阅", response_model=schemas.Response)
def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_remove_movie(mid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
删除Radarr电影订阅
"""
@@ -378,7 +378,7 @@ def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(v
@arr_router.get("/series", summary="所有剧集", response_model=List[schemas.SonarrSeries])
def arr_series(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -> Any:
def arr_series(_: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Sonarr剧集
"""
@@ -514,7 +514,7 @@ def arr_series(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -
@arr_router.get("/series/lookup", summary="查询剧集")
def arr_series_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_series_lookup(term: str, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Sonarr剧集 term: `tvdb:${id}` title
"""
@@ -603,7 +603,7 @@ def arr_series_lookup(term: str, db: Session = Depends(get_db), _: str = Depends
@arr_router.get("/series/{tid}", summary="剧集详情")
def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_serie(tid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
查询Sonarr剧集
"""
@@ -638,8 +638,8 @@ def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_a
@arr_router.post("/series", summary="新增剧集订阅")
def arr_add_series(tv: schemas.SonarrSeries,
db: Session = Depends(get_db),
_: str = Depends(verify_apikey)) -> Any:
_: Annotated[str, Depends(verify_apikey)],
db: Session = Depends(get_db)) -> Any:
"""
新增Sonarr剧集订阅
"""
@@ -680,8 +680,16 @@ def arr_add_series(tv: schemas.SonarrSeries,
)
@arr_router.put("/series", summary="更新剧集订阅")
def arr_update_series(tv: schemas.SonarrSeries) -> Any:
"""
更新Sonarr剧集订阅
"""
return arr_add_series(tv)
@arr_router.delete("/series/{tid}", summary="删除剧集订阅")
def arr_remove_series(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
def arr_remove_series(tid: int, _: Annotated[str, Depends(verify_apikey)], db: Session = Depends(get_db)) -> Any:
"""
删除Sonarr剧集订阅
"""

View File

@@ -19,7 +19,7 @@ class GzipRequest(Request):
body = await super().body()
if "gzip" in self.headers.getlist("Content-Encoding"):
body = gzip.decompress(body)
self._body = body
self._body = body # noqa
return self._body
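The GzipRequest subclass above transparently decompresses request bodies sent with Content-Encoding: gzip. A minimal sketch of a client taking advantage of that, with an illustrative endpoint and token (any route served through this request class should behave the same way under this assumption):
import gzip, json, requests

payload = gzip.compress(json.dumps({"hello": "world"}).encode("utf-8"))
requests.post(
    "http://localhost:3001/api/v1/webhook?token=<API_TOKEN>",   # hypothetical endpoint
    data=payload,
    headers={"Content-Encoding": "gzip", "Content-Type": "application/json"},
)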

View File

@@ -1,3 +1,4 @@
import copy
import gc
import pickle
import traceback
@@ -6,7 +7,6 @@ from pathlib import Path
from typing import Optional, Any, Tuple, List, Set, Union, Dict
from qbittorrentapi import TorrentFilesList
from ruamel.yaml import CommentedMap
from transmission_rpc import File
from app.core.config import settings
@@ -16,7 +16,7 @@ from app.core.meta import MetaBase
from app.core.module import ModuleManager
from app.db.message_oper import MessageOper
from app.db.user_oper import UserOper
from app.helper.message import MessageHelper
from app.helper.message import MessageHelper, MessageQueueManager
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
@@ -38,6 +38,9 @@ class ChainBase(metaclass=ABCMeta):
self.eventmanager = EventManager()
self.messageoper = MessageOper()
self.messagehelper = MessageHelper()
self.messagequeue = MessageQueueManager(
send_callback=self.run_module
)
self.useroper = UserOper()
@staticmethod
@@ -61,7 +64,7 @@ class ChainBase(metaclass=ABCMeta):
"""
try:
with open(settings.TEMP_PATH / filename, 'wb') as f:
pickle.dump(cache, f)
pickle.dump(cache, f) # noqa
except Exception as err:
logger.error(f"保存缓存 {filename} 出错:{str(err)}")
finally:
@@ -76,7 +79,7 @@ class ChainBase(metaclass=ABCMeta):
"""
cache_path = settings.TEMP_PATH / filename
if cache_path.exists():
Path(cache_path).unlink()
cache_path.unlink()
def run_module(self, method: str, *args, **kwargs) -> Any:
"""
@@ -91,10 +94,10 @@ class ChainBase(metaclass=ABCMeta):
if isinstance(ret, tuple):
return all(value is None for value in ret)
else:
return result is None
return ret is None
logger.debug(f"请求模块执行:{method} ...")
result = None
logger.debug(f"请求模块执行:{method} ...")
modules = self.modulemanager.get_running_modules(method)
# 按优先级排序
modules = sorted(modules, key=lambda x: x.get_priority())
@@ -143,10 +146,11 @@ class ChainBase(metaclass=ABCMeta):
return result
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
tmdbid: int = None,
doubanid: str = None,
bangumiid: int = None,
mtype: Optional[MediaType] = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
bangumiid: Optional[int] = None,
episode_group: Optional[str] = None,
cache: bool = True) -> Optional[MediaInfo]:
"""
识别媒体信息不含Fanart图片
@@ -155,6 +159,7 @@ class ChainBase(metaclass=ABCMeta):
:param tmdbid: tmdbid
:param doubanid: 豆瓣ID
:param bangumiid: BangumiID
:param episode_group: 剧集组
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
@@ -170,10 +175,11 @@ class ChainBase(metaclass=ABCMeta):
doubanid = None
bangumiid = None
return self.run_module("recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, cache=cache)
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid,
episode_group=episode_group, cache=cache)
def match_doubaninfo(self, name: str, imdbid: str = None,
mtype: MediaType = None, year: str = None, season: int = None,
def match_doubaninfo(self, name: str, imdbid: Optional[str] = None,
mtype: Optional[MediaType] = None, year: Optional[str] = None, season: Optional[int] = None,
raise_exception: bool = False) -> Optional[dict]:
"""
搜索和匹配豆瓣信息
@@ -187,8 +193,8 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("match_doubaninfo", name=name, imdbid=imdbid,
mtype=mtype, year=year, season=season, raise_exception=raise_exception)
def match_tmdbinfo(self, name: str, mtype: MediaType = None,
year: str = None, season: int = None) -> Optional[dict]:
def match_tmdbinfo(self, name: str, mtype: Optional[MediaType] = None,
year: Optional[str] = None, season: Optional[int] = None) -> Optional[dict]:
"""
搜索和匹配TMDB信息
:param name: 标题
@@ -208,8 +214,8 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("obtain_images", mediainfo=mediainfo)
def obtain_specific_image(self, mediaid: Union[str, int], mtype: MediaType,
image_type: MediaImageType, image_prefix: str = None,
season: int = None, episode: int = None) -> Optional[str]:
image_type: MediaImageType, image_prefix: Optional[str] = None,
season: Optional[int] = None, episode: Optional[int] = None) -> Optional[str]:
"""
获取指定媒体信息图片,返回图片地址
:param mediaid: 媒体ID
@@ -223,7 +229,7 @@ class ChainBase(metaclass=ABCMeta):
image_prefix=image_prefix, image_type=image_type,
season=season, episode=episode)
def douban_info(self, doubanid: str, mtype: MediaType = None,
def douban_info(self, doubanid: str, mtype: Optional[MediaType] = None,
raise_exception: bool = False) -> Optional[dict]:
"""
获取豆瓣信息
@@ -242,7 +248,7 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("tvdb_info", tvdbid=tvdbid)
def tmdb_info(self, tmdbid: int, mtype: MediaType, season: int = None) -> Optional[dict]:
def tmdb_info(self, tmdbid: int, mtype: MediaType, season: Optional[int] = None) -> Optional[dict]:
"""
获取TMDB信息
:param tmdbid: int
@@ -300,10 +306,17 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("search_persons", name=name)
def search_torrents(self, site: CommentedMap,
def search_collections(self, name: str) -> Optional[List[MediaInfo]]:
"""
搜索集合信息
:param name: 集合名称
"""
return self.run_module("search_collections", name=name)
def search_torrents(self, site: dict,
keywords: List[str],
mtype: MediaType = None,
page: int = 0) -> List[TorrentInfo]:
mtype: Optional[MediaType] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
搜索一个站点的种子资源
:param site: 站点
@@ -315,34 +328,35 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_torrents", site=site, keywords=keywords,
mtype=mtype, page=page)
def refresh_torrents(self, site: CommentedMap) -> List[TorrentInfo]:
def refresh_torrents(self, site: dict, keyword: Optional[str] = None,
cat: Optional[str] = None, page: Optional[int] = 0) -> List[TorrentInfo]:
"""
获取站点最新一页的种子,多个站点需要多线程处理
:param site: 站点
:param keyword: 标题
:param cat: 分类
:param page: 页码
:return: 种子资源列表
"""
return self.run_module("refresh_torrents", site=site)
return self.run_module("refresh_torrents", site=site, keyword=keyword, cat=cat, page=page)
def filter_torrents(self, rule_groups: List[str],
torrent_list: List[TorrentInfo],
season_episodes: Dict[int, list] = None,
mediainfo: MediaInfo = None) -> List[TorrentInfo]:
"""
过滤种子资源
:param rule_groups: 过滤规则组名称列表
:param torrent_list: 资源列表
:param season_episodes: 季集数过滤 {season:[episodes]}
:param mediainfo: 识别的媒体信息
:return: 过滤后的资源列表,添加资源优先级
"""
return self.run_module("filter_torrents", rule_groups=rule_groups,
torrent_list=torrent_list, season_episodes=season_episodes,
mediainfo=mediainfo)
torrent_list=torrent_list, mediainfo=mediainfo)
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None,
downloader: str = None
) -> Optional[Tuple[Optional[str], Optional[str], str]]:
episodes: Set[int] = None, category: Optional[str] = None, label: Optional[str] = None,
downloader: Optional[str] = None
) -> Optional[Tuple[Optional[str], Optional[str], Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param content: 种子文件地址或者磁力链接
@@ -350,11 +364,12 @@ class ChainBase(metaclass=ABCMeta):
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 种子分类
:param label: 标签
:param downloader: 下载器
:return: 下载器名称、种子Hash、错误信息
:return: 下载器名称、种子Hash、种子文件布局、错误原因
"""
return self.run_module("download", content=content, download_dir=download_dir,
cookie=cookie, episodes=episodes, category=category,
cookie=cookie, episodes=episodes, category=category, label=label,
downloader=downloader)
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
@@ -370,7 +385,7 @@ class ChainBase(metaclass=ABCMeta):
def list_torrents(self, status: TorrentStatus = None,
hashs: Union[list, str] = None,
downloader: str = None
downloader: Optional[str] = None
) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
"""
获取下载器种子列表
@@ -383,8 +398,9 @@ class ChainBase(metaclass=ABCMeta):
def transfer(self, fileitem: FileItem, meta: MetaBase, mediainfo: MediaInfo,
target_directory: TransferDirectoryConf = None,
target_storage: str = None, target_path: Path = None,
transfer_type: str = None, scrape: bool = None,
target_storage: Optional[str] = None, target_path: Path = None,
transfer_type: Optional[str] = None, scrape: bool = None,
library_type_folder: bool = None, library_category_folder: bool = None,
episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
"""
文件转移
@@ -396,6 +412,8 @@ class ChainBase(metaclass=ABCMeta):
:param target_path: 目标路径
:param transfer_type: 转移模式
:param scrape: 是否刮削元数据
:param library_type_folder: 是否按类型创建目录
:param library_category_folder: 是否按类别创建目录
:param episodes_info: 当前季的全部集信息
:return: {path, target_path, message}
"""
@@ -404,9 +422,11 @@ class ChainBase(metaclass=ABCMeta):
target_directory=target_directory,
target_path=target_path, target_storage=target_storage,
transfer_type=transfer_type, scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
episodes_info=episodes_info)
def transfer_completed(self, hashs: str, downloader: str = None) -> None:
def transfer_completed(self, hashs: str, downloader: Optional[str] = None) -> None:
"""
下载器转移完成后的处理
:param hashs: 种子Hash
@@ -415,7 +435,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("transfer_completed", hashs=hashs, downloader=downloader)
def remove_torrents(self, hashs: Union[str, list], delete_file: bool = True,
downloader: str = None) -> bool:
downloader: Optional[str] = None) -> bool:
"""
删除下载器种子
:param hashs: 种子Hash
@@ -425,7 +445,7 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("remove_torrents", hashs=hashs, delete_file=delete_file, downloader=downloader)
def start_torrents(self, hashs: Union[list, str], downloader: str = None) -> bool:
def start_torrents(self, hashs: Union[list, str], downloader: Optional[str] = None) -> bool:
"""
开始下载
:param hashs: 种子Hash
@@ -434,7 +454,7 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("start_torrents", hashs=hashs, downloader=downloader)
def stop_torrents(self, hashs: Union[list, str], downloader: str = None) -> bool:
def stop_torrents(self, hashs: Union[list, str], downloader: Optional[str] = None) -> bool:
"""
停止下载
:param hashs: 种子Hash
@@ -444,7 +464,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("stop_torrents", hashs=hashs, downloader=downloader)
def torrent_files(self, tid: str,
downloader: str = None) -> Optional[Union[TorrentFilesList, List[File]]]:
downloader: Optional[str] = None) -> Optional[Union[TorrentFilesList, List[File]]]:
"""
获取种子文件
:param tid: 种子Hash
@@ -453,8 +473,8 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("torrent_files", tid=tid, downloader=downloader)
def media_exists(self, mediainfo: MediaInfo, itemid: str = None,
server: str = None) -> Optional[ExistMediaInfo]:
def media_exists(self, mediainfo: MediaInfo, itemid: Optional[str] = None,
server: Optional[str] = None) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在
:param mediainfo: 识别的媒体信息
@@ -478,38 +498,62 @@ class ChainBase(metaclass=ABCMeta):
:param message: 消息体
:return: 成功或失败
"""
logger.info(f"发送消息channel={message.channel}"
f"source={message.source},"
f"title={message.title}, "
f"text={message.text}"
f"userid={message.userid}")
if not message.userid and message.mtype:
# 没有指定用户ID时按规则确定发送对象
# 默认发送全体
to_targets = None
notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
if notify_action == "admin":
# 仅发送管理员
logger.info(f"已设置 {message.mtype} 的消息只发送给管理员")
to_targets = self.useroper.get_settings(settings.SUPERUSER)
elif notify_action == "user":
# 发送对应用户
if message.username:
logger.info(f"已设置 {message.mtype} 的消息只发送给用户 {message.username}")
to_targets = self.useroper.get_settings(message.username)
if not message.username or to_targets is None:
if message.username:
logger.info(f"没有 {message.username} 这个用户,该消息将发送给管理员")
# 回滚发送管理员
to_targets = self.useroper.get_settings(settings.SUPERUSER)
message.targets = to_targets
# 发送事件
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.dict(), "type": message.mtype})
# 保存消息
# 保存原消息
self.messagehelper.put(message, role="user", title=message.title)
self.messageoper.add(**message.dict())
# 发送
self.run_module("post_message", message=message)
# 发送消息按设置隔离
if not message.userid and message.mtype:
# 消息隔离设置
notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
if notify_action:
# 'admin' 'user,admin' 'user' 'all'
actions = notify_action.split(",")
# 是否已发送管理员标志
admin_sended = False
send_orignal = False
for action in actions:
send_message = copy.deepcopy(message)
if action == "admin" and not admin_sended:
# 仅发送管理员
logger.info(f"{send_message.mtype} 的消息已设置发送给管理员")
# 读取管理员消息IDS
send_message.targets = self.useroper.get_settings(settings.SUPERUSER)
admin_sended = True
elif action == "user" and send_message.username:
# 发送对应用户
logger.info(f"{send_message.mtype} 的消息已设置发送给用户 {send_message.username}")
# 读取用户消息IDS
send_message.targets = self.useroper.get_settings(send_message.username)
if send_message.targets is None:
# 没有找到用户
if not admin_sended:
# 回滚发送管理员
logger.info(f"用户 {send_message.username} 不存在,消息将发送给管理员")
# 读取管理员消息IDS
send_message.targets = self.useroper.get_settings(settings.SUPERUSER)
admin_sended = True
else:
# 管理员发过了,此消息不发了
logger.info(f"用户 {send_message.username} 不存在,消息无法发送到对应用户")
continue
elif send_message.username == settings.SUPERUSER:
# 管理员同名已发送
admin_sended = True
else:
# 按原消息发送全体
if not admin_sended:
send_orignal = True
break
# 按设定发送
self.eventmanager.send_event(etype=EventType.NoticeMessage,
data={**send_message.dict(), "type": send_message.mtype})
self.messagequeue.send_message("post_message", message=send_message)
if not send_orignal:
return
# 发送消息事件
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.dict(), "type": message.mtype})
# 按原消息发送
self.messagequeue.send_message("post_message", message=message)
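The rewritten post_message branch above fans one notification out according to a comma-separated switch such as "admin", "user", "user,admin" or "all": each action gets a deep copy of the message with its own targets, the superuser is never targeted twice, an unknown user falls back to the superuser once, and if no action matched the original message is sent unmodified. A simplified standalone model of that branching (not the actual implementation, which copies Notification objects and resolves message IDs via UserOper.get_settings):
def resolve_targets(notify_action, username, superuser="admin", known_users=("admin", "alice")):
    """Return recipients implied by a switch value like 'user,admin'."""
    targets = []
    admin_sent = False
    for action in notify_action.split(","):
        if action == "admin" and not admin_sent:
            targets.append(superuser)
            admin_sent = True
        elif action == "user" and username:
            if username in known_users:
                targets.append(username)
            elif not admin_sent:
                # unknown user: fall back to the superuser once
                targets.append(superuser)
                admin_sent = True
    return targets or None   # None means: deliver the original message to everyone

print(resolve_targets("user,admin", "alice"))   # ['alice', 'admin']
print(resolve_targets("user", "bob"))           # ['admin'] (fallback for an unknown user)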
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
"""
@@ -521,7 +565,7 @@ class ChainBase(metaclass=ABCMeta):
note_list = [media.to_dict() for media in medias]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
return self.run_module("post_medias_message", message=message, medias=medias)
return self.messagequeue.send_message("post_medias_message", message=message, medias=medias)
def post_torrents_message(self, message: Notification, torrents: List[Context]) -> None:
"""
@@ -533,9 +577,10 @@ class ChainBase(metaclass=ABCMeta):
note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
return self.run_module("post_torrents_message", message=message, torrents=torrents)
return self.messagequeue.send_message("post_torrents_message", message=message, torrents=torrents)
def metadata_img(self, mediainfo: MediaInfo, season: int = None, episode: int = None) -> Optional[dict]:
def metadata_img(self, mediainfo: MediaInfo,
season: Optional[int] = None, episode: Optional[int] = None) -> Optional[dict]:
"""
获取图片名称和url
:param mediainfo: 媒体信息

View File

@@ -17,6 +17,12 @@ class BangumiChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("bangumi_calendar")
def discover(self, **kwargs) -> Optional[List[MediaInfo]]:
"""
发现Bangumi番剧
"""
return self.run_module("bangumi_discover", **kwargs)
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息

View File

@@ -9,13 +9,13 @@ class DashboardChain(ChainBase, metaclass=Singleton):
"""
各类仪表板统计处理链
"""
def media_statistic(self, server: str = None) -> Optional[List[schemas.Statistic]]:
def media_statistic(self, server: Optional[str] = None) -> Optional[List[schemas.Statistic]]:
"""
媒体数量统计
"""
return self.run_module("media_statistic", server=server)
def downloader_info(self, downloader: str = None) -> Optional[List[schemas.DownloaderInfo]]:
def downloader_info(self, downloader: Optional[str] = None) -> Optional[List[schemas.DownloaderInfo]]:
"""
下载器信息
"""

View File

@@ -19,7 +19,7 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("douban_person_detail", person_id=person_id)
def person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
def person_credits(self, person_id: int, page: Optional[int] = 1) -> List[MediaInfo]:
"""
根据人物ID查询人物参演作品
:param person_id: 人物ID
@@ -27,7 +27,7 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("douban_person_credits", person_id=person_id, page=page)
def movie_top250(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def movie_top250(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取豆瓣电影TOP250
:param page: 页码
@@ -35,26 +35,26 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("movie_top250", page=page, count=count)
def movie_showing(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def movie_showing(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取正在上映的电影
"""
return self.run_module("movie_showing", page=page, count=count)
def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_weekly_chinese(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周中国剧集榜
"""
return self.run_module("tv_weekly_chinese", page=page, count=count)
def tv_weekly_global(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_weekly_global(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取本周全球剧集榜
"""
return self.run_module("tv_weekly_global", page=page, count=count)
def douban_discover(self, mtype: MediaType, sort: str, tags: str,
page: int = 0, count: int = 30) -> Optional[List[MediaInfo]]:
page: Optional[int] = 0, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
发现豆瓣电影、剧集
:param mtype: 媒体类型
@@ -67,19 +67,19 @@ class DoubanChain(ChainBase, metaclass=Singleton):
return self.run_module("douban_discover", mtype=mtype, sort=sort, tags=tags,
page=page, count=count)
def tv_animation(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_animation(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取动画剧集
"""
return self.run_module("tv_animation", page=page, count=count)
def movie_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def movie_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门电影
"""
return self.run_module("movie_hot", page=page, count=count)
def tv_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
def tv_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[List[MediaInfo]]:
"""
获取热门剧集
"""

View File

@@ -19,8 +19,8 @@ from app.helper.directory import DirectoryHelper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification, ResourceSelectionEventData, ResourceDownloadEventData
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType, ChainEventType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -39,8 +39,8 @@ class DownloadChain(ChainBase):
self.messagehelper = MessageHelper()
def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
channel: MessageChannel = None, username: str = None,
download_episodes: str = None):
channel: MessageChannel = None, username: Optional[str] = None,
download_episodes: Optional[str] = None):
"""
发送添加下载的消息,根据消息场景开关决定发给谁
:param meta: 元数据
@@ -97,7 +97,7 @@ class DownloadChain(ChainBase):
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
source: str = None,
source: Optional[str] = None,
userid: Union[str, int] = None
) -> Tuple[Optional[Union[Path, str]], str, list]:
"""
@@ -105,7 +105,7 @@ class DownloadChain(ChainBase):
:return: 种子路径,种子目录名,种子文件清单
"""
def __get_redict_url(url: str, ua: str = None, cookie: str = None) -> Optional[str]:
def __get_redict_url(url: str, ua: Optional[str] = None, cookie: Optional[str] = None) -> Optional[str]:
"""
获取下载链接, url格式[base64]url
"""
@@ -191,7 +191,7 @@ class DownloadChain(ChainBase):
logger.error(f"下载种子文件失败:{torrent.title} - {torrent_url}")
self.post_message(Notification(
channel=channel,
source=source,
source=source if channel else None,
mtype=NotificationType.Manual,
title=f"{torrent.title} 种子下载失败!",
text=f"错误信息:{error_msg}\n站点:{torrent.site_name}",
@@ -203,34 +203,61 @@ class DownloadChain(ChainBase):
def download_single(self, context: Context, torrent_file: Path = None,
episodes: Set[int] = None,
channel: MessageChannel = None, source: str = None,
downloader: str = None,
save_path: str = None,
channel: MessageChannel = None,
source: Optional[str] = None,
downloader: Optional[str] = None,
save_path: Optional[str] = None,
userid: Union[str, int] = None,
username: str = None,
media_category: str = None) -> Optional[str]:
username: Optional[str] = None,
label: Optional[str] = None) -> Optional[str]:
"""
下载及发送通知
:param context: 资源上下文
:param torrent_file: 种子文件路径
:param episodes: 需要下载的集数
:param channel: 通知渠道
:param source: 通知来源
:param source: 来源(消息通知、Subscribe、Manual等)
:param downloader: 下载器
:param save_path: 保存路径
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param label: 自定义标签
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# 发送资源下载事件,允许外部拦截下载
event_data = ResourceDownloadEventData(
context=context,
episodes=episodes or context.meta_info.episode_list,
channel=channel,
origin=source,
downloader=downloader,
options={
"save_path": save_path,
"userid": userid,
"username": username,
"media_category": _media.category
}
)
# 触发资源下载事件
event = eventmanager.send_event(ChainEventType.ResourceDownload, event_data)
if event and event.event_data:
event_data: ResourceDownloadEventData = event.event_data
# 如果事件被取消,跳过资源下载
if event_data.cancel:
logger.debug(
f"Resource download canceled by event: {event_data.source},"
f"Reason: {event_data.reason}")
return None
# 补充完整的media数据
if not _media.genre_ids:
new_media = self.recognize_media(mtype=_media.type, tmdbid=_media.tmdb_id,
doubanid=_media.douban_id, bangumiid=_media.bangumi_id)
doubanid=_media.douban_id, bangumiid=_media.bangumi_id,
episode_group=_media.episode_group)
if new_media:
_media = new_media
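The ResourceDownload event above lets an external listener veto a download by setting `cancel` (optionally with `source` and `reason`) on the event data before the torrent is added. A hedged sketch of such a listener, assuming the eventmanager.register decorator used elsewhere in the project; the filtering condition is purely illustrative:
from app.core.event import eventmanager, Event
from app.schemas.types import ChainEventType

@eventmanager.register(ChainEventType.ResourceDownload)
def veto_release_group(event: Event):
    data = event.event_data
    title = data.context.torrent_info.title or ""
    if "BadGroup" in title:                 # illustrative condition only
        data.cancel = True                  # skip this download
        data.source = "veto_release_group"
        data.reason = "blocked release group"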
@@ -256,7 +283,7 @@ class DownloadChain(ChainBase):
download_dir = Path(save_path)
else:
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_dir(_media)
dir_info = self.directoryhelper.get_dir(_media, storage="local", include_unsorted=True)
# 拼装子目录
if dir_info:
# 一级目录
@@ -284,18 +311,26 @@ class DownloadChain(ChainBase):
episodes=episodes,
download_dir=download_dir,
category=_media.category,
label=label,
downloader=downloader or _site_downloader)
if result:
_downloader, _hash, error_msg = result
_downloader, _hash, _layout, error_msg = result
else:
_downloader, _hash, error_msg = None, None, "未找到下载器"
_downloader, _hash, _layout, error_msg = None, None, None, "未找到下载器"
if _hash:
# 下载文件路径
if _folder_name:
download_path = download_dir / _folder_name
else:
# `不创建子文件夹` 或 `不存在子文件夹`
if _layout == "NoSubfolder" or not _folder_name:
# 下载路径记录至文件
download_path = download_dir / _file_list[0] if _file_list else download_dir
# 原始布局
elif _folder_name:
download_path = download_dir / _folder_name
# 创建子文件夹
else:
download_path = download_dir / Path(_file_list[0]).stem if _file_list else download_dir
# 文件保存路径
_save_path = download_dir if _layout == "NoSubfolder" or not _folder_name else download_path
# 登记下载记录
self.downloadhis.add(
@@ -310,6 +345,7 @@ class DownloadChain(ChainBase):
seasons=_meta.season,
episodes=download_episodes or _meta.episode,
image=_media.get_backdrop_image(),
downloader=_downloader,
download_hash=_hash,
torrent_name=_torrent.title,
torrent_description=_torrent.description,
@@ -318,7 +354,9 @@ class DownloadChain(ChainBase):
username=username,
channel=channel.value if channel else None,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
media_category=media_category
media_category=_media.category,
episode_group=_media.episode_group,
note={"source": source}
)
# 登记下载文件
@@ -337,8 +375,8 @@ class DownloadChain(ChainBase):
files_to_add.append({
"download_hash": _hash,
"downloader": _downloader,
"fullpath": str(download_dir / _folder_name / file),
"savepath": str(download_dir / _folder_name),
"fullpath": str(_save_path / file),
"savepath": str(_save_path),
"filepath": file,
"torrentname": _meta.org_string,
})
@@ -355,7 +393,9 @@ class DownloadChain(ChainBase):
"hash": _hash,
"context": context,
"username": username,
"downloader": _downloader
"downloader": _downloader,
"episodes": episodes or _meta.episode_list,
"source": source
})
else:
# 下载失败
@@ -364,7 +404,7 @@ class DownloadChain(ChainBase):
# 只发送给对应渠道和用户
self.post_message(Notification(
channel=channel,
source=source,
source=source if channel else None,
mtype=NotificationType.Manual,
title="添加下载任务失败:%s %s"
% (_media.title_year, _meta.season_episode),
@@ -378,13 +418,12 @@ class DownloadChain(ChainBase):
def batch_download(self,
contexts: List[Context],
no_exists: Dict[Union[int, str], Dict[int, NotExistMediaInfo]] = None,
save_path: str = None,
save_path: Optional[str] = None,
channel: MessageChannel = None,
source: str = None,
userid: str = None,
username: str = None,
media_category: str = None,
downloader: str = None
source: Optional[str] = None,
userid: Optional[str] = None,
username: Optional[str] = None,
downloader: Optional[str] = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
根据缺失数据,自动种子列表中组合择优下载
@@ -392,10 +431,9 @@ class DownloadChain(ChainBase):
:param no_exists: 缺失的剧集信息
:param save_path: 保存路径
:param channel: 通知渠道
:param source: 通知来源
:param source: 来源(消息通知、订阅、手工下载等)
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param downloader: 下载器
:return: 已经下载的资源列表、剩余未下载到的剧集 no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}
"""
@@ -457,6 +495,22 @@ class DownloadChain(ChainBase):
return 9999
return no_exist[season].total_episode
# 发送资源选择事件,允许外部修改上下文数据
logger.debug(f"Initial contexts: {len(contexts)} items, Downloader: {downloader}")
event_data = ResourceSelectionEventData(
contexts=contexts,
downloader=downloader,
origin=source
)
event = eventmanager.send_event(ChainEventType.ResourceSelection, event_data)
# 如果事件修改了上下文数据,使用更新后的数据
if event and event.event_data:
event_data: ResourceSelectionEventData = event.event_data
if event_data.updated and event_data.updated_contexts is not None:
logger.debug(f"Contexts updated by event: "
f"{len(event_data.updated_contexts)} items (source: {event_data.source})")
contexts = event_data.updated_contexts
# 分组排序
contexts = TorrentHelper().sort_group_torrents(contexts)
@@ -468,14 +522,14 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
source=source, userid=userid, username=username,
media_category=media_category, downloader=downloader):
downloader=downloader):
# 下载成功
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
# 电视剧整季匹配
logger.info(f"开始匹配电视剧整季:{no_exists}")
if no_exists:
logger.info(f"开始匹配电视剧整季:{no_exists}")
# 先把整季缺失的拿出来,看是否刚好有所有季都满足的种子 {tmdbid: [seasons]}
need_seasons: Dict[int, list] = {}
for need_mid, need_tv in no_exists.items():
@@ -553,8 +607,7 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category,
downloader=downloader,
downloader=downloader
)
else:
# 下载
@@ -562,7 +615,6 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category,
downloader=downloader)
if download_id:
@@ -578,8 +630,8 @@ class DownloadChain(ChainBase):
# 全部下载完成
break
# 电视剧季内的集匹配
logger.info(f"开始电视剧完整集匹配:{no_exists}")
if no_exists:
logger.info(f"开始电视剧完整集匹配:{no_exists}")
# TMDBID列表
need_tv_list = list(no_exists)
for need_mid in need_tv_list:
@@ -634,7 +686,6 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category,
downloader=downloader)
if download_id:
# 下载成功
@@ -648,8 +699,8 @@ class DownloadChain(ChainBase):
logger.info(f"{need_season} 剩余需要集:{need_episodes}")
# 仍然缺失的剧集从整季中选择需要的集数文件下载仅支持QB和TR
logger.info(f"开始电视剧多集拆包匹配:{no_exists}")
if no_exists:
logger.info(f"开始电视剧多集拆包匹配:{no_exists}")
# TMDBID列表
no_exists_list = list(no_exists)
for need_mid in no_exists_list:
@@ -724,7 +775,6 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category,
downloader=downloader
)
if not download_id:
@@ -810,7 +860,8 @@ class DownloadChain(ChainBase):
# 补充媒体信息
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
doubanid=mediainfo.douban_id,
episode_group=mediainfo.episode_group)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return False, {}
@@ -877,7 +928,7 @@ class DownloadChain(ChainBase):
# 全部存在
return True, no_exists
def remote_downloading(self, channel: MessageChannel, userid: Union[str, int] = None, source: str = None):
def remote_downloading(self, channel: MessageChannel, userid: Union[str, int] = None, source: Optional[str] = None):
"""
查询正在下载的任务,并发送消息
"""
@@ -911,7 +962,7 @@ class DownloadChain(ChainBase):
link=settings.MP_DOMAIN('#/downloading')
))
def downloading(self, name: str = None) -> List[DownloadingTorrent]:
def downloading(self, name: Optional[str] = None) -> List[DownloadingTorrent]:
"""
查询正在下载的任务
"""

View File

@@ -11,12 +11,15 @@ from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.log import logger
from app.schemas import FileItem
from app.schemas.types import EventType, MediaType, ChainEventType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
recognize_lock = Lock()
scraping_lock = Lock()
scraping_files = []
class MediaChain(ChainBase, metaclass=Singleton):
@@ -29,7 +32,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
self.storagechain = StorageChain()
def metadata_nfo(self, meta: MetaBase, mediainfo: MediaInfo,
season: int = None, episode: int = None) -> Optional[str]:
season: Optional[int] = None, episode: Optional[int] = None) -> Optional[str]:
"""
获取NFO文件内容文本
:param meta: 元数据
@@ -39,13 +42,13 @@ class MediaChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("metadata_nfo", meta=meta, mediainfo=mediainfo, season=season, episode=episode)
def recognize_by_meta(self, metainfo: MetaBase) -> Optional[MediaInfo]:
def recognize_by_meta(self, metainfo: MetaBase, episode_group: Optional[str] = None) -> Optional[MediaInfo]:
"""
根据主副标题识别媒体信息
"""
title = metainfo.title
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
mediainfo: MediaInfo = self.recognize_media(meta=metainfo, episode_group=episode_group)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
@@ -109,7 +112,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 重新识别
return self.recognize_media(meta=org_meta)
def recognize_by_path(self, path: str) -> Optional[Context]:
def recognize_by_path(self, path: str, episode_group: Optional[str] = None) -> Optional[Context]:
"""
根据文件路径识别媒体信息
"""
@@ -118,7 +121,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 元数据
file_meta = MetaInfoPath(file_path)
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta)
mediainfo = self.recognize_media(meta=file_meta, episode_group=episode_group)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
@@ -235,7 +238,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
return None
def get_doubaninfo_by_tmdbid(self, tmdbid: int,
mtype: MediaType = None, season: int = None) -> Optional[dict]:
mtype: MediaType = None, season: Optional[int] = None) -> Optional[dict]:
"""
根据TMDBID获取豆瓣信息
"""
@@ -301,12 +304,24 @@ class MediaChain(ChainBase, metaclass=Singleton):
if not event:
return
event_data = event.event_data or {}
fileitem = event_data.get("fileitem")
meta = event_data.get("meta")
mediainfo = event_data.get("mediainfo")
fileitem: FileItem = event_data.get("fileitem")
meta: MetaBase = event_data.get("meta")
mediainfo: MediaInfo = event_data.get("mediainfo")
overwrite = event_data.get("overwrite", False)
if not fileitem:
return
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
# 刮削锁
with scraping_lock:
if fileitem.path in scraping_files:
return
scraping_files.append(fileitem.path)
try:
# 执行刮削
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo, overwrite=overwrite)
finally:
# 释放锁
with scraping_lock:
scraping_files.remove(fileitem.path)
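The handler above uses a module-level list plus a lock as an "already in progress" guard so the same path is never scraped concurrently; membership check and append happen under the lock, and removal is done in a finally block. A minimal standalone sketch of the same pattern for any hashable job key (names are illustrative):
from threading import Lock

_lock = Lock()
_in_progress = []

def run_once(key, work):
    with _lock:
        if key in _in_progress:
            return                      # another thread is already handling this key
        _in_progress.append(key)
    try:
        work()
    finally:
        with _lock:
            _in_progress.remove(key)    # always release the guard, even on error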
def scrape_metadata(self, fileitem: schemas.FileItem,
meta: MetaBase = None, mediainfo: MediaInfo = None,
@@ -322,6 +337,20 @@ class MediaChain(ChainBase, metaclass=Singleton):
:param overwrite: 是否覆盖已有文件
"""
def is_bluray_folder(_fileitem: schemas.FileItem) -> bool:
"""
判断是否为原盘目录
"""
if not _fileitem or _fileitem.type != "dir":
return False
# 蓝光原盘目录必备的文件或文件夹
required_files = ['BDMV', 'CERTIFICATE']
# 检查目录下是否存在所需文件或文件夹
for item in self.storagechain.list_files(_fileitem):
if item.name in required_files:
return True
return False
def __list_files(_fileitem: schemas.FileItem):
"""
列出下级文件
@@ -337,14 +366,19 @@ class MediaChain(ChainBase, metaclass=Singleton):
"""
if not _fileitem or not _content or not _path:
return
tmp_file = settings.TEMP_PATH / _path.name
# 保存文件到临时目录,文件名随机
tmp_file = settings.TEMP_PATH / f"{_path.name}.{StringUtils.generate_random_str(10)}"
tmp_file.write_bytes(_content)
_fileitem.path = str(_path.parent)
item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file)
if item:
logger.info(f"已保存文件:{Path(item.path) / item.name}")
if tmp_file.exists():
tmp_file.unlink()
# 获取文件的父目录
try:
item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file, new_name=_path.name)
if item:
logger.info(f"已保存文件:{item.path}")
else:
logger.warn(f"文件保存失败:{_path}")
finally:
if tmp_file.exists():
tmp_file.unlink()
def __download_image(_url: str) -> Optional[bytes]:
"""
@@ -379,26 +413,39 @@ class MediaChain(ChainBase, metaclass=Singleton):
if fileitem.type == "file":
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 电影文件
logger.info(f"正在生成电影nfo{mediainfo.title_year} - {filepath.name}")
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not movie_nfo:
logger.warn(f"{filepath.name} nfo文件生成失败")
return
# 保存或上传nfo文件到上级目录
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 电影文件
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到上级目录
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
# 电影目录
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem)
if is_bluray_folder(fileitem):
# 原盘目录
nfo_path = filepath / (filepath.name + ".nfo")
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 生成原盘nfo
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if movie_nfo:
# 保存或上传nfo文件到当前目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
else:
# 处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem,
overwrite=overwrite)
# 生成目录内图片文件
if init_folder:
# 图片
@@ -410,59 +457,60 @@ class MediaChain(ChainBase, metaclass=Singleton):
and attr_value.startswith("http"):
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(_url=attr_value)
# 写入图片到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(_url=attr_value)
# 写入图片到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
# 电视剧
if fileitem.type == "file":
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 重新识别季集
file_meta = MetaInfoPath(filepath)
if not file_meta.begin_episode:
logger.warn(f"{filepath.name} 无法识别文件集数!")
return
file_mediainfo = self.recognize_media(meta=file_meta)
file_mediainfo = self.recognize_media(meta=file_meta, tmdbid=mediainfo.tmdb_id,
episode_group=mediainfo.episode_group)
if not file_mediainfo:
logger.warn(f"{filepath.name} 无法识别文件媒体信息!")
return
# 获取集的nfo文件
episode_nfo = self.metadata_nfo(meta=file_meta, mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
if not episode_nfo:
logger.warn(f"{filepath.name} nfo生成失败")
return
# 保存或上传nfo文件到上级目录
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 获取集的nfo文件
episode_nfo = self.metadata_nfo(meta=file_meta, mediainfo=file_mediainfo,
season=file_meta.begin_season,
episode=file_meta.begin_episode)
if episode_nfo:
# 保存或上传nfo文件到上级目录
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
else:
logger.warn(f"{filepath.name} nfo文件生成失败")
else:
logger.info(f"已存在nfo文件{nfo_path}")
# 获取集的图片
image_dict = self.metadata_img(mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
if image_dict:
for episode, image_url in image_dict.items():
image_path = filepath.with_suffix(Path(image_url).suffix)
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=image_path):
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
else:
# 当前为目录,处理目录内的文件
files = __list_files(_fileitem=fileitem)
@@ -470,65 +518,96 @@ class MediaChain(ChainBase, metaclass=Singleton):
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
parent=fileitem if file.type == "file" else None,
init_folder=True if file.type == "dir" else False)
init_folder=True if file.type == "dir" else False,
overwrite=overwrite)
# 生成目录的nfo和图片
if init_folder:
# 识别文件夹名称
season_meta = MetaInfo(filepath.name)
if season_meta.begin_season:
# 当前文件夹为Specials或者SPs时设置为S0
if filepath.name in settings.RENAME_FORMAT_S0_NAMES:
season_meta.begin_season = 0
if season_meta.begin_season is not None:
# 是否已存在
nfo_path = filepath / "season.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=season_meta.begin_season)
if not season_nfo:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
return
# 写入nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo,
season=season_meta.begin_season)
if season_nfo:
# 写入nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
else:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
else:
logger.info(f"已存在nfo文件{nfo_path}")
# TMDB季poster图片
image_dict = self.metadata_img(mediainfo=mediainfo, season=season_meta.begin_season)
if image_dict:
for image_name, image_url in image_dict.items():
image_path = filepath.with_name(image_name)
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到剧集目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
# 额外fanart季图片poster thumb banner
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
if image_name.startswith("season"):
image_path = filepath.with_name(image_name)
# 只下载当前刮削季的图片
image_season = "00" if "specials" in image_name else image_name[6:8]
if image_season != str(season_meta.begin_season).rjust(2, '0'):
logger.info(f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
continue
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
# 判断当前目录是不是剧集根目录
if season_meta.name:
if not season_meta.season:
# 是否已存在
nfo_path = filepath / "tvshow.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 当前目录有名称生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not tv_nfo:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
return
# 写入tvshow nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 当前目录有名称生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if tv_nfo:
# 写入tvshow nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
else:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
else:
logger.info(f"已存在nfo文件{nfo_path}")
# 生成目录图片
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.debug(f"已存在图片文件:{image_path}")
# 不下载季图片
if image_name.startswith("season"):
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
image_path = filepath / image_name
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
else:
logger.info(f"已存在图片文件:{image_path}")
logger.info(f"{filepath.name} 刮削完成")


@@ -1,9 +1,8 @@
import threading
from typing import List, Union, Optional, Generator
from cachetools import cached, TTLCache
from typing import List, Union, Optional, Generator, Any
from app.chain import ChainBase
from app.core.cache import cached
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
@@ -22,14 +21,15 @@ class MediaServerChain(ChainBase):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[MediaServerLibrary]:
def librarys(self, server: str, username: Optional[str] = None,
hidden: bool = False) -> List[MediaServerLibrary]:
"""
获取媒体服务器所有媒体库
"""
return self.run_module("mediaserver_librarys", server=server, username=username, hidden=hidden)
def items(self, server: str, library_id: Union[str, int], start_index: int = 0, limit: Optional[int] = -1) \
-> Optional[Generator]:
def items(self, server: str, library_id: Union[str, int],
start_index: Optional[int] = 0, limit: Optional[int] = -1) -> Generator[Any, None, None]:
"""
获取媒体服务器项目列表,支持分页和不分页逻辑,默认不分页获取所有数据
@@ -82,28 +82,31 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_tv_episodes", server=server, item_id=item_id)
def playing(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
def playing(self, server: str, count: Optional[int] = 20,
username: Optional[str] = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
return self.run_module("mediaserver_playing", count=count, server=server, username=username)
def latest(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
def latest(self, server: str, count: Optional[int] = 20,
username: Optional[str] = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_latest_wallpapers(self, server: str = None, count: int = 10,
remote: bool = True, username: str = None) -> List[str]:
@cached(maxsize=1, ttl=3600)
def get_latest_wallpapers(self, server: Optional[str] = None, count: Optional[int] = 10,
remote: bool = True, username: Optional[str] = None) -> List[str]:
"""
获取最新入库条目海报作为壁纸,缓存1小时
"""
return self.run_module("mediaserver_latest_images", server=server, count=count,
remote=remote, username=username)
def get_latest_wallpaper(self, server: str = None, remote: bool = True, username: str = None) -> Optional[str]:
def get_latest_wallpaper(self, server: Optional[str] = None,
remote: bool = True, username: Optional[str] = None) -> Optional[str]:
"""
获取最新入库条目海报作为壁纸,缓存1小时
"""


@@ -111,6 +111,8 @@ class MessageChain(ChainBase):
info = self.message_parser(source=source, body=body, form=form, args=args)
if not info:
return
# 更新消息来源
source = info.source
# 渠道
channel = info.channel
# 用户ID
@@ -293,6 +295,8 @@ class MessageChain(ChainBase):
return
else:
best_version = True
# 转换用户名
mp_name = self.useroper.get_name(**{f"{channel.name.lower()}_userid": userid}) if channel else None
# 添加订阅状态为N
self.subscribechain.add(title=mediainfo.title,
year=mediainfo.year,
@@ -302,7 +306,7 @@ class MessageChain(ChainBase):
channel=channel,
source=source,
userid=userid,
username=username,
username=mp_name or username,
best_version=best_version)
elif cache_type == "Torrent":
if int(text) == 0:
@@ -503,6 +507,8 @@ class MessageChain(ChainBase):
note = downloaded
else:
note = None
# 转换用户名
mp_name = self.useroper.get_name(**{f"{channel.name.lower()}_userid": userid}) if channel else None
# 添加订阅状态为R
self.subscribechain.add(title=_current_media.title,
year=_current_media.year,
@@ -512,7 +518,7 @@ class MessageChain(ChainBase):
channel=channel,
source=source,
userid=userid,
username=username,
username=mp_name or username,
state="R",
note=note)
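
Both insertions above resolve the messaging-channel user id to an MP user name before adding the subscription, building the keyword argument from the channel name at runtime. A small illustration of the dynamic kwargs (the channel name and id are placeholders):

channel_name = "Telegram"          # placeholder channel
userid = "123456789"               # placeholder messaging user id
kwargs = {f"{channel_name.lower()}_userid": userid}
print(kwargs)                      # {'telegram_userid': '123456789'}
# Equivalent to calling: useroper.get_name(telegram_userid="123456789")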

app/chain/recommend.py (new file, 316 lines)

@@ -0,0 +1,316 @@
import io
import tempfile
from pathlib import Path
from typing import List, Optional
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from app.chain import ChainBase
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.cache import cache_backend, cached
from app.core.config import settings, global_vars
from app.log import logger
from app.schemas import MediaType
from app.utils.common import log_execution_time
from app.utils.http import RequestUtils
from app.utils.security import SecurityUtils
from app.utils.singleton import Singleton
# 推荐相关的专用缓存
recommend_ttl = 24 * 3600
recommend_cache_region = "recommend"
class RecommendChain(ChainBase, metaclass=Singleton):
"""
推荐处理链,单例运行
"""
def __init__(self):
super().__init__()
self.tmdbchain = TmdbChain()
self.doubanchain = DoubanChain()
self.bangumichain = BangumiChain()
self.cache_max_pages = 5
def refresh_recommend(self):
"""
刷新推荐
"""
logger.debug("Starting to refresh Recommend data.")
cache_backend.clear(region=recommend_cache_region)
logger.debug("Recommend Cache has been cleared.")
# 推荐来源方法
recommend_methods = [
self.tmdb_movies,
self.tmdb_tvs,
self.tmdb_trending,
self.bangumi_calendar,
self.douban_movie_showing,
self.douban_movies,
self.douban_tvs,
self.douban_movie_top250,
self.douban_tv_weekly_chinese,
self.douban_tv_weekly_global,
self.douban_tv_animation,
self.douban_movie_hot,
self.douban_tv_hot,
]
# 缓存并刷新所有推荐数据
recommends = []
# 记录哪些方法已完成
methods_finished = set()
# 这里避免区间内连续调用相同来源,因此遍历方案为每页遍历所有推荐来源,再进行页数遍历
for page in range(1, self.cache_max_pages + 1):
for method in recommend_methods:
if global_vars.is_system_stopped:
return
if method in methods_finished:
continue
logger.debug(f"Fetch {method.__name__} data for page {page}.")
data = method(page=page)
if not data:
logger.debug("All recommendation methods have finished fetching data. Ending pagination early.")
methods_finished.add(method)
continue
recommends.extend(data)
# 如果所有方法都已经完成,提前结束循环
if len(methods_finished) == len(recommend_methods):
break
# 缓存收集到的海报
self.__cache_posters(recommends)
logger.debug("Recommend data refresh completed.")
def __cache_posters(self, datas: List[dict]):
"""
提取 poster_path 并缓存图片
:param datas: 数据列表
"""
if not settings.GLOBAL_IMAGE_CACHE:
return
for data in datas:
if global_vars.is_system_stopped:
return
poster_path = data.get("poster_path")
if poster_path:
poster_url = poster_path.replace("original", "w500")
logger.debug(f"Caching poster image: {poster_url}")
self.__fetch_and_save_image(poster_url)
@staticmethod
def __fetch_and_save_image(url: str):
"""
请求并保存图片
:param url: 图片路径
"""
if not settings.GLOBAL_IMAGE_CACHE or not url:
return
# 生成缓存路径
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
logger.debug(f"Invalid cache path or file type for URL: {url}, sanitized path: {sanitized_path}")
return
# 本地存在缓存图片,则直接跳过
if cache_path.exists():
logger.debug(f"Cache hit: Image already exists at {cache_path}")
return
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if not referer else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
if not response:
logger.debug(f"Empty response for URL: {url}")
return
# 验证下载的内容是否为有效图片
try:
Image.open(io.BytesIO(response.content)).verify()
except Exception as e:
logger.debug(f"Invalid image format for URL {url}: {e}")
return
if not cache_path:
return
try:
if not cache_path.parent.exists():
cache_path.parent.mkdir(parents=True, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=cache_path.parent, delete=False) as tmp_file:
tmp_file.write(response.content)
temp_path = Path(tmp_file.name)
temp_path.replace(cache_path)
logger.debug(f"Successfully cached image at {cache_path} for URL: {url}")
except Exception as e:
logger.debug(f"Failed to write cache file {cache_path} for URL {url}: {e}")
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_movies(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
TMDB热门电影
"""
movies = self.tmdbchain.tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_tvs(self, sort_by: Optional[str] = "popularity.desc",
with_genres: Optional[str] = "",
with_original_language: Optional[str] = "zh|en|ja|ko",
with_keywords: Optional[str] = "",
with_watch_providers: Optional[str] = "",
vote_average: Optional[float] = 0.0,
vote_count: Optional[int] = 0,
release_date: Optional[str] = "",
page: Optional[int] = 1) -> List[dict]:
"""
TMDB热门电视剧
"""
tvs = self.tmdbchain.tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_trending(self, page: Optional[int] = 1) -> List[dict]:
"""
TMDB流行趋势
"""
infos = self.tmdbchain.tmdb_trending(page=page)
return [info.to_dict() for info in infos] if infos else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def bangumi_calendar(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
Bangumi每日放送
"""
medias = self.bangumichain.calendar()
return [media.to_dict() for media in medias[(page - 1) * count: page * count]] if medias else []
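
bangumi_calendar pages a single full calendar response in memory, so the slice bounds follow directly from page and count:

page, count = 2, 30
start, end = (page - 1) * count, page * count
print(start, end)   # 30 60 -> medias[30:60] is the second page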
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_showing(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣正在热映
"""
movies = self.doubanchain.movie_showing(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movies(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣最新电影
"""
movies = self.doubanchain.douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tvs(self, sort: Optional[str] = "R", tags: Optional[str] = "",
page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣最新电视剧
"""
tvs = self.doubanchain.douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_top250(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣电影TOP250
"""
movies = self.doubanchain.movie_top250(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_weekly_chinese(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣国产剧集榜
"""
tvs = self.doubanchain.tv_weekly_chinese(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_weekly_global(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣全球剧集榜
"""
tvs = self.doubanchain.tv_weekly_global(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_animation(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣热门动漫
"""
tvs = self.doubanchain.tv_animation(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣热门电影
"""
movies = self.doubanchain.movie_hot(page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_hot(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
"""
豆瓣热门电视剧
"""
tvs = self.doubanchain.tv_hot(page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
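
Taken together, refresh_recommend clears the "recommend" cache region and re-drives all of the @cached(ttl=recommend_ttl, region=recommend_cache_region) methods page by page, so later calls are served from the 24-hour cache. A rough usage sketch (when and how often to refresh is up to the scheduler):

from app.chain.recommend import RecommendChain

chain = RecommendChain()
chain.refresh_recommend()               # clear the region, then warm pages 1..cache_max_pages
trending = chain.tmdb_trending(page=1)  # served from the warmed cache for recommend_ttl seconds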


@@ -34,15 +34,17 @@ class SearchChain(ChainBase):
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
def search_by_id(self, tmdbid: int = None, doubanid: str = None,
mtype: MediaType = None, area: str = "title", season: int = None) -> List[Context]:
def search_by_id(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
mtype: MediaType = None, area: Optional[str] = "title", season: Optional[int] = None,
sites: List[int] = None) -> List[Context]:
"""
根据TMDBID/豆瓣ID搜索资源精确匹配但不不过滤本地存在的资源
根据TMDBID/豆瓣ID搜索资源,精确匹配,不过滤本地存在的资源
:param tmdbid: TMDB ID
:param doubanid: 豆瓣 ID
:param mtype: 媒体,电影 or 电视剧
:param area: 搜索范围title or imdbid
:param season: 季数
:param sites: 站点ID列表
"""
mediainfo = self.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
if not mediainfo:
@@ -55,25 +57,27 @@ class SearchChain(ChainBase):
season: NotExistMediaInfo(episodes=[])
}
}
results = self.process(mediainfo=mediainfo, area=area, no_exists=no_exists)
results = self.process(mediainfo=mediainfo, sites=sites, area=area, no_exists=no_exists)
# 保存到本地文件
bytes_results = pickle.dumps(results)
self.save_cache(bytes_results, self.__result_temp_file)
return results
def search_by_title(self, title: str, page: int = 0, site: int = None) -> List[Context]:
def search_by_title(self, title: str, page: Optional[int] = 0,
sites: List[int] = None, cache_local: Optional[bool] = True) -> List[Context]:
"""
根据标题搜索资源,不识别不过滤,直接返回站点内容
:param title: 标题,为空时返回所有站点首页内容
:param page: 页码
:param site: 站点ID
:param sites: 站点ID列表
:param cache_local: 是否缓存到本地
"""
if title:
logger.info(f'开始搜索资源,关键词:{title} ...')
else:
logger.info(f'开始浏览资源,站点:{site} ...')
logger.info(f'开始浏览资源,站点:{sites} ...')
# 搜索
torrents = self.__search_all_sites(keywords=[title], sites=[site] if site else None, page=page) or []
torrents = self.__search_all_sites(keywords=[title], sites=sites, page=page) or []
if not torrents:
logger.warn(f'{title} 未搜索到资源')
return []
@@ -81,8 +85,9 @@ class SearchChain(ChainBase):
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
torrent_info=torrent) for torrent in torrents]
# 保存到本地文件
bytes_results = pickle.dumps(contexts)
self.save_cache(bytes_results, self.__result_temp_file)
if cache_local:
bytes_results = pickle.dumps(contexts)
self.save_cache(bytes_results, self.__result_temp_file)
return contexts
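
With this change, browsing passes a list of site ids instead of a single site, and results are only pickled to the temporary file when cache_local is left on. A hedged usage sketch (the module path app.chain.search and the site ids are assumptions):

from app.chain.search import SearchChain   # assumed module path

searcher = SearchChain()
# Browse the front pages of sites 1 and 2 without overwriting the cached last-search results
browse_contexts = searcher.search_by_title(title="", page=0, sites=[1, 2], cache_local=False)
# Keyword search across all sites, caching the results locally (the default)
search_contexts = searcher.search_by_title(title="Dune", sites=None)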
def last_search_results(self) -> List[Context]:
@@ -100,11 +105,11 @@ class SearchChain(ChainBase):
return []
def process(self, mediainfo: MediaInfo,
keyword: str = None,
keyword: Optional[str] = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
sites: List[int] = None,
rule_groups: List[str] = None,
area: str = "title",
area: Optional[str] = "title",
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
"""
@@ -125,7 +130,6 @@ class SearchChain(ChainBase):
"""
return self.filter_torrents(rule_groups=rule_groups,
torrent_list=torrent_list,
season_episodes=season_episodes,
mediainfo=mediainfo) or []
# 豆瓣标题处理
@@ -185,7 +189,10 @@ class SearchChain(ChainBase):
# 开始过滤
self.progress.update(value=0, text=f'开始过滤,总 {len(torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
# 匹配订阅附加参数
if filter_params:
logger.info(f'开始附加参数过滤,附加参数:{filter_params} ...')
torrents = [torrent for torrent in torrents if self.torrenthelper.filter_torrent(torrent, filter_params)]
# 开始过滤规则过滤
if rule_groups is None:
# 取搜索过滤规则
@@ -221,11 +228,19 @@ class SearchChain(ChainBase):
key=ProgressKey.Search)
if not torrent.title:
continue
# 识别元数据
torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description,
custom_words=custom_words)
if torrent.title != torrent_meta.org_string:
logger.info(f"种子名称应用识别词后发生改变:{torrent.title} => {torrent_meta.org_string}")
# 季集数过滤
if season_episodes \
and not self.torrenthelper.match_season_episodes(
torrent=torrent,
meta=torrent_meta,
season_episodes=season_episodes):
continue
# 比对IMDBID
if torrent.imdbid \
and mediainfo.imdb_id \
@@ -234,11 +249,6 @@ class SearchChain(ChainBase):
_match_torrents.append((torrent, torrent_meta))
continue
# 匹配订阅附加参数
if filter_params and not self.torrenthelper.filter_torrent(torrent_info=torrent,
filter_params=filter_params):
continue
# 比对种子
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
@@ -281,8 +291,8 @@ class SearchChain(ChainBase):
def __search_all_sites(self, keywords: List[str],
mediainfo: Optional[MediaInfo] = None,
sites: List[int] = None,
page: int = 0,
area: str = "title") -> Optional[List[TorrentInfo]]:
page: Optional[int] = 0,
area: Optional[str] = "title") -> Optional[List[TorrentInfo]]:
"""
多线程搜索多个站点
:param mediainfo: 识别的媒体信息
@@ -302,11 +312,6 @@ class SearchChain(ChainBase):
for indexer in self.siteshelper.get_indexers():
# 检查站点索引开关
if not sites or indexer.get("id") in sites:
# 站点流控
state, msg = self.siteshelper.check(indexer.get("domain"))
if state:
logger.warn(msg)
continue
indexer_sites.append(indexer)
if not indexer_sites:
logger.warn('未开启任何有效站点,无法搜索资源')


@@ -1,11 +1,10 @@
import base64
import re
from datetime import datetime
from typing import Optional, Tuple, Union
from typing import Optional, Tuple, Union, Dict
from urllib.parse import urljoin
from lxml import etree
from ruamel.yaml import CommentedMap
from app.chain import ChainBase
from app.core.config import global_vars, settings
@@ -52,9 +51,10 @@ class SiteChain(ChainBase):
"1ptba.com": self.__indexphp_test,
"star-space.net": self.__indexphp_test,
"yemapt.org": self.__yema_test,
"hddolby.com": self.__hddolby_test,
}
def refresh_userdata(self, site: CommentedMap = None) -> Optional[SiteUserData]:
def refresh_userdata(self, site: dict = None) -> Optional[SiteUserData]:
"""
刷新站点的用户数据
:param site: 站点
@@ -86,25 +86,36 @@ class SiteChain(ChainBase):
f"{userdata.message_unread} 条新消息,请登陆查看",
link=site.get("url")
))
# 低分享率警告
if userdata.ratio and float(userdata.ratio) < 1 and not bool(
re.search(r"(贵宾|VIP?)", userdata.user_level or "", re.IGNORECASE)):
self.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=f"【站点分享率低预警】",
text=f"站点 {site.get('name')} 分享率 {userdata.ratio},请注意!"
))
return userdata
def refresh_userdatas(self) -> None:
def refresh_userdatas(self) -> Optional[Dict[str, SiteUserData]]:
"""
刷新所有站点的用户数据
"""
sites = self.siteshelper.get_indexers()
any_site_updated = False
result = {}
for site in sites:
if global_vars.is_system_stopped:
return
return None
if site.get("is_active"):
userdata = self.refresh_userdata(site)
if userdata:
any_site_updated = True
result[site.get("name")] = userdata
if any_site_updated:
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": "*"
})
return result
def is_special_site(self, domain: str) -> bool:
"""
@@ -126,10 +137,14 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=site.url)
if res and res.status_code == 200:
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
csrf_token = re.search(r'<meta name="x-csrf-token" content="(.+?)">', res.text)
if csrf_token:
token = csrf_token.group(1)
else:
return False, f"错误:{res.status_code} {res.reason}"
if not token:
return False, "无法获取Token"
# 调用查询用户信息接口
@@ -143,11 +158,15 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=f"{site.url}api/user/getInfo")
if user_res and user_res.status_code == 200:
if user_res is None:
return False, "无法打开网站!"
if user_res.status_code == 200:
user_info = user_res.json()
if user_info and user_info.get("data"):
return True, "连接成功"
return False, "Cookie已失效"
return False, "Cookie已失效"
else:
return False, f"错误:{user_res.status_code} {user_res.reason}"
@staticmethod
def __mteam_test(site: Site) -> Tuple[bool, str]:
@@ -158,30 +177,24 @@ class SiteChain(ChainBase):
domain = StringUtils.get_url_domain(site.url)
url = f"https://api.{domain}/api/member/profile"
headers = {
"Content-Type": "application/json",
"User-Agent": user_agent,
"Accept": "application/json, text/plain, */*",
"Authorization": site.token
"x-api-key": site.apikey,
}
res = RequestUtils(
headers=headers,
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).post_res(url=url)
if res and res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("data"):
# 更新最后访问时间
res = RequestUtils(headers=headers,
timeout=site.timeout or 15,
proxies=settings.PROXY if site.proxy else None,
referer=f"{site.url}index"
).post_res(url=f"https://api.{domain}/api/member/updateLastBrowse")
if res:
return True, "连接成功"
else:
return True, f"连接成功,但更新状态失败"
return False, "鉴权已过期或无效"
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
user_info = res.json() or {}
if user_info.get("data"):
return True, "连接成功"
return False, user_info.get("message", "鉴权已过期或无效")
else:
return False, f"错误:{res.status_code} {res.reason}"
@staticmethod
def __yema_test(site: Site) -> Tuple[bool, str]:
@@ -201,11 +214,15 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=url)
if res and res.status_code == 200:
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("success"):
return True, "连接成功"
return False, "Cookie已过期"
return False, "Cookie已过期"
else:
return False, f"错误:{res.status_code} {res.reason}"
def __indexphp_test(self, site: Site) -> Tuple[bool, str]:
"""
@@ -214,6 +231,32 @@ class SiteChain(ChainBase):
site.url = f"{site.url}index.php"
return self.__test(site)
@staticmethod
def __hddolby_test(site: Site) -> Tuple[bool, str]:
"""
判断站点是否已经登陆hddolby
"""
url = f"{site.url}api/v1/user/data"
headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"x-api-key": site.apikey,
}
res = RequestUtils(
headers=headers,
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=url)
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("status") == 0:
return True, "连接成功"
return False, "APIKEY已过期"
else:
return False, f"错误:{res.status_code} {res.reason}"
@staticmethod
def __parse_favicon(url: str, cookie: str, ua: str) -> Tuple[str, Optional[str]]:
"""
@@ -308,6 +351,7 @@ class SiteChain(ChainBase):
continue
# 新增站点
domain_url = __indexer_domain(inx=indexer, sub_domain=domain)
proxy = False
res = RequestUtils(cookies=cookie,
ua=settings.USER_AGENT
).get_res(url=domain_url)
@@ -325,16 +369,37 @@ class SiteChain(ChainBase):
logger.warn(f"站点 {indexer.get('name')} 连接状态码:{res.status_code},无法添加站点")
continue
else:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
if not settings.PROXY_HOST:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
else:
# 如果配置了代理,尝试通过代理重试
logger.info(f"站点 {indexer.get('name')} 初次连接失败,尝试通过代理重试...")
proxy = True
res = RequestUtils(cookies=cookie,
ua=settings.USER_AGENT,
proxies=settings.PROXY
).get_res(url=domain_url)
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
logger.warn(f"站点 {indexer.get('name')} 登录失败,即使通过代理,无法添加站点")
_fail_count += 1
continue
logger.info(f"站点 {indexer.get('name')} 通过代理连接成功")
else:
logger.warn(f"站点 {indexer.get('name')} 通过代理连接失败,无法添加站点")
_fail_count += 1
continue
# 获取rss地址
rss_url = None
if not indexer.get("public") and domain_url:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=domain_url,
cookie=cookie,
ua=settings.USER_AGENT)
ua=settings.USER_AGENT,
proxy=proxy)
if errmsg:
logger.warn(errmsg)
# 插入数据库
@@ -344,6 +409,7 @@ class SiteChain(ChainBase):
domain=domain,
cookie=cookie,
rss=rss_url,
proxy=1 if proxy else 0,
public=1 if indexer.get("public") else 0)
_add_count += 1
@@ -512,18 +578,18 @@ class SiteChain(ChainBase):
elif res.status_code == 200:
msg = "Cookie已失效"
else:
msg = f"状态码{res.status_code}"
msg = f"错误{res.status_code} {res.reason}"
return False, f"{msg}"
elif public and res.status_code != 200:
return False, f"状态码{res.status_code}"
return False, f"错误{res.status_code} {res.reason}"
elif res is not None:
return False, f"状态码{res.status_code}"
return False, f"错误{res.status_code} {res.reason}"
else:
return False, f"无法打开网站!"
return True, "连接成功"
def remote_list(self, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
userid: Union[str, int] = None, source: Optional[str] = None):
"""
查询所有站点,发送消息
"""
@@ -557,7 +623,7 @@ class SiteChain(ChainBase):
)
def remote_disable(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
userid: Union[str, int] = None, source: Optional[str] = None):
"""
禁用站点
"""
@@ -582,7 +648,7 @@ class SiteChain(ChainBase):
self.remote_list(channel=channel, userid=userid, source=source)
def remote_enable(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
userid: Union[str, int] = None, source: Optional[str] = None):
"""
启用站点
"""
@@ -608,7 +674,7 @@ class SiteChain(ChainBase):
self.remote_list(channel=channel, userid=userid, source=source)
def update_cookie(self, site_info: Site,
username: str, password: str, two_step_code: str = None) -> Tuple[bool, str]:
username: str, password: str, two_step_code: Optional[str] = None) -> Tuple[bool, str]:
"""
根据用户名密码更新站点Cookie
:param site_info: 站点信息
@@ -637,7 +703,7 @@ class SiteChain(ChainBase):
return False, "未知错误"
def remote_cookie(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
userid: Union[str, int] = None, source: Optional[str] = None):
"""
使用用户名密码更新站点Cookie
"""
@@ -705,3 +771,66 @@ class SiteChain(ChainBase):
source=source,
title=f"{site_info.name}】 Cookie&UA更新成功",
userid=userid))
def remote_refresh_userdatas(self, channel: MessageChannel,
userid: Union[str, int] = None, source: Optional[str] = None):
"""
刷新所有站点用户数据
"""
logger.info("收到命令,开始刷新站点数据 ...")
self.post_message(Notification(
channel=channel,
source=source,
title="开始刷新站点数据 ...",
userid=userid
))
# 刷新站点数据
site_datas = self.refresh_userdatas()
if site_datas:
# 发送消息
messages = {}
# 总上传
incUploads = 0
# 总下载
incDownloads = 0
# 今天日期
today_date = datetime.now().strftime("%Y-%m-%d")
for rand, site in enumerate(site_datas.keys()):
upload = int(site_datas[site].upload or 0)
download = int(site_datas[site].download or 0)
updated_date = site_datas[site].updated_day
if updated_date and updated_date != today_date:
updated_date = f"{updated_date}"
else:
updated_date = ""
if upload > 0 or download > 0:
incUploads += upload
incDownloads += download
messages[upload + (rand / 1000)] = (
f"{site}{updated_date}\n"
+ f"上传量:{StringUtils.str_filesize(upload)}\n"
+ f"下载量:{StringUtils.str_filesize(download)}\n"
+ "————————————"
)
if incDownloads or incUploads:
sorted_messages = [messages[key] for key in sorted(messages.keys(), reverse=True)]
sorted_messages.insert(0, f"【汇总】\n"
f"总上传:{StringUtils.str_filesize(incUploads)}\n"
f"总下载:{StringUtils.str_filesize(incDownloads)}\n"
f"————————————")
self.post_message(Notification(
channel=channel,
source=source,
title="【站点数据统计】",
text="\n".join(sorted_messages),
userid=userid
))
else:
self.post_message(Notification(
channel=channel,
source=source,
title="没有刷新到任何站点数据!",
userid=userid
))
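
The statistics message is sorted by upload volume in descending order; adding rand / 1000 (the enumerate index) to each key keeps the dictionary keys distinct when two sites report identical uploads, so neither entry overwrites the other:

uploads = [1024, 1024, 512]                        # two sites with the same upload total
keys = [up + (i / 1000) for i, up in enumerate(uploads)]
print(keys)                                        # [1024.0, 1024.001, 512.002] -> all distinct
print(sorted(keys, reverse=True))                  # largest uploads first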


@@ -63,7 +63,7 @@ class StorageChain(ChainBase):
return self.run_module("download_file", fileitem=fileitem, path=path)
def upload_file(self, fileitem: schemas.FileItem, path: Path,
new_name: str = None) -> Optional[schemas.FileItem]:
new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 保存目录项
@@ -84,6 +84,18 @@ class StorageChain(ChainBase):
"""
return self.run_module("rename_file", fileitem=fileitem, name=name)
def exists(self, fileitem: schemas.FileItem) -> Optional[bool]:
"""
判断文件或目录是否存在
"""
return True if self.get_item(fileitem) else False
def get_item(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
查询目录或文件
"""
return self.get_file_item(storage=fileitem.storage, path=Path(fileitem.path))
def get_file_item(self, storage: str, path: Path) -> Optional[schemas.FileItem]:
"""
根据路径获取文件项
@@ -108,7 +120,7 @@ class StorageChain(ChainBase):
"""
return self.run_module("storage_usage", storage=storage)
def support_transtype(self, storage: str) -> Optional[str]:
def support_transtype(self, storage: str) -> Optional[dict]:
"""
获取支持的整理方式
"""
@@ -125,6 +137,12 @@ class StorageChain(ChainBase):
return False
if fileitem.type == "dir":
# 本身是目录
if _blue_dir := self.list_files(fileitem=fileitem, recursion=False):
# 删除蓝光目录
for _f in _blue_dir:
if _f.type == "dir" and _f.name in ["BDMV", "CERTIFICATE"]:
logger.warn(f"{fileitem.storage}{_f.path} 删除蓝光目录")
self.delete_file(_f)
if self.any_files(fileitem, extensions=media_exts) is False:
logger.warn(f"{fileitem.storage}{fileitem.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(fileitem)
@@ -135,9 +153,17 @@ class StorageChain(ChainBase):
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
# 处理上级目录
if mtype and mtype == MediaType.TV:
dir_item = self.get_file_item(storage=fileitem.storage, path=Path(fileitem.path).parent.parent)
if mtype:
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mtype == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
return True
# 处理上级目录
dir_item = self.get_file_item(storage=fileitem.storage,
path=Path(fileitem.path).parents[rename_format_level - 1])
else:
dir_item = self.get_parent_item(fileitem)
if dir_item and len(Path(dir_item.path).parts) > 2:
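
The replacement logic above derives how many directory levels the rename format creates and walks up exactly that far before checking whether anything else is left in the tree. A worked example (the format string is only illustrative, not the project default):

from pathlib import Path

rename_format = "{title}/Season {season}/{file}"           # illustrative format only
rename_format_level = len(rename_format.split("/")) - 1    # -> 2
file_path = Path("/media/tv/Show (2024)/Season 01/S01E01.mkv")
print(file_path.parents[rename_format_level - 1])          # /media/tv/Show (2024)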

File diff suppressed because it is too large


@@ -1,6 +1,7 @@
import json
import re
from typing import Union
from pathlib import Path
from typing import Union, Optional
from app.chain import ChainBase
from app.core.config import settings
@@ -24,7 +25,7 @@ class SystemChain(ChainBase, metaclass=Singleton):
# 重启完成检测
self.restart_finish()
def remote_clear_cache(self, channel: MessageChannel, userid: Union[int, str], source: str = None):
def remote_clear_cache(self, channel: MessageChannel, userid: Union[int, str], source: Optional[str] = None):
"""
清理系统缓存
"""
@@ -32,7 +33,7 @@ class SystemChain(ChainBase, metaclass=Singleton):
self.post_message(Notification(channel=channel, source=source,
title=f"缓存清理完成!", userid=userid))
def restart(self, channel: MessageChannel, userid: Union[int, str], source: str = None):
def restart(self, channel: MessageChannel, userid: Union[int, str], source: Optional[str] = None):
"""
重启系统
"""
@@ -64,7 +65,7 @@ class SystemChain(ChainBase, metaclass=Singleton):
title += f"当前前端版本:{front_local_version},远程版本:{front_release_version}"
return title
def version(self, channel: MessageChannel, userid: Union[int, str], source: str = None):
def version(self, channel: MessageChannel, userid: Union[int, str], source: Optional[str] = None):
"""
查看当前版本、远程版本
"""
@@ -161,4 +162,15 @@ class SystemChain(ChainBase, metaclass=Singleton):
"""
获取前端版本
"""
if SystemUtils.is_frozen() and SystemUtils.is_windows():
version_file = settings.CONFIG_PATH.parent / "nginx" / "html" / "version.txt"
else:
version_file = Path(settings.FRONTEND_PATH) / "version.txt"
if version_file.exists():
try:
with open(version_file, 'r') as f:
version = str(f.read()).strip()
return version
except Exception as err:
logger.debug(f"加载版本文件 {version_file} 出错:{str(err)}")
return FRONTEND_VERSION


@@ -1,10 +1,9 @@
import random
from typing import Optional, List
from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.cache import cached
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -15,22 +14,41 @@ class TmdbChain(ChainBase, metaclass=Singleton):
TheMovieDB处理链单例运行
"""
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str,
with_original_language: str, page: int = 1) -> Optional[List[MediaInfo]]:
def tmdb_discover(self, mtype: MediaType,
sort_by: str,
with_genres: str,
with_original_language: str,
with_keywords: str,
with_watch_providers: str,
vote_average: float,
vote_count: int,
release_date: str,
page: Optional[int] = 1) -> Optional[List[MediaInfo]]:
"""
:param mtype: 媒体类型
:param sort_by: 排序方式
:param with_genres: 类型
:param with_original_language: 语言
:param with_keywords: 关键字
:param with_watch_providers: 提供商
:param vote_average: 评分
:param vote_count: 评分人数
:param release_date: 上映日期
:param page: 页码
:return: 媒体信息列表
"""
return self.run_module("tmdb_discover", mtype=mtype,
sort_by=sort_by, with_genres=with_genres,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
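
tmdb_discover now forwards the full TMDB discover filter set. A call sketch with placeholder filter values (empty strings mean "no constraint", as in RecommendChain above):

from app.chain.tmdb import TmdbChain
from app.schemas import MediaType

movies = TmdbChain().tmdb_discover(
    mtype=MediaType.MOVIE,
    sort_by="popularity.desc",
    with_genres="",                   # no genre filter
    with_original_language="en",
    with_keywords="",
    with_watch_providers="",
    vote_average=7.0,                 # minimum rating
    vote_count=100,                   # minimum vote count
    release_date="2024-01-01",
    page=1,
)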
def tmdb_trending(self, page: int = 1) -> Optional[List[MediaInfo]]:
def tmdb_trending(self, page: Optional[int] = 1) -> Optional[List[MediaInfo]]:
"""
TMDB流行趋势
:param page: 第几页
@@ -38,6 +56,13 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_trending", page=page)
def tmdb_collection(self, collection_id: int) -> Optional[List[MediaInfo]]:
"""
根据合集ID查询集合
:param collection_id: 合集ID
"""
return self.run_module("tmdb_collection", collection_id=collection_id)
def tmdb_seasons(self, tmdbid: int) -> List[schemas.TmdbSeason]:
"""
根据TMDBID查询themoviedb所有季信息
@@ -45,13 +70,21 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_seasons", tmdbid=tmdbid)
def tmdb_episodes(self, tmdbid: int, season: int) -> List[schemas.TmdbEpisode]:
def tmdb_group_seasons(self, group_id: str) -> List[schemas.TmdbSeason]:
"""
根据剧集组ID查询themoviedb所有季集信息
:param group_id: 剧集组ID
"""
return self.run_module("tmdb_group_seasons", group_id=group_id)
def tmdb_episodes(self, tmdbid: int, season: int, episode_group: Optional[str] = None) -> List[schemas.TmdbEpisode]:
"""
根据TMDBID查询某季的所有集信息
:param tmdbid: TMDBID
:param season: 季
:param episode_group: 剧集组
"""
return self.run_module("tmdb_episodes", tmdbid=tmdbid, season=season)
return self.run_module("tmdb_episodes", tmdbid=tmdbid, season=season, episode_group=episode_group)
def movie_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
@@ -81,7 +114,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_tv_recommend", tmdbid=tmdbid)
def movie_credits(self, tmdbid: int, page: int = 1) -> Optional[List[schemas.MediaPerson]]:
def movie_credits(self, tmdbid: int, page: Optional[int] = 1) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电影演职人员
:param tmdbid: TMDBID
@@ -89,7 +122,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_movie_credits", tmdbid=tmdbid, page=page)
def tv_credits(self, tmdbid: int, page: int = 1) -> Optional[List[schemas.MediaPerson]]:
def tv_credits(self, tmdbid: int, page: Optional[int] = 1) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电视剧演职人员
:param tmdbid: TMDBID
@@ -104,7 +137,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_person_detail", person_id=person_id)
def person_credits(self, person_id: int, page: int = 1) -> Optional[List[MediaInfo]]:
def person_credits(self, person_id: int, page: Optional[int] = 1) -> Optional[List[MediaInfo]]:
"""
根据人物ID查询人物参演作品
:param person_id: 人物ID
@@ -112,7 +145,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_person_credits", person_id=person_id, page=page)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
@cached(maxsize=1, ttl=3600)
def get_random_wallpager(self) -> Optional[str]:
"""
获取随机壁纸,缓存1个小时
@@ -126,8 +159,8 @@ class TmdbChain(ChainBase, metaclass=Singleton):
return info.backdrop_path
return None
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_trending_wallpapers(self, num: int = 10) -> List[str]:
@cached(maxsize=1, ttl=3600)
def get_trending_wallpapers(self, num: Optional[int] = 10) -> List[str]:
"""
获取所有流行壁纸
"""


@@ -1,6 +1,6 @@
import re
import traceback
from typing import Dict, List, Union
from typing import Dict, List, Union, Optional
from cachetools import cached, TTLCache
@@ -48,7 +48,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
self.post_message(Notification(channel=channel,
title=f"种子刷新完成!", userid=userid))
def get_torrents(self, stype: str = None) -> Dict[str, List[Context]]:
def get_torrents(self, stype: Optional[str] = None) -> Dict[str, List[Context]]:
"""
获取当前缓存的种子
:param stype: 强制指定缓存类型spider:爬虫缓存rss:rss缓存
@@ -73,17 +73,21 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
logger.info(f'种子缓存数据清理完成')
@cached(cache=TTLCache(maxsize=128, ttl=595))
def browse(self, domain: str) -> List[TorrentInfo]:
def browse(self, domain: str, keyword: Optional[str] = None, cat: Optional[str] = None,
page: Optional[int] = 0) -> List[TorrentInfo]:
"""
浏览站点首页内容返回种子清单TTL缓存10分钟
:param domain: 站点域名
:param keyword: 搜索标题
:param cat: 搜索分类
:param page: 页码
"""
logger.info(f'开始获取站点 {domain} 最新种子 ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.refresh_torrents(site=site)
return self.refresh_torrents(site=site, keyword=keyword, cat=cat, page=page)
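
browse() can now drive both plain front-page browsing and in-site keyword/category searches while keeping the 10-minute TTL cache. A usage sketch (the domain and category values are placeholders; the module path app.chain.torrents is assumed):

from app.chain.torrents import TorrentsChain   # assumed module path

latest = TorrentsChain().browse(domain="example-tracker.org")            # front page, page 0
results = TorrentsChain().browse(domain="example-tracker.org",
                                 keyword="Dune", cat="movie", page=1)    # in-site search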
@cached(cache=TTLCache(maxsize=128, ttl=295))
def rss(self, domain: str) -> List[TorrentInfo]:
@@ -131,7 +135,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
return ret_torrents
def refresh(self, stype: str = None, sites: List[int] = None) -> Dict[str, List[Context]]:
def refresh(self, stype: Optional[str] = None, sites: List[int] = None) -> Dict[str, List[Context]]:
"""
刷新站点最新资源,识别并缓存起来
:param stype: 强制指定缓存类型spider:爬虫缓存rss:rss缓存
@@ -175,7 +179,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条
torrents = torrents[:settings.CACHE_CONF.get('refresh')]
torrents = torrents[:settings.CACHE_CONF["refresh"]]
if torrents:
# 过滤出没有处理过的种子
torrents = [torrent for torrent in torrents
@@ -215,8 +219,8 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
else:
torrents_cache[domain].append(context)
# 如果超过了限制条数则移除掉前面的
if len(torrents_cache[domain]) > settings.CACHE_CONF.get('torrents'):
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF.get('torrents'):]
if len(torrents_cache[domain]) > settings.CACHE_CONF["torrents"]:
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF["torrents"]:]
# 回收资源
del torrents
else:

File diff suppressed because it is too large


@@ -7,7 +7,7 @@ from app.core.security import get_password_hash, verify_password
from app.db.models.user import User
from app.db.user_oper import UserOper
from app.log import logger
from app.schemas.event import AuthCredentials, AuthInterceptCredentials
from app.schemas import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import ChainEventType
from app.utils.otp import OtpUtils
from app.utils.singleton import Singleton
@@ -30,7 +30,7 @@ class UserChain(ChainBase, metaclass=Singleton):
password: Optional[str] = None,
mfa_code: Optional[str] = None,
code: Optional[str] = None,
grant_type: str = "password"
grant_type: Optional[str] = "password"
) -> Union[Tuple[bool, Optional[str]], Tuple[bool, Optional[User]]]:
"""
认证用户,根据不同的 grant_type 处理不同的认证流程

app/chain/workflow.py (new file, 250 lines)

@@ -0,0 +1,250 @@
import base64
import pickle
import threading
from collections import defaultdict, deque
from concurrent.futures import ThreadPoolExecutor
from time import sleep
from typing import List, Tuple, Optional
from pydantic.fields import Callable
from app.chain import ChainBase
from app.core.config import global_vars
from app.core.workflow import WorkFlowManager
from app.db.models import Workflow
from app.db.workflow_oper import WorkflowOper
from app.log import logger
from app.schemas import ActionContext, ActionFlow, Action, ActionExecution
class WorkflowExecutor:
"""
工作流执行器
"""
def __init__(self, workflow: Workflow, step_callback: Callable = None):
"""
初始化工作流执行器
:param workflow: 工作流对象
:param step_callback: 步骤回调函数
"""
# 工作流数据
self.workflow = workflow
self.step_callback = step_callback
self.actions = {action['id']: Action(**action) for action in workflow.actions}
self.flows = [ActionFlow(**flow) for flow in workflow.flows]
self.total_actions = len(self.actions)
self.finished_actions = 0
self.success = True
self.errmsg = ""
# 工作流管理器
self.workflowmanager = WorkFlowManager()
# 线程安全队列
self.queue = deque()
# 锁用于保证线程安全
self.lock = threading.Lock()
# 线程池
self.executor = ThreadPoolExecutor()
# 跟踪运行中的任务数
self.running_tasks = 0
# 构建邻接表、入度表
self.adjacency = defaultdict(list)
self.indegree = defaultdict(int)
for flow in self.flows:
source = flow.source
target = flow.target
self.adjacency[source].append(target)
self.indegree[target] += 1
# 初始化所有节点的入度确保未被引用的节点入度为0
for action_id in self.actions:
if action_id not in self.indegree:
self.indegree[action_id] = 0
# 初始上下文
if workflow.current_action and workflow.context:
logger.info(f"工作流已执行动作:{workflow.current_action}")
# Base64解码
decoded_data = base64.b64decode(workflow.context["content"])
# 反序列化数据
self.context = pickle.loads(decoded_data)
else:
self.context = ActionContext()
# 恢复工作流
global_vars.workflow_resume(self.workflow.id)
# 初始化队列添加入度为0的节点
for action_id in self.actions:
if self.indegree[action_id] == 0:
self.queue.append(action_id)
def execute(self):
"""
执行工作流
"""
while True:
with self.lock:
# 退出条件:队列为空且无运行任务
if not self.queue and self.running_tasks == 0:
break
# 退出条件:出现了错误
if not self.success:
break
if not self.queue:
sleep(0.1)
continue
# 取出队首节点
node_id = self.queue.popleft()
# 标记任务开始
self.running_tasks += 1
# 已停机
if global_vars.is_workflow_stopped(self.workflow.id):
global_vars.workflow_resume(self.workflow.id)
break
# 已执行的跳过
if (self.workflow.current_action
and node_id in self.workflow.current_action.split(',')):
continue
# 提交任务到线程池
future = self.executor.submit(
self.execute_node,
self.workflow.id,
node_id,
self.context
)
future.add_done_callback(self.on_node_complete)
def execute_node(self, workflow_id: int, node_id: int,
context: ActionContext) -> Tuple[Action, bool, str, ActionContext]:
"""
执行单个节点操作返回修改后的上下文和节点ID
"""
action = self.actions[node_id]
state, message, result_ctx = self.workflowmanager.excute(workflow_id, action, context=context)
return action, state, message, result_ctx
def on_node_complete(self, future):
"""
节点完成回调:更新上下文、处理后继节点
"""
action, state, message, result_ctx = future.result()
try:
self.finished_actions += 1
# 更新当前进度
self.context.progress = round(self.finished_actions / self.total_actions) * 100
# 补充执行历史
self.context.execute_history.append(
ActionExecution(
action=action.name,
result=state,
message=message
)
)
# 节点执行失败
if not state:
self.success = False
self.errmsg = f"{action.name} 失败"
return
with self.lock:
# 更新主上下文
self.merge_context(result_ctx)
# 回调
if self.step_callback:
self.step_callback(action, self.context)
# 处理后继节点
successors = self.adjacency.get(action.id, [])
for succ_id in successors:
with self.lock:
self.indegree[succ_id] -= 1
if self.indegree[succ_id] == 0:
self.queue.append(succ_id)
finally:
# 标记任务完成
with self.lock:
self.running_tasks -= 1
def merge_context(self, context: ActionContext):
"""
合并上下文
"""
for key, value in context.dict().items():
if not getattr(self.context, key, None):
setattr(self.context, key, value)
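
WorkflowExecutor is essentially Kahn's topological ordering run on a thread pool: flows define the edges, indegree counts unfinished predecessors, and an action is queued once its indegree drops to zero. A minimal single-threaded sketch of that bookkeeping with made-up action ids:

from collections import defaultdict, deque

actions = {"fetch", "filter", "download", "notify"}
flows = [("fetch", "filter"), ("filter", "download"), ("download", "notify")]

adjacency = defaultdict(list)
indegree = defaultdict(int)
for source, target in flows:
    adjacency[source].append(target)
    indegree[target] += 1
for action_id in actions:
    indegree.setdefault(action_id, 0)

queue = deque(a for a in actions if indegree[a] == 0)   # entry nodes; here only "fetch"
order = []
while queue:
    node = queue.popleft()
    order.append(node)
    for succ in adjacency[node]:
        indegree[succ] -= 1
        if indegree[succ] == 0:
            queue.append(succ)
print(order)   # ['fetch', 'filter', 'download', 'notify']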
class WorkflowChain(ChainBase):
"""
工作流链
"""
def __init__(self):
super().__init__()
self.workflowoper = WorkflowOper()
def process(self, workflow_id: int, from_begin: Optional[bool] = True) -> Tuple[bool, str]:
"""
处理工作流
:param workflow_id: 工作流ID
:param from_begin: 是否从头开始默认为True
"""
def save_step(action: Action, context: ActionContext):
"""
保存上下文到数据库
"""
# 序列化数据
serialized_data = pickle.dumps(context)
# 使用Base64编码字节流
encoded_data = base64.b64encode(serialized_data).decode('utf-8')
self.workflowoper.step(workflow_id, action_id=action.id, context={
"content": encoded_data
})
# 重置工作流
if from_begin:
self.workflowoper.reset(workflow_id)
# 查询工作流数据
workflow = self.workflowoper.get(workflow_id)
if not workflow:
logger.warn(f"工作流 {workflow_id} 不存在")
return False, "工作流不存在"
if not workflow.actions:
logger.warn(f"工作流 {workflow.name} 无动作")
return False, "工作流无动作"
if not workflow.flows:
logger.warn(f"工作流 {workflow.name} 无流程")
return False, "工作流无流程"
logger.info(f"开始处理 {workflow.name},共 {len(workflow.actions)} 个动作 ...")
self.workflowoper.start(workflow_id)
# 执行工作流
executor = WorkflowExecutor(workflow, step_callback=save_step)
executor.execute()
if not executor.success:
logger.info(f"工作流 {workflow.name} 执行失败:{executor.errmsg}")
self.workflowoper.fail(workflow_id, result=executor.errmsg)
return False, executor.errmsg
else:
logger.info(f"工作流 {workflow.name} 执行完成")
self.workflowoper.success(workflow_id)
return True, ""
def get_workflows(self) -> List[Workflow]:
"""
获取工作流列表
"""
return self.workflowoper.list_enabled()
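
save_step persists the running context as base64-encoded pickle, and WorkflowExecutor.__init__ reverses the same encoding when a workflow resumes, so the two sides must stay symmetric. A small round-trip sketch using a plain dict in place of ActionContext:

import base64
import pickle

context = {"progress": 40, "execute_history": ["fetch", "filter"]}   # stand-in for ActionContext

# Encode as in save_step
encoded = base64.b64encode(pickle.dumps(context)).decode("utf-8")

# Decode as when resuming a workflow
restored = pickle.loads(base64.b64decode(encoded))
assert restored == context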


@@ -1,3 +1,4 @@
import copy
import threading
import traceback
from typing import Any, Union, Dict, Optional
@@ -15,15 +16,18 @@ from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.scheduler import Scheduler
from app.schemas import Notification
from app.schemas.event import CommandRegisterEventData
from app.schemas import Notification, CommandRegisterEventData
from app.schemas.types import EventType, MessageChannel, ChainEventType
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.structures import DictUtils
class CommandChain(ChainBase, metaclass=Singleton):
class CommandChain(ChainBase):
pass
class Command(metaclass=Singleton):
"""
全局命令管理消费事件
"""
@@ -54,6 +58,11 @@ class CommandChain(ChainBase, metaclass=Singleton):
"description": "更新站点Cookie",
"data": {}
},
"/site_statistic": {
"func": SiteChain().remote_refresh_userdatas,
"description": "站点数据统计",
"data": {}
},
"/site_enable": {
"func": SiteChain().remote_enable,
"description": "启用站点",
@@ -205,7 +214,7 @@ class CommandChain(ChainBase, metaclass=Singleton):
if filtered_initial_commands != self._registered_commands or force_register:
logger.debug("Command set has changed or force registration is enabled.")
self._registered_commands = filtered_initial_commands
super().register_commands(commands=filtered_initial_commands)
CommandChain().register_commands(commands=filtered_initial_commands)
else:
logger.debug("Command set unchanged, skipping broadcast registration.")
except Exception as e:
@@ -243,7 +252,7 @@ class CommandChain(ChainBase, metaclass=Singleton):
event = eventmanager.send_event(ChainEventType.CommandRegister, event_data)
return event, commands
def __build_plugin_commands(self, pid: Optional[str] = None) -> Dict[str, dict]:
def __build_plugin_commands(self, _: Optional[str] = None) -> Dict[str, dict]:
"""
构建插件命令
"""
@@ -264,15 +273,15 @@ class CommandChain(ChainBase, metaclass=Singleton):
}
return plugin_commands
def __run_command(self, command: Dict[str, any], data_str: str = "",
channel: MessageChannel = None, source: str = None, userid: Union[str, int] = None):
def __run_command(self, command: Dict[str, any], data_str: Optional[str] = "",
channel: MessageChannel = None, source: Optional[str] = None, userid: Union[str, int] = None):
"""
运行定时服务
"""
if command.get("type") == "scheduler":
# 定时服务
if userid:
self.post_message(
CommandChain().post_message(
Notification(
channel=channel,
source=source,
@@ -285,7 +294,7 @@ class CommandChain(ChainBase, metaclass=Singleton):
self.scheduler.start(job_id=command.get("id"))
if userid:
self.post_message(
CommandChain().post_message(
Notification(
channel=channel,
source=source,
@@ -295,7 +304,7 @@ class CommandChain(ChainBase, metaclass=Singleton):
)
else:
# 命令
cmd_data = command['data'] if command.get('data') else {}
cmd_data = copy.deepcopy(command['data']) if command.get('data') else {}
args_num = ObjectUtils.arguments(command['func'])
if args_num > 0:
if cmd_data:
@@ -330,8 +339,8 @@ class CommandChain(ChainBase, metaclass=Singleton):
"""
return self._commands.get(cmd, {})
def register(self, cmd: str, func: Any, data: dict = None,
desc: str = None, category: str = None) -> None:
def register(self, cmd: str, func: Any, data: Optional[dict] = None,
desc: Optional[str] = None, category: Optional[str] = None) -> None:
"""
注册单个命令
"""
@@ -343,8 +352,8 @@ class CommandChain(ChainBase, metaclass=Singleton):
"data": data or {}
}
def execute(self, cmd: str, data_str: str = "",
channel: MessageChannel = None, source: str = None,
def execute(self, cmd: str, data_str: Optional[str] = "",
channel: MessageChannel = None, source: Optional[str] = None,
userid: Union[str, int] = None) -> None:
"""
执行命令
@@ -402,7 +411,7 @@ class CommandChain(ChainBase, metaclass=Singleton):
channel=event_channel, source=event_source, userid=event_user)
@eventmanager.register(EventType.ModuleReload)
def module_reload_event(self, event: ManagerEvent) -> None:
def module_reload_event(self, _: ManagerEvent) -> None:
"""
注册模块重载事件
"""

app/core/cache.py (new file, 576 lines)

@@ -0,0 +1,576 @@
import inspect
import json
import pickle
import threading
from abc import ABC, abstractmethod
from functools import wraps
from typing import Any, Dict, Optional
from urllib.parse import quote
import redis
from cachetools import TTLCache
from cachetools.keys import hashkey
from app.core.config import settings
from app.log import logger
# 默认缓存区
DEFAULT_CACHE_REGION = "DEFAULT"
lock = threading.Lock()
class CacheBackend(ABC):
"""
缓存后端基类,定义通用的缓存接口
"""
@abstractmethod
def set(self, key: str, value: Any, ttl: int, region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存
:param key: 缓存的键
:param value: 缓存的值
:param ttl: 缓存的存活时间,单位秒
:param region: 缓存的区
:param kwargs: 其他参数
"""
pass
@abstractmethod
def exists(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> bool:
"""
判断缓存键是否存在
:param key: 缓存的键
:param region: 缓存的区
:return: 存在返回 True否则返回 False
"""
pass
@abstractmethod
def get(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> Any:
"""
获取缓存
:param key: 缓存的键
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
pass
@abstractmethod
def delete(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> None:
"""
删除缓存
:param key: 缓存的键
:param region: 缓存的区
"""
pass
@abstractmethod
def clear(self, region: Optional[str] = None) -> None:
"""
清除指定区域的缓存或全部缓存
:param region: 缓存的区
"""
pass
@abstractmethod
def close(self) -> None:
"""
关闭缓存连接
"""
pass
@staticmethod
def get_region(region: Optional[str] = DEFAULT_CACHE_REGION):
"""
获取缓存的区
"""
return f"region:{region}" if region else "region:default"
@staticmethod
def get_cache_key(func, args, kwargs):
"""
获取缓存的键,通过哈希函数对函数的参数进行处理
:param func: 被装饰的函数
:param args: 位置参数
:param kwargs: 关键字参数
:return: 缓存键
"""
signature = inspect.signature(func)
# 绑定传入的参数并应用默认值
bound = signature.bind(*args, **kwargs)
bound.apply_defaults()
# 忽略第一个参数,如果它是实例(self)或类(cls)
parameters = list(signature.parameters.keys())
if parameters and parameters[0] in ("self", "cls"):
bound.arguments.pop(parameters[0], None)
# 按照函数签名顺序提取参数值列表
keys = [
bound.arguments[param] for param in signature.parameters if param in bound.arguments
]
# 使用有序参数生成缓存键
return f"{func.__name__}_{hashkey(*keys)}"
class CacheToolsBackend(CacheBackend):
"""
基于 `cachetools.TTLCache` 实现的缓存后端
特性:
- 支持动态设置缓存的 TTL(Time To Live,存活时间)和最大条目数(Maxsize)
- 缓存实例按区域(region)划分,不同 region 拥有独立的缓存实例
- 同一 region 共享相同的 TTL 和 Maxsize,设置时只能作用于整个 region
限制:
- 不支持按 `key` 独立隔离 TTL 和 Maxsize,仅支持作用于 region 级别
"""
def __init__(self, maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800):
"""
初始化缓存实例
:param maxsize: 缓存的最大条目数
:param ttl: 默认缓存存活时间,单位秒
"""
self.maxsize = maxsize
self.ttl = ttl
# 存储各个 region 的缓存实例region -> TTLCache
self._region_caches: Dict[str, TTLCache] = {}
def __get_region_cache(self, region: str) -> Optional[TTLCache]:
"""
获取指定区域的缓存实例,如果不存在则返回 None
"""
region = self.get_region(region)
return self._region_caches.get(region)
def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存值,支持每个 key 独立配置 TTL 和 Maxsize
:param key: 缓存的键
:param value: 缓存的值
:param ttl: 缓存的存活时间,单位秒,如果未传入则使用默认值
:param region: 缓存的区
:param kwargs: maxsize: 缓存的最大条目数,如果未传入则使用默认值
"""
ttl = ttl or self.ttl
maxsize = kwargs.get("maxsize", self.maxsize)
region = self.get_region(region)
# 如果该 key 尚未有缓存实例,则创建一个新的 TTLCache 实例
region_cache = self._region_caches.setdefault(region, TTLCache(maxsize=maxsize, ttl=ttl))
# 设置缓存值
with lock:
region_cache[key] = value
def exists(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> bool:
"""
判断缓存键是否存在
:param key: 缓存的键
:param region: 缓存的区
:return: 存在返回 True,否则返回 False
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return False
return key in region_cache
def get(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> Any:
"""
获取缓存的值
:param key: 缓存的键
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return None
return region_cache.get(key)
def delete(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> None:
"""
删除缓存
:param key: 缓存的键
:param region: 缓存的区
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return None
with lock:
del region_cache[key]
def clear(self, region: Optional[str] = None) -> None:
"""
清除指定区域的缓存或全部缓存
:param region: 缓存的区
"""
if region:
# 清理指定缓存区
region_cache = self.__get_region_cache(region)
if region_cache:
with lock:
region_cache.clear()
logger.info(f"Cleared cache for region: {region}")
else:
# 清除所有区域的缓存
for region_cache in self._region_caches.values():
with lock:
region_cache.clear()
logger.info("Cleared all cache")
def close(self) -> None:
"""
内存缓存不需要关闭资源
"""
pass
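A minimal usage sketch of the in-memory backend (keys, values and region names are hypothetical):
backend = CacheToolsBackend(maxsize=100, ttl=60)
backend.set("tt0137523", {"title": "Fight Club"}, region="tmdb")
backend.set("tt0137523", "poster-url", region="images")   # separate region, no collision
assert backend.exists("tt0137523", region="tmdb")
backend.clear(region="images")   # clears only that region's TTLCache
backend.close()                  # no-op for the in-memory backend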
class RedisBackend(CacheBackend):
"""
基于 Redis 实现的缓存后端,支持通过 Redis 存储缓存
特性:
- 支持动态设置缓存的 TTL(Time To Live,存活时间)
- 支持分区域(region)管理缓存,不同的 region 采用独立的命名空间
- 支持自定义最大内存限制(maxmemory)和内存淘汰策略(如 allkeys-lru)
限制:
- 由于 Redis 的分布式特性,写入和读取可能受到网络延迟的影响
- Pickle 反序列化可能存在安全风险,需进一步重构调用来源,避免复杂对象缓存
"""
# 类型缓存集合,针对非容器简单类型
_complex_serializable_types = set()
_simple_serializable_types = set()
def __init__(self, redis_url: Optional[str] = "redis://localhost", ttl: Optional[int] = 1800):
"""
初始化 Redis 缓存实例
:param redis_url: Redis 服务的 URL
:param ttl: 缓存的存活时间,单位秒
"""
self.redis_url = redis_url
self.ttl = ttl
try:
self.client = redis.Redis.from_url(
redis_url,
decode_responses=False,
socket_timeout=30,
socket_connect_timeout=5,
health_check_interval=60,
)
# 测试连接,确保 Redis 可用
self.client.ping()
logger.debug(f"Successfully connected to Redis")
self.set_memory_limit()
except Exception as e:
logger.error(f"Failed to connect to Redis: {e}")
raise RuntimeError("Redis connection failed") from e
def set_memory_limit(self, policy: Optional[str] = "allkeys-lru"):
"""
动态设置 Redis 最大内存和内存淘汰策略
:param policy: 淘汰策略(如 'allkeys-lru')
"""
try:
# 如果有显式值,则直接使用,为 0 时说明不限制,如果未配置,开启 BIG_MEMORY_MODE 时为 "1024mb",未开启时为 "256mb"
maxmemory = settings.CACHE_REDIS_MAXMEMORY or ("1024mb" if settings.BIG_MEMORY_MODE else "256mb")
self.client.config_set("maxmemory", maxmemory)
self.client.config_set("maxmemory-policy", policy)
logger.debug(f"Redis maxmemory set to {maxmemory}, policy: {policy}")
except Exception as e:
logger.error(f"Failed to set Redis maxmemory or policy: {e}")
@staticmethod
def is_container_type(t):
return t in (list, dict, tuple, set)
@classmethod
def serialize(cls, value: Any) -> bytes:
"""
将值序列化为二进制数据,根据序列化方式标识格式
"""
vt = type(value)
# 针对非容器类型使用缓存策略
if not cls.is_container_type(vt):
# 如果已知需要复杂序列化
if vt in cls._complex_serializable_types:
return b"PICKLE" + b"\x00" + pickle.dumps(value)
# 如果已知可以简单序列化
if vt in cls._simple_serializable_types:
json_data = json.dumps(value).encode("utf-8")
return b"JSON" + b"\x00" + json_data
# 对于未知的非容器类型,尝试简单序列化,如抛出异常,再使用复杂序列化
try:
json_data = json.dumps(value).encode("utf-8")
cls._simple_serializable_types.add(vt)
return b"JSON" + b"\x00" + json_data
except TypeError:
cls._complex_serializable_types.add(vt)
return b"PICKLE" + b"\x00" + pickle.dumps(value)
# 针对容器类型,每次尝试简单序列化,不使用缓存
else:
try:
json_data = json.dumps(value).encode("utf-8")
return b"JSON" + b"\x00" + json_data
except TypeError:
return b"PICKLE" + b"\x00" + pickle.dumps(value)
@classmethod
def deserialize(cls, value: bytes) -> Any:
"""
将二进制数据反序列化为原始值,根据格式标识区分序列化方式
"""
format_marker, data = value.split(b"\x00", 1)
if format_marker == b"JSON":
return json.loads(data.decode("utf-8"))
elif format_marker == b"PICKLE":
return pickle.loads(data)
else:
raise ValueError("Unknown serialization format")
# @staticmethod
# def serialize(value: Any) -> bytes:
# return msgpack.packb(value, use_bin_type=True)
#
# @staticmethod
# def deserialize(value: bytes) -> Any:
# return msgpack.unpackb(value, raw=False)
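A round-trip sketch of the JSON-first, pickle-fallback format (values are illustrative; no Redis connection is needed because serialize/deserialize are classmethods):
blob = RedisBackend.serialize({"page": 1, "ids": [550, 603]})
assert blob.startswith(b"JSON\x00")                     # JSON-friendly containers stay JSON
assert RedisBackend.deserialize(blob) == {"page": 1, "ids": [550, 603]}
blob = RedisBackend.serialize(object())                 # non-JSON-serializable type falls back to pickle
assert blob.startswith(b"PICKLE\x00")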
def get_redis_key(self, region: str, key: str) -> str:
"""
获取缓存 Key
"""
# 使用 region 作为缓存键的一部分
region = self.get_region(quote(region))
return f"{region}:key:{quote(key)}"
def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存
:param key: 缓存的键
:param value: 缓存的值
:param ttl: 缓存的存活时间,单位秒,如果未传入则使用默认值
:param region: 缓存的区
:param kwargs: kwargs
"""
try:
ttl = ttl or self.ttl
redis_key = self.get_redis_key(region, key)
# 对值进行序列化
serialized_value = self.serialize(value)
kwargs.pop("maxsize", None)
self.client.set(redis_key, serialized_value, ex=ttl, **kwargs)
except Exception as e:
logger.error(f"Failed to set key: {key} in region: {region}, error: {e}")
def exists(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> bool:
"""
判断缓存键是否存在
:param key: 缓存的键
:param region: 缓存的区
:return: 存在返回 True,否则返回 False
"""
try:
redis_key = self.get_redis_key(region, key)
return self.client.exists(redis_key) == 1
except Exception as e:
logger.error(f"Failed to exists key: {key} region: {region}, error: {e}")
return False
def get(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> Optional[Any]:
"""
获取缓存的值
:param key: 缓存的键
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
try:
redis_key = self.get_redis_key(region, key)
value = self.client.get(redis_key)
if value is not None:
return self.deserialize(value) # noqa
return None
except Exception as e:
logger.error(f"Failed to get key: {key} in region: {region}, error: {e}")
return None
def delete(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> None:
"""
删除缓存
:param key: 缓存的键
:param region: 缓存的区
"""
try:
redis_key = self.get_redis_key(region, key)
self.client.delete(redis_key)
except Exception as e:
logger.error(f"Failed to delete key: {key} in region: {region}, error: {e}")
def clear(self, region: Optional[str] = None) -> None:
"""
清除指定区域的缓存或全部缓存
:param region: 缓存的区
"""
try:
if region:
cache_region = self.get_region(quote(region))
redis_key = f"{cache_region}:key:*"
# self.client.delete(*self.client.keys(redis_key))
with self.client.pipeline() as pipe:
for key in self.client.scan_iter(redis_key):
pipe.delete(key)
pipe.execute()
logger.info(f"Cleared Redis cache for region: {region}")
else:
self.client.flushdb()
logger.info("Cleared all Redis cache")
except Exception as e:
logger.error(f"Failed to clear cache, region: {region}, error: {e}")
def close(self) -> None:
"""
关闭 Redis 客户端的连接池
"""
if self.client:
self.client.close()
def get_cache_backend(maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800) -> CacheBackend:
"""
根据配置获取缓存后端实例
:param maxsize: 缓存的最大条目数
:param ttl: 缓存的默认存活时间,单位秒
:return: 返回缓存后端实例
"""
cache_type = settings.CACHE_BACKEND_TYPE
logger.debug(f"Cache backend type from settings: {cache_type}")
if cache_type == "redis":
redis_url = settings.CACHE_BACKEND_URL
if redis_url:
try:
logger.debug(f"Attempting to use RedisBackend with URL: {redis_url}, TTL: {ttl}")
return RedisBackend(redis_url=redis_url, ttl=ttl)
except RuntimeError:
logger.warning("Falling back to CacheToolsBackend due to Redis connection failure.")
else:
logger.debug("Cache backend type is redis, but no valid REDIS_URL found. "
"Falling back to CacheToolsBackend.")
# 如果不是 Redis回退到内存缓存
logger.debug(f"Using CacheToolsBackend with default maxsize: {maxsize}, TTL: {ttl}")
return CacheToolsBackend(maxsize=maxsize, ttl=ttl)
def cached(region: Optional[str] = None, maxsize: Optional[int] = 1000, ttl: Optional[int] = 1800,
skip_none: Optional[bool] = True, skip_empty: Optional[bool] = False):
"""
自定义缓存装饰器,支持为每个 key 动态传递 maxsize 和 ttl
:param region: 缓存的区
:param maxsize: 缓存的最大条目数,默认值为 1000
:param ttl: 缓存的存活时间,单位秒,默认值为 1800
:param skip_none: 跳过 None 缓存,默认为 True
:param skip_empty: 跳过空值缓存(如 None, [], {}, "", set()),默认为 False
:return: 装饰器函数
"""
def should_cache(value: Any) -> bool:
"""
判断是否应该缓存结果,如果返回值是 None 或空值则不缓存
:param value: 要判断的缓存值
:return: 是否缓存结果
"""
if skip_none and value is None:
return False
# if skip_empty and value in [None, [], {}, "", set()]:
if skip_empty and not value:
return False
return True
def is_valid_cache_value(cache_key: str, cached_value: Any, cache_region: str) -> bool:
"""
判断指定的值是否为一个有效的缓存值
:param cache_key: 缓存的键
:param cached_value: 缓存的值
:param cache_region: 缓存的区
:return: 若值是有效的缓存值返回 True,否则返回 False
"""
# 如果 skip_none 为 False且 value 为 None需要判断缓存实际是否存在
if not skip_none and cached_value is None:
if not cache_backend.exists(key=cache_key, region=cache_region):
return False
return True
def decorator(func):
# 获取缓存区
cache_region = region if region is not None else f"{func.__module__}.{func.__name__}"
@wraps(func)
def wrapper(*args, **kwargs):
# 获取缓存键
cache_key = cache_backend.get_cache_key(func, args, kwargs)
# 尝试获取缓存
cached_value = cache_backend.get(cache_key, region=cache_region)
if should_cache(cached_value) and is_valid_cache_value(cache_key, cached_value, cache_region):
return cached_value
# 执行函数并缓存结果
result = func(*args, **kwargs)
# 判断是否需要缓存
if not should_cache(result):
return result
# 设置缓存(如果有传入的 maxsize 和 ttl则覆盖默认值
cache_backend.set(cache_key, result, ttl=ttl, maxsize=maxsize, region=cache_region)
return result
def cache_clear():
"""
清理缓存区
"""
# 清理缓存区
cache_backend.clear(region=cache_region)
wrapper.cache_region = cache_region
wrapper.cache_clear = cache_clear
return wrapper
return decorator
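A hypothetical usage sketch of the decorator (the function, its argument and the region name are illustrative):
@cached(region="tmdb_detail", ttl=600, maxsize=256)
def fetch_detail(tmdbid: int) -> dict:
    # pretend this hits a slow remote API
    return {"id": tmdbid}

fetch_detail(550)            # miss: executed and stored for 600 seconds
fetch_detail(550)            # hit: served from the active cache backend
fetch_detail.cache_clear()   # drops the whole "tmdb_detail" region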
# 缓存后端实例
cache_backend = get_cache_backend()
def close_cache() -> None:
"""
关闭缓存后端连接并清理资源
"""
try:
if cache_backend:
cache_backend.close()
logger.info("Cache backend closed successfully.")
except Exception as e:
logger.info(f"Error while closing cache backend: {e}")


@@ -8,9 +8,9 @@ from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Type
from dotenv import set_key
from pydantic import BaseModel, BaseSettings, validator
from pydantic import BaseModel, BaseSettings, validator, Field
from app.log import logger
from app.log import logger, log_settings, LogConfigModel
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
@@ -36,7 +36,7 @@ class ConfigModel(BaseModel):
# RESOURCE密钥
RESOURCE_SECRET_KEY: str = secrets.token_urlsafe(32)
# 允许的域名
ALLOWED_HOSTS: list = ["*"]
ALLOWED_HOSTS: list = Field(default_factory=lambda: ["*"])
# TOKEN过期时间
ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
# RESOURCE_TOKEN过期时间
@@ -71,6 +71,12 @@ class ConfigModel(BaseModel):
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认关闭
DB_WAL_ENABLE: bool = False
# 缓存类型,支持 cachetools 和 redis,默认使用 cachetools
CACHE_BACKEND_TYPE: str = "cachetools"
# 缓存连接字符串,仅外部缓存(如 Redis、Memcached)需要
CACHE_BACKEND_URL: Optional[str] = None
# Redis 缓存最大内存限制,未配置时,如开启大内存模式时为 "1024mb",未开启时为 "256mb"
CACHE_REDIS_MAXMEMORY: Optional[str] = None
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 超级管理员
@@ -103,6 +109,10 @@ class ConfigModel(BaseModel):
FANART_ENABLE: bool = True
# Fanart API Key
FANART_API_KEY: str = "d2d31f9ecabea050fc7d68aa3146015f"
# 115 AppId
U115_APP_ID: str = "100196807"
# Alipan AppId
ALIPAN_APP_ID: str = "ac1bf04dc9fd4d9aaabb65b4a668d403"
# 元数据识别缓存过期时间(小时)
META_CACHE_EXPIRE: int = 0
# 电视剧动漫的分类genre_ids
@@ -112,31 +122,41 @@ class ConfigModel(BaseModel):
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 是否启用DOH解析域名
DOH_ENABLE: bool = True
DOH_ENABLE: bool = False
# 使用 DOH 解析的域名列表
DOH_DOMAINS: str = "api.themoviedb.org,api.tmdb.org,webservice.fanart.tv,api.github.com,github.com,raw.githubusercontent.com,api.telegram.org"
DOH_DOMAINS: str = ("api.themoviedb.org,"
"api.tmdb.org,"
"webservice.fanart.tv,"
"api.github.com,"
"github.com,"
"raw.githubusercontent.com,"
"api.telegram.org")
# DOH 解析服务器列表
DOH_RESOLVERS: str = "1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112"
# 支持的后缀格式
RMT_MEDIAEXT: list = ['.mp4', '.mkv', '.ts', '.iso',
'.rmvb', '.avi', '.mov', '.mpeg',
'.mpg', '.wmv', '.3gp', '.asf',
'.m4v', '.flv', '.m2ts', '.strm',
'.tp', '.f4v']
RMT_MEDIAEXT: list = Field(
default_factory=lambda: ['.mp4', '.mkv', '.ts', '.iso',
'.rmvb', '.avi', '.mov', '.mpeg',
'.mpg', '.wmv', '.3gp', '.asf',
'.m4v', '.flv', '.m2ts', '.strm',
'.tp', '.f4v']
)
# 支持的字幕文件后缀格式
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa', '.sup']
RMT_SUBEXT: list = Field(default_factory=lambda: ['.srt', '.ass', '.ssa', '.sup'])
# 支持的音轨文件后缀格式
RMT_AUDIO_TRACK_EXT: list = ['.mka']
RMT_AUDIO_TRACK_EXT: list = Field(default_factory=lambda: ['.mka'])
# 音轨文件后缀格式
RMT_AUDIOEXT: list = ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
'.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
'.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
'.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
'.tta', '.vqf', '.wav', '.wma',
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
RMT_AUDIOEXT: list = Field(
default_factory=lambda: ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
'.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
'.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
'.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
'.tta', '.vqf', '.wav', '.wma',
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
)
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = ['.!qb', '.part']
DOWNLOAD_TMPEXT: list = Field(default_factory=lambda: ['.!qb', '.part'])
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: int = 6
# 订阅模式
@@ -189,7 +209,11 @@ class ConfigModel(BaseModel):
# 服务器地址,对应 https://github.com/jxxghp/MoviePilot-Server 项目
MP_SERVER_HOST: str = "https://movie-pilot.org"
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins,https://github.com/thsrite/MoviePilot-Plugins,https://github.com/honue/MoviePilot-Plugins,https://github.com/InfinityPacer/MoviePilot-Plugins"
PLUGIN_MARKET: str = ("https://github.com/jxxghp/MoviePilot-Plugins,"
"https://github.com/thsrite/MoviePilot-Plugins,"
"https://github.com/honue/MoviePilot-Plugins,"
"https://github.com/InfinityPacer/MoviePilot-Plugins,"
"https://github.com/DDS-Derek/MoviePilot-Plugins")
# 插件安装数据共享
PLUGIN_STATISTIC_SHARE: bool = True
# 是否开启插件热加载
@@ -206,14 +230,41 @@ class ConfigModel(BaseModel):
BIG_MEMORY_MODE: bool = False
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 是否启用编码探测的性能模式
ENCODING_DETECTION_PERFORMANCE_MODE: bool = True
# 编码探测的最低置信度阈值
ENCODING_DETECTION_MIN_CONFIDENCE: float = 0.8
# 允许的图片缓存域名
SECURITY_IMAGE_DOMAINS: List[str] = ["image.tmdb.org", "static-mdb.v.geilijiasu.com", "doubanio.com", "lain.bgm.tv",
"raw.githubusercontent.com", "github.com"]
SECURITY_IMAGE_DOMAINS: List[str] = Field(
default_factory=lambda: ["image.tmdb.org",
"static-mdb.v.geilijiasu.com",
"doubanio.com",
"lain.bgm.tv",
"raw.githubusercontent.com",
"github.com",
"thetvdb.com",
"cctvpic.com",
"iqiyipic.com",
"hdslb.com",
"cmvideo.cn",
"ykimg.com",
"qpic.cn"]
)
# 允许的图片文件后缀格式
SECURITY_IMAGE_SUFFIXES: List[str] = [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
SECURITY_IMAGE_SUFFIXES: List[str] = Field(
default_factory=lambda: [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg", ".avif"]
)
# 重命名时支持的S0别名
RENAME_FORMAT_S0_NAMES: List[str] = Field(
default_factory=lambda: ["Specials", "SPs"]
)
# 启用分词搜索
TOKENIZED_SEARCH: bool = False
# 为指定默认字幕添加.default后缀
DEFAULT_SUB: Optional[str] = "zh-cn"
class Settings(BaseSettings, ConfigModel):
class Settings(BaseSettings, ConfigModel, LogConfigModel):
"""
系统配置类
"""
@@ -317,10 +368,10 @@ class Settings(BaseSettings, ConfigModel):
raise ValueError(f"配置项 '{field_name}' 的值 '{value}' 无法转换成正确的类型") from e
logger.error(
f"配置项 '{field_name}' 的值 '{value}' 无法转换成正确的类型,使用默认值 '{default}',错误信息: {e}")
return default, True
return default, True
@validator('*', pre=True, always=True)
def generic_type_validator(cls, value: Any, field):
def generic_type_validator(cls, value: Any, field): # noqa
"""
通用校验器,尝试将配置值转换为期望的类型
"""
@@ -345,10 +396,9 @@ class Settings(BaseSettings, ConfigModel):
logger.warning(message)
if field.name in os.environ:
if is_converted:
message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
logger.warning(message)
return False, message
message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
logger.warning(message)
return False, message
else:
set_key(SystemUtils.get_env_path(), field.name, str(converted_value) if converted_value is not None else "")
if is_converted:
@@ -372,10 +422,12 @@ class Settings(BaseSettings, ConfigModel):
field.default, key)
# 如果没有抛出异常,则统一使用 converted_value 进行更新
if needs_update or str(value) != str(converted_value):
success, message = self.update_env_config(field, original_value, converted_value)
success, message = self.update_env_config(field, value, converted_value)
# 仅成功更新配置时,才更新内存
if success:
setattr(self, key, converted_value)
if hasattr(log_settings, key):
setattr(log_settings, key, converted_value)
return success, message
return True, ""
except Exception as e:
@@ -386,8 +438,21 @@ class Settings(BaseSettings, ConfigModel):
更新多个配置项
"""
results = {}
log_updated, plugin_monitor_updated = False, False
for k, v in env.items():
results[k] = self.update_setting(k, v)
if hasattr(log_settings, k):
log_updated = True
if k in ["PLUGIN_AUTO_RELOAD", "DEV"]:
plugin_monitor_updated = True
# 本次更新存在日志配置项更新,需要重新加载日志配置
if log_updated:
logger.update_loggers()
# 本次更新存在插件监控配置项更新,需要重新加载插件监控
if plugin_monitor_updated:
# 解决顶层循环导入问题
from app.core.plugin import PluginManager
PluginManager().reload_monitor()
return results
@property
@@ -437,22 +502,34 @@ class Settings(BaseSettings, ConfigModel):
@property
def CACHE_CONF(self):
"""
{
"torrents": "缓存种子数量",
"refresh": "订阅刷新处理数量",
"tmdb": "TMDB请求缓存数量",
"douban": "豆瓣请求缓存数量",
"fanart": "Fanart请求缓存数量",
"meta": "元数据缓存过期时间(秒)"
}
"""
if self.BIG_MEMORY_MODE:
return {
"torrents": 200,
"refresh": 100,
"tmdb": 1024,
"refresh": 50,
"torrents": 100,
"douban": 512,
"bangumi": 512,
"fanart": 512,
"meta": (self.META_CACHE_EXPIRE or 168) * 3600
"meta": (self.META_CACHE_EXPIRE or 24) * 3600
}
return {
"torrents": 100,
"refresh": 50,
"tmdb": 256,
"refresh": 30,
"torrents": 50,
"douban": 256,
"bangumi": 256,
"fanart": 128,
"meta": (self.META_CACHE_EXPIRE or 72) * 3600
"meta": (self.META_CACHE_EXPIRE or 2) * 3600
}
@property
@@ -535,6 +612,8 @@ class GlobalVar(object):
STOP_EVENT: threading.Event = threading.Event()
# webpush订阅
SUBSCRIPTIONS: List[dict] = []
# 需应急停止的工作流
EMERGENCY_STOP_WORKFLOWS: List[int] = []
def stop_system(self):
"""
@@ -561,6 +640,26 @@ class GlobalVar(object):
"""
self.SUBSCRIPTIONS.append(subscription)
def stop_workflow(self, workflow_id: int):
"""
停止工作流
"""
if workflow_id not in self.EMERGENCY_STOP_WORKFLOWS:
self.EMERGENCY_STOP_WORKFLOWS.append(workflow_id)
def workflow_resume(self, workflow_id: int):
"""
恢复工作流
"""
if workflow_id in self.EMERGENCY_STOP_WORKFLOWS:
self.EMERGENCY_STOP_WORKFLOWS.remove(workflow_id)
def is_workflow_stopped(self, workflow_id: int):
"""
是否停止工作流
"""
return self.is_system_stopped or workflow_id in self.EMERGENCY_STOP_WORKFLOWS
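A small sketch of the emergency-stop bookkeeping (workflow id 1 is hypothetical; global_vars is the module-level GlobalVar instance, imported the same way app/core/workflow.py does further below):
from app.core.config import global_vars

global_vars.stop_workflow(1)               # flag workflow 1 for emergency stop
assert global_vars.is_workflow_stopped(1)  # also True while the whole system is stopping
global_vars.workflow_resume(1)             # clear the flag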
# 实例化配置
settings = Settings()


@@ -1,6 +1,7 @@
import re
from dataclasses import dataclass, field, asdict
from typing import List, Dict, Any, Tuple
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Dict, Any, Tuple, Optional
from app.core.config import settings
from app.core.meta import MetaBase
@@ -36,7 +37,7 @@ class TorrentInfo:
# 详情页面
page_url: str = None
# 种子大小
size: float = 0
size: float = 0.0
# 做种者
seeders: int = 0
# 下载者
@@ -123,11 +124,25 @@ class TorrentInfo:
return ""
return StringUtils.diff_time_str(self.freedate)
def pub_minutes(self) -> float:
"""
返回发布时间距离当前时间的分钟数
"""
if not self.pubdate:
return 0
try:
pub_date = datetime.strptime(self.pubdate, "%Y-%m-%d %H:%M:%S")
now_datetime = datetime.now()
return (now_datetime - pub_date).total_seconds() // 60
except Exception as e:
print(f"种子发布时间获取失败: {e}")
return 0
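A quick sketch of the new helper (it assumes TorrentInfo can be built with all-default fields and that timedelta is imported alongside datetime; the pubdate value is fabricated):
from datetime import datetime, timedelta

t = TorrentInfo()
t.pubdate = (datetime.now() - timedelta(minutes=90)).strftime("%Y-%m-%d %H:%M:%S")
assert t.pub_minutes() >= 90.0   # whole minutes elapsed since publication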
def to_dict(self):
"""
返回字典
"""
dicts = asdict(self)
dicts = vars(self).copy()
dicts["volume_factor"] = self.volume_factor
dicts["freedate_diff"] = self.freedate_diff
return dicts
@@ -163,6 +178,8 @@ class MediaInfo:
douban_id: str = None
# Bangumi ID
bangumi_id: int = None
# 合集ID
collection_id: int = None
# 媒体原语种
original_language: str = None
# 媒体原发行标题
@@ -176,7 +193,7 @@ class MediaInfo:
# LOGO
logo_path: str = None
# 评分
vote_average: float = 0
vote_average: float = 0.0
# 描述
overview: str = None
# 风格ID
@@ -245,6 +262,12 @@ class MediaInfo:
runtime: int = None
# 下一集
next_episode_to_air: dict = field(default_factory=dict)
# 内容分级
content_rating: str = None
# 全部剧集组
episode_groups: List[dict] = field(default_factory=list)
# 剧集组
episode_group: str = None
def __post_init__(self):
# 设置媒体信息
@@ -382,6 +405,8 @@ class MediaInfo:
if info.get("external_ids"):
self.tvdb_id = info.get("external_ids", {}).get("tvdb_id")
self.imdb_id = info.get("external_ids", {}).get("imdb_id")
# 合集ID
self.collection_id = info.get('collection_id')
# 评分
self.vote_average = round(float(info.get('vote_average')), 1) if info.get('vote_average') else 0
# 描述
@@ -433,6 +458,10 @@ class MediaInfo:
air_date = seainfo.get("air_date")
if air_date:
self.season_years[season] = air_date[:4]
# 剧集组
if info.get("episode_groups"):
self.episode_groups = info.pop("episode_groups").get("results") or []
# 海报
if info.get('poster_path'):
self.poster_path = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{info.get('poster_path')}"
@@ -693,7 +722,7 @@ class MediaInfo:
return self.backdrop_path.replace("original", "w500")
return default or ""
def get_message_image(self, default: bool = None):
def get_message_image(self, default: Optional[bool] = None):
"""
返回消息图片地址
"""
@@ -701,7 +730,7 @@ class MediaInfo:
return self.backdrop_path.replace("original", "w500")
return self.get_poster_image(default=default)
def get_poster_image(self, default: bool = None):
def get_poster_image(self, default: Optional[bool] = None):
"""
返回海报图片地址
"""
@@ -709,7 +738,7 @@ class MediaInfo:
return self.poster_path.replace("original", "w500")
return default or ""
def get_overview_string(self, max_len: int = 140):
def get_overview_string(self, max_len: Optional[int] = 140):
"""
返回带限定长度的简介信息
:param max_len: 内容长度
@@ -725,7 +754,7 @@ class MediaInfo:
"""
返回字典
"""
dicts = asdict(self)
dicts = vars(self).copy()
dicts["type"] = self.type.value if self.type else None
dicts["detail_link"] = self.detail_link
dicts["title_year"] = self.title_year
@@ -752,6 +781,7 @@ class MediaInfo:
self.spoken_languages = []
self.networks = []
self.next_episode_to_air = {}
self.episode_groups = []
@dataclass


@@ -13,7 +13,7 @@ from typing import Callable, Dict, List, Optional, Union
from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.schemas.event import ChainEventData
from app.schemas import ChainEventData
from app.schemas.types import ChainEventType, EventType
from app.utils.limit import ExponentialBackoffRateLimiter
from app.utils.singleton import Singleton
@@ -31,7 +31,7 @@ class Event:
def __init__(self, event_type: Union[EventType, ChainEventType],
event_data: Optional[Union[Dict, ChainEventData]] = None,
priority: int = DEFAULT_EVENT_PRIORITY):
priority: Optional[int] = DEFAULT_EVENT_PRIORITY):
"""
:param event_type: 事件的类型,支持 EventType 或 ChainEventType
:param event_data: 可选,事件携带的数据,默认为空字典
@@ -84,7 +84,6 @@ class EventManager(metaclass=Singleton):
self.__disabled_handlers = set() # 禁用的事件处理器集合
self.__disabled_classes = set() # 禁用的事件处理器类集合
self.__lock = threading.Lock() # 线程锁
self.__processing_events = {} # 用于记录当前正在处理的事件 {event_hash: event}
def start(self):
"""
@@ -130,16 +129,8 @@ class EventManager(metaclass=Singleton):
for handler in handlers.values()
)
@staticmethod
def __get_event_hash(event: Event) -> str:
"""
计算事件的唯一标识符hash
"""
data_string = str(event.event_type.value) + str(event.event_data)
return str(uuid.uuid5(uuid.NAMESPACE_DNS, data_string))
def send_event(self, etype: Union[EventType, ChainEventType], data: Optional[Union[Dict, ChainEventData]] = None,
priority: int = DEFAULT_EVENT_PRIORITY) -> Optional[Event]:
priority: Optional[int] = DEFAULT_EVENT_PRIORITY) -> Optional[Event]:
"""
发送事件,根据事件类型决定是广播事件还是链式事件
:param etype: 事件类型 (EventType 或 ChainEventType)
@@ -148,12 +139,6 @@ class EventManager(metaclass=Singleton):
:return: 如果是链式事件,返回处理后的事件数据;否则返回 None
"""
event = Event(etype, data, priority)
event_hash = self.__get_event_hash(event)
with self.__lock:
if event_hash in self.__processing_events:
logger.debug(f"Duplicate event ignored: {event}")
return None
self.__processing_events[event_hash] = event
if isinstance(etype, EventType):
self.__trigger_broadcast_event(event)
elif isinstance(etype, ChainEventType):
@@ -162,7 +147,7 @@ class EventManager(metaclass=Singleton):
logger.error(f"Unknown event type: {etype}")
def add_event_listener(self, event_type: Union[EventType, ChainEventType], handler: Callable,
priority: int = DEFAULT_EVENT_PRIORITY):
priority: Optional[int] = DEFAULT_EVENT_PRIORITY):
"""
注册事件处理器,将处理器添加到对应的事件订阅列表中
:param event_type: 事件类型 (EventType 或 ChainEventType)
@@ -248,23 +233,29 @@ class EventManager(metaclass=Singleton):
可视化所有事件处理器,包括是否被禁用的状态
:return: 处理器列表,包含事件类型、处理器标识符、优先级(如果有)和状态
"""
def parse_handler_data(data):
"""
解析处理器数据,判断是否包含优先级
:param data: 订阅者数据,可能是元组或单一值
:return: (priority, handler),若没有优先级则返回 (None, handler)
"""
if isinstance(data, tuple) and len(data) == 2:
return data
return None, data
handler_info = []
# 统一处理广播事件和链式事件
for event_type, subscribers in {**self.__broadcast_subscribers, **self.__chain_subscribers}.items():
for handler_data in subscribers:
if isinstance(subscribers, dict):
priority, handler = handler_data
else:
priority = None
handler = handler_data
# 获取处理器的唯一标识符
handler_id = self.__get_handler_identifier(handler)
for handler_identifier, handler_data in subscribers.items():
# 解析优先级和处理器
priority, handler = parse_handler_data(handler_data)
# 检查处理器的启用状态
status = "enabled" if self.__is_handler_enabled(handler) else "disabled"
# 构建处理器信息字典
handler_dict = {
"event_type": event_type.value,
"handler_identifier": handler_id,
"handler_identifier": handler_identifier,
"status": status
}
if priority is not None:
@@ -302,7 +293,7 @@ class EventManager(metaclass=Singleton):
# 对于类实例(实现了 __call__ 方法)
if not inspect.isfunction(handler) and hasattr(handler, "__call__"):
handler_cls = handler.__class__
handler_cls = handler.__class__ # noqa
return cls.__get_handler_identifier(handler_cls)
# 对于未绑定方法、静态方法、类方法,使用 __qualname__ 提取类信息
@@ -335,14 +326,9 @@ class EventManager(metaclass=Singleton):
"""
触发链式事件,按顺序调用订阅的处理器,并记录处理耗时
"""
try:
logger.debug(f"Triggering synchronous chain event: {event}")
dispatch = self.__dispatch_chain_event(event)
return event if dispatch else None
finally:
event_hash = self.__get_event_hash(event)
with self.__lock:
self.__processing_events.pop(event_hash, None)
logger.debug(f"Triggering synchronous chain event: {event}")
dispatch = self.__dispatch_chain_event(event)
return event if dispatch else None
def __trigger_broadcast_event(self, event: Event):
"""
@@ -361,8 +347,17 @@ class EventManager(metaclass=Singleton):
if not handlers:
logger.debug(f"No handlers found for chain event: {event}")
return False
# 过滤出启用的处理器
enabled_handlers = {handler_id: (priority, handler) for handler_id, (priority, handler) in handlers.items()
if self.__is_handler_enabled(handler)}
if not enabled_handlers:
logger.debug(f"No enabled handlers found for chain event: {event}. Skipping execution.")
return False
self.__log_event_lifecycle(event, "Started")
for handler_id, (priority, handler) in handlers.items():
for handler_id, (priority, handler) in enabled_handlers.items():
start_time = time.time()
self.__safe_invoke_handler(handler, event)
logger.debug(
@@ -383,9 +378,6 @@ class EventManager(metaclass=Singleton):
return
for handler_id, handler in handlers.items():
self.__executor.submit(self.__safe_invoke_handler, handler, event)
event_hash = self.__get_event_hash(event)
with self.__lock:
self.__processing_events.pop(event_hash, None)
def __safe_invoke_handler(self, handler: Callable, event: Event):
"""
@@ -446,12 +438,15 @@ class EventManager(metaclass=Singleton):
# 如果类不在全局变量中,尝试动态导入模块并创建实例
try:
# 导入模块除了插件只有chain能响应事件
if not class_name.endswith("Chain"):
if class_name == "Command":
module_name = "app.command"
module = importlib.import_module(module_name)
elif class_name.endswith("Chain"):
module_name = f"app.chain.{class_name[:-5].lower()}"
module = importlib.import_module(module_name)
else:
logger.debug(f"事件处理出错:无效的 Chain 类名: {class_name},类名必须以 'Chain' 结尾")
return None
module_name = f"app.chain.{class_name[:-5].lower()}"
module = importlib.import_module(module_name)
if hasattr(module, class_name):
class_obj = getattr(module, class_name)()
return class_obj
@@ -510,13 +505,15 @@ class EventManager(metaclass=Singleton):
}
)
def register(self, etype: Union[EventType, ChainEventType, List[Union[EventType, ChainEventType]], type]):
def register(self, etype: Union[EventType, ChainEventType, List[Union[EventType, ChainEventType]], type],
priority: Optional[int] = DEFAULT_EVENT_PRIORITY):
"""
事件注册装饰器,用于将函数注册为事件的处理器
:param etype:
- 单个事件类型成员 (如 EventType.MetadataScrape, ChainEventType.PluginAction)
- 事件类型类 (EventType, ChainEventType)
- 或事件类型成员的列表
:param priority: 可选,链式事件的优先级,默认为 DEFAULT_EVENT_PRIORITY
"""
def decorator(f: Callable):
@@ -524,23 +521,18 @@ class EventManager(metaclass=Singleton):
if isinstance(etype, list):
# 传入的已经是列表,直接使用
event_list = etype
elif etype is EventType:
# 订阅所有事件
event_list = []
for et in etype:
event_list.append(et)
else:
# 不是列表则包裹成单一元素的列表
event_list = [etype]
# 遍历列表,处理每个事件类型
# 遍历列表,处理每个事件类型
for event in event_list:
if isinstance(event, (EventType, ChainEventType)):
self.add_event_listener(event, f)
self.add_event_listener(event, f, priority)
elif isinstance(event, type) and issubclass(event, (EventType, ChainEventType)):
# 如果是 EventType 或 ChainEventType 类,提取该类中的所有成员
for et in event.__members__.values():
self.add_event_listener(et, f)
self.add_event_listener(et, f, priority)
else:
raise ValueError(f"无效的事件类型: {event}")


@@ -1,13 +1,13 @@
import traceback
from dataclasses import dataclass, asdict
from dataclasses import dataclass
from typing import Union, Optional, List, Self
import cn2an
import regex as re
from app.log import logger
from app.utils.string import StringUtils
from app.schemas.types import MediaType
from app.utils.string import StringUtils
@dataclass
@@ -69,7 +69,7 @@ class MetaBase(object):
_subtitle_flag = False
_title_episodel_re = r"Episode\s+(\d{1,4})"
_subtitle_season_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季(?!\s*[全共])"
_subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季\s*全"
_subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季"
_subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP]+)\s*[集话話期幕](?!\s*[全共])"
_subtitle_episode_between_re = r"[第]*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]?\s*-\s*第*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"
@@ -247,7 +247,7 @@ class MetaBase(object):
self.type = MediaType.TV
self._subtitle_flag = True
return
# x集全
# x集全/全x集
episode_all_str = re.search(r'%s' % self._subtitle_episode_all_re, title_text, re.IGNORECASE)
if episode_all_str:
episode_all = episode_all_str.group(1)
@@ -259,8 +259,6 @@ class MetaBase(object):
except Exception as err:
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
self.begin_episode = None
self.end_episode = None
self.type = MediaType.TV
self._subtitle_flag = True
return
@@ -589,9 +587,10 @@ class MetaBase(object):
"""
转为字典
"""
dicts = asdict(self)
dicts = vars(self).copy()
dicts["type"] = self.type.value if self.type else None
dicts["season_episode"] = self.season_episode
dicts["edition"] = self.edition
dicts["name"] = self.name
dicts["episode_list"] = self.episode_list
return dicts


@@ -172,7 +172,7 @@ class MetaVideo(MetaBase):
return None
@staticmethod
def __is_pinyin(name_str: str) -> bool:
def __is_pinyin(name_str: Optional[str]) -> bool:
"""
判断是否拼音
"""
@@ -183,7 +183,7 @@ class MetaVideo(MetaBase):
return False
return True
def __fix_name(self, name: str):
def __fix_name(self, name: Optional[str]):
"""
去掉名字中不需要的干扰字符
"""
@@ -207,7 +207,7 @@ class MetaVideo(MetaBase):
name = None
return name
def __init_name(self, token: str):
def __init_name(self, token: Optional[str]):
"""
识别名称
"""


@@ -70,7 +70,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"U2": [],
"ultrahd": [],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:|yG)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )'],
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )', 'UBWEB'],
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', 'SweetSub', 'MingY',
'(?:Lilith|NC)-Raws', '织梦字幕组', '枫叶字幕组', '猎户手抄部', '喵萌奶茶屋', '漫猫字幕社',
'霜庭云花Sub', '北宇治字幕组', '氢气烤肉架', '云歌字幕组', '萌樱字幕组', '极影字幕社',
@@ -103,7 +103,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
else:
groups = self.__release_groups
title = f"{title} "
groups_re = re.compile(r"(?<=[-@\[£【&])(?:%s)(?=[@.\s\]\[】&])" % groups, re.I)
groups_re = re.compile(r"(?<=[-@\[£【&])(?:%s)(?=[@.\s\S\]\[】&])" % groups, re.I)
# 处理一个制作组识别多次的情况,保留顺序
unique_groups = []
for item in re.findall(groups_re, title):


@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Tuple, List
from typing import Tuple, List, Optional
import regex as re
@@ -10,7 +10,7 @@ from app.log import logger
from app.schemas.types import MediaType
def MetaInfo(title: str, subtitle: str = None, custom_words: List[str] = None) -> MetaBase:
def MetaInfo(title: str, subtitle: Optional[str] = None, custom_words: List[str] = None) -> MetaBase:
"""
根据标题和副标题识别元数据
:param title: 标题、种子名、文件名
@@ -92,7 +92,8 @@ def is_anime(name: str) -> bool:
return True
if re.search(r'\s+-\s+[\dv]{1,4}\s+', name, re.IGNORECASE):
return True
if re.search(r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}", name,
if re.search(r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}",
name,
re.IGNORECASE):
return False
if re.search(r'\[[+0-9XVPI-]+]\s*\[', name, re.IGNORECASE):
@@ -133,13 +134,10 @@ def find_metainfo(title: str) -> Tuple[str, dict]:
# 查找媒体类型
mtype = re.findall(r'(?<=type=)\w+', result)
if mtype:
match mtype[0]:
case "movie":
metainfo['type'] = MediaType.MOVIE
case "tv":
metainfo['type'] = MediaType.TV
case _:
pass
if mtype[0] == "movies":
metainfo['type'] = MediaType.MOVIE
elif mtype[0] == "tv":
metainfo['type'] = MediaType.TV
# 查找季信息
begin_season = re.findall(r'(?<=s=)\d+', result)
if begin_season and begin_season[0].isdigit():


@@ -1,11 +1,12 @@
import traceback
from typing import Generator, Optional, Tuple, Any
from typing import Generator, Optional, Tuple, Any, Union
from app.core.config import settings
from app.core.event import eventmanager
from app.helper.module import ModuleHelper
from app.log import logger
from app.schemas.types import EventType, ModuleType
from app.schemas.types import EventType, ModuleType, DownloaderType, MediaServerType, MessageChannel, StorageSchema, \
OtherModulesType
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
@@ -19,6 +20,8 @@ class ModuleManager(metaclass=Singleton):
_modules: dict = {}
# 运行态模块列表
_running_modules: dict = {}
# 子模块类型集合
SubType = Union[DownloaderType, MediaServerType, MessageChannel, StorageSchema, OtherModulesType]
def __init__(self):
self.load_modules()
@@ -118,7 +121,7 @@ class ModuleManager(metaclass=Singleton):
获取实现了同一方法的模块列表
"""
if not self._running_modules:
return []
return
for _, module in self._running_modules.items():
if hasattr(module, method) \
and ObjectUtils.check_method(getattr(module, method)):
@@ -129,12 +132,23 @@ class ModuleManager(metaclass=Singleton):
获取指定类型的模块列表
"""
if not self._running_modules:
return []
return
for _, module in self._running_modules.items():
if hasattr(module, 'get_type') \
and module.get_type() == module_type:
yield module
def get_running_subtype_module(self, module_subtype: SubType) -> Generator:
"""
获取指定子类型的模块
"""
if not self._running_modules:
return
for _, module in self._running_modules.items():
if hasattr(module, 'get_subtype') \
and module.get_subtype() == module_subtype:
yield module
def get_module(self, module_id: str) -> Any:
"""
根据模块id获取模块


@@ -111,7 +111,7 @@ class PluginManager(metaclass=Singleton):
# 启动插件
self.start()
def start(self, pid: str = None):
def start(self, pid: Optional[str] = None):
"""
启动加载插件
:param pid: 插件ID,为空加载所有插件
@@ -194,7 +194,7 @@ class PluginManager(metaclass=Singleton):
# 禁用插件类的事件处理器
eventmanager.disable_event_handler(type(plugin))
def stop(self, pid: str = None):
def stop(self, pid: Optional[str] = None):
"""
停止插件服务
:param pid: 插件ID,为空停止所有插件
@@ -220,11 +220,23 @@ class PluginManager(metaclass=Singleton):
self._running_plugins = {}
logger.info("插件停止完成")
def reload_monitor(self):
"""
重新加载插件文件修改监测
"""
if settings.DEV or settings.PLUGIN_AUTO_RELOAD:
if self._observer and self._observer.is_alive():
logger.info("插件文件修改监测已经在运行中...")
else:
self.__start_monitor()
else:
self.stop_monitor()
def __start_monitor(self):
"""
开发者模式下监测插件文件修改
启用监测插件文件修改监测
"""
logger.info("发者模式下开始监测插件文件修改...")
logger.info("开始监测插件文件修改...")
monitor_handler = PluginMonitorHandler()
self._observer = Observer()
self._observer.schedule(monitor_handler, str(settings.ROOT_PATH / "app" / "plugins"), recursive=True)
@@ -232,14 +244,16 @@ class PluginManager(metaclass=Singleton):
def stop_monitor(self):
"""
停止监测插件修改
停止监测插件文件修改监测
"""
# 停止监测
if self._observer:
if self._observer and self._observer.is_alive():
logger.info("正在停止插件文件修改监测...")
self._observer.stop()
self._observer.join()
logger.info("插件文件修改监测停止完成")
else:
logger.info("未启用插件文件修改监测,无需停止")
@staticmethod
def __stop_plugin(plugin: Any):
@@ -417,7 +431,7 @@ class PluginManager(metaclass=Singleton):
return plugin.get_page() or []
return []
def get_plugin_dashboard(self, pid: str, key: str = None, **kwargs) -> Optional[schemas.PluginDashboard]:
def get_plugin_dashboard(self, pid: str, key: Optional[str] = None, **kwargs) -> Optional[schemas.PluginDashboard]:
"""
获取插件仪表盘
:param pid: 插件ID
@@ -526,7 +540,8 @@ class PluginManager(metaclass=Singleton):
"name": "服务名称",
"trigger": "触发器cron、interval、date、CronTrigger.from_crontab()",
"func": self.xxx,
"kwargs": {} # 定时器参数
"kwargs": {} # 定时器参数,
"func_kwargs": {} # 方法参数
}]
"""
ret_services = []
@@ -667,7 +682,7 @@ class PluginManager(metaclass=Singleton):
# 相同 ID 的插件保留版本号最大的版本
max_versions = {}
for p in all_plugins:
if p.id not in max_versions or StringUtils.compare_version(p.plugin_version, max_versions[p.id]) > 0:
if p.id not in max_versions or StringUtils.compare_version(p.plugin_version, ">", max_versions[p.id]):
max_versions[p.id] = p.plugin_version
result = [p for p in all_plugins if p.plugin_version == max_versions[p.id]]
logger.info(f"共获取到 {len(result)} 个线上插件")
@@ -766,7 +781,7 @@ class PluginManager(metaclass=Singleton):
logger.debug(f"获取插件是否在本地包中存在失败,{e}")
return False
def get_plugins_from_market(self, market: str, package_version: str = None) -> Optional[List[schemas.Plugin]]:
def get_plugins_from_market(self, market: str, package_version: Optional[str] = None) -> Optional[List[schemas.Plugin]]:
"""
从指定的市场获取插件信息
:param market: 市场的 URL 或标识
@@ -778,10 +793,9 @@ class PluginManager(metaclass=Singleton):
# 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 获取在线插件
online_plugins = self.pluginhelper.get_plugins(market, package_version) or {}
if not online_plugins:
if not package_version:
logger.warning(f"获取插件库失败:{market},请检查 GitHub 网络连接")
online_plugins = self.pluginhelper.get_plugins(market, package_version)
if online_plugins is None:
logger.warning(f"获取{package_version if package_version else ''}插件库失败:{market},请检查 GitHub 网络连接")
return []
ret_plugins = []
add_time = len(online_plugins)
@@ -808,7 +822,7 @@ class PluginManager(metaclass=Singleton):
plugin.has_update = False
if plugin_static:
installed_version = getattr(plugin_static, "plugin_version")
if StringUtils.compare_version(installed_version, plugin_info.get("version")) < 0:
if StringUtils.compare_version(installed_version, "<", plugin_info.get("version")):
# 需要更新
plugin.has_update = True
# 运行状态


@@ -1,10 +1,11 @@
import base64
import datetime
import hashlib
import hmac
import json
import os
import traceback
from datetime import datetime, timedelta
from datetime import timedelta
from typing import Any, Union, Annotated, Optional
import jwt
@@ -43,9 +44,9 @@ api_key_query = APIKeyQuery(name="apikey", auto_error=False, scheme_name="api_ke
def create_access_token(
userid: Union[str, Any],
username: str,
super_user: bool = False,
super_user: Optional[bool] = False,
expires_delta: Optional[timedelta] = None,
level: int = 1,
level: Optional[int] = 1,
purpose: Optional[str] = "authentication"
) -> str:
"""
@@ -69,13 +70,13 @@ def create_access_token(
if expires_delta is not None:
if expires_delta.total_seconds() <= 0:
raise ValueError("过期时间必须为正数")
expire = datetime.utcnow() + expires_delta
expire = datetime.datetime.now(datetime.UTC) + expires_delta
else:
expire = datetime.utcnow() + default_expire
expire = datetime.datetime.now(datetime.UTC) + default_expire
to_encode = {
"exp": expire,
"iat": datetime.utcnow(),
"iat": datetime.datetime.now(datetime.UTC),
"sub": str(userid),
"username": username,
"super_user": super_user,
@@ -102,7 +103,7 @@ def __set_or_refresh_resource_token_cookie(request: Request, response: Response,
decoded_token = jwt.decode(resource_token, settings.RESOURCE_SECRET_KEY, algorithms=[ALGORITHM])
exp = decoded_token.get("exp")
if exp:
remaining_time = datetime.utcfromtimestamp(exp) - datetime.utcnow()
remaining_time = datetime.datetime.fromtimestamp(exp, tz=datetime.UTC) - datetime.datetime.now(datetime.UTC)
# 根据剩余时长提前刷新令牌
if remaining_time < timedelta(seconds=(settings.RESOURCE_ACCESS_TOKEN_EXPIRE_SECONDS / 3)):
raise jwt.ExpiredSignatureError
@@ -135,7 +136,7 @@ def __set_or_refresh_resource_token_cookie(request: Request, response: Response,
)
def __verify_token(token: str, purpose: str = "authentication") -> schemas.TokenPayload:
def __verify_token(token: str, purpose: Optional[str] = "authentication") -> schemas.TokenPayload:
"""
使用 JWT Token 进行身份认证并解析 Token 的内容
:param token: JWT 令牌
@@ -175,7 +176,7 @@ def __verify_token(token: str, purpose: str = "authentication") -> schemas.Token
def verify_token(
request: Request,
response: Response,
token: str = Security(oauth2_scheme)
token: Annotated[str, Security(oauth2_scheme)]
) -> schemas.TokenPayload:
"""
验证 JWT 令牌并自动处理 resource_token 写入
@@ -195,7 +196,7 @@ def verify_token(
def verify_resource_token(
resource_token: str = Security(resource_token_cookie)
resource_token: Annotated[str, Security(resource_token_cookie)]
) -> schemas.TokenPayload:
"""
验证资源访问令牌(从 Cookie 中获取)
@@ -248,7 +249,7 @@ def __verify_key(key: str, expected_key: str, key_type: str) -> str:
return key
def verify_apitoken(token: str = Security(__get_api_token)) -> str:
def verify_apitoken(token: Annotated[str, Security(__get_api_token)]) -> str:
"""
使用 API Token 进行身份认证
:param token: API Token从 URL 查询参数中获取
@@ -257,7 +258,7 @@ def verify_apitoken(token: str = Security(__get_api_token)) -> str:
return __verify_key(token, settings.API_TOKEN, "API_TOKEN")
def verify_apikey(apikey: str = Security(__get_api_key)) -> str:
def verify_apikey(apikey: Annotated[str, Security(__get_api_key)]) -> str:
"""
使用 API Key 进行身份认证
:param apikey: API Key从 URL 查询参数或请求头中获取
@@ -286,7 +287,7 @@ def decrypt(data: bytes, key: bytes) -> Optional[bytes]:
return None
def encrypt_message(message: str, key: bytes):
def encrypt_message(message: str, key: bytes) -> str:
"""
使用给定的key对消息进行加密并返回加密后的字符串
"""
@@ -295,14 +296,14 @@ def encrypt_message(message: str, key: bytes):
return encrypted_message.decode()
def hash_sha256(message):
def hash_sha256(message: str) -> str:
"""
对字符串做hash运算
"""
return hashlib.sha256(message.encode()).hexdigest()
def aes_decrypt(data, key):
def aes_decrypt(data: str, key: str) -> str:
"""
AES解密
"""
@@ -322,7 +323,7 @@ def aes_decrypt(data, key):
return result.decode('utf-8')
def aes_encrypt(data, key):
def aes_encrypt(data: str, key: str) -> str:
"""
AES加密
"""
@@ -338,7 +339,7 @@ def aes_encrypt(data, key):
return base64.b64encode(cipher.iv + result).decode('utf-8')
def nexusphp_encrypt(data_str: str, key):
def nexusphp_encrypt(data_str: str, key: bytes) -> str:
"""
NexusPHP加密
"""

112
app/core/workflow.py Normal file

@@ -0,0 +1,112 @@
from time import sleep
from typing import Dict, Any, Tuple, List
from app.core.config import global_vars
from app.helper.module import ModuleHelper
from app.log import logger
from app.schemas import Action, ActionContext
from app.utils.singleton import Singleton
class WorkFlowManager(metaclass=Singleton):
"""
工作流管理器
"""
# 所有动作定义
_actions: Dict[str, Any] = {}
def __init__(self):
self.init()
def init(self):
"""
初始化
"""
def filter_func(obj: Any):
"""
过滤函数,确保只加载新定义的类
"""
if not isinstance(obj, type):
return False
if not hasattr(obj, 'execute') or not hasattr(obj, "name"):
return False
if obj.__name__ == "BaseAction":
return False
return obj.__module__.startswith("app.actions")
# 加载所有动作
self._actions = {}
actions = ModuleHelper.load(
"app.actions",
filter_func=lambda _, obj: filter_func(obj)
)
for action in actions:
logger.debug(f"加载动作: {action.__name__}")
try:
self._actions[action.__name__] = action
except Exception as err:
logger.error(f"加载动作失败: {action.__name__} - {err}")
def stop(self):
"""
停止
"""
pass
def excute(self, workflow_id: int, action: Action,
context: ActionContext = None) -> Tuple[bool, str, ActionContext]:
"""
执行工作流动作
"""
if not context:
context = ActionContext()
if action.type in self._actions:
# 实例化之前,清理掉类对象的数据
# 实例化
action_obj = self._actions[action.type](action.id)
# 执行
logger.info(f"执行动作: {action.id} - {action.name}")
try:
result_context = action_obj.execute(workflow_id, action.data, context)
except Exception as err:
logger.error(f"{action.name} 执行失败: {err}")
return False, f"{err}", context
loop = action.data.get("loop")
loop_interval = action.data.get("loop_interval")
if loop and loop_interval:
while not action_obj.done:
if global_vars.is_workflow_stopped(workflow_id):
break
# 等待
logger.info(f"{action.name} 等待 {loop_interval} 秒后继续执行 ...")
sleep(loop_interval)
# 执行
logger.info(f"继续执行动作: {action.id} - {action.name}")
result_context = action_obj.execute(workflow_id, action.data, result_context)
if action_obj.success:
logger.info(f"{action.name} 执行成功")
else:
logger.error(f"{action.name} 执行失败!")
return action_obj.success, action_obj.message, result_context
else:
logger.error(f"未找到动作: {action.type} - {action.name}")
return False, " ", context
def list_actions(self) -> List[dict]:
"""
获取所有动作
"""
return [
{
"type": key,
"name": action.name,
"description": action.description,
"data": {
"label": action.name,
**action.data
}
} for key, action in self._actions.items()
]
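An illustrative driver sketch (the action type, id, name and data are hypothetical; Action's keyword arguments are inferred from how excute reads them, and the type must match an action class discovered under app.actions):
manager = WorkFlowManager()
print(manager.list_actions())      # every discovered action type with its metadata
action = Action(type="FetchMediasAction", id="a1", name="demo", data={})
success, message, ctx = manager.excute(workflow_id=1, action=action)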


@@ -13,7 +13,7 @@ connect_args = {
# 启用 WAL 模式时的额外配置
if settings.DB_WAL_ENABLE:
connect_args["check_same_thread"] = False
kwargs = {
db_kwargs = {
"url": f"sqlite:///{settings.CONFIG_PATH}/user.db",
"pool_pre_ping": settings.DB_POOL_PRE_PING,
"echo": settings.DB_ECHO,
@@ -23,13 +23,13 @@ kwargs = {
}
# 当使用 QueuePool 时,添加 QueuePool 特有的参数
if pool_class == QueuePool:
kwargs.update({
db_kwargs.update({
"pool_size": settings.DB_POOL_SIZE,
"pool_timeout": settings.DB_POOL_TIMEOUT,
"max_overflow": settings.DB_MAX_OVERFLOW
})
# 创建数据库引擎
Engine = create_engine(**kwargs)
Engine = create_engine(**db_kwargs)
# 根据配置设置日志模式
journal_mode = "WAL" if settings.DB_WAL_ENABLE else "DELETE"
with Engine.connect() as connection:
@@ -198,7 +198,7 @@ class Base:
@classmethod
@db_query
def get(cls, db: Session, rid: int) -> Self:
return db.query(cls).filter(cls.id == rid).first()
return db.query(cls).filter(and_(cls.id == rid)).first()
@db_update
def update(self, db: Session, payload: dict):
@@ -225,7 +225,7 @@ class Base:
return list(result)
def to_dict(self):
return {c.name: getattr(self, c.name, None) for c in self.__table__.columns}
return {c.name: getattr(self, c.name, None) for c in self.__table__.columns} # noqa
@declared_attr
def __tablename__(self) -> str:


@@ -1,4 +1,4 @@
from typing import List
from typing import List, Optional
from app.db import DbOper
from app.db.models.downloadhistory import DownloadHistory, DownloadFiles
@@ -51,7 +51,7 @@ class DownloadHistoryOper(DbOper):
"""
DownloadFiles.truncate(self._db)
def get_files_by_hash(self, download_hash: str, state: int = None) -> List[DownloadFiles]:
def get_files_by_hash(self, download_hash: str, state: Optional[int] = None) -> List[DownloadFiles]:
"""
按Hash查询下载文件记录
:param download_hash: 数据key
@@ -97,7 +97,7 @@ class DownloadHistoryOper(DbOper):
return fileinfo.download_hash
return ""
def list_by_page(self, page: int = 1, count: int = 30) -> List[DownloadHistory]:
def list_by_page(self, page: Optional[int] = 1, count: Optional[int] = 30) -> List[DownloadHistory]:
"""
分页查询下载历史
"""
@@ -109,8 +109,8 @@ class DownloadHistoryOper(DbOper):
"""
DownloadHistory.truncate(self._db)
def get_last_by(self, mtype=None, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid=None) -> List[DownloadHistory]:
def get_last_by(self, mtype=None, title: Optional[str] = None, year: Optional[str] = None,
season: Optional[str] = None, episode: Optional[str] = None, tmdbid=None) -> List[DownloadHistory]:
"""
按类型、标题、年份、季集查询下载记录
"""
@@ -122,7 +122,7 @@ class DownloadHistoryOper(DbOper):
episode=episode,
tmdbid=tmdbid)
def list_by_user_date(self, date: str, username: str = None) -> List[DownloadHistory]:
def list_by_user_date(self, date: str, username: Optional[str] = None) -> List[DownloadHistory]:
"""
查询某用户某时间之前的下载历史
"""
@@ -130,7 +130,7 @@ class DownloadHistoryOper(DbOper):
date=date,
username=username)
def list_by_date(self, date: str, type: str, tmdbid: str, seasons: str = None) -> List[DownloadHistory]:
def list_by_date(self, date: str, type: str, tmdbid: str, seasons: Optional[str] = None) -> List[DownloadHistory]:
"""
查询某时间之后的下载历史
"""
@@ -140,7 +140,7 @@ class DownloadHistoryOper(DbOper):
tmdbid=tmdbid,
seasons=seasons)
def list_by_type(self, mtype: str, days: int = 7) -> List[DownloadHistory]:
def list_by_type(self, mtype: str, days: Optional[int] = 7) -> List[DownloadHistory]:
"""
获取指定类型的下载历史
"""


@@ -11,7 +11,7 @@ def init_db():
初始化数据库
"""
# 全量建表
Base.metadata.create_all(bind=Engine)
Base.metadata.create_all(bind=Engine) # noqa
def update_db():


@@ -18,14 +18,14 @@ class MessageOper(DbOper):
def add(self,
channel: MessageChannel = None,
source: str = None,
source: Optional[str] = None,
mtype: NotificationType = None,
title: str = None,
text: str = None,
image: str = None,
link: str = None,
userid: str = None,
action: int = 1,
title: Optional[str] = None,
text: Optional[str] = None,
image: Optional[str] = None,
link: Optional[str] = None,
userid: Optional[str] = None,
action: Optional[int] = 1,
note: Union[list, dict] = None,
**kwargs):
"""
@@ -57,12 +57,12 @@ class MessageOper(DbOper):
# 从kwargs中去掉Message中没有的字段
for k in list(kwargs.keys()):
if k not in Message.__table__.columns.keys():
if k not in Message.__table__.columns.keys(): # noqa
kwargs.pop(k)
Message(**kwargs).create(self._db)
def list_by_page(self, page: int = 1, count: int = 30) -> Optional[str]:
def list_by_page(self, page: Optional[int] = 1, count: Optional[int] = 30) -> Optional[str]:
"""
获取媒体服务器数据ID
"""


@@ -8,3 +8,4 @@ from .systemconfig import SystemConfig
from .transferhistory import TransferHistory
from .user import User
from .userconfig import UserConfig
from .workflow import Workflow


@@ -1,4 +1,5 @@
import time
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
@@ -29,6 +30,8 @@ class DownloadHistory(Base):
episodes = Column(String)
# 海报
image = Column(String)
# 下载器
downloader = Column(String)
# 下载任务Hash
download_hash = Column(String, index=True)
# 种子名称
@@ -49,11 +52,15 @@ class DownloadHistory(Base):
note = Column(JSON)
# 自定义媒体类别
media_category = Column(String)
# 剧集组
episode_group = Column(String)
@staticmethod
@db_query
def get_by_hash(db: Session, download_hash: str):
return db.query(DownloadHistory).filter(DownloadHistory.download_hash == download_hash).first()
return db.query(DownloadHistory).filter(DownloadHistory.download_hash == download_hash).order_by(
DownloadHistory.date.desc()
).first()
@staticmethod
@db_query
@@ -63,7 +70,7 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(DownloadHistory).offset((page - 1) * count).limit(count).all()
return list(result)
@@ -74,8 +81,9 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def get_last_by(db: Session, mtype: str = None, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: int = None):
def get_last_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None,
year: Optional[str] = None, season: Optional[str] = None,
episode: Optional[str] = None, tmdbid: Optional[int] = None):
"""
据tmdbid、season、season_episode查询转移记录
"""
@@ -119,7 +127,7 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def list_by_user_date(db: Session, date: str, username: str = None):
def list_by_user_date(db: Session, date: str, username: Optional[str] = None):
"""
查询某用户某时间之后的下载历史
"""
@@ -134,7 +142,7 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def list_by_date(db: Session, date: str, type: str, tmdbid: str, seasons: str = None):
def list_by_date(db: Session, date: str, type: str, tmdbid: str, seasons: Optional[str] = None):
"""
查询某时间之后的下载历史
"""
@@ -166,10 +174,10 @@ class DownloadFiles(Base):
下载文件记录
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 下载任务Hash
download_hash = Column(String, index=True)
# 下载器
downloader = Column(String)
# 下载任务Hash
download_hash = Column(String, index=True)
# 完整路径
fullpath = Column(String, index=True)
# 保存路径
@@ -183,7 +191,7 @@ class DownloadFiles(Base):
@staticmethod
@db_query
def get_by_hash(db: Session, download_hash: str, state: int = None):
def get_by_hash(db: Session, download_hash: str, state: Optional[int] = None):
if state:
result = db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()


@@ -1,3 +1,5 @@
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
@@ -34,7 +36,7 @@ class Message(Base):
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
count).all()
result.sort(key=lambda x: x.reg_time, reverse=False)

View File

@@ -1,6 +1,7 @@
from datetime import datetime
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON, func
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON, func, or_
from sqlalchemy.orm import Session
from app.db import db_query, Base
@@ -54,7 +55,7 @@ class SiteUserData(Base):
@staticmethod
@db_query
def get_by_domain(db: Session, domain: str, workdate: str = None, worktime: str = None):
def get_by_domain(db: Session, domain: str, workdate: Optional[str] = None, worktime: Optional[str] = None):
if workdate and worktime:
return db.query(SiteUserData).filter(SiteUserData.domain == domain,
SiteUserData.updated_day == workdate,
@@ -81,7 +82,7 @@ class SiteUserData(Base):
func.max(SiteUserData.updated_day).label('latest_update_day')
)
.group_by(SiteUserData.domain)
.filter(SiteUserData.err_msg == None)
.filter(or_(SiteUserData.err_msg.is_(None), SiteUserData.err_msg == ""))
.subquery()
)
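Worth noting why this filter changed: in SQLAlchemy, err_msg == None only matches SQL NULL, so rows whose error message was stored as an empty string slipped past the old check. A minimal standalone illustration of the or_/is_(None) pattern, using a hypothetical table rather than anything from this repo:

from sqlalchemy import Column, Integer, String, create_engine, or_, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Demo(Base):
    # hypothetical table, only to show the NULL-vs-empty-string difference
    __tablename__ = "demo"
    id = Column(Integer, primary_key=True)
    err_msg = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add_all([Demo(err_msg=None), Demo(err_msg=""), Demo(err_msg="timeout")])
    session.commit()
    clean = session.scalars(select(Demo).where(or_(Demo.err_msg.is_(None), Demo.err_msg == ""))).all()
    # clean holds both the NULL row and the empty-string row; filtering with Demo.err_msg == None alone would miss the latter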

View File

@@ -1,4 +1,5 @@
import time
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON
from sqlalchemy.orm import Session
@@ -24,6 +25,7 @@ class Subscribe(Base):
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
mediaid = Column(String, index=True)
# 季号
season = Column(Integer)
# 海报
@@ -54,7 +56,7 @@ class Subscribe(Base):
lack_episode = Column(Integer)
# 附加信息
note = Column(JSON)
# 状态N-新建 R-订阅中
# 状态N-新建 R-订阅中 P-待定 S-暂停
state = Column(String, nullable=False, index=True, default='N')
# 最后更新时间
last_update = Column(String)
@@ -82,10 +84,12 @@ class Subscribe(Base):
media_category = Column(String)
# 过滤规则组
filter_groups = Column(JSON, default=list)
# 选择的剧集组
episode_group = Column(String)
@staticmethod
@db_query
def exists(db: Session, tmdbid: int = None, doubanid: str = None, season: int = None):
def exists(db: Session, tmdbid: Optional[int] = None, doubanid: Optional[str] = None, season: Optional[int] = None):
if tmdbid:
if season:
return db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid,
@@ -98,12 +102,26 @@ class Subscribe(Base):
@staticmethod
@db_query
def get_by_state(db: Session, state: str):
result = db.query(Subscribe).filter(Subscribe.state == state).all()
# 如果 state 为空或 None返回所有订阅
if not state:
result = db.query(Subscribe).all()
else:
# 如果传入的状态不为空,拆分成多个状态
states = state.split(',')
result = db.query(Subscribe).filter(Subscribe.state.in_(states)).all()
return list(result)
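For illustration, a minimal sketch of the relaxed get_by_state contract: an empty value now returns everything, and a comma-separated string is split and matched with IN. The Engine import and the model's module path are assumptions based on the surrounding code, not part of this change:

from sqlalchemy.orm import Session
from app.db import Engine                        # assumed: the app's SQLAlchemy engine
from app.db.models.subscribe import Subscribe    # assumed module path, mirroring the other model files

with Session(Engine) as db:
    active = Subscribe.get_by_state(db, "N,R")   # split on ',' and matched with Subscribe.state.in_([...])
    everything = Subscribe.get_by_state(db, "")  # empty or None now returns all subscriptions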
@staticmethod
@db_query
def get_by_tmdbid(db: Session, tmdbid: int, season: int = None):
def get_by_title(db: Session, title: str, season: Optional[int] = None):
if season:
return db.query(Subscribe).filter(Subscribe.name == title,
Subscribe.season == season).first()
return db.query(Subscribe).filter(Subscribe.name == title).first()
@staticmethod
@db_query
def get_by_tmdbid(db: Session, tmdbid: int, season: Optional[int] = None):
if season:
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid,
Subscribe.season == season).all()
@@ -111,14 +129,6 @@ class Subscribe(Base):
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid).all()
return list(result)
@staticmethod
@db_query
def get_by_title(db: Session, title: str, season: int = None):
if season:
return db.query(Subscribe).filter(Subscribe.name == title,
Subscribe.season == season).first()
return db.query(Subscribe).filter(Subscribe.name == title).first()
@staticmethod
@db_query
def get_by_doubanid(db: Session, doubanid: str):
@@ -129,6 +139,11 @@ class Subscribe(Base):
def get_by_bangumiid(db: Session, bangumiid: int):
return db.query(Subscribe).filter(Subscribe.bangumiid == bangumiid).first()
@staticmethod
@db_query
def get_by_mediaid(db: Session, mediaid: str):
return db.query(Subscribe).filter(Subscribe.mediaid == mediaid).first()
@db_update
def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int):
subscrbies = self.get_by_tmdbid(db, tmdbid, season)
@@ -143,9 +158,16 @@ class Subscribe(Base):
subscribe.delete(db, subscribe.id)
return True
@db_update
def delete_by_mediaid(self, db: Session, mediaid: str):
subscribe = self.get_by_mediaid(db, mediaid)
if subscribe:
subscribe.delete(db, subscribe.id)
return True
@staticmethod
@db_query
def list_by_username(db: Session, username: str, state: str = None, mtype: str = None):
def list_by_username(db: Session, username: str, state: Optional[str] = None, mtype: Optional[str] = None):
if mtype:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,

View File

@@ -1,3 +1,5 @@
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON
from sqlalchemy.orm import Session
@@ -22,6 +24,7 @@ class SubscribeHistory(Base):
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
mediaid = Column(String, index=True)
# 季号
season = Column(Integer)
# 海报
@@ -66,13 +69,27 @@ class SubscribeHistory(Base):
media_category = Column(String)
# 过滤规则组
filter_groups = Column(JSON, default=list)
# 剧集组
episode_group = Column(String)
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, page: int = 1, count: int = 30):
def list_by_type(db: Session, mtype: str, page: Optional[int] = 1, count: Optional[int] = 30):
result = db.query(SubscribeHistory).filter(
SubscribeHistory.type == mtype
).order_by(
SubscribeHistory.date.desc()
SubscribeHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@db_query
def exists(db: Session, tmdbid: Optional[int] = None, doubanid: Optional[str] = None, season: Optional[int] = None):
if tmdbid:
if season:
return db.query(SubscribeHistory).filter(SubscribeHistory.tmdbid == tmdbid,
SubscribeHistory.season == season).first()
return db.query(SubscribeHistory).filter(SubscribeHistory.tmdbid == tmdbid).first()
elif doubanid:
return db.query(SubscribeHistory).filter(SubscribeHistory.doubanid == doubanid).first()
return None

View File

@@ -1,4 +1,5 @@
import time
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence, Boolean, func, or_, JSON
from sqlalchemy.orm import Session
@@ -8,7 +9,7 @@ from app.db import db_query, db_update, Base
class TransferHistory(Base):
"""
转移历史记录
整理记录
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 源路径
@@ -43,6 +44,8 @@ class TransferHistory(Base):
episodes = Column(String)
# 海报
image = Column(String)
# 下载器
downloader = Column(String)
# 下载器hash
download_hash = Column(String, index=True)
# 转移成功状态
@@ -53,10 +56,12 @@ class TransferHistory(Base):
date = Column(String, index=True)
# 文件清单以JSON存储
files = Column(JSON, default=list)
# 剧集组
episode_group = Column(String)
@staticmethod
@db_query
def list_by_title(db: Session, title: str, page: int = 1, count: int = 30, status: bool = None):
def list_by_title(db: Session, title: str, page: Optional[int] = 1, count: Optional[int] = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(
TransferHistory.status == status
@@ -75,7 +80,7 @@ class TransferHistory(Base):
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30, status: bool = None):
def list_by_page(db: Session, page: Optional[int] = 1, count: Optional[int] = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(
TransferHistory.status == status
@@ -95,7 +100,7 @@ class TransferHistory(Base):
@staticmethod
@db_query
def get_by_src(db: Session, src: str, storage: str = None):
def get_by_src(db: Session, src: str, storage: Optional[str] = None):
if storage:
return db.query(TransferHistory).filter(TransferHistory.src == src,
TransferHistory.src_storage == storage).first()
@@ -115,7 +120,7 @@ class TransferHistory(Base):
@staticmethod
@db_query
def statistic(db: Session, days: int = 7):
def statistic(db: Session, days: Optional[int] = 7):
"""
统计最近days天的下载历史数量按日期分组返回每日数量
"""
@@ -148,8 +153,8 @@ class TransferHistory(Base):
@staticmethod
@db_query
def list_by(db: Session, mtype: str = None, title: str = None, year: str = None, season: str = None,
episode: str = None, tmdbid: int = None, dest: str = None):
def list_by(db: Session, mtype: Optional[str] = None, title: Optional[str] = None, year: Optional[str] = None, season: Optional[str] = None,
episode: Optional[str] = None, tmdbid: Optional[int] = None, dest: Optional[str] = None):
"""
据tmdbid、season、season_episode查询转移记录
tmdbid + mtype 或 title + year 必输
@@ -216,7 +221,7 @@ class TransferHistory(Base):
@staticmethod
@db_query
def get_by_type_tmdbid(db: Session, mtype: str = None, tmdbid: int = None):
def get_by_type_tmdbid(db: Session, mtype: Optional[str] = None, tmdbid: Optional[int] = None):
"""
据tmdbid、type查询转移记录
"""
@@ -225,7 +230,7 @@ class TransferHistory(Base):
@staticmethod
@db_update
def update_download_hash(db: Session, historyid: int = None, download_hash: str = None):
def update_download_hash(db: Session, historyid: Optional[int] = None, download_hash: Optional[str] = None):
db.query(TransferHistory).filter(TransferHistory.id == historyid).update(
{
"download_hash": download_hash

app/db/models/workflow.py (new file, 103 lines)
View File

@@ -0,0 +1,103 @@
from datetime import datetime
from typing import Optional
from sqlalchemy import Column, Integer, JSON, Sequence, String, and_
from app.db import Base, db_query, db_update
class Workflow(Base):
"""
工作流表
"""
# ID
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 名称
name = Column(String, index=True, nullable=False)
# 描述
description = Column(String)
# 定时器
timer = Column(String)
# 状态W-等待 R-运行中 P-暂停 S-成功 F-失败
state = Column(String, nullable=False, index=True, default='W')
# 已执行动作(,分隔)
current_action = Column(String)
# 任务执行结果
result = Column(String)
# 已执行次数
run_count = Column(Integer, default=0)
# 任务列表
actions = Column(JSON, default=list)
# 任务流
flows = Column(JSON, default=list)
# 执行上下文
context = Column(JSON, default=dict)
# 创建时间
add_time = Column(String, default=datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
# 最后执行时间
last_time = Column(String)
@staticmethod
@db_query
def get_enabled_workflows(db):
return db.query(Workflow).filter(Workflow.state != 'P').all()
@staticmethod
@db_query
def get_by_name(db, name: str):
return db.query(Workflow).filter(Workflow.name == name).first()
@staticmethod
@db_update
def update_state(db, wid: int, state: str):
db.query(Workflow).filter(Workflow.id == wid).update({"state": state})
return True
@staticmethod
@db_update
def start(db, wid: int):
db.query(Workflow).filter(Workflow.id == wid).update({
"state": 'R'
})
return True
@staticmethod
@db_update
def fail(db, wid: int, result: str):
db.query(Workflow).filter(and_(Workflow.id == wid, Workflow.state != "P")).update({
"state": 'F',
"result": result,
"last_time": datetime.now().strftime('%Y-%m-%d %H:%M:%S')
})
return True
@staticmethod
@db_update
def success(db, wid: int, result: Optional[str] = None):
db.query(Workflow).filter(and_(Workflow.id == wid, Workflow.state != "P")).update({
"state": 'S',
"result": result,
"run_count": Workflow.run_count + 1,
"last_time": datetime.now().strftime('%Y-%m-%d %H:%M:%S')
})
return True
@staticmethod
@db_update
def reset(db, wid: int, reset_count: Optional[bool] = False):
db.query(Workflow).filter(Workflow.id == wid).update({
"state": 'W',
"result": None,
"current_action": None,
"run_count": 0 if reset_count else Workflow.run_count,
})
return True
@staticmethod
@db_update
def update_current_action(db, wid: int, action_id: str, context: dict):
db.query(Workflow).filter(Workflow.id == wid).update({
"current_action": Workflow.current_action + f",{action_id}" if Workflow.current_action else action_id,
"context": context
})
return True
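A hedged sketch of how the new Workflow table is meant to be driven at the model level; note that fail() and success() filter on state != 'P', so a paused workflow is left untouched. The workflow name, timer value and the Engine import below are illustrative assumptions:

from sqlalchemy.orm import Session
from app.db import Engine                        # assumed: the app's SQLAlchemy engine
from app.db.models.workflow import Workflow

with Session(Engine) as db:
    Workflow(name="demo-flow", timer="0 3 * * *", actions=[], flows=[]).create(db)
    wf = Workflow.get_by_name(db, "demo-flow")
    Workflow.start(db, wf.id)                    # 'W' -> 'R'
    Workflow.update_state(db, wf.id, "P")        # pause
    Workflow.fail(db, wf.id, "ignored")          # no effect: the update is filtered with Workflow.state != 'P'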

View File

@@ -1,4 +1,4 @@
from typing import Any
from typing import Any, Optional
from app.db import DbOper
from app.db.models.plugindata import PluginData
@@ -24,7 +24,7 @@ class PluginDataOper(DbOper):
else:
PluginData(plugin_id=plugin_id, key=key, value=value).create(self._db)
def get_data(self, plugin_id: str, key: str = None) -> Any:
def get_data(self, plugin_id: str, key: Optional[str] = None) -> Any:
"""
获取插件数据
:param plugin_id: 插件id
@@ -38,7 +38,7 @@ class PluginDataOper(DbOper):
else:
return PluginData.get_plugin_data(self._db, plugin_id)
def del_data(self, plugin_id: str, key: str = None) -> Any:
def del_data(self, plugin_id: str, key: Optional[str] = None) -> Any:
"""
删除插件数据
:param plugin_id: 插件id

View File

@@ -1,5 +1,5 @@
from datetime import datetime
from typing import List, Tuple
from typing import List, Tuple, Optional
from app.db import DbOper
from app.db.models import SiteIcon
@@ -114,13 +114,15 @@ class SiteOper(DbOper):
"domain": domain,
"name": name,
"updated_day": current_day,
"updated_time": current_time
"updated_time": current_time,
"err_msg": payload.get("err_msg") or ""
})
# 按站点+天判断是否存在数据
siteuserdatas = SiteUserData.get_by_domain(self._db, domain=domain, workdate=current_day)
if siteuserdatas:
# 存在则更新
siteuserdatas[0].update(self._db, payload)
if not payload.get("err_msg"):
siteuserdatas[0].update(self._db, payload)
else:
# 不存在则插入
SiteUserData(**payload).create(self._db)
@@ -132,7 +134,7 @@ class SiteOper(DbOper):
"""
return SiteUserData.list(self._db)
def get_userdata_by_domain(self, domain: str, workdate: str = None) -> List[SiteUserData]:
def get_userdata_by_domain(self, domain: str, workdate: Optional[str] = None) -> List[SiteUserData]:
"""
获取站点用户数据
"""
@@ -171,7 +173,7 @@ class SiteOper(DbOper):
})
return True
def success(self, domain: str, seconds: int = None):
def success(self, domain: str, seconds: Optional[int] = None):
"""
站点访问成功
"""

View File

@@ -1,5 +1,5 @@
import time
from typing import Tuple, List
from typing import Tuple, List, Optional
from app.core.context import MediaInfo
from app.db import DbOper
@@ -20,21 +20,24 @@ class SubscribeOper(DbOper):
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id,
season=kwargs.get('season'))
kwargs.update({
"name": mediainfo.title,
"year": mediainfo.year,
"type": mediainfo.type.value,
"tmdbid": mediainfo.tmdb_id,
"imdbid": mediainfo.imdb_id,
"tvdbid": mediainfo.tvdb_id,
"doubanid": mediainfo.douban_id,
"bangumiid": mediainfo.bangumi_id,
"episode_group": mediainfo.episode_group,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"vote": mediainfo.vote_average,
"description": mediainfo.overview,
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
})
if not subscribe:
subscribe = Subscribe(name=mediainfo.title,
year=mediainfo.year,
type=mediainfo.type.value,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
poster=mediainfo.get_poster_image(),
backdrop=mediainfo.get_backdrop_image(),
vote=mediainfo.vote_average,
description=mediainfo.overview,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
**kwargs)
subscribe = Subscribe(**kwargs)
subscribe.create(self._db)
# 查询订阅
subscribe = Subscribe.exists(self._db,
@@ -45,7 +48,7 @@ class SubscribeOper(DbOper):
else:
return subscribe.id, "订阅已存在"
def exists(self, tmdbid: int = None, doubanid: str = None, season: int = None) -> bool:
def exists(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None, season: Optional[int] = None) -> bool:
"""
判断是否存在
"""
@@ -64,7 +67,7 @@ class SubscribeOper(DbOper):
"""
return Subscribe.get(self._db, rid=sid)
def list(self, state: str = None) -> List[Subscribe]:
def list(self, state: Optional[str] = None) -> List[Subscribe]:
"""
获取订阅列表
"""
@@ -83,22 +86,23 @@ class SubscribeOper(DbOper):
更新订阅
"""
subscribe = self.get(sid)
subscribe.update(self._db, payload)
if subscribe:
subscribe.update(self._db, payload)
return subscribe
def list_by_tmdbid(self, tmdbid: int, season: int = None) -> List[Subscribe]:
def list_by_tmdbid(self, tmdbid: int, season: Optional[int] = None) -> List[Subscribe]:
"""
获取指定tmdb_id的订阅
"""
return Subscribe.get_by_tmdbid(self._db, tmdbid=tmdbid, season=season)
def list_by_username(self, username: str, state: str = None, mtype: str = None) -> List[Subscribe]:
def list_by_username(self, username: str, state: Optional[str] = None, mtype: Optional[str] = None) -> List[Subscribe]:
"""
获取指定用户的订阅
"""
return Subscribe.list_by_username(self._db, username=username, state=state, mtype=mtype)
def list_by_type(self, mtype: str, days: int = 7) -> Subscribe:
def list_by_type(self, mtype: str, days: Optional[int] = 7) -> Subscribe:
"""
获取指定类型的订阅
"""
@@ -117,3 +121,16 @@ class SubscribeOper(DbOper):
kwargs.pop("id")
subscribe = SubscribeHistory(**kwargs)
subscribe.create(self._db)
def exist_history(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None, season: Optional[int] = None):
"""
判断是否存在订阅历史
"""
if tmdbid:
if season:
return True if SubscribeHistory.exists(self._db, tmdbid=tmdbid, season=season) else False
else:
return True if SubscribeHistory.exists(self._db, tmdbid=tmdbid) else False
elif doubanid:
return True if SubscribeHistory.exists(self._db, doubanid=doubanid) else False
return False
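For illustration, a sketch of the reworked add() plus the new exist_history() guard. The MediaInfo built by hand here is only a stand-in (real callers get it from the recognition chain), and the no-argument SubscribeOper() construction is an assumption:

from app.core.context import MediaInfo
from app.schemas.types import MediaType
from app.db.subscribe_oper import SubscribeOper   # assumed module path

oper = SubscribeOper()                             # assumed: DbOper falls back to a default session
media = MediaInfo()                                # illustrative stand-in for a recognized media item
media.title, media.year, media.type, media.tmdb_id = "Demo Show", "2025", MediaType.TV, 123456
if not oper.exist_history(tmdbid=media.tmdb_id, season=1):
    sid, msg = oper.add(media, season=1)           # the Subscribe row is now built purely from the kwargs dict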

View File

@@ -1,5 +1,5 @@
import time
from typing import Any, List
from typing import Any, List, Optional
from app.core.context import MediaInfo
from app.core.meta import MetaBase
@@ -27,7 +27,7 @@ class TransferHistoryOper(DbOper):
"""
return TransferHistory.list_by_title(self._db, title)
def get_by_src(self, src: str, storage: str = None) -> TransferHistory:
def get_by_src(self, src: str, storage: Optional[str] = None) -> TransferHistory:
"""
按源查询转移记录
:param src: 数据key
@@ -58,14 +58,15 @@ class TransferHistoryOper(DbOper):
})
TransferHistory(**kwargs).create(self._db)
def statistic(self, days: int = 7) -> List[Any]:
def statistic(self, days: Optional[int] = 7) -> List[Any]:
"""
统计最近days天的下载历史数量
"""
return TransferHistory.statistic(self._db, days)
def get_by(self, title: str = None, year: str = None, mtype: str = None,
season: str = None, episode: str = None, tmdbid: int = None, dest: str = None) -> List[TransferHistory]:
def get_by(self, title: Optional[str] = None, year: Optional[str] = None, mtype: Optional[str] = None,
season: Optional[str] = None, episode: Optional[str] = None, tmdbid: Optional[int] = None,
dest: Optional[str] = None) -> List[TransferHistory]:
"""
按类型、标题、年份、季集查询转移记录
"""
@@ -78,7 +79,7 @@ class TransferHistoryOper(DbOper):
episode=episode,
tmdbid=tmdbid)
def get_by_type_tmdbid(self, mtype: str = None, tmdbid: int = None) -> TransferHistory:
def get_by_type_tmdbid(self, mtype: Optional[str] = None, tmdbid: Optional[int] = None) -> TransferHistory:
"""
按类型、tmdb查询转移记录
"""
@@ -120,7 +121,7 @@ class TransferHistoryOper(DbOper):
def add_success(self, fileitem: FileItem, mode: str, meta: MetaBase,
mediainfo: MediaInfo, transferinfo: TransferInfo,
download_hash: str = None):
downloader: Optional[str] = None, download_hash: Optional[str] = None):
"""
新增转移成功历史记录
"""
@@ -143,13 +144,14 @@ class TransferHistoryOper(DbOper):
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
downloader=downloader,
download_hash=download_hash,
status=1,
files=transferinfo.file_list
)
def add_fail(self, fileitem: FileItem, mode: str, meta: MetaBase, mediainfo: MediaInfo = None,
transferinfo: TransferInfo = None, download_hash: str = None):
transferinfo: TransferInfo = None, downloader: Optional[str] = None, download_hash: Optional[str] = None):
"""
新增转移失败历史记录
"""
@@ -173,7 +175,9 @@ class TransferHistoryOper(DbOper):
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
downloader=downloader,
download_hash=download_hash,
episode_group=mediainfo.episode_group,
status=0,
errmsg=transferinfo.message or '未知错误',
files=transferinfo.file_list
@@ -188,6 +192,7 @@ class TransferHistoryOper(DbOper):
mode=mode,
seasons=meta.season,
episodes=meta.episode,
downloader=downloader,
download_hash=download_hash,
status=0,
errmsg="未识别到媒体信息"

View File

@@ -1,4 +1,4 @@
from typing import Optional
from typing import Optional, List
from fastapi import Depends, HTTPException
from sqlalchemy.orm import Session
@@ -51,6 +51,12 @@ class UserOper(DbOper):
用户管理
"""
def list(self) -> List[User]:
"""
获取用户列表
"""
return User.list(self._db)
def add(self, **kwargs):
"""
新增用户
@@ -67,27 +73,6 @@ class UserOper(DbOper):
def get_permissions(self, name: str) -> dict:
"""
获取用户权限
{
"admin": "管理员",
"usermanage": "用户管理",
"dashboard": "仪表板",
"ranking": "推荐榜单",
"resource": {
"search": "搜索站点资源",
"download": "下载站点资源",
},
"subscribe": {
"request": "提交订阅请求",
"autopass": "订阅请求自动批准"
"approve": "审批订阅请求",
"calendar": "查看订阅日历",
"manage": "管理所有订阅"
},
"downloading": {
"view": "查看正在下载任务",
"manager": "管理正在下载任务"
}
}
"""
user = User.get_by_name(self._db, name)
if user:
@@ -111,3 +96,16 @@ class UserOper(DbOper):
if settings:
return settings.get(key)
return None
def get_name(self, **kwargs) -> Optional[str]:
"""
根据绑定账号获取用户名称
"""
users = self.list()
for user in users:
user_setting = user.settings
if user_setting:
for k, v in kwargs.items():
if user_setting.get(k) == str(v):
return user.name
return None
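A small hedged example of the new get_name() helper, which walks every user's settings dict and returns the first user whose bound account matches; the telegram_userid key is assumed for illustration:

from app.db.user_oper import UserOper      # assumed module path

user_oper = UserOper()                      # assumed: default DB session
name = user_oper.get_name(telegram_userid=123456789)   # values are compared as strings internally
if name:
    print(f"bound MoviePilot user: {name}")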

app/db/workflow_oper.py (new file, 68 lines)
View File

@@ -0,0 +1,68 @@
from typing import List, Tuple, Optional
from app.db import DbOper
from app.db.models.workflow import Workflow
class WorkflowOper(DbOper):
"""
工作流管理
"""
def add(self, **kwargs) -> Tuple[bool, str]:
"""
新增工作流
"""
wf = Workflow(**kwargs)
if not wf.get_by_name(self._db, kwargs.get("name")):
wf.create(self._db)
return True, "新增工作流成功"
return False, "工作流已存在"
def get(self, wid: int) -> Workflow:
"""
查询单个工作流
"""
return Workflow.get(self._db, wid)
def list_enabled(self) -> List[Workflow]:
"""
获取启用的工作流列表
"""
return Workflow.get_enabled_workflows(self._db)
def get_by_name(self, name: str) -> Workflow:
"""
按名称获取工作流
"""
return Workflow.get_by_name(self._db, name)
def start(self, wid: int) -> bool:
"""
启动
"""
return Workflow.start(self._db, wid)
def success(self, wid: int, result: Optional[str] = None) -> bool:
"""
成功
"""
return Workflow.success(self._db, wid, result)
def fail(self, wid: int, result: str) -> bool:
"""
失败
"""
return Workflow.fail(self._db, wid, result)
def step(self, wid: int, action_id: str, context: dict) -> bool:
"""
步进
"""
return Workflow.update_current_action(self._db, wid, action_id, context)
def reset(self, wid: int, reset_count: bool = False) -> bool:
"""
重置
"""
return Workflow.reset(self._db, wid, reset_count=reset_count)
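A hedged end-to-end sketch of driving a workflow through the new Oper layer; the workflow name, timer and action id are illustrative:

from app.db.workflow_oper import WorkflowOper    # module added in this diff

oper = WorkflowOper()                             # assumed: default DB session
ok, msg = oper.add(name="nightly-check", timer="0 4 * * *", actions=[], flows=[])
workflow = oper.get_by_name("nightly-check")
if workflow:
    oper.start(workflow.id)                                   # 'W' -> 'R'
    oper.step(workflow.id, action_id="check_subscribes", context={})
    oper.success(workflow.id, result="done")                  # 'S', run_count += 1 unless the workflow was paused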

View File

@@ -1,4 +1,4 @@
from typing import Callable, Any
from typing import Callable, Any, Optional
from playwright.sync_api import sync_playwright, Page
from cf_clearance import sync_cf_retry, sync_stealth
@@ -16,15 +16,15 @@ class PlaywrightHelper:
"""
sync_stealth(page, pure=True)
page.goto(url)
return sync_cf_retry(page)
return sync_cf_retry(page)[0]
def action(self, url: str,
callback: Callable,
cookies: str = None,
ua: str = None,
proxies: dict = None,
headless: bool = False,
timeout: int = 30) -> Any:
cookies: Optional[str] = None,
ua: Optional[str] = None,
proxies: Optional[dict] = None,
headless: Optional[bool] = False,
timeout: Optional[int] = 30) -> Any:
"""
访问网页接收Page对象并执行操作
:param url: 网页地址
@@ -57,11 +57,11 @@ class PlaywrightHelper:
return None
def get_page_source(self, url: str,
cookies: str = None,
ua: str = None,
proxies: dict = None,
headless: bool = False,
timeout: int = 20) -> str:
cookies: Optional[str] = None,
ua: Optional[str] = None,
proxies: Optional[dict] = None,
headless: Optional[bool] = False,
timeout: Optional[int] = 20) -> Optional[str]:
"""
获取网页源码
:param url: 网页地址
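For completeness, a short hedged example of calling get_page_source with the now-Optional keyword arguments; the module path and argument values are assumptions for illustration:

from app.helper.browser import PlaywrightHelper    # assumed module path

helper = PlaywrightHelper()
html = helper.get_page_source("https://example.org",
                              ua="Mozilla/5.0",    # illustrative UA string
                              headless=True,
                              timeout=20)
if html is not None:
    print(len(html))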

View File

@@ -73,8 +73,8 @@ class CookieHelper:
url: str,
username: str,
password: str,
two_step_code: str = None,
proxies: dict = None) -> Tuple[Optional[str], Optional[str], str]:
two_step_code: Optional[str] = None,
proxies: Optional[dict] = None) -> Tuple[Optional[str], Optional[str], str]:
"""
获取站点cookie和ua
:param url: 站点地址

View File

@@ -5,6 +5,7 @@ from app import schemas
from app.core.context import MediaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey
from app.utils.system import SystemUtils
class DirectoryHelper:
@@ -48,16 +49,18 @@ class DirectoryHelper:
"""
return [d for d in self.get_library_dirs() if d.library_storage == "local"]
def get_dir(self, media: MediaInfo, storage: str = "local",
src_path: Path = None, dest_path: Path = None, fileitem: schemas.FileItem = None
def get_dir(self, media: MediaInfo, include_unsorted: Optional[bool] = False,
storage: Optional[str] = None, src_path: Path = None,
target_storage: Optional[str] = None, dest_path: Path = None
) -> Optional[schemas.TransferDirectoryConf]:
"""
根据媒体信息获取下载目录、媒体库目录配置
:param media: 媒体信息
:param storage: 存储类型
:param include_unsorted: 包含不整理目录
:param storage: 源存储类型
:param target_storage: 目标存储类型
:param src_path: 源目录,有值时直接匹配
:param dest_path: 目标目录,有值时直接匹配
:param fileitem: 文件项,使用文件路径匹配
"""
# 处理类型
if not media:
@@ -65,35 +68,46 @@ class DirectoryHelper:
# 电影/电视剧
media_type = media.type.value
dirs = self.get_dirs()
# 如果存在源目录,并源目录为任一下载目录的子目录时,则进行源目录匹配,否则,允许源目录按同盘优先的逻辑匹配
matching_dirs = [d for d in dirs if src_path.is_relative_to(d.download_path)] if src_path else []
# 根据是否有匹配的源目录,决定要考虑的目录集合
dirs_to_consider = matching_dirs if matching_dirs else dirs
# 已匹配的目录
matched_dirs: List[schemas.TransferDirectoryConf] = []
# 按照配置顺序查找
for d in dirs:
for d in dirs_to_consider:
# 没有启用整理的目录
if not d.monitor_type:
if not d.monitor_type and not include_unsorted:
continue
# 存储类型不匹配
# 存储类型不匹配
if storage and d.storage != storage:
continue
# 下载目录
download_path = Path(d.download_path)
# 媒体库目录
library_path = Path(d.library_path)
# 有源目录时,源目录不匹配下载目录
if src_path and not src_path.is_relative_to(download_path):
continue
# 有文件项时,文件项不匹配下载目录
if fileitem and not Path(fileitem.path).is_relative_to(download_path):
# 目标存储类型不匹配
if target_storage and d.library_storage != target_storage:
continue
# 有目标目录时,目标目录不匹配媒体库目录
if dest_path and not dest_path.is_relative_to(library_path):
if dest_path and dest_path != Path(d.library_path):
continue
# 目录类型为全部的,符合条件
if not d.media_type:
return d
matched_dirs.append(d)
continue
# 目录类型相等,目录类别为全部,符合条件
if d.media_type == media_type and not d.media_category:
return d
matched_dirs.append(d)
continue
# 目录类型相等,目录类别相等,符合条件
if d.media_type == media_type and d.media_category == media.category:
return d
matched_dirs.append(d)
continue
if matched_dirs:
if src_path:
# 优先源目录同盘
for matched_dir in matched_dirs:
matched_path = Path(matched_dir.download_path)
if SystemUtils.is_same_disk(matched_path, src_path):
return matched_dir
return matched_dirs[0]
return None
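A hedged sketch of the reworked get_dir() matching: candidates are first narrowed to directories whose download path contains src_path (when given), every matching directory is collected, and a candidate on the same disk as the source wins before falling back to the first match. The module path and the hand-built MediaInfo are assumptions for illustration:

from pathlib import Path
from app.core.context import MediaInfo
from app.schemas.types import MediaType
from app.helper.directory import DirectoryHelper    # assumed module path

media = MediaInfo()                                  # illustrative stand-in for a recognized movie
media.type = MediaType.MOVIE
helper = DirectoryHelper()
dir_conf = helper.get_dir(media,
                          storage="local",                     # source storage type
                          src_path=Path("/downloads/movies"),  # narrows candidates and drives same-disk priority
                          target_storage="local")              # new: the library-side storage must match too
if dir_conf:
    print(dir_conf.download_path, "->", dir_conf.library_path)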

View File

@@ -24,4 +24,3 @@ class DisplayHelper(metaclass=Singleton):
logger.info("正在停止虚拟显示...")
self._display.stop()
logger.info("虚拟显示已停止")

View File

@@ -129,7 +129,7 @@ def doh_query_json(resolver: str, host: str) -> Optional[str]:
if response.status != 200:
return None
response_body = response.read().decode("utf-8")
logger.debug("<== body: %s", response_body)
logger.debug("<== body: %s", response_body)
answer = json.loads(response_body)["Answer"]
return answer[0]["data"]
except Exception as e:

Some files were not shown because too many files have changed in this diff.