Compare commits

...

313 Commits

Author SHA1 Message Date
jxxghp
ff8a9dc8c7 v1.1.3
- Fixed missing records when re-organizing from history
- Improved database session handling
- Improved menu permissions for regular users
- Polished file management UI details
- Adjusted dashboard display content
- Added a filter-rule test feature to shortcuts
- Added retry when scraped image downloads fail
- Playback speed-limit plugin supports manually configured unthrottled address ranges
2023-09-04 21:21:04 +08:00
jxxghp
4ee7daa673 Merge remote-tracking branch 'origin/main' 2023-09-04 20:40:28 +08:00
jxxghp
aca1673ee3 fix db session 2023-09-04 20:40:17 +08:00
jxxghp
87ece98471 Merge pull request #435 from thsrite/main 2023-09-04 20:24:39 +08:00
thsrite
4c16cd7bfb fix b7d2168f 2023-09-04 20:20:42 +08:00
jxxghp
712af24a72 fix 2023-09-04 20:13:16 +08:00
jxxghp
b7d2168f8e fix #434 2023-09-04 19:30:06 +08:00
jxxghp
65ad7123f9 fix #419 2023-09-04 18:08:11 +08:00
jxxghp
ce42e48b37 fix login api 2023-09-04 17:48:44 +08:00
jxxghp
45b53da056 Merge pull request #428 from thsrite/main 2023-09-04 11:47:52 +08:00
thsrite
70f93e02e4 fix #365 speed-limit plugin adds unthrottled address ranges; LAN IPs are unthrottled by default when unset 2023-09-04 11:40:19 +08:00
jxxghp
e4b63eacae add system apis 2023-09-04 11:07:30 +08:00
jxxghp
96f17e2bc2 fix #426 retry failed scraped image downloads 2023-09-04 10:14:05 +08:00
jxxghp
7eb77875f1 fix reconnection mechanism 2023-09-03 21:59:18 +08:00
jxxghp
bbc27bbe19 Update README.md 2023-09-03 21:39:47 +08:00
jxxghp
3691b2a10b add filter-rule test API 2023-09-03 18:36:06 +08:00
jxxghp
08a3d02daf fix deletion order when re-organizing 2023-09-03 17:37:06 +08:00
jxxghp
57abc7816b Merge pull request #420 from thsrite/main 2023-09-03 16:30:01 +08:00
thsrite
69c277777e fix sign-in schedule restart bug 2023-09-03 16:23:38 +08:00
jxxghp
5f88fe81e3 fix episode handling in manual organization 2023-09-03 14:38:24 +08:00
jxxghp
d043dbd89e v1.1.2 2023-09-03 14:22:26 +08:00
jxxghp
53a2887717 fix Blu-ray disc scraping 2023-09-03 14:14:41 +08:00
jxxghp
28d181db44 fix #403 Blu-ray disc transfer failure 2023-09-03 13:40:39 +08:00
jxxghp
7d3f43e488 fix use an independent database session for media library sync 2023-09-03 13:11:42 +08:00
jxxghp
62df3f7c84 add file recognition API 2023-09-03 13:04:08 +08:00
jxxghp
1338a061c4 Update __init__.py 2023-09-03 11:14:36 +08:00
jxxghp
4f26f0607a Update transfer.py 2023-09-03 11:13:42 +08:00
jxxghp
b72aa314b6 tolerate abnormal emby/jellyfin data 2023-09-03 09:50:05 +08:00
jxxghp
082ec8d718 fix #340 frontend log location adjusted
fix #239 add transfer blocked-word settings
2023-09-03 09:29:38 +08:00
jxxghp
e785f20c5a fix #352 delete previously organized files when re-organizing from history 2023-09-03 08:40:26 +08:00
jxxghp
0050a96faf fix #406 support qBittorrent automatic category management mode 2023-09-03 07:56:20 +08:00
jxxghp
31b460f89f Merge pull request #408 from amtoaer/fix_subscribe_lack 2023-09-03 07:16:58 +08:00
jxxghp
89cd2bbadc Merge pull request #405 from thsrite/main 2023-09-03 07:15:16 +08:00
amtoaer
7d19467b6c fix: subscription episode count not refreshing when a custom start episode is set 2023-09-03 01:53:47 +08:00
thsrite
97667249d5 Merge branch 'jxxghp:main' into main 2023-09-02 23:49:23 +08:00
thsrite
2e2472a387 fix allow more processing time before directory-monitoring summary messages 2023-09-02 23:48:48 +08:00
jxxghp
4b10028690 fix update 2023-09-02 22:27:09 +08:00
jxxghp
e0a492d8ab v1.1.1 2023-09-02 22:05:41 +08:00
jxxghp
52e89747b7 feat send a message when TV episodes cannot be recognized 2023-09-02 21:38:01 +08:00
jxxghp
59b947fa65 fix wrong transfer method recorded by directory monitoring 2023-09-02 21:22:03 +08:00
jxxghp
212e2f1287 Merge pull request #399 from thsrite/main 2023-09-02 18:31:46 +08:00
thsrite
685be88c46 fix add failure history records to directory monitoring 2023-09-02 18:28:12 +08:00
jxxghp
8297b3e199 Update scheduler.py 2023-09-02 18:08:34 +08:00
jxxghp
75c5844d64 Merge pull request #397 from DDS-Derek/main 2023-09-02 17:53:24 +08:00
DDSRem
ad5ca69bbb feat: frontend checks that the version number was fetched before downloading 2023-09-02 17:49:14 +08:00
jxxghp
6befa35a26 Merge pull request #395 from WithdewHua/fix-torrentremover 2023-09-02 16:29:41 +08:00
WithdewHua
4fec6aede4 fix: remove redundant file-size unit from auto torrent-removal plugin notifications 2023-09-02 16:24:09 +08:00
jxxghp
68a3bc8732 Merge pull request #394 from amtoaer/main 2023-09-02 16:06:36 +08:00
amtoaer
ba2745266a fix: percentages in messages multiplied by an extra 100 2023-09-02 16:03:28 +08:00
jxxghp
2fcf5039ff Merge pull request #392 from DDS-Derek/main 2023-09-02 14:44:43 +08:00
DDSRem
b37dc4471e fix: update env 2023-09-02 14:43:33 +08:00
jxxghp
ffc5c48830 Update __init__.py 2023-09-02 13:34:17 +08:00
jxxghp
dbe3701032 Merge pull request #385 from DDS-Derek/main 2023-09-02 11:16:38 +08:00
DDSRem
751d405aac fix: update curl 2023-09-02 11:15:44 +08:00
jxxghp
9224169f31 Merge pull request #384 from DDS-Derek/main 2023-09-02 10:54:44 +08:00
DDSRem
62c1a924e8 feat: dev update 2023-09-02 10:52:19 +08:00
jxxghp
9fdd838b7a Merge pull request #368 from DDS-Derek/main 2023-09-02 08:40:57 +08:00
DDSDerek
510911b7a3 feat: add discussions 2023-09-02 08:39:52 +08:00
DDSDerek
36e68f44dc fix: delete discussion 2023-09-02 08:38:39 +08:00
jxxghp
374e633ca7 fix adjust database sessions #330 2023-09-02 08:18:01 +08:00
jxxghp
ec8c9c996a fix #356 statistics issue for the 猫站 site 2023-09-02 07:57:44 +08:00
jxxghp
3c753686c6 fix #359 periodically auto-refresh TMDB data for subscriptions 2023-09-02 07:33:27 +08:00
jxxghp
5f4580282e fix #362 restore secondary categories for the standalone anime directory 2023-09-02 07:11:21 +08:00
jxxghp
5d9e0b699c fix transfer history records missing timestamps 2023-09-02 07:09:38 +08:00
jxxghp
5debfca89a fix #361
fix #357
2023-09-01 22:42:58 +08:00
jxxghp
3eeb9e299a Merge pull request #360 from thsrite/main 2023-09-01 21:16:28 +08:00
thsrite
9c4aba10bf Update downloadhistory_oper.py 2023-09-01 21:12:32 +08:00
jxxghp
7b37d86527 fix #358 2023-09-01 18:28:05 +08:00
jxxghp
55c061176d fix #358 2023-09-01 18:24:43 +08:00
jxxghp
5dc11b07e3 fix #342 2023-09-01 17:30:21 +08:00
jxxghp
0bb67824bd Merge remote-tracking branch 'origin/main' 2023-09-01 15:00:37 +08:00
jxxghp
ac1dcbed3c fix try to reduce session usage 2023-09-01 15:00:27 +08:00
jxxghp
d0a586a46b Update transfer.py 2023-09-01 12:07:37 +08:00
jxxghp
fa8dcea7da Update system.py 2023-09-01 12:07:04 +08:00
jxxghp
76a94a80ef Merge pull request #354 from thsrite/main 2023-09-01 11:51:06 +08:00
thsrite
9139c1297e fix skip auto-search within one minute of subscription creation to leave time for editing 2023-09-01 11:48:31 +08:00
jxxghp
4dba739d54 fix bug 2023-09-01 11:35:46 +08:00
jxxghp
fe80f86518 fix 2023-09-01 11:05:17 +08:00
jxxghp
7307105dcd - Added site support for Rousi, 蝴蝶, and OpenCD
- Added the documentary type to movie search
- Support configuring a self-hosted OCR service address
- Downloader monitoring and manual organization now record history per file
- Added a downloader file-sync plugin that imports task files downloaded outside MoviePilot into the database, so deleting files can also delete the download tasks
- Organization history supports batch operations
- Playback speed-limit plugin supports smart throttling
- Scraped posters prefer TMDB images
- Fixed 憨憨 (hhanclub) site statistics
- Fixed filter rules that could not be cleared
- Fixed processed-state calculation for custom subscriptions
- Fixed overly long Slack messages failing to send
- Fixed a two-level directory appearing with the standalone anime directory
- Adjusted dark theme UI colors
2023-09-01 11:01:13 +08:00
jxxghp
1c7715d94c Update transfer.py 2023-09-01 07:35:27 +08:00
jxxghp
4dd2d6d307 Update __init__.py 2023-09-01 07:34:12 +08:00
jxxghp
7cfd05a7a5 fix notification title calculation 2023-09-01 07:29:49 +08:00
jxxghp
8eab38c91e fix improve notification title calculation for directory monitoring 2023-09-01 07:16:39 +08:00
jxxghp
6ad78fa875 add episode formatting method 2023-08-31 21:29:28 +08:00
jxxghp
781cffb255 fix bug 2023-08-31 20:12:38 +08:00
jxxghp
2a7fc7bbe6 Merge pull request #350 from thsrite/main
feat playback speed-limit plugin supports smart throttling and unthrottled addresses
2023-08-31 19:48:48 +08:00
thsrite
f65da9b202 fix remove unthrottled address config 2023-08-31 19:48:00 +08:00
thsrite
0cf11db76a fix auto throttling 2023-08-31 19:33:16 +08:00
thsrite
37bada89ef Merge branch 'main' of https://github.com/thsrite/MoviePilot into main 2023-08-31 19:26:24 +08:00
thsrite
38d6467740 fix playback speed limiting 2023-08-31 19:26:18 +08:00
thsrite
3bc639bcab fix playback speed limiting 2023-08-31 19:08:50 +08:00
thsrite
7baa07474c Update __init__.py 2023-08-31 19:01:14 +08:00
jxxghp
8e304f77b4 fix ui 2023-08-31 19:01:10 +08:00
thsrite
93ec8df713 Merge branch 'jxxghp:main' into main 2023-08-31 17:07:06 +08:00
thsrite
8854acf908 Merge remote-tracking branch 'origin/main' 2023-08-31 17:05:55 +08:00
thsrite
143ffd18b7 feat speed-limit plugin supports smart throttling 2023-08-31 17:05:47 +08:00
jxxghp
212f9c250f fix #343 2023-08-31 16:38:46 +08:00
jxxghp
fa62943679 fix ui 2023-08-31 16:28:18 +08:00
jxxghp
3f95962ced Merge pull request #347 from thsrite/main
fix downloader torrents exclude cross-seeds; prevent duplicate handling of MoviePilot download tasks
2023-08-31 16:23:49 +08:00
jxxghp
e68aab423e Merge branch 'main' into main 2023-08-31 16:23:42 +08:00
jxxghp
49d51ca13e fix 2023-08-31 16:20:44 +08:00
jxxghp
f6b5994fe5 fix plugin manager 2023-08-31 16:13:19 +08:00
thsrite
8ad75e93a9 fix downloader task-sync plugin supports scheduled runs 2023-08-31 15:51:39 +08:00
jxxghp
796133e26f fix SyncDownloadFiles 2023-08-31 15:50:46 +08:00
thsrite
8414c5df0a fix downloader torrents exclude cross-seeds; prevent duplicate handling of MoviePilot download tasks 2023-08-31 15:31:35 +08:00
jxxghp
1fcdf633ba Merge pull request #345 from thsrite/main
feat downloader torrent-sync plugin && fix sync-delete plugin
2023-08-31 15:11:58 +08:00
jxxghp
b503dee631 add opencd 2023-08-31 15:08:18 +08:00
thsrite
0837950334 fix friendly tip for the downloader file-sync plugin 2023-08-31 15:07:30 +08:00
thsrite
95787f6ef6 fix set last_sync_time per downloader 2023-08-31 15:02:05 +08:00
thsrite
3943a7a793 fix NAStool data-sync plugin 2023-08-31 14:47:21 +08:00
thsrite
9f0bd2b933 fix sign-in plugin 2023-08-31 14:43:41 +08:00
thsrite
053c89bf9f fix sync-delete plugin 2023-08-31 14:37:10 +08:00
thsrite
8739a67679 feat downloader torrent-sync plugin 2023-08-31 14:33:39 +08:00
jxxghp
cb41086fa3 fix directory monitoring queries download_hash from the table 2023-08-31 13:56:51 +08:00
jxxghp
84cbeaada2 fix bug 2023-08-31 13:52:48 +08:00
jxxghp
344742871c fix bug 2023-08-31 12:45:33 +08:00
jxxghp
95df1c4c1c fix bug 2023-08-31 12:28:30 +08:00
jxxghp
593211c037 feat record the file list when downloading 2023-08-31 08:37:00 +08:00
jxxghp
f80e5739ca feat periodic reconnect checks for media servers/downloaders 2023-08-31 08:15:43 +08:00
jxxghp
17fcd77b8e fix 2023-08-31 07:14:57 +08:00
jxxghp
f0666986f0 fix 2023-08-30 23:59:27 +08:00
jxxghp
854fafd880 fix 2023-08-30 23:07:48 +08:00
jxxghp
bdd45304c8 fix 2023-08-30 22:50:38 +08:00
jxxghp
c372d0451e fix 2023-08-30 22:40:36 +08:00
jxxghp
38eff64c95 need fix 2023-08-30 22:01:07 +08:00
jxxghp
9326676bb6 - Added "now playing" recommendations
- Added site support for Rousi and 蝴蝶
- Added the documentary type to movie search
- Fixed 憨憨 (hhanclub) site statistics
- Fixed filter rules that could not be cleared
- Fixed processed-state calculation for custom subscriptions
- Fixed overly long Slack messages failing to send
2023-08-30 19:39:36 +08:00
jxxghp
7df1d807bb fix README 2023-08-30 19:15:46 +08:00
jxxghp
cce543274e fix Ocr Host 2023-08-30 19:00:48 +08:00
jxxghp
3b7c1fed74 fix #283 2023-08-30 17:32:59 +08:00
jxxghp
e0dfbc213a fix #283 2023-08-30 17:09:49 +08:00
jxxghp
d76fa9bb00 fix #324 2023-08-30 16:56:49 +08:00
jxxghp
e59a498826 fix #271 2023-08-30 16:38:41 +08:00
jxxghp
e6452d68bb fix #326 2023-08-30 16:14:21 +08:00
jxxghp
0d830b237b fix #336 2023-08-30 15:51:01 +08:00
jxxghp
470ebb7b79 Merge remote-tracking branch 'origin/main' 2023-08-30 15:46:13 +08:00
jxxghp
a6819c08bf fix #286 2023-08-30 15:46:04 +08:00
jxxghp
16ba4587e1 Merge pull request #338 from thsrite/main 2023-08-30 15:37:00 +08:00
jxxghp
911651a5f7 Merge remote-tracking branch 'origin/main' 2023-08-30 15:31:21 +08:00
jxxghp
3f94f5f709 fix site statistics UI 2023-08-30 15:31:11 +08:00
jxxghp
16289d86b6 fix hhanclub statistics 2023-08-30 14:51:55 +08:00
thsrite
17450c7c70 fix get_state of the 优选 (preference) plugin 2023-08-30 14:00:20 +08:00
jxxghp
eac9fc02fa Merge pull request #333 from thsrite/main 2023-08-30 12:09:16 +08:00
thsrite
1a026ffb12 fix plugins 2023-08-30 12:03:52 +08:00
thsrite
85477a4bd3 fix #184 2023-08-30 10:26:24 +08:00
jxxghp
f8221bb526 Merge remote-tracking branch 'origin/main' 2023-08-30 08:29:00 +08:00
jxxghp
85a581f0cd feat add "now playing" to recommendations
fix Douban search API
2023-08-30 08:28:37 +08:00
jxxghp
ae7b48ad9f Merge pull request #325 from DDS-Derek/main 2023-08-29 22:40:58 +08:00
jxxghp
59907af4f4 Create LICENSE 2023-08-29 22:35:46 +08:00
DDSRem
e63f52bee5 feat: optimize image size 2023-08-29 22:20:18 +08:00
jxxghp
b9b8b86019 fix build 2023-08-29 19:47:42 +08:00
jxxghp
bfca8a52d6 fix build 2023-08-29 19:44:40 +08:00
jxxghp
99ccbfef22 Merge pull request #320 from thsrite/main 2023-08-29 19:15:31 +08:00
thsrite
5e2f4b413d fix 2b462a1b 2023-08-29 18:53:31 +08:00
jxxghp
a0ec38a6a9 Merge remote-tracking branch 'origin/main' 2023-08-29 17:14:56 +08:00
jxxghp
eae89b2d36 fix #318 2023-08-29 17:14:45 +08:00
jxxghp
e5926a489d Merge pull request #316 from thsrite/main 2023-08-29 15:49:41 +08:00
thsrite
8acfde7906 fix sign-in plugin 2023-08-29 15:46:20 +08:00
jxxghp
24a164f47e v1.0.9 2023-08-29 15:15:24 +08:00
jxxghp
72fbbffa02 Merge pull request #315 from thsrite/main
fix site sign-in plugin supports simulated-login only
2023-08-29 14:04:57 +08:00
thsrite
95a87f3e33 feat site sign-in plugin supports simulated-login only 2023-08-29 13:49:38 +08:00
jxxghp
55206ea092 fix #299 strip special characters when searching 2023-08-29 12:29:18 +08:00
jxxghp
c138cda735 fix #300 2023-08-29 12:22:14 +08:00
jxxghp
d0a92531ac fix #301
fix #303
2023-08-29 12:11:25 +08:00
jxxghp
96fc32efd0 fix #308 missing-episode calculation error 2023-08-29 11:41:30 +08:00
jxxghp
a9a0acc091 fix #312 prefer movie matches when there is no year and no season/episode 2023-08-29 11:12:55 +08:00
jxxghp
fa6f2c01e0 fix #313 subscription total episode count not applied when checking local files 2023-08-29 10:48:27 +08:00
jxxghp
05a0026ea4 fix #306 2023-08-29 08:18:34 +08:00
jxxghp
8f352c23c8 Update __init__.py 2023-08-28 22:58:09 +08:00
jxxghp
8bc883b621 fix 2023-08-28 19:04:37 +08:00
jxxghp
6a34c7196c Merge pull request #307 from thsrite/main 2023-08-28 14:53:26 +08:00
thsrite
58ded2ef5e feat per-subscription site configuration 2023-08-28 13:23:56 +08:00
jxxghp
2b462a1b9c fix #305 2023-08-28 13:04:18 +08:00
jxxghp
a6d0504900 Merge pull request #305 from thsrite/main
feat top-level anime category && fix bugs
2023-08-28 12:54:55 +08:00
thsrite
7717afab69 fix condition for the top-level anime category 2023-08-28 12:50:47 +08:00
jxxghp
683ba4cfad feat manual organization supports auto-recognition batch processing with progress display 2023-08-28 12:50:21 +08:00
jxxghp
921783d6bb fix #304 add subscription search switch, off by default 2023-08-28 11:43:55 +08:00
thsrite
b7e9e8ee21 feat top-level anime category 2023-08-28 10:01:14 +08:00
thsrite
dadad74085 fix default to season 1 when episode filenames have no season 2023-08-28 10:00:54 +08:00
thsrite
e405c98bae fix qBittorrent downloads files in order 2023-08-28 10:00:29 +08:00
jxxghp
9d4bec7d81 fix bug 2023-08-28 08:30:39 +08:00
jxxghp
d6a73d6017 Merge pull request #298 from thsrite/main 2023-08-27 20:40:15 +08:00
thsrite
b4a780aba7 fix #292 2023-08-27 20:30:54 +08:00
thsrite
f15f98fcfc fix wrong keywords matched by later sign-ins after the first full daily sign-in 2023-08-27 20:20:42 +08:00
jxxghp
4bb8b01301 Merge pull request #296 from lightolly/dev/20230827 2023-08-27 18:47:25 +08:00
olly
aa8cb889f8 fix: Transmission download speed display 2023-08-27 18:22:34 +08:00
jxxghp
9e31c53fa5 Merge pull request #291 from DDS-Derek/main 2023-08-27 13:02:11 +08:00
DDSRem
4b23f3f076 fix: repeat install pysocks 2023-08-27 13:01:18 +08:00
DDSRem
52fac09021 fix: update success message 2023-08-27 12:38:03 +08:00
DDSRem
bb67e902c5 feat: improve restart-update logic
Install dependencies first, then replace files, so a failed dependency install cannot prevent startup
2023-08-27 12:36:48 +08:00
DDSRem
6206c5f4a3 fix: code cleanup 2023-08-27 12:29:55 +08:00
DDSRem
de3d3de411 feat: add proxy support for dependency installation 2023-08-27 12:21:12 +08:00
jxxghp
91896946d8 fix file management list icons 2023-08-27 10:31:06 +08:00
jxxghp
cc545490cd fix reinstall dependencies on auto update 2023-08-27 09:49:42 +08:00
jxxghp
4cfa051dfc v1.0.8 2023-08-27 09:44:15 +08:00
jxxghp
41a45b1a8d add one-click upgrade script for the latest dev code 2023-08-27 09:12:28 +08:00
jxxghp
66c7ca0b96 fix #272 support SOCKS5 proxy 2023-08-27 08:42:52 +08:00
jxxghp
214a766d7d fix #284 #273 custom total episode count not taking effect 2023-08-27 08:34:02 +08:00
jxxghp
310dd7c229 fix API 2023-08-27 08:18:31 +08:00
jxxghp
4b91510695 fix #267 allow movie year matching to vary by ±1 year 2023-08-27 07:48:16 +08:00
jxxghp
f52deb3ff2 fix #285 2023-08-27 07:44:40 +08:00
jxxghp
9be9006013 fix manual organization API 2023-08-26 23:51:48 +08:00
jxxghp
fc2312a045 feat manual organization API 2023-08-26 22:47:41 +08:00
jxxghp
c593f6423c fix rename endpoint 2023-08-26 19:58:05 +08:00
jxxghp
200e5ff027 feat APIs for file download, image reading, etc. 2023-08-26 19:27:01 +08:00
jxxghp
d7f2bbb121 Merge pull request #281 from thsrite/main 2023-08-26 15:01:29 +08:00
jxxghp
f4a1f420c5 feat file management API 2023-08-26 14:31:05 +08:00
thsrite
ed8e02bb38 fix 2023-08-26 11:20:41 +08:00
thsrite
4049468444 Merge remote-tracking branch 'origin/main' into main 2023-08-26 10:51:26 +08:00
thsrite
f8d5e3f438 fix wrong episode count in directory-monitoring notifications 2023-08-26 10:51:12 +08:00
jxxghp
fc50540ab1 Merge pull request #274 from thsrite/main 2023-08-25 21:32:09 +08:00
thsrite
624365542c fix directory monitoring check 2023-08-25 21:04:18 +08:00
jxxghp
bb93919707 Merge remote-tracking branch 'origin/main' 2023-08-25 17:05:47 +08:00
jxxghp
3acb2b254c fix #270 total episode comparison error when a start episode is set 2023-08-25 17:05:31 +08:00
jxxghp
ff900c5d01 Merge pull request #266 from thsrite/main 2023-08-25 16:06:04 +08:00
thsrite
8171124503 feat configurable users for interactive-search auto-download 2023-08-25 14:27:19 +08:00
jxxghp
dbd858b27d fix requirements 2023-08-25 13:42:36 +08:00
jxxghp
df5337947c v1.0.7 2023-08-25 12:48:01 +08:00
jxxghp
ddf6f5c0b6 feat split torrent cache into a standalone module 2023-08-25 12:44:59 +08:00
jxxghp
d879e54bb7 fix hhanclub 2023-08-25 12:24:25 +08:00
jxxghp
7666fa6db3 Merge pull request #260 from developer-wlj/wlj0807 2023-08-25 11:56:40 +08:00
jxxghp
cef33d370a fix #68 modify TVDB module to support proxies 2023-08-25 11:34:41 +08:00
jxxghp
76cd4048e3 fix #241 2023-08-24 21:11:13 +08:00
jxxghp
6505aa9efb fix Blu-ray disc filtering 2023-08-24 20:43:51 +08:00
mayun110
81a29d3604 fix #233 2023-08-24 20:31:32 +08:00
jxxghp
86d7dceb84 fix #258 2023-08-24 20:12:14 +08:00
jxxghp
5775accd35 fix Blu-ray disc transfer 2023-08-24 17:25:25 +08:00
jxxghp
fda8e3fdb6 fix moderately extend the monitoring message send interval 2023-08-24 17:18:42 +08:00
jxxghp
3f72f89b15 fix README.md 2023-08-24 17:01:22 +08:00
jxxghp
6727b65ed4 fix service address 2023-08-24 16:46:26 +08:00
jxxghp
583a04167a Merge pull request #254 from thsrite/main
fix send directory-monitoring transfer messages as one batch
2023-08-24 13:54:23 +08:00
jxxghp
6fc9bd4ea0 fix hhanclub
add hudbt
2023-08-24 13:45:34 +08:00
thsrite
1361ed1a16 fix directory monitoring target_path 2023-08-24 13:29:04 +08:00
thsrite
2781ed2ae1 fix episode in merged messages 2023-08-24 13:11:46 +08:00
thsrite
dd9258dc42 fix send directory-monitoring transfer messages as one batch 2023-08-24 13:07:31 +08:00
jxxghp
7c39a99e60 fix #178 Blu-ray disc transfer 2023-08-24 11:40:28 +08:00
jxxghp
96a30e8e24 fix 2023-08-24 11:12:09 +08:00
jxxghp
004047b6bb Revert PR #233 2023-08-24 11:01:36 +08:00
jxxghp
10ee8d33fa fix file transfer bug 2023-08-24 10:34:38 +08:00
jxxghp
1bbb92d92b fix log 2023-08-24 10:19:52 +08:00
jxxghp
c246c036c9 fix api bug 2023-08-24 10:12:45 +08:00
jxxghp
b435b84782 fix bug 2023-08-24 09:11:02 +08:00
jxxghp
9607c398ff v1.0.6 2023-08-24 08:35:15 +08:00
jxxghp
2e2ce32c54 feat support searching by IMDb ID 2023-08-24 08:34:29 +08:00
jxxghp
4298e36d74 feat prefer newer releases when titles collide 2023-08-23 22:01:19 +08:00
jxxghp
e3a29178b6 feat prefer newer releases when titles collide 2023-08-23 21:52:03 +08:00
jxxghp
613a4220d7 fix logging 2023-08-23 21:21:56 +08:00
jxxghp
91b3fe5b1d fix bug 2023-08-23 19:51:22 +08:00
jxxghp
8bb4db227a color logging 2023-08-23 19:09:15 +08:00
jxxghp
b82f232642 color logging 2023-08-23 18:59:33 +08:00
jxxghp
62c92820f0 Merge remote-tracking branch 'origin/main' 2023-08-23 18:57:23 +08:00
jxxghp
80bb49776a color logging 2023-08-23 18:57:10 +08:00
jxxghp
cad7687de6 Update servarr.py 2023-08-23 18:22:08 +08:00
jxxghp
f0a680abc6 fix logging 2023-08-23 15:49:02 +08:00
jxxghp
318ba9816b fix log level 2023-08-23 13:46:19 +08:00
jxxghp
89ff7a4603 fix log level 2023-08-23 13:42:09 +08:00
jxxghp
4586a0c1fe fix bug 2023-08-23 12:54:38 +08:00
jxxghp
2682a80815 fix transfer history 2023-08-23 12:50:08 +08:00
jxxghp
6f159958a1 fix bugs 2023-08-23 12:27:54 +08:00
jxxghp
d59ed1e160 Merge pull request #233 from developer-wlj/wlj0807 2023-08-23 11:38:45 +08:00
jxxghp
66a1f25465 feat downloader monitoring supports transferring collections 2023-08-23 08:47:03 +08:00
jxxghp
e5e33d4486 fix auto update 2023-08-23 07:10:43 +08:00
jxxghp
b77c17a999 fix RSS subscription plugin 2023-08-23 06:55:14 +08:00
mayun110
e698e30826 fix #231 temp directory issue and duplicate file-transfer-failure notifications 2023-08-22 23:48:08 +08:00
jxxghp
e448cafb21 fix plugins starting twice 2023-08-22 21:02:35 +08:00
jxxghp
45faf0cf18 fix text 2023-08-22 20:15:58 +08:00
jxxghp
91e3788b73 v1.0.5 2023-08-22 18:03:45 +08:00
jxxghp
a890b4f01d fix trackers 2023-08-22 17:57:52 +08:00
jxxghp
c958e0e458 fix TorrentTransfer, supplement tracker via downloader API 2023-08-22 17:30:54 +08:00
jxxghp
b831d71bf7 fix multi-notification bug 2023-08-22 13:43:10 +08:00
jxxghp
0cc104ef11 Update __init__.py 2023-08-22 13:20:36 +08:00
jxxghp
b9c441108a Update __init__.py 2023-08-22 13:18:21 +08:00
jxxghp
4bdacf7ac1 Merge pull request #224 from yubanmeiqin9048/main-1 2023-08-22 12:34:21 +08:00
jxxghp
7435b7c702 feat add command to clear the TMDB cache 2023-08-22 12:32:48 +08:00
yubanmeiqin9048
42c7371d16 fix build 2023-08-22 12:21:31 +08:00
jxxghp
afe5ee9abb fix notifications not sent through multiple channels 2023-08-22 11:39:03 +08:00
jxxghp
14c0063e7c fix hhanclub 2023-08-22 10:50:47 +08:00
jxxghp
064cf4c5c3 fix build: download the latest frontend Release instead of the same-version Release 2023-08-22 10:44:09 +08:00
jxxghp
c9452d29c1 v1.0.4 2023-08-22 08:33:47 +08:00
jxxghp
781de29591 fix database connection reuse 2023-08-22 08:13:44 +08:00
jxxghp
a202b5efdd fix #215 2023-08-22 07:00:00 +08:00
jxxghp
f02ac2eaef fix 2023-08-21 18:05:17 +08:00
jxxghp
c82ab161d0 fix TorrentTransfer 2023-08-21 17:58:27 +08:00
jxxghp
538c20ee56 fix hhanclub
fix #206 directory monitoring move mode deletes empty directories
2023-08-21 17:48:29 +08:00
jxxghp
995a672bf3 fix hhanclub 2023-08-21 17:35:22 +08:00
jxxghp
7acbd0904b fix tmdbapi 2023-08-21 16:44:42 +08:00
jxxghp
3b95453363 Merge pull request #210 from thsrite/main 2023-08-21 13:51:02 +08:00
thsrite
bd91ea5c50 Format code 2023-08-21 13:24:05 +08:00
thsrite
f387846732 fix download_hash backfill logic 2023-08-21 13:18:42 +08:00
thsrite
7b0ba6112e fix 2023-08-21 13:12:28 +08:00
thsrite
6f927be081 fix 2023-08-21 13:11:19 +08:00
thsrite
1e7f5bf04e fix try to backfill download_hash for downloads made outside MoviePilot 2023-08-21 13:09:17 +08:00
jxxghp
6ee934a745 Merge remote-tracking branch 'origin/main' 2023-08-21 12:33:00 +08:00
jxxghp
0d626ad4b8 fix H265|HEVC H264|AVC 2023-08-21 12:32:48 +08:00
jxxghp
3379a68476 Merge pull request #208 from thsrite/main 2023-08-21 12:02:34 +08:00
thsrite
6afdfa3b97 fix 2023-08-21 11:55:32 +08:00
thsrite
6337a72b0f fix #204 2023-08-21 11:02:38 +08:00
thsrite
4135df693c fix #202 2023-08-21 10:41:25 +08:00
jxxghp
75bd4d4b77 fix bug where full seasons were not preferred for download 2023-08-21 08:21:42 +08:00
jxxghp
5d9b45a2f8 Merge pull request #201 from thsrite/main 2023-08-21 07:51:24 +08:00
thsrite
2c4ef1f3a9 fix 2023-08-20 22:24:37 +08:00
thsrite
1ad39faf24 fix cookiecloud 2023-08-20 21:54:02 +08:00
thsrite
dc88fb74fd fix pausing torrents 2023-08-20 21:45:07 +08:00
thsrite
062e9e467d fix 2023-08-20 21:24:40 +08:00
thsrite
8b8473b92c fix matching download_hash 2023-08-20 21:08:30 +08:00
thsrite
dd76909d45 Merge remote-tracking branch 'origin/main' into main 2023-08-20 20:54:14 +08:00
thsrite
ebbd48dcf6 fix sync-delete logic 2023-08-20 20:54:06 +08:00
thsrite
aa27af811f fix directory monitoring fetches the real download_hash 2023-08-20 20:53:54 +08:00
jxxghp
81d6fcbe3f fix README.md 2023-08-20 19:16:11 +08:00
jxxghp
8a00a9c389 Update README.md 2023-08-20 19:09:18 +08:00
jxxghp
3c96f1c687 Merge pull request #198 from thsrite/main 2023-08-20 18:48:26 +08:00
thsrite
e0497f590a fix 2023-08-20 18:38:54 +08:00
thsrite
40cf80406e fix restore sync switch in data sync 2023-08-20 18:37:04 +08:00
thsrite
a469136049 fix add transfer time to directory monitoring 2023-08-20 18:36:47 +08:00
120 changed files with 6805 additions and 1710 deletions

3
.dockerignore Normal file

@@ -0,0 +1,3 @@
# Ignore git
.github
.git


@@ -1,5 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Project Discussion
url: https://github.com/jxxghp/MoviePilot/discussions/new/choose
about: discussion
- name: Telegram 频道
url: https://t.me/moviepilot_channel
about: Changelog


@@ -1,17 +0,0 @@
name: Project Discussion
description: discussion
title: "[Discussion]: "
labels: ["discussion"]
body:
- type: markdown
attributes:
value: |
Please submit [BUG](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=bug&template=bug_report.yml&title=%5BBUG%5D%3A) reports and [Feature Request](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) submissions in their respective locations.
- type: textarea
id: discussion
attributes:
label: Project Discussion
description: Please describe the topic to discuss in detail.
placeholder: "Project Discussion"
validations:
required: true


@@ -1,4 +1,4 @@
name: MoviePilot Docker
name: MoviePilot Builder
on:
workflow_dispatch:
push:
@@ -55,21 +55,8 @@ jobs:
linux/arm64
push: true
build-args: |
MOVIEPILOT_FRONTEND_VERSION=${{ env.app_version }}
MOVIEPILOT_VERSION=${{ env.app_version }}
tags: |
${{ secrets.DOCKER_USERNAME }}/moviepilot:latest
${{ secrets.DOCKER_USERNAME }}/moviepilot:${{ env.app_version }}
labels: ${{ steps.meta.outputs.labels }}
-
name: Create Release
id: create_release
uses: actions/create-release@latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: v${{ env.app_version }}
release_name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
draft: false
prerelease: false

36
.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,36 @@
name: MoviePilot Release
on:
workflow_dispatch:
push:
branches:
- main
paths:
- version.py
jobs:
build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Release Version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
name: Generate Release
uses: actions/create-release@latest
with:
tag_name: v${{ env.app_version }}
name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
draft: false
prerelease: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
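The `Release Version` step above derives `app_version` from `version.py` with a sed substitution. A minimal offline sketch of that extraction, assuming `version.py` contains a line of the form `APP_VERSION = 'v1.1.3'` (the exact filename and version value here are illustrative) and GNU sed (for the `\s` escapes):

```shell
#!/bin/sh
# Recreate version.py with the line format the workflow's sed expects.
printf "APP_VERSION = 'v1.1.3'\n" > version.py

# Same extraction as the workflow: -n suppresses default output, the
# substitution strips the APP_VERSION = 'v...' wrapper, and p prints
# only the line where it matched, leaving the bare version number.
app_version=$(cat version.py | sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")

echo "$app_version"   # 1.1.3
```

The workflow then appends this to `$GITHUB_ENV` so later steps can tag the release as `v${{ env.app_version }}`.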


@@ -1,5 +1,5 @@
FROM python:3.11.4-slim-bullseye
ARG MOVIEPILOT_FRONTEND_VERSION
ARG MOVIEPILOT_VERSION
ENV LANG="C.UTF-8" \
HOME="/moviepilot" \
TERM="xterm" \
@@ -8,6 +8,7 @@ ENV LANG="C.UTF-8" \
PGID=0 \
UMASK=000 \
MOVIEPILOT_AUTO_UPDATE=true \
MOVIEPILOT_AUTO_UPDATE_DEV=false \
NGINX_PORT=3000 \
CONFIG_DIR="/config" \
API_TOKEN="moviepilot" \
@@ -18,7 +19,7 @@ ENV LANG="C.UTF-8" \
LIBRARY_PATH="" \
LIBRARY_CATEGORY="false" \
TRANSFER_TYPE="copy" \
COOKIECLOUD_HOST="https://nastool.org/cookiecloud" \
COOKIECLOUD_HOST="https://movie-pilot.org/cookiecloud" \
COOKIECLOUD_KEY="" \
COOKIECLOUD_PASSWORD="" \
MESSAGER="telegram" \
@@ -69,7 +70,8 @@ RUN apt-get update \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/v${MOVIEPILOT_FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& FRONTEND_VERSION=$(curl -sL "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest" | jq -r .tag_name) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& apt-get remove -y build-essential \
&& apt-get autoremove -y \

674
LICENSE Normal file

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


@@ -21,20 +21,19 @@ Docker: https://hub.docker.com/r/jxxghp/moviepilot
2. **Install the CookieCloud server (optional)**
MoviePilot has a public CookieCloud server built in. If you need to host your own, see the [CookieCloud](https://github.com/easychen/CookieCloud) project for installation:
```shell
docker pull easychen/cookiecloud:latest
```
MoviePilot has a public CookieCloud server built in. If you need to host your own, see the [CookieCloud](https://github.com/easychen/CookieCloud) project; the Docker image is available [here](https://hub.docker.com/r/easychen/cookiecloud).
**Disclaimer:** This project does not collect sensitive user data. Cookie synchronization is implemented by the CookieCloud project, not a capability provided by this project. Technically, CookieCloud uses end-to-end encryption: as long as you do not disclose your `user KEY` and `end-to-end encryption password`, no third party (including the server operator) can obtain any of your information. If you are still concerned, do not use the public service, or do not use this project at all; but any information leak that occurs after use has nothing to do with this project!
3. **Install the companion software**
Like NAStool, MoviePilot needs a downloader and a media server to work with.
MoviePilot needs a downloader and a media server to work with.
- Downloaders: qBittorrent and Transmission are supported (qBittorrent >= 4.3.9, Transmission >= 3.0); qBittorrent is recommended.
- Media servers: Jellyfin, Emby and Plex are supported; Emby is recommended.
4. **Install MoviePilot**
Currently only a Docker image is provided; more installation options may be added later.
Currently only a Docker image is provided. Click [here](https://hub.docker.com/r/jxxghp/moviepilot) or run:
```shell
docker pull jxxghp/moviepilot:latest
@@ -61,6 +60,7 @@ docker pull jxxghp/moviepilot:latest
- **DOWNLOAD_PATH** Download directory. **Note: the path mapping must be identical for `moviepilot` and the `downloader`**, otherwise downloaded files cannot be transferred
- **DOWNLOAD_MOVIE_PATH** Movie download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH** TV-show download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH** Anime download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY** Secondary download categorization, `true`/`false`, default `false`; when enabled, second-level category directories are created under the download directory according to `category.yaml`
- **DOWNLOAD_SUBTITLE** Download site subtitles, `true`/`false`, default `true`
- **REFRESH_MEDIASERVER** Refresh the media library on import, `true`/`false`, default `true`
@@ -69,13 +69,17 @@ docker pull jxxghp/moviepilot:latest
- **LIBRARY_PATH** Media library directories; separate multiple directories with `,`
- **LIBRARY_MOVIE_NAME** Movie library directory name, default `电影`
- **LIBRARY_TV_NAME** TV-show library directory name, default `电视剧`
- **LIBRARY_ANIME_NAME** Anime library directory name, default `电视剧/动漫`
- **LIBRARY_CATEGORY** Secondary library categorization, `true`/`false`, default `false`; when enabled, second-level category directories are created under the library directory according to `category.yaml`
- **TRANSFER_TYPE** Transfer method, one of `link`/`copy`/`move`/`softlink`. **Note: with `link` and `softlink`, transferred files inherit the source file's permission mask and are not affected by `UMASK`**
- **COOKIECLOUD_HOST** CookieCloud server address in the form `http://ip:port`; required, otherwise sites cannot be added
- **COOKIECLOUD_HOST** CookieCloud server address in the form `http(s)://ip:port`; if unset, the built-in server `https://movie-pilot.org/cookiecloud` is used
- **COOKIECLOUD_KEY** CookieCloud user KEY
- **COOKIECLOUD_PASSWORD** CookieCloud end-to-end encryption password
- **COOKIECLOUD_INTERVAL** CookieCloud synchronization interval (minutes)
- **OCR_HOST** OCR server address in the form `http(s)://ip:port`, used to recognize site QR codes for automatic login, Cookie retrieval, etc.; if unset, the built-in server `https://movie-pilot.org` is used. You can self-host with [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr)
- **USER_AGENT** Browser UA matching the CookieCloud data, optional; setting it improves the success rate of connecting to sites, and it can be changed in the management UI after sites are synced
- **AUTO_DOWNLOAD_USER** User IDs allowed to trigger automatic download in interactive search, separated by `,`
- **SUBSCRIBE_SEARCH** Subscription search, `true`/`false`, default `false`; when enabled, a full search of all subscriptions runs every 24 hours to fill in missing episodes. Normal subscriptions are usually sufficient; this is only a fallback and increases load on sites, so enabling it is not recommended
- **MESSAGER** Notification channel, supports `telegram`/`wechat`/`slack`; separate multiple channels with `,`. The environment variables for the chosen channel must also be configured (variables for other channels can be removed); `telegram` is recommended
- `wechat` settings:
@@ -109,6 +113,7 @@ docker pull jxxghp/moviepilot:latest
- **QB_HOST** qbittorrent address in the form `ip:port`; for https add the `https://` prefix
- **QB_USER** qbittorrent username
- **QB_PASSWORD** qbittorrent password
- **QB_CATEGORY** qbittorrent automatic category management, `true`/`false`, default `false`; when enabled, the secondary download category is passed to the downloader, which then manages the download directory; `DOWNLOAD_CATEGORY` must be enabled as well
- `transmission` settings:


@@ -0,0 +1,30 @@
"""1.0.4

Revision ID: 1e169250e949
Revises: 52ab4930be04
Create Date: 2023-09-01 09:56:33.907661

"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '1e169250e949'
down_revision = '52ab4930be04'
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    try:
        with op.batch_alter_table("subscribe") as batch_op:
            batch_op.add_column(sa.Column('date', sa.String, nullable=True))
    except Exception as e:
        pass
    # ### end Alembic commands ###


def downgrade() -> None:
    pass
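The bare `try/except ... pass` in `upgrade()` above swallows every error, hiding real failures along with the expected "column already exists" case. A sketch of a more explicit idempotent pattern (illustrative only, not part of the commit; uses stdlib `sqlite3` directly rather than Alembic):

```python
import sqlite3


def add_column_if_missing(conn: sqlite3.Connection, table: str,
                          name: str, decl: str) -> None:
    """Add a column only when the table does not already have it."""
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if name not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {name} {decl}")


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribe (id INTEGER PRIMARY KEY)")
add_column_if_missing(conn, "subscribe", "date", "TEXT")
add_column_if_missing(conn, "subscribe", "date", "TEXT")  # second call is a no-op
```

Calling the helper twice is safe, while genuine errors (locked database, malformed SQL) still surface instead of being silenced.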


@@ -0,0 +1,30 @@
"""1_0_3

Revision ID: 52ab4930be04
Revises: ec5fb51fc300
Create Date: 2023-08-28 13:21:45.152012

"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '52ab4930be04'
down_revision = 'ec5fb51fc300'
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.execute("delete from systemconfig where key = 'RssSites';")
    op.execute("insert into systemconfig(key, value) VALUES('RssSites', (select value from systemconfig where key= 'IndexerSites'));")
    op.execute("delete from systemconfig where key = 'SearchResults';")
    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###


@@ -1,7 +1,7 @@
from fastapi import APIRouter

from app.api.endpoints import login, user, site, message, webhook, subscribe, \
    media, douban, search, plugin, tmdb, history, system, download, dashboard, rss
    media, douban, search, plugin, tmdb, history, system, download, dashboard, rss, filebrowser, transfer

api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -20,3 +20,5 @@ api_router.include_router(plugin.router, prefix="/plugin", tags=["plugin"])
api_router.include_router(download.router, prefix="/download", tags=["download"])
api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboard"])
api_router.include_router(rss.router, prefix="/rss", tags=["rss"])
api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])

View File

@@ -124,3 +124,19 @@ def transfer(days: int = 7, db: Session = Depends(get_db),
"""
transfer_stat = TransferHistory.statistic(db, days)
return [stat[1] for stat in transfer_stat]
@router.get("/cpu", summary="获取当前CPU使用率", response_model=int)
def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前CPU使用率
"""
return SystemUtils.cpu_usage()
@router.get("/memory", summary="获取当前内存使用率", response_model=int)
def memory(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前内存使用率
"""
return SystemUtils.memory_usage()
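Both endpoints declare `response_model=int`, so whatever `SystemUtils` measures internally (likely a psutil-style sampler, though that is an assumption) must come back as an integer percentage. A hedged sketch of the clamping/rounding such a helper would need:

```python
def usage_percent(used: float, total: float) -> int:
    """Turn a used/total ratio into the int percentage the /cpu and
    /memory endpoints return. Illustrative helper, not project code."""
    if total <= 0:
        return 0
    pct = round(used / total * 100)
    # Clamp so transient over-reads never escape the 0-100 range
    return max(0, min(100, pct))
```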

View File

@@ -45,6 +45,21 @@ def recognize_doubanid(doubanid: str,
return schemas.Context()
@router.get("/showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def movie_showing(page: int = 1,
count: int = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣正在热映
"""
movies = DoubanChain(db).movie_showing(page=page, count=count)
if not movies:
return []
medias = [MediaInfo(douban_info=movie) for movie in movies]
return [media.to_dict() for media in medias]
@router.get("/movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",

View File

@@ -11,8 +11,6 @@ from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
from app.db.models.user import User
from app.db.userauth import get_current_active_superuser
from app.schemas import NotExistMediaInfo, MediaType
router = APIRouter()

View File

@@ -0,0 +1,176 @@
import shutil
from pathlib import Path
from typing import Any, List
from fastapi import APIRouter, Depends
from starlette.responses import FileResponse, Response
from app import schemas
from app.core.config import settings
from app.core.security import verify_token
from app.log import logger
from app.utils.system import SystemUtils
router = APIRouter()
IMAGE_TYPES = [".jpg", ".png", ".gif", ".bmp", ".jpeg", ".webp"]
@router.get("/list", summary="所有目录和文件", response_model=List[schemas.FileItem])
def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询当前目录下所有目录和文件
"""
# 返回结果
ret_items = []
if not path or path == "/":
if SystemUtils.is_windows():
partitions = SystemUtils.get_windows_drives() or ["C:/"]
for partition in partitions:
ret_items.append(schemas.FileItem(
type="dir",
path=partition + "/",
name=partition,
basename=partition
))
return ret_items
else:
path = "/"
else:
if not SystemUtils.is_windows() and not path.startswith("/"):
path = "/" + path
# 遍历目录
path_obj = Path(path)
if not path_obj.exists():
logger.error(f"目录不存在:{path}")
return []
# 如果是文件
if path_obj.is_file():
ret_items.append(schemas.FileItem(
type="file",
path=str(path_obj).replace("\\", "/"),
name=path_obj.name,
basename=path_obj.stem,
extension=path_obj.suffix[1:],
size=path_obj.stat().st_size,
))
return ret_items
# 遍历所有目录
for item in SystemUtils.list_sub_directory(path_obj):
ret_items.append(schemas.FileItem(
type="dir",
path=str(item).replace("\\", "/") + "/",
name=item.name,
basename=item.stem,
))
# 遍历所有文件,不含子目录
for item in SystemUtils.list_sub_files(path_obj,
settings.RMT_MEDIAEXT
+ settings.RMT_SUBEXT
+ IMAGE_TYPES
+ [".nfo"]):
ret_items.append(schemas.FileItem(
type="file",
path=str(item).replace("\\", "/"),
name=item.name,
basename=item.stem,
extension=item.suffix[1:],
size=item.stat().st_size,
))
return ret_items
@router.get("/mkdir", summary="创建目录", response_model=schemas.Response)
def mkdir(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
创建目录
"""
if not path:
return schemas.Response(success=False)
path_obj = Path(path)
if path_obj.exists():
return schemas.Response(success=False)
path_obj.mkdir(parents=True, exist_ok=True)
return schemas.Response(success=True)
@router.get("/delete", summary="删除文件或目录", response_model=schemas.Response)
def delete(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除文件或目录
"""
if not path:
return schemas.Response(success=False)
path_obj = Path(path)
if not path_obj.exists():
return schemas.Response(success=True)
if path_obj.is_file():
path_obj.unlink()
else:
shutil.rmtree(path_obj, ignore_errors=True)
return schemas.Response(success=True)
@router.get("/download", summary="下载文件或目录")
def download(path: str, token: str) -> Any:
"""
下载文件或目录
"""
if not path:
return schemas.Response(success=False)
# 认证token
if not verify_token(token):
return None
path_obj = Path(path)
if not path_obj.exists():
return schemas.Response(success=False)
if path_obj.is_file():
# 作为文件流式下载
return FileResponse(path_obj)
else:
# 作为压缩包下载
shutil.make_archive(base_name=path_obj.stem, format="zip", root_dir=path_obj)
zip_path = Path(f"{path_obj.stem}.zip")
response = Response(content=zip_path.read_bytes(), media_type="application/zip")
# 删除压缩包
zip_path.unlink()
return response
@router.get("/rename", summary="重命名文件或目录", response_model=schemas.Response)
def rename(path: str, new_name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
重命名文件或目录
"""
if not path or not new_name:
return schemas.Response(success=False)
path_obj = Path(path)
if not path_obj.exists():
return schemas.Response(success=False)
path_obj.rename(path_obj.parent / new_name)
return schemas.Response(success=True)
@router.get("/image", summary="读取图片")
def image(path: str, token: str) -> Any:
"""
读取图片
"""
if not path:
return None
# 认证token
if not verify_token(token):
return None
path_obj = Path(path)
if not path_obj.exists():
return None
if not path_obj.is_file():
return None
# 判断是否图片文件
if path_obj.suffix.lower() not in IMAGE_TYPES:
return None
return Response(content=path_obj.read_bytes(), media_type="image/jpeg")
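The `/image` endpoint checks the suffix against `IMAGE_TYPES` but then always responds with `media_type="image/jpeg"`, even for `.png` or `.gif` files. A small stdlib-based guess (an illustrative alternative, not the project's current behavior) would serve the correct content type:

```python
import mimetypes

def image_media_type(path: str) -> str:
    """Guess a Content-Type from the file suffix instead of
    hardcoding image/jpeg for every image response."""
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"
```

The return value could then be passed as `media_type=image_media_type(path)` when building the `Response`.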

View File

@@ -74,7 +74,8 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
if not history:
return schemas.Response(success=False, msg="记录不存在")
# 删除文件
TransferChain(db).delete_files(Path(history.dest))
if history.dest:
TransferChain(db).delete_files(Path(history.dest))
# 删除记录
TransferHistory.delete(db, history_in.id)
return schemas.Response(success=True)

View File

@@ -56,6 +56,9 @@ async def login_access_token(
user.id, expires_delta=access_token_expires
),
token_type="bearer",
super_user=user.is_superuser,
user_name=user.name,
avatar=user.avatar
)

View File

@@ -17,7 +17,7 @@ from app.schemas import MediaType
router = APIRouter()
@router.get("/recognize", summary="识别媒体信息", response_model=schemas.Context)
@router.get("/recognize", summary="识别媒体信息(种子)", response_model=schemas.Context)
def recognize(title: str,
subtitle: str = None,
db: Session = Depends(get_db),
@@ -32,6 +32,20 @@ def recognize(title: str,
return schemas.Context()
@router.get("/recognize_file", summary="识别媒体信息(文件)", response_model=schemas.Context)
def recognize_file(path: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据文件路径识别媒体信息
"""
# 识别媒体信息
context = MediaChain(db).recognize_by_path(path)
if context:
return context.to_dict()
return schemas.Context()
@router.get("/search", summary="搜索媒体信息", response_model=List[schemas.MediaInfo])
def search_by_title(title: str,
page: int = 1,

View File

@@ -26,6 +26,7 @@ async def search_latest(db: Session = Depends(get_db),
@router.get("/media/{mediaid}", summary="精确搜索资源", response_model=List[schemas.Context])
def search_by_tmdbid(mediaid: str,
mtype: str = None,
area: str = "title",
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -35,15 +36,16 @@ def search_by_tmdbid(mediaid: str,
tmdbid = int(mediaid.replace("tmdb:", ""))
if mtype:
mtype = MediaType(mtype)
torrents = SearchChain(db).search_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
torrents = SearchChain(db).search_by_tmdbid(tmdbid=tmdbid, mtype=mtype, area=area)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
# 识别豆瓣信息
context = DoubanChain(db).recognize_by_doubanid(doubanid)
if not context or not context.media_info or not context.media_info.tmdb_id:
raise HTTPException(status_code=404, detail="无法识别TMDB媒体信息")
return []
torrents = SearchChain(db).search_by_tmdbid(tmdbid=context.media_info.tmdb_id,
mtype=context.media_info.type)
mtype=context.media_info.type,
area=area)
else:
return []
return [torrent.to_dict() for torrent in torrents]
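The endpoint dispatches on the `mediaid` prefix: `tmdb:` ids go straight to `search_by_tmdbid`, `douban:` ids are first resolved to a TMDB id. A minimal sketch of that prefix parsing (helper name is illustrative):

```python
from typing import Optional, Tuple

def parse_mediaid(mediaid: str) -> Tuple[Optional[str], Optional[str]]:
    """Split a 'tmdb:123' / 'douban:456' style id into (source, raw id).
    Unknown prefixes yield (None, None); an empty id part yields None."""
    for prefix in ("tmdb:", "douban:"):
        if mediaid.startswith(prefix):
            value = mediaid[len(prefix):]
            return prefix[:-1], (value or None)
    return None, None
```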

View File

@@ -8,13 +8,14 @@ from app import schemas
from app.chain.cookiecloud import CookieCloudChain
from app.chain.search import SearchChain
from app.chain.site import SiteChain
from app.core.event import EventManager
from app.core.security import verify_token
from app.db import get_db
from app.db.models.site import Site
from app.db.models.siteicon import SiteIcon
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper
from app.schemas.types import SystemConfigKey
from app.schemas.types import SystemConfigKey, EventType
from app.utils.string import StringUtils
router = APIRouter()
@@ -90,6 +91,11 @@ def delete_site(
删除站点
"""
Site.delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
return schemas.Response(success=True)
@@ -112,7 +118,13 @@ def cookie_cloud_sync(db: Session = Depends(get_db),
"""
Site.reset(db)
SystemConfigOper(db).set(SystemConfigKey.IndexerSites, [])
SystemConfigOper(db).set(SystemConfigKey.RssSites, [])
CookieCloudChain(db).process(manual=True)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": None
})
return schemas.Response(success=True, message="站点已重置!")
@@ -216,6 +228,23 @@ def read_site_by_domain(
return site
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
"""
获取站点列表
"""
# 选中的rss站点
rss_sites = SystemConfigOper(db).get(SystemConfigKey.RssSites)
# 所有站点
all_site = Site.list_order_by_pri(db)
if not rss_sites or not all_site:
return []
# 选中的rss站点
rss_sites = [site for site in all_site if site and site.id in rss_sites]
return rss_sites
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
site_id: int,

View File

@@ -121,9 +121,15 @@ def subscribe_mediaid(
根据TMDBID或豆瓣ID查询订阅 tmdb:/douban:
"""
if mediaid.startswith("tmdb:"):
result = Subscribe.exists(db, int(mediaid[5:]), season)
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return Subscribe()
result = Subscribe.exists(db, int(tmdbid), season)
elif mediaid.startswith("douban:"):
result = Subscribe.get_by_doubanid(db, mediaid[7:])
doubanid = mediaid[7:]
if not doubanid:
return Subscribe()
result = Subscribe.get_by_doubanid(db, doubanid)
else:
result = None
if result and result.sites:
@@ -157,9 +163,15 @@ def delete_subscribe_by_mediaid(
根据TMDBID或豆瓣ID删除订阅 tmdb:/douban:
"""
if mediaid.startswith("tmdb:"):
Subscribe().delete_by_tmdbid(db, int(mediaid[5:]), season)
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
return schemas.Response(success=False)
Subscribe().delete_by_tmdbid(db, int(tmdbid), season)
elif mediaid.startswith("douban:"):
Subscribe().delete_by_doubanid(db, mediaid[7:])
doubanid = mediaid[7:]
if not doubanid:
return schemas.Response(success=False)
Subscribe().delete_by_doubanid(db, doubanid)
return schemas.Response(success=True)
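Both handlers now guard the `tmdb:` id with `isdigit()` before calling `int()`, so a malformed id returns an empty/failed response instead of raising `ValueError`. The guard in isolation (illustrative helper name):

```python
from typing import Optional

def safe_tmdbid(raw: str) -> Optional[int]:
    """Convert a raw tmdbid string to int only when it is purely digits,
    mirroring the isdigit() checks added to the subscribe endpoints."""
    return int(raw) if raw.isdigit() else None
```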

View File

@@ -9,12 +9,14 @@ from fastapi.responses import StreamingResponse
from sqlalchemy.orm import Session
from app import schemas
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.security import verify_token
from app.db import get_db
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from version import APP_VERSION
@@ -166,3 +168,33 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
if ver_json:
return schemas.Response(success=True, data=ver_json)
return schemas.Response(success=False)
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(title: str,
subtitle: str = None,
ruletype: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
过滤规则测试,规则类型 1-订阅2-洗版
"""
torrent = schemas.TorrentInfo(
title=title,
description=subtitle,
)
if ruletype == "2":
rule_string = SystemConfigOper(db).get(SystemConfigKey.FilterRules2)
else:
rule_string = SystemConfigOper(db).get(SystemConfigKey.FilterRules)
if not rule_string:
return schemas.Response(success=False, message="过滤规则未设置!")
# 过滤
result = SearchChain(db).filter_torrents(rule_string=rule_string,
torrent_list=[torrent])
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=True, data={
"priority": 100 - result[0].pri_order + 1
})
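The `priority` the rule test reports is derived from the matched torrent's `pri_order`, where order 1 is the best match. The mapping in isolation:

```python
def rule_priority(pri_order: int) -> int:
    """Map a filter-rule match order (1 = best) onto the 1-100 priority
    scale reported by /ruletest: order 1 -> 100, order 100 -> 1."""
    return 100 - pri_order + 1
```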

View File

@@ -0,0 +1,79 @@
from pathlib import Path
from typing import Any
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.transfer import TransferChain
from app.core.security import verify_token
from app.db import get_db
from app.schemas import MediaType
router = APIRouter()
@router.post("/manual", summary="手动转移", response_model=schemas.Response)
def manual_transfer(path: str,
target: str = None,
tmdbid: int = None,
type_name: str = None,
season: int = None,
transfer_type: str = None,
episode_format: str = None,
episode_detail: str = None,
episode_part: str = None,
episode_offset: int = 0,
min_filesize: int = 0,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
手动转移,支持自定义剧集识别格式
:param path: 转移路径或文件
:param target: 目标路径
:param type_name: 媒体类型、电影/电视剧
:param tmdbid: tmdbid
:param season: 剧集季号
:param transfer_type: 转移类型move/copy
:param episode_format: 剧集识别格式
:param episode_detail: 剧集识别详细信息
:param episode_part: 剧集识别分集信息
:param episode_offset: 剧集识别偏移量
:param min_filesize: 最小文件大小(MB)
:param db: 数据库
:param _: Token校验
"""
in_path = Path(path)
if target:
target = Path(target)
if not target.exists():
return schemas.Response(success=False, message="目标路径不存在")
# 类型
mtype = MediaType(type_name) if type_name else None
# 自定义格式
epformat = None
if episode_offset or episode_part or episode_detail or episode_format:
epformat = schemas.EpisodeFormat(
format=episode_format,
detail=episode_detail,
part=episode_part,
offset=episode_offset,
)
# 开始转移
state, errormsg = TransferChain(db).manual_transfer(
in_path=in_path,
target=target,
tmdbid=tmdbid,
mtype=mtype,
season=season,
transfer_type=transfer_type,
epformat=epformat,
min_filesize=min_filesize
)
# 失败
if not state:
if isinstance(errormsg, list):
errormsg = f"整理完成,{len(errormsg)} 个文件转移失败!"
return schemas.Response(success=False, message=errormsg)
# 成功
return schemas.Response(success=True)
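The endpoint only constructs a custom `EpisodeFormat` when at least one of the episode-recognition parameters is supplied; otherwise automatic recognition is used. A standalone sketch of that guard, with a simplified stand-in dataclass (the real schema lives in `app.schemas`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EpisodeFormat:
    format: Optional[str] = None
    detail: Optional[str] = None
    part: Optional[str] = None
    offset: int = 0

def build_epformat(fmt: Optional[str] = None, detail: Optional[str] = None,
                   part: Optional[str] = None, offset: int = 0) -> Optional[EpisodeFormat]:
    """Build a custom episode format only when any custom field is set,
    mirroring the guard in the /transfer/manual handler."""
    if offset or part or detail or fmt:
        return EpisodeFormat(format=fmt, detail=detail, part=part, offset=offset)
    return None
```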

View File

@@ -76,7 +76,7 @@ def arr_system_status(apikey: str) -> Any:
}
@arr_router.get("/qualityprofile", summary="质量配置")
@arr_router.get("/qualityProfile", summary="质量配置")
def arr_qualityProfile(apikey: str) -> Any:
"""
模拟Radarr、Sonarr质量配置

View File

@@ -97,8 +97,8 @@ class ChainBase(metaclass=ABCMeta):
if isinstance(temp, list):
result.extend(temp)
else:
# 返回结果非列表也非空,则继续执行下一模块
continue
# 中止继续执行
break
except Exception as err:
logger.error(f"运行模块 {method} 出错:{module.__class__.__name__} - {err}\n{traceback.print_exc()}")
return result
@@ -199,17 +199,19 @@ class ChainBase(metaclass=ABCMeta):
def search_torrents(self, site: CommentedMap,
mediainfo: Optional[MediaInfo] = None,
keyword: str = None,
page: int = 0) -> List[TorrentInfo]:
page: int = 0,
area: str = "title") -> List[TorrentInfo]:
"""
搜索一个站点的种子资源
:param site: 站点
:param mediainfo: 识别的媒体信息
:param keyword: 搜索关键词,如有按关键词搜索,否则按媒体信息名称搜索
:param page: 页码
:param area: 搜索区域
:return: 资源列表
"""
return self.run_module("search_torrents", mediainfo=mediainfo, site=site,
keyword=keyword, page=page)
keyword=keyword, page=page, area=area)
def refresh_torrents(self, site: CommentedMap) -> List[TorrentInfo]:
"""
@@ -233,7 +235,7 @@ class ChainBase(metaclass=ABCMeta):
torrent_list=torrent_list, season_episodes=season_episodes)
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
episodes: Set[int] = None,
episodes: Set[int] = None, category: str = None
) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
@@ -241,10 +243,11 @@ class ChainBase(metaclass=ABCMeta):
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 种子分类
:return: 种子Hash错误信息
"""
return self.run_module("download", torrent_path=torrent_path, download_dir=download_dir,
cookie=cookie, episodes=episodes, )
cookie=cookie, episodes=episodes, category=category)
def download_added(self, context: Context, torrent_path: Path, download_dir: Path) -> None:
"""
@@ -269,29 +272,27 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("list_torrents", status=status, hashs=hashs)
def transfer(self, path: Path, mediainfo: MediaInfo,
transfer_type: str,
target: Path = None,
meta: MetaBase = None) -> Optional[TransferInfo]:
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None) -> Optional[TransferInfo]:
"""
文件转移
:param path: 文件路径
:param meta: 预识别的元数据
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移模式
:param target: 转移目标路径
:param meta: 预识别的元数据,仅单文件转移时传递
:return: {path, target_path, message}
"""
return self.run_module("transfer", path=path, mediainfo=mediainfo,
transfer_type=transfer_type, target=target, meta=meta)
return self.run_module("transfer", path=path, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target=target)
def transfer_completed(self, hashs: Union[str, list], transinfo: TransferInfo) -> None:
def transfer_completed(self, hashs: Union[str, list], path: Path = None) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param transinfo: 转移信息
:param path: 源目录
"""
return self.run_module("transfer_completed", hashs=hashs, transinfo=transinfo)
return self.run_module("transfer_completed", hashs=hashs, path=path)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
"""
@@ -319,7 +320,7 @@ class ChainBase(metaclass=ABCMeta):
def torrent_files(self, tid: str) -> Optional[Union[TorrentFilesList, List[File]]]:
"""
根据种子文件,选择并添加下载任务
获取种子文件
:param tid: 种子Hash
:return: 种子文件
"""
@@ -345,7 +346,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("refresh_mediaserver", mediainfo=mediainfo, file_path=file_path)
return None
def post_message(self, message: Notification) -> Optional[bool]:
def post_message(self, message: Notification) -> None:
"""
发送消息
:param message: 消息体
@@ -364,7 +365,7 @@ class ChainBase(metaclass=ABCMeta):
f"title={message.title}, "
f"text={message.text}"
f"userid={message.userid}")
return self.run_module("post_message", message=message)
self.run_module("post_message", message=message)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:
"""
@@ -406,3 +407,9 @@ class ChainBase(metaclass=ABCMeta):
定时任务每10分钟调用一次模块实现该接口以实现定时服务
"""
return self.run_module("scheduler_job")
def clear_cache(self) -> None:
"""
清理缓存,模块实现该接口响应清理缓存事件
"""
return self.run_module("clear_cache")
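The diff changes `run_module` so that a non-list, non-empty result now stops the module chain (`break`) instead of moving on to the next module. One possible reading of that dispatch loop, sketched with plain objects (an approximation of `ChainBase.run_module`, not a copy of it):

```python
from typing import Any, List

def run_module(modules: List[Any], method: str, **kwargs) -> List[Any]:
    """Call `method` on each module in order. List results are accumulated
    across modules; the first non-list, non-None result stops the chain."""
    result: List[Any] = []
    for module in modules:
        func = getattr(module, method, None)
        if not func:
            continue
        temp = func(**kwargs)
        if temp is None:
            continue
        if isinstance(temp, list):
            result.extend(temp)
        else:
            # Non-list result: take it and stop running further modules
            result.append(temp)
            break
    return result
```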

View File

@@ -8,9 +8,8 @@ from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.site import SiteChain
from app.core.config import settings
from app.db.siteicon_oper import SiteIconOper
from app.db.site_oper import SiteOper
from app.helper.browser import PlaywrightHelper
from app.db.siteicon_oper import SiteIconOper
from app.helper.cloudflare import under_challenge
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
@@ -72,12 +71,13 @@ class CookieCloudChain(ChainBase):
for domain, cookie in cookies.items():
# 获取站点信息
indexer = self.siteshelper.get_indexer(domain)
if self.siteoper.exists(domain):
site_info = self.siteoper.get_by_domain(domain)
if site_info:
# 检查站点连通性
status, msg = self.sitechain.test(domain)
# 更新站点Cookie
if status:
logger.info(f"站点【{indexer.get('name')}】连通性正常不同步CookieCloud数据")
logger.info(f"站点【{site_info.name}】连通性正常不同步CookieCloud数据")
continue
# 更新站点Cookie
self.siteoper.update_cookie(domain=domain, cookies=cookie)

View File

@@ -41,7 +41,7 @@ class DoubanChain(ChainBase):
if not mediainfo:
logger.warn(f'{meta.name} 未识别到TMDB媒体信息')
return Context(meta_info=meta, media_info=MediaInfo(douban_info=doubaninfo))
logger.info(f'识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}{meta.season}')
logger.info(f'识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year} {meta.season}')
mediainfo.set_douban_info(doubaninfo)
return Context(meta_info=meta, media_info=mediainfo)

View File

@@ -104,39 +104,69 @@ class DownloadChain(ChainBase):
_folder_name = ""
if not torrent_file:
# 下载种子文件
torrent_file, _folder_name, _ = self.download_torrent(_torrent, userid=userid)
torrent_file, _folder_name, _file_list = self.download_torrent(_torrent, userid=userid)
if not torrent_file:
return
else:
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
# 下载目录
if not save_path:
if settings.DOWNLOAD_CATEGORY and _media and _media.category:
# 开启下载二级目录
if _media.type == MediaType.MOVIE:
# 电影
download_dir = Path(settings.DOWNLOAD_MOVIE_PATH or settings.DOWNLOAD_PATH) / _media.category
else:
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH) / _media.category
if settings.DOWNLOAD_ANIME_PATH \
and _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = Path(settings.DOWNLOAD_ANIME_PATH)
else:
# 电视剧
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH) / _media.category
elif _media:
# 未开启下载二级目录
if _media.type == MediaType.MOVIE:
# 电影
download_dir = Path(settings.DOWNLOAD_MOVIE_PATH or settings.DOWNLOAD_PATH)
else:
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH)
if settings.DOWNLOAD_ANIME_PATH \
and _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = Path(settings.DOWNLOAD_ANIME_PATH)
else:
# 电视剧
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH)
else:
# 未识别
download_dir = Path(settings.DOWNLOAD_PATH)
else:
# 自定义下载目录
download_dir = Path(save_path)
# 添加下载
result: Optional[tuple] = self.download(torrent_path=torrent_file,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir)
download_dir=download_dir,
category=_media.category)
if result:
_hash, error_msg = result
else:
_hash, error_msg = None, "未知错误"
if _hash:
# 下载文件路径
if _folder_name:
download_path = download_dir / _folder_name
else:
download_path = download_dir / _file_list[0] if _file_list else download_dir
# 登记下载记录
self.downloadhis.add(
path=_folder_name or _torrent.title,
path=str(download_path),
type=_media.type.value,
title=_media.title,
year=_media.year,
@@ -152,6 +182,17 @@ class DownloadChain(ChainBase):
torrent_description=_torrent.description,
torrent_site=_torrent.site_name
)
# 登记下载文件
self.downloadhis.add_files([
{
"download_hash": _hash,
"downloader": settings.DOWNLOADER,
"fullpath": str(download_dir / _folder_name / file),
"savepath": str(download_dir / _folder_name),
"filepath": file,
"torrentname": _meta.org_string,
} for file in _file_list if file
])
# 发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel)
# 下载成功后处理
@@ -241,10 +282,10 @@ class DownloadChain(ChainBase):
获取需要的季的集数
"""
if not no_exists.get(tmdbid):
return 0
return 9999
no_exist = no_exists.get(tmdbid)
if not no_exist.get(season):
return 0
return 9999
return no_exist[season].total_episode
# 分组排序
@@ -298,17 +339,26 @@ class DownloadChain(ChainBase):
if not torrent_path:
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
if not torrent_episodes \
or len(torrent_episodes) >= __get_season_episodes(need_tmdbid,
torrent_season[0]):
# 下载
download_id = self.download_single(context=context,
torrent_file=torrent_path,
save_path=save_path,
userid=userid)
if torrent_episodes:
# 总集数
need_total = __get_season_episodes(need_tmdbid, torrent_season[0])
if len(torrent_episodes) < need_total:
# 更新集数范围
begin_ep = min(torrent_episodes)
end_ep = max(torrent_episodes)
meta.set_episodes(begin=begin_ep, end=end_ep)
logger.info(
f"{meta.org_string} 解析文件集数为 [{begin_ep}-{end_ep}],不是完整合集")
continue
else:
# 下载
download_id = self.download_single(context=context,
torrent_file=torrent_path,
save_path=save_path,
userid=userid)
else:
logger.info(
f"{meta.org_string} 解析文件集数为 {len(torrent_episodes)}未含所需集数")
f"{meta.org_string} 解析文件集数为 {len(torrent_episodes)}不是完整合集")
continue
else:
# 下载
@@ -460,13 +510,15 @@ class DownloadChain(ChainBase):
def get_no_exists_info(self, meta: MetaBase,
mediainfo: MediaInfo,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
totals: Dict[int, int] = None
) -> Tuple[bool, Dict[int, Dict[int, NotExistMediaInfo]]]:
"""
检查媒体库,查询是否存在,对于剧集同时返回不存在的季集信息
:param meta: 元数据
:param mediainfo: 已识别的媒体信息
:param no_exists: 在调用该方法前已经存储的不存在的季集信息,有传入时该函数搜索的内容将会叠加后输出
:param totals: 电视剧每季的总集数
:return: 当前媒体是否缺失,各标题总的季集和缺失的季集
"""
@@ -499,6 +551,10 @@ class DownloadChain(ChainBase):
if not no_exists:
no_exists = {}
if not totals:
totals = {}
if mediainfo.type == MediaType.MOVIE:
# 电影
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
@@ -523,40 +579,54 @@ class DownloadChain(ChainBase):
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season)
# 媒体库已存在的剧集
exists_tvs: Optional[ExistMediaInfo] = self.media_exists(mediainfo=mediainfo, itemid=itemid)
if not exists_tvs:
# 所有集均缺失
# 所有集均缺失
for season, episodes in mediainfo.seasons.items():
if not episodes:
continue
# 全季不存在
if meta.begin_season \
if meta.season_list \
and season not in meta.season_list:
continue
__append_no_exists(_season=season, _episodes=[], _total=len(episodes), _start=min(episodes))
# 总集数
total_ep = totals.get(season) or len(episodes)
__append_no_exists(_season=season, _episodes=[],
_total=total_ep, _start=min(episodes))
return False, no_exists
else:
# 存在一些,检查缺失的季集
# 存在一些,检查每季缺失的季集
for season, episodes in mediainfo.seasons.items():
if meta.begin_season \
and season not in meta.season_list:
continue
if not episodes:
continue
exist_seasons = exists_tvs.seasons
if exist_seasons.get(season):
# 取差
lack_episodes = list(set(episodes).difference(set(exist_seasons[season])))
# 该季总集数
season_total = totals.get(season) or len(episodes)
# 该季已存在的
exist_episodes = exists_tvs.seasons.get(season)
if exist_episodes:
# 已存在取差集
if totals.get(season):
# 按总集数计算缺失集开始集为TMDB中的最小集
lack_episodes = list(set(range(min(episodes),
season_total + min(episodes))
).difference(set(exist_episodes)))
else:
# 按TMDB集数计算缺失集
lack_episodes = list(set(episodes).difference(set(exist_episodes)))
if not lack_episodes:
# 全部集存在
continue
# 添加不存在的季集信息
__append_no_exists(_season=season, _episodes=lack_episodes,
_total=len(episodes), _start=min(episodes))
_total=season_total, _start=min(lack_episodes))
else:
# 全季不存在
__append_no_exists(_season=season, _episodes=[],
_total=len(episodes), _start=min(episodes))
_total=season_total, _start=min(episodes))
# 存在不完整的剧集
if no_exists:
logger.debug(f"媒体库中已存在部分剧集,缺失:{no_exists}")
@@ -582,7 +652,7 @@ class DownloadChain(ChainBase):
for torrent in torrents:
messages.append(f"{index}. {torrent.title} "
f"{StringUtils.str_filesize(torrent.size)} "
f"{round(torrent.progress * 100, 1)}%")
f"{round(torrent.progress, 1)}%")
index += 1
self.post_message(Notification(
channel=channel, mtype=NotificationType.Download,
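With the new `totals` parameter, `get_no_exists_info` computes missing episodes either against a configured per-season total (counting up from the smallest TMDB episode) or against the TMDB episode list alone. The set arithmetic in isolation:

```python
from typing import List, Set

def lack_episodes(tmdb_episodes: List[int], exist: Set[int], total: int = 0) -> List[int]:
    """Missing episodes for one season. With a configured total, count from
    the smallest TMDB episode up to that total; otherwise difference the
    TMDB list against what already exists (mirrors get_no_exists_info)."""
    if total:
        start = min(tmdb_episodes)
        wanted = set(range(start, total + start))
    else:
        wanted = set(tmdb_episodes)
    return sorted(wanted - exist)
```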

View File

@@ -1,3 +1,4 @@
from pathlib import Path
from typing import Optional, List, Tuple
from app.chain import ChainBase
@@ -31,6 +32,29 @@ class MediaChain(ChainBase):
# 返回上下文
return Context(meta_info=metainfo, media_info=mediainfo)
def recognize_by_path(self, path: str) -> Optional[Context]:
"""
根据文件路径识别媒体信息
"""
logger.info(f'开始识别媒体信息,文件:{path} ...')
file_path = Path(path)
# 上级目录元数据
dir_meta = MetaInfo(title=file_path.parent.name)
# 文件元数据,不包含后缀
file_meta = MetaInfo(title=file_path.stem)
# 合并元数据
file_meta.merge(dir_meta)
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta)
if not mediainfo:
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.info(f'{path} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 返回上下文
return Context(meta_info=file_meta, media_info=mediainfo)
def search(self, title: str) -> Tuple[MetaBase, List[MediaInfo]]:
"""
搜索媒体信息
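`recognize_by_path` builds two metadata candidates, the parent directory name and the file stem, and merges them before TMDB lookup. The path decomposition it relies on, reduced to stdlib `pathlib` (helper name is illustrative):

```python
from pathlib import Path
from typing import Tuple

def path_meta_titles(path: str) -> Tuple[str, str]:
    """Return the two title candidates recognize_by_path parses:
    the parent folder name and the file stem (name without suffix)."""
    p = Path(path)
    return p.parent.name, p.stem
```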

View File

@@ -7,6 +7,7 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.db import SessionFactory
from app.db.mediaserver_oper import MediaServerOper
from app.log import logger
from app.schemas import MessageChannel, Notification
@@ -21,7 +22,6 @@ class MediaServerChain(ChainBase):
def __init__(self, db: Session = None):
super().__init__(db)
self.mediaserverdb = MediaServerOper(db)
def librarys(self) -> List[schemas.MediaServerLibrary]:
"""
@@ -56,11 +56,14 @@ class MediaServerChain(ChainBase):
同步媒体库所有数据到本地数据库
"""
with lock:
# 媒体服务器同步使用独立的会话
_db = SessionFactory()
_dbOper = MediaServerOper(_db)
logger.info("开始同步媒体库数据 ...")
# 汇总统计
total_count = 0
# 清空登记薄
self.mediaserverdb.empty(server=settings.MEDIASERVER)
_dbOper.empty(server=settings.MEDIASERVER)
for library in self.librarys():
logger.info(f"正在同步媒体库 {library.name} ...")
library_count = 0
@@ -83,8 +86,11 @@ class MediaServerChain(ChainBase):
item_dict = item.dict()
item_dict['seasoninfo'] = json.dumps(seasoninfo)
item_dict['item_type'] = item_type
self.mediaserverdb.add(**item_dict)
_dbOper.add(**item_dict)
logger.info(f"媒体库 {library.name} 同步完成,共同步数量:{library_count}")
# 总数累加
total_count += library_count
# 关闭数据库连接
if _db:
_db.close()
logger.info("【MediaServer】媒体库数据同步完成同步数量%s" % total_count)

View File

@@ -130,19 +130,29 @@ class MessageChain(ChainBase):
return
# 搜索结果排序
contexts = self.torrenthelper.sort_torrents(contexts)
# 更新缓存
user_cache[userid] = {
"type": "Torrent",
"items": contexts
}
_current_page = 0
# 发送种子数据
logger.info(f"搜索到 {len(contexts)} 条数据,开始发送选择消息 ...")
self.__post_torrents_message(channel=channel,
title=mediainfo.title,
items=contexts[:self._page_size],
userid=userid,
total=len(contexts))
# 判断是否设置自动下载
auto_download_user = settings.AUTO_DOWNLOAD_USER
# 匹配到自动下载用户
if auto_download_user and any(userid == user for user in auto_download_user.split(",")):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载")
# 自动选择下载
self.__auto_download(channel=channel,
cache_list=contexts,
userid=userid,
username=username)
else:
# 更新缓存
user_cache[userid] = {
"type": "Torrent",
"items": contexts
}
# 发送种子数据
logger.info(f"搜索到 {len(contexts)} 条数据,开始发送选择消息 ...")
self.__post_torrents_message(channel=channel,
title=mediainfo.title,
items=contexts[:self._page_size],
userid=userid,
total=len(contexts))
elif cache_type == "Subscribe":
# 订阅媒体
@@ -169,36 +179,10 @@ class MessageChain(ChainBase):
elif cache_type == "Torrent":
if int(text) == 0:
# Auto-select and download
# Query missing media info
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} already exists in the media library",
userid=userid))
return
# Batch download
downloads, lefts = self.downloadchain.batch_download(contexts=cache_list,
no_exists=no_exists,
userid=userid)
if downloads and not lefts:
# Everything downloaded
logger.info(f'{_current_media.title_year} download complete')
else:
# Not fully downloaded
logger.info(f'{_current_media.title_year} not fully downloaded, adding a subscription ...')
# Add a subscription in state R
self.subscribechain.add(title=_current_media.title,
year=_current_media.year,
mtype=_current_media.type,
tmdbid=_current_media.tmdb_id,
season=_current_meta.begin_season,
channel=channel,
userid=userid,
username=username,
state="R")
self.__auto_download(channel=channel,
cache_list=cache_list,
userid=userid,
username=username)
else:
# Download the selected torrent
context: Context = cache_list[int(text) - 1]
@@ -286,7 +270,7 @@ class MessageChain(ChainBase):
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
or StringUtils.count_words(text) > 10 \
or StringUtils.count_words(text) > 15 \
or text.find("继续") != -1:
# Chat
content = text
@@ -337,6 +321,41 @@ class MessageChain(ChainBase):
# Save the cache
self.save_cache(user_cache, self._cache_file)
def __auto_download(self, channel, cache_list, userid, username):
"""
Auto-select the best resource and download it
"""
# Query missing media info
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} already exists in the media library",
userid=userid))
return
# Batch download
downloads, lefts = self.downloadchain.batch_download(contexts=cache_list,
no_exists=no_exists,
userid=userid)
if downloads and not lefts:
# Everything downloaded
logger.info(f'{_current_media.title_year} download complete')
else:
# Not fully downloaded
logger.info(f'{_current_media.title_year} not fully downloaded, adding a subscription ...')
# Add a subscription in state R
self.subscribechain.add(title=_current_media.title,
year=_current_media.year,
mtype=_current_media.type,
tmdbid=_current_media.tmdb_id,
season=_current_meta.begin_season,
channel=channel,
userid=userid,
username=username,
state="R")
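The auto-download gate in the hunks above matches the requesting user against a comma-separated `AUTO_DOWNLOAD_USER` setting. A minimal standalone sketch of that check (the helper name is ours, not MoviePilot's):

```python
from typing import Optional

def is_auto_download_user(userid: str, auto_download_user: Optional[str]) -> bool:
    """True when userid appears in the comma-separated allow list."""
    if not auto_download_user:
        return False
    # Exact match against each configured user ID
    return any(userid == user for user in auto_download_user.split(","))

print(is_auto_download_user("alice", "alice,bob"))  # True
print(is_auto_download_user("carol", "alice,bob"))  # False
```

Note the comparison is exact, so stray whitespace around the commas in the setting would prevent a match.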
def __post_medias_message(self, channel: MessageChannel,
title: str, items: list, userid: str, total: int):
"""

View File

@@ -1,6 +1,5 @@
import json
import re
import time
from datetime import datetime
from typing import Tuple, Optional
@@ -43,6 +42,7 @@ class RssChain(ChainBase):
Recognize media info and add a subscription
"""
logger.info(f'Adding custom subscription, title: {title} ...')
# Recognize metadata
metainfo = MetaInfo(title)
if year:
@@ -52,13 +52,16 @@ class RssChain(ChainBase):
if season:
metainfo.type = MediaType.TV
metainfo.begin_season = season
# Recognize media info
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
if not mediainfo:
logger.warn(f'{title} no media info recognized')
return None, "No media info recognized"
# Update media images
self.obtain_images(mediainfo=mediainfo)
# Total episode count
if mediainfo.type == MediaType.TV:
if not season:
@@ -82,6 +85,7 @@ class RssChain(ChainBase):
kwargs.update({
'total_episode': total_episode
})
# Check whether it already exists
if self.rssoper.exists(tmdbid=mediainfo.tmdb_id, season=season):
logger.warn(f'{mediainfo.title} already exists')
@@ -97,13 +101,14 @@ class RssChain(ChainBase):
"vote": mediainfo.vote_average,
"description": mediainfo.overview,
})
# Add the subscription
sid = self.rssoper.add(title=title, year=year, season=season, **kwargs)
if not sid:
logger.error(f'{mediainfo.title_year} failed to add custom subscription')
return None, "Failed to add custom subscription"
else:
logger.info(f'{mediainfo.title_year}{metainfo.season} subscription added')
logger.info(f'{mediainfo.title_year} {metainfo.season} subscription added')
# Return the result
return sid, ""
@@ -120,33 +125,41 @@ class RssChain(ChainBase):
continue
if not rss_task.url:
continue
# Download the RSS feed
items = RssHelper.parse(rss_task.url, True if rss_task.proxy else False)
if not items:
logger.error(f"RSS fetched no data: {rss_task.url}")
logger.info(f"{rss_task.name} RSS fetched {len(items)} entries")
# Check the site
domain = StringUtils.get_url_domain(rss_task.url)
site_info = self.sites.get_indexer(domain) or {}
# Filter rules
if rss_task.best_version:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
else:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
# Process the RSS entries
matched_contexts = []
# Titles already processed
processed_data = json.loads(rss_task.note) if rss_task.note else {
"titles": [],
"season_episodes": []
}
for item in items:
if not item.get("title"):
continue
# Skip titles that were already processed
if item.get("title") in processed_data.get('titles'):
logger.info(f"{item.get('title')} already processed")
continue
# Match basic criteria
if rss_task.include \
and not re.search(r"%s" % rss_task.include, item.get("title")):
@@ -156,6 +169,7 @@ class RssChain(ChainBase):
and re.search(r"%s" % rss_task.exclude, item.get("title")):
logger.info(f"{item.get('title')} contains {rss_task.exclude}")
continue
# Recognize media info
meta = MetaInfo(title=item.get("title"), subtitle=item.get("description"))
if not meta.name:
@@ -168,10 +182,12 @@ class RssChain(ChainBase):
if mediainfo.tmdb_id != rss_task.tmdbid:
logger.error(f"{item.get('title')} does not match")
continue
# Skip season/episodes that were already processed
if meta.season_episode in processed_data.get('season_episodes'):
logger.info(f"{meta.season_episode} already processed")
logger.info(f"{meta.org_string} {meta.season_episode} already processed")
continue
# Torrent info
torrentinfo = TorrentInfo(
site=site_info.get("id"),
@@ -185,8 +201,9 @@ class RssChain(ChainBase):
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=time.strftime("%Y-%m-%d %H:%M:%S", item.get("pubdate")) if item.get("pubdate") else None,
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
# Filter the torrent
if rss_task.filter:
result = self.filter_torrents(
@@ -196,23 +213,24 @@ class RssChain(ChainBase):
if not result:
logger.info(f"{rss_task.name} does not match the filter rules")
continue
# Update the processed data
processed_data['titles'].append(item.get("title"))
processed_data['season_episodes'].append(meta.season_episode)
# Clear extra data
# Clear redundant data
mediainfo.clear()
# Matched entry
matched_contexts.append(Context(
meta_info=meta,
media_info=mediainfo,
torrent_info=torrentinfo
))
# Persist the processed titles
self.rssoper.update(rssid=rss_task.id, note=json.dumps(processed_data))
# Match results
if not matched_contexts:
logger.info(f"{rss_task.name} matched no entries")
continue
logger.info(f"{rss_task.name} matched {len(matched_contexts)} entries")
# Check what already exists locally
if not rss_task.best_version:
# Query missing media info
@@ -220,6 +238,15 @@ class RssChain(ChainBase):
rss_meta.year = rss_task.year
rss_meta.begin_season = rss_task.season
rss_meta.type = MediaType(rss_task.type)
# Total episodes per season
totals = {}
if rss_task.season and rss_task.total_episode:
totals = {
rss_task.season: rss_task.total_episode
}
# Check for missing episodes
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=rss_meta,
mediainfo=MediaInfo(
@@ -228,6 +255,7 @@ class RssChain(ChainBase):
tmdb_id=rss_task.tmdbid,
season=rss_task.season
),
totals=totals
)
if exist_flag:
logger.info(f'{rss_task.name} already exists in the media library, subscription completed')
@@ -254,24 +282,36 @@ class RssChain(ChainBase):
}
else:
no_exists = {}
# Start downloading
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
no_exists=no_exists,
save_path=rss_task.save_path)
if downloads and not lefts:
if not rss_task.best_version:
# Not a best-version subscription, finish it
self.rssoper.delete(rss_task.id)
# Send a notification
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'Custom subscription {rss_task.name} completed',
image=rss_task.backdrop))
# Not fully downloaded
logger.info(f'{rss_task.name} not fully downloaded, keeping the subscription ...')
if downloads:
# Update the last-update time and processed count
self.rssoper.update(rssid=rss_task.id,
processed=(rss_task.processed or 0) + len(downloads),
last_update=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
else:
# Not fully downloaded
logger.info(f'{rss_task.name} not fully downloaded, keeping the subscription ...')
if downloads:
for download in downloads:
meta = download.meta_info
# Update the processed data
processed_data['titles'].append(meta.org_string)
processed_data['season_episodes'].append(meta.season_episode)
# Persist the processed data
self.rssoper.update(rssid=rss_task.id, note=json.dumps(processed_data))
# Update the last-update time and processed count
self.rssoper.update(rssid=rss_task.id,
processed=(rss_task.processed or 0) + len(downloads),
last_update=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
logger.info("RSS subscription refresh finished")
if manual:
if len(rss_tasks) == 1:

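The `pubdate` change in the hunk above swaps `time.strftime(fmt, struct_time)` for `datetime.strftime`, reflecting that the parsed RSS `pubdate` is now a `datetime` rather than a `time.struct_time`. Both forms produce the same string:

```python
import time
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

# Old shape: pubdate as a time.struct_time
as_struct = time.strptime("2023-09-04 11:40:19", FMT)
print(time.strftime(FMT, as_struct))

# New shape: pubdate as a datetime, formatted with its own strftime
as_datetime = datetime(2023, 9, 4, 11, 40, 19)
print(as_datetime.strftime(FMT))
```

Passing a `datetime` to `time.strftime` (or a `struct_time` to `datetime.strftime`) raises a `TypeError`, which is why the two call shapes cannot be mixed.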
View File

@@ -32,17 +32,18 @@ class SearchChain(ChainBase):
self.systemconfig = SystemConfigOper(self._db)
self.torrenthelper = TorrentHelper()
def search_by_tmdbid(self, tmdbid: int, mtype: MediaType = None) -> List[Context]:
def search_by_tmdbid(self, tmdbid: int, mtype: MediaType = None, area: str = "title") -> List[Context]:
"""
Search resources by TMDB ID; exact match, without filtering out resources that already exist locally
:param tmdbid: TMDB ID
:param mtype: media type, movie or TV series
:param area: search scope, title or imdbid
"""
mediainfo = self.recognize_media(tmdbid=tmdbid, mtype=mtype)
if not mediainfo:
logger.error(f'{tmdbid} failed to recognize media info!')
return []
results = self.process(mediainfo=mediainfo)
results = self.process(mediainfo=mediainfo, area=area)
# Save the search results
bytes_results = pickle.dumps(results)
self.systemconfig.set(SystemConfigKey.SearchResults, bytes_results)
@@ -95,7 +96,8 @@ class SearchChain(ChainBase):
keyword: str = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
sites: List[int] = None,
filter_rule: str = None) -> List[Context]:
filter_rule: str = None,
area: str = "title") -> List[Context]:
"""
Search torrents by media info; exact match, apply filter rules, and filter out locally existing resources using no_exists
:param mediainfo: media info
@@ -103,6 +105,7 @@ class SearchChain(ChainBase):
:param no_exists: missing media info
:param sites: list of site IDs; search all sites when empty
:param filter_rule: filter rule; use the default filter rules when empty
:param area: search scope, title or imdbid
"""
logger.info(f'Starting resource search, keyword: {keyword or mediainfo.title} ...')
# Enrich media info
@@ -132,7 +135,8 @@ class SearchChain(ChainBase):
torrents = self.__search_all_sites(
mediainfo=mediainfo,
keyword=keyword,
sites=sites
sites=sites,
area=area
)
if torrents:
break
@@ -185,14 +189,16 @@ class SearchChain(ChainBase):
# Compare the year
if mediainfo.year:
if mediainfo.type == MediaType.TV:
# TV series
# Series year; each season's year may differ
if torrent_meta.year and torrent_meta.year not in [year for year in
mediainfo.season_years.values()]:
logger.warn(f'{torrent.site_name} - {torrent.title} year mismatch')
continue
else:
# Movie
if torrent_meta.year != mediainfo.year:
# Allow the movie year to float by one year either way
if torrent_meta.year not in [str(int(mediainfo.year) - 1),
mediainfo.year,
str(int(mediainfo.year) + 1)]:
logger.warn(f'{torrent.site_name} - {torrent.title} year mismatch')
continue
# Compare the title
@@ -233,13 +239,15 @@ class SearchChain(ChainBase):
def __search_all_sites(self, mediainfo: Optional[MediaInfo] = None,
keyword: str = None,
sites: List[int] = None,
page: int = 0) -> Optional[List[TorrentInfo]]:
page: int = 0,
area: str = "title") -> Optional[List[TorrentInfo]]:
"""
Search multiple sites concurrently
:param mediainfo: recognized media info
:param keyword: search keyword; search by keyword when given, otherwise by media title
:param sites: list of site IDs; search only these sites when given, otherwise all sites
:param page: page number
:param area: search scope, title or imdbid
:return: torrent list
"""
# Skip sites that are not enabled
@@ -278,7 +286,7 @@ class SearchChain(ChainBase):
all_task = []
for site in indexer_sites:
task = executor.submit(self.search_torrents, mediainfo=mediainfo,
site=site, keyword=keyword, page=page)
site=site, keyword=keyword, page=page, area=area)
all_task.append(task)
# Result set
results = []

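The movie branch above relaxes the year comparison to a one-year tolerance. A standalone sketch of that predicate (the function name is illustrative; the real code inlines the check):

```python
def movie_year_matches(torrent_year: str, media_year: str) -> bool:
    """Accept the torrent when its year is within one year of the media year."""
    return torrent_year in [str(int(media_year) - 1),
                            media_year,
                            str(int(media_year) + 1)]

print(movie_year_matches("2022", "2023"))  # True
print(movie_year_matches("2020", "2023"))  # False
```

Years are compared as strings built from the media year, which matches the hunk's list-membership form.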
View File

@@ -8,7 +8,7 @@ from requests import Session
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.search import SearchChain
from app.core.config import settings
from app.chain.torrents import TorrentsChain
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
@@ -16,11 +16,9 @@ from app.db.models.subscribe import Subscribe
from app.db.subscribe_oper import SubscribeOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import NotExistMediaInfo, Notification
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType
from app.utils.string import StringUtils
class SubscribeChain(ChainBase):
@@ -28,14 +26,12 @@ class SubscribeChain(ChainBase):
Subscription management chain
"""
_cache_file = "__torrents_cache__"
def __init__(self, db: Session = None):
super().__init__(db)
self.downloadchain = DownloadChain(self._db)
self.searchchain = SearchChain(self._db)
self.subscribeoper = SubscribeOper(self._db)
self.siteshelper = SitesHelper()
self.torrentschain = TorrentsChain()
self.message = MessageHelper()
self.systemconfig = SystemConfigOper(self._db)
@@ -107,13 +103,13 @@ class SubscribeChain(ChainBase):
# Reply to the original user
self.post_message(Notification(channel=channel,
mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year}{metainfo.season} "
title=f"{mediainfo.title_year} {metainfo.season} "
f"failed to add subscription!",
text=f"{err_msg}",
image=mediainfo.get_message_image(),
userid=userid))
elif message:
logger.info(f'{mediainfo.title_year}{metainfo.season} subscription added')
logger.info(f'{mediainfo.title_year} {metainfo.season} subscription added')
if username or userid:
text = f"Rating: {mediainfo.vote_average}, from user: {username or userid}"
else:
@@ -121,7 +117,7 @@ class SubscribeChain(ChainBase):
# Announce to everyone
self.post_message(Notification(channel=channel,
mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year}{metainfo.season} subscription added",
title=f"{mediainfo.title_year} {metainfo.season} subscription added",
text=text,
image=mediainfo.get_message_image()))
# Return the result
@@ -189,6 +185,13 @@ class SubscribeChain(ChainBase):
subscribes = self.subscribeoper.list(state)
# Iterate over subscriptions
for subscribe in subscribes:
# Skip subscriptions created less than one minute ago, to leave time for editing them first
if subscribe.date:
now = datetime.now()
subscribe_time = datetime.strptime(subscribe.date, '%Y-%m-%d %H:%M:%S')
if (now - subscribe_time).total_seconds() < 60:
logger.debug(f"Subscription {subscribe.name} was added less than a minute ago, deferring the search ...")
continue
logger.info(f'Searching for subscription, title: {subscribe.name} ...')
# If the state is N, update it to R
if subscribe.state == 'N':
@@ -206,14 +209,24 @@ class SubscribeChain(ChainBase):
# Not in best-version (upgrade) mode
if not subscribe.best_version:
# Total episodes per season
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# Query missing media info
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=meta, mediainfo=mediainfo)
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
if exist_flag:
logger.info(f'{mediainfo.title_year} already exists in the media library, subscription completed')
self.subscribeoper.delete(subscribe.id)
# Send a notification
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year}{meta.season} subscription completed',
title=f'{mediainfo.title_year} {meta.season} subscription completed',
image=mediainfo.get_message_image()))
continue
# TV subscription
@@ -231,7 +244,7 @@ class SubscribeChain(ChainBase):
if no_exists and no_exists.get(subscribe.tmdbid):
no_exists_info = no_exists.get(subscribe.tmdbid).get(subscribe.season)
if no_exists_info:
logger.info(f'Subscription {mediainfo.title_year}{meta.season} missing episodes: {no_exists_info.episodes}')
logger.info(f'Subscription {mediainfo.title_year} {meta.season} missing episodes: {no_exists_info.episodes}')
else:
# Best-version (upgrade) mode
if meta.type == MediaType.TV:
@@ -287,7 +300,7 @@ class SubscribeChain(ChainBase):
# For TV series, filter out episodes that were already downloaded
if torrent_mediainfo.type == MediaType.TV:
if self.__check_subscribe_note(subscribe, torrent_meta.episode_list):
logger.info(f'{torrent_info.title} episodes {torrent_meta.episodes} already downloaded')
logger.info(f'{torrent_info.title} episodes {torrent_meta.episode_list} already downloaded')
continue
else:
# In best-version mode, skip anything that is not a full season
@@ -341,7 +354,7 @@ class SubscribeChain(ChainBase):
self.subscribeoper.delete(subscribe.id)
# Send a notification
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year}{meta.season} subscription completed',
title=f'{mediainfo.title_year} {meta.season} subscription completed',
image=mediainfo.get_message_image()))
else:
# Priority of the currently downloaded resource
@@ -351,7 +364,7 @@ class SubscribeChain(ChainBase):
self.subscribeoper.delete(subscribe.id)
# Send a notification
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year}{meta.season} best-version upgrade completed',
title=f'{mediainfo.title_year} {meta.season} best-version upgrade completed',
image=mediainfo.get_message_image()))
else:
# Still upgrading; update the resource priority
@@ -362,73 +375,17 @@ class SubscribeChain(ChainBase):
def refresh(self):
"""
Refresh the latest site resources
Refresh subscriptions
"""
# All subscriptions
# Query all subscriptions
subscribes = self.subscribeoper.list('R')
if not subscribes:
# Nothing to do without subscriptions
return
# Load the cache
torrents_cache: Dict[str, List[Context]] = self.load_cache(self._cache_file) or {}
# All site indexers
indexers = self.siteshelper.get_indexers()
# Configured indexer sites
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.IndexerSites) or []]
# Iterate over sites and cache their resources
for indexer in indexers:
# Skip sites that are not enabled
if config_indexers and str(indexer.get("id")) not in config_indexers:
continue
logger.info(f'Refreshing latest torrents from {indexer.get("name")} ...')
domain = StringUtils.get_url_domain(indexer.get("domain"))
torrents: List[TorrentInfo] = self.refresh_torrents(site=indexer)
# Sort by pubdate, descending
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# Take the first N entries
torrents = torrents[:settings.CACHE_CONF.get('refresh')]
if torrents:
# Keep only torrents that have not been processed yet
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}'
not in [f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []]]
if torrents:
logger.info(f'{indexer.get("name")} has {len(torrents)} new torrents')
else:
logger.info(f'{indexer.get("name")} has no new torrents')
continue
for torrent in torrents:
logger.info(f'Processing resource: {torrent.title} ...')
# Recognize metadata
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
# Recognize media info
mediainfo: MediaInfo = self.recognize_media(meta=meta)
if not mediainfo:
logger.warn(f'No media info recognized, title: {torrent.title}')
# Store empty media info
mediainfo = MediaInfo()
# Clear redundant data
mediainfo.clear()
# Context
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# Add to the cache
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
else:
torrents_cache[domain].append(context)
# Drop the oldest entries once over the limit
if len(torrents_cache[domain]) > settings.CACHE_CONF.get('torrents'):
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF.get('torrents'):]
# Release resources
del torrents
else:
logger.info(f'{indexer.get("name")} fetched no torrents')
# Match subscriptions against the cache
self.match(torrents_cache)
# Save the cache locally
self.save_cache(torrents_cache, self._cache_file)
# Refresh site resources and match subscriptions against the cache
self.match(
self.torrentschain.refresh()
)
def match(self, torrents: Dict[str, List[Context]]):
"""
@@ -454,14 +411,24 @@ class SubscribeChain(ChainBase):
continue
# Not best-version mode
if not subscribe.best_version:
# Total episodes per season
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# Query missing media info
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=meta, mediainfo=mediainfo)
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
if exist_flag:
logger.info(f'{mediainfo.title_year} already exists in the media library, subscription completed')
self.subscribeoper.delete(subscribe.id)
# Send a notification
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year}{meta.season} subscription completed',
title=f'{mediainfo.title_year} {meta.season} subscription completed',
image=mediainfo.get_message_image()))
continue
# TV subscription
@@ -479,7 +446,7 @@ class SubscribeChain(ChainBase):
if no_exists and no_exists.get(subscribe.tmdbid):
no_exists_info = no_exists.get(subscribe.tmdbid).get(subscribe.season)
if no_exists_info:
logger.info(f'Subscription {mediainfo.title_year}{meta.season} missing episodes: {no_exists_info.episodes}')
logger.info(f'Subscription {mediainfo.title_year} {meta.season} missing episodes: {no_exists_info.episodes}')
else:
# Best-version mode
if meta.type == MediaType.TV:
@@ -548,11 +515,11 @@ class SubscribeChain(ChainBase):
set(torrent_meta.episode_list)
):
logger.info(
f'{torrent_info.title} episodes {torrent_meta.episodes} do not include the missing episodes')
f'{torrent_info.title} episodes {torrent_meta.episode_list} do not include the missing episodes')
continue
# Filter out episodes that were already downloaded
if self.__check_subscribe_note(subscribe, torrent_meta.episode_list):
logger.info(f'{torrent_info.title} episodes {torrent_meta.episode_list} already downloaded')
continue
else:
# In best-version mode, skip anything that is not a full season
@@ -597,6 +564,55 @@ class SubscribeChain(ChainBase):
# No resources found, but local gaps may have changed; update the subscription's remaining episodes
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
def check(self):
"""
Periodically check subscriptions and update their info
"""
# Query all subscriptions
subscribes = self.subscribeoper.list()
if not subscribes:
# Nothing to do without subscriptions
return
# Iterate over subscriptions
for subscribe in subscribes:
logger.info(f'Checking subscription: {subscribe.name} ...')
# Build metadata
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
meta.type = MediaType(subscribe.type)
# Recognize media info
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type, tmdbid=subscribe.tmdbid)
if not mediainfo:
logger.warn(f'No media info recognized, title: {subscribe.name}, tmdbid: {subscribe.tmdbid}')
continue
if not mediainfo.seasons:
continue
# Total episode count of the current season
episodes = mediainfo.seasons.get(subscribe.season) or []
if len(episodes) > (subscribe.total_episode or 0):
total_episode = len(episodes)
lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
logger.info(f'Subscription {subscribe.name} total episode count changed; updating total to {total_episode}, missing to {lack_episode} ...')
else:
total_episode = subscribe.total_episode
lack_episode = subscribe.lack_episode
logger.info(f'Subscription {subscribe.name} total episode count unchanged')
# Update TMDB info
self.subscribeoper.update(subscribe.id, {
"name": mediainfo.title,
"year": mediainfo.year,
"vote": mediainfo.vote_average,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"description": mediainfo.overview,
"imdbid": mediainfo.imdb_id,
"tvdbid": mediainfo.tvdb_id,
"total_episode": total_episode,
"lack_episode": lack_episode
})
logger.info(f'Subscription {subscribe.name} updated')
def __update_subscribe_note(self, subscribe: Subscribe, downloads: List[Context]):
"""
Write the downloaded episode numbers to the note field
@@ -650,16 +666,20 @@ class SubscribeChain(ChainBase):
season = season_info.season
if season == subscribe.season:
left_episodes = season_info.episodes
logger.info(f'{mediainfo.title_year} season {season} updating missing episode count to {len(left_episodes)} ...')
if not left_episodes:
lack_episode = season_info.total_episode
else:
lack_episode = len(left_episodes)
logger.info(f'{mediainfo.title_year} season {season} updating missing episode count to {lack_episode} ...')
if update_date:
# Also update the last-update time
self.subscribeoper.update(subscribe.id, {
"lack_episode": len(left_episodes),
"lack_episode": lack_episode,
"last_update": datetime.now().strftime('%Y-%m-%d %H:%M:%S')
})
else:
self.subscribeoper.update(subscribe.id, {
"lack_episode": len(left_episodes)
"lack_episode": lack_episode
})
def remote_list(self, channel: MessageChannel, userid: Union[str, int] = None):
@@ -726,36 +746,45 @@ class SubscribeChain(ChainBase):
:param no_exists: missing seasons/episodes
:param tmdb_id: TMDB ID
:param begin_season: starting season
:param total_episode: total episode count
:param start_episode: starting episode
:param total_episode: total episode count set on the subscription
:param start_episode: starting episode set on the subscription
"""
# Replace no_exists using the subscription's total and starting episode
if no_exists \
and no_exists.get(tmdb_id) \
and (total_episode or start_episode):
# Original missing info for this season
no_exist_season = no_exists.get(tmdb_id).get(begin_season)
if no_exist_season:
# Original episode list
# Original episode list
episode_list = no_exist_season.episodes
# Original total episode count
total = no_exist_season.total_episode
if total_episode and start_episode:
# Both starting episode and total given
episodes = list(range(start_episode, total_episode + 1))
elif not start_episode:
# Total given but no starting episode
episodes = list(range(min(episode_list or [1]), total_episode + 1))
start_episode = min(episode_list or [1])
elif not total_episode:
# Starting episode given but no total
episodes = list(range(start_episode, max(episode_list or [total]) + 1))
total_episode = max(episode_list or [total])
# Original starting episode
start = no_exist_season.start_episode
# Update the episode list, starting episode and total
if not episode_list:
# The whole season is missing
episodes = []
start_episode = start_episode or start
total_episode = total_episode or total
else:
return no_exists
# Intersect with the original episodes
if episode_list:
episodes = list(set(episodes).intersection(set(episode_list)))
# Process the set
# Partially missing
if not start_episode \
and not total_episode:
# No adjustment needed
return no_exists
if not start_episode:
# No custom starting episode
start_episode = start
if not total_episode:
# No custom total
total_episode = total
# New episode list
episodes = list(range(max(start_episode, start), total_episode + 1))
# Update the collection
no_exists[tmdb_id][begin_season] = NotExistMediaInfo(
season=begin_season,
episodes=episodes,

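The override logic above rebuilds a season's missing-episode info from the subscription's custom start and total. A compact sketch of the new branch structure (function and names are ours; the real code mutates a `NotExistMediaInfo` in place):

```python
from typing import List, Optional, Tuple

def override_episodes(episode_list: List[int], start: int, total: int,
                      start_episode: Optional[int] = None,
                      total_episode: Optional[int] = None) -> Tuple[List[int], int, int]:
    """Return (episodes, start_episode, total_episode) after applying overrides."""
    if not episode_list:
        # Whole season missing: keep an empty list, fill in defaults
        return [], start_episode or start, total_episode or total
    if not start_episode and not total_episode:
        # Partially missing, no overrides: nothing to adjust
        return episode_list, start, total
    start_episode = start_episode or start
    total_episode = total_episode or total
    # Rebuild the episode list from the (possibly overridden) bounds
    episodes = list(range(max(start_episode, start), total_episode + 1))
    return episodes, start_episode, total_episode

print(override_episodes([3, 4, 5], start=1, total=10, total_episode=8))
```

Note the new code regenerates the range from the bounds rather than intersecting with the original episode list, as the old branch did.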
17
app/chain/system.py Normal file
View File

@@ -0,0 +1,17 @@
from typing import Union
from app.chain import ChainBase
from app.schemas import Notification, MessageChannel
class SystemChain(ChainBase):
"""
System-level chain
"""
def remote_clear_cache(self, channel: MessageChannel, userid: Union[int, str]):
"""
Clear system caches
"""
self.clear_cache()
self.post_message(Notification(channel=channel,
title="Cache cleared!", userid=userid))

109
app/chain/torrents.py Normal file
View File

@@ -0,0 +1,109 @@
from typing import Dict, List, Union
from requests import Session
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.metainfo import MetaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel
from app.utils.string import StringUtils
class TorrentsChain(ChainBase):
"""
Torrent refresh chain
"""
_cache_file = "__torrents_cache__"
def __init__(self, db: Session = None):
super().__init__(db)
self.siteshelper = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
def remote_refresh(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
Refresh torrents remotely, with status messages sent back
"""
self.post_message(Notification(channel=channel,
title="Refreshing torrents ...", userid=userid))
self.refresh()
self.post_message(Notification(channel=channel,
title="Torrent refresh finished!", userid=userid))
def get_torrents(self) -> Dict[str, List[Context]]:
"""
Get the currently cached torrents
"""
# Load the cache
return self.load_cache(self._cache_file) or {}
def refresh(self) -> Dict[str, List[Context]]:
"""
Refresh the latest site resources
"""
# Load the cache
torrents_cache = self.get_torrents()
# All site indexers
indexers = self.siteshelper.get_indexers()
# Configured RSS sites
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.RssSites) or []]
# Iterate over sites and cache their resources
for indexer in indexers:
# Skip sites that are not enabled
if config_indexers and str(indexer.get("id")) not in config_indexers:
continue
logger.info(f'Refreshing latest torrents from {indexer.get("name")} ...')
domain = StringUtils.get_url_domain(indexer.get("domain"))
torrents: List[TorrentInfo] = self.refresh_torrents(site=indexer)
# Sort by pubdate, descending
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# Take the first N entries
torrents = torrents[:settings.CACHE_CONF.get('refresh')]
if torrents:
# Keep only torrents that have not been processed yet
torrents = [torrent for torrent in torrents
if f'{torrent.title}{torrent.description}'
not in [f'{t.torrent_info.title}{t.torrent_info.description}'
for t in torrents_cache.get(domain) or []]]
if torrents:
logger.info(f'{indexer.get("name")} has {len(torrents)} new torrents')
else:
logger.info(f'{indexer.get("name")} has no new torrents')
continue
for torrent in torrents:
logger.info(f'Processing resource: {torrent.title} ...')
# Recognize metadata
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
# Recognize media info
mediainfo: MediaInfo = self.recognize_media(meta=meta)
if not mediainfo:
logger.warn(f'No media info recognized, title: {torrent.title}')
# Store empty media info
mediainfo = MediaInfo()
# Clear redundant data
mediainfo.clear()
# Context
context = Context(meta_info=meta, media_info=mediainfo, torrent_info=torrent)
# Add to the cache
if not torrents_cache.get(domain):
torrents_cache[domain] = [context]
else:
torrents_cache[domain].append(context)
# Drop the oldest entries once over the limit
if len(torrents_cache[domain]) > settings.CACHE_CONF.get('torrents'):
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF.get('torrents'):]
# Release resources
del torrents
else:
logger.info(f'{indexer.get("name")} fetched no torrents')
# Save the cache locally
self.save_cache(torrents_cache, self._cache_file)
# Return it
return torrents_cache
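The cache bookkeeping in `refresh` appends new contexts per domain and trims the list to the configured limit. The trim can be isolated as a small helper (a generic sketch; MoviePilot inlines this with `settings.CACHE_CONF`):

```python
from typing import Dict, List

def append_capped(cache: Dict[str, List[str]], domain: str, item: str, limit: int) -> None:
    """Append item to cache[domain], keeping only the newest `limit` entries."""
    cache.setdefault(domain, []).append(item)
    if len(cache[domain]) > limit:
        # Drop the oldest entries from the front
        cache[domain] = cache[domain][-limit:]

cache: Dict[str, List[str]] = {}
for torrent in ["t1", "t2", "t3", "t4"]:
    append_capped(cache, "example.org", torrent, limit=3)
print(cache["example.org"])  # ['t2', 't3', 't4']
```

Because newer entries are appended at the end, the negative slice keeps the most recent ones.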

View File

@@ -1,12 +1,13 @@
import json
import re
import shutil
import threading
from pathlib import Path
from typing import List, Optional, Tuple, Union
from typing import List, Optional, Tuple, Union, Dict
from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.media import MediaChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
@@ -14,11 +15,14 @@ from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.db.systemconfig_oper import SystemConfigOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.format import FormatParser
from app.helper.progress import ProgressHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, Notification
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
@@ -35,6 +39,8 @@ class TransferChain(ChainBase):
self.downloadhis = DownloadHistoryOper(self._db)
self.transferhis = TransferHistoryOper(self._db)
self.progress = ProgressHelper()
self.mediachain = MediaChain(self._db)
self.systemconfig = SystemConfigOper()
def process(self) -> bool:
"""
@@ -51,128 +57,353 @@ class TransferChain(ChainBase):
return False
logger.info(f"Fetched {len(torrents)} completed download tasks")
# Start the progress
self.progress.start(ProgressKey.FileTransfer)
# Total count
total_num = len(torrents)
# Processed count
processed_num = 0
self.progress.update(value=0,
text=f"Starting to transfer downloaded files, {total_num} tasks in total ...",
key=ProgressKey.FileTransfer)
for torrent in torrents:
# Update progress
self.progress.update(value=processed_num / total_num * 100,
text=f"Transferring {torrent.title} ...",
key=ProgressKey.FileTransfer)
# Recognize metadata
meta: MetaBase = MetaInfo(title=torrent.title)
if not meta.name:
logger.error(f'No metadata recognized, title: {torrent.title}')
continue
for torrent in torrents:
# Look up recognition info from the download history
downloadhis: DownloadHistory = self.downloadhis.get_by_hash(torrent.hash)
if downloadhis:
# Type
mtype = MediaType(downloadhis.type)
# Fill in season/episode info
if mtype == MediaType.TV \
and ((not meta.season_list and downloadhis.seasons)
or (not meta.episode_list and downloadhis.episodes)):
meta = MetaInfo(f"{torrent.title} {downloadhis.seasons} {downloadhis.episodes}")
# Recognize by TMDB ID
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid)
else:
mediainfo = self.recognize_media(meta=meta)
# Tasks not downloaded by MoviePilot are recognized per file
mediainfo = None
# Run the transfer
self.do_transfer(path=torrent.path, mediainfo=mediainfo,
download_hash=torrent.hash)
# Mark the download task processed
self.transfer_completed(hashs=torrent.hash, path=torrent.path)
# Done
logger.info("Downloader file transfer finished")
return True
def do_transfer(self, path: Path, meta: MetaBase = None,
mediainfo: MediaInfo = None, download_hash: str = None,
target: Path = None, transfer_type: str = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0) -> Tuple[bool, str]:
"""
Transfer a (possibly complex) directory
:param path: directory or file to transfer
:param meta: metadata
:param mediainfo: media info
:param download_hash: download record hash
:param target: target path
:param transfer_type: transfer type
:param season: season
:param epformat: episode format
:param min_filesize: minimum file size (MB)
:return: success flag, error message
"""
if not transfer_type:
transfer_type = settings.TRANSFER_TYPE
# Get the list of paths to transfer
trans_paths = self.__get_trans_paths(path)
if not trans_paths:
logger.warn(f"{path.name} contains no transferable media files")
return False, f"{path.name} contains no transferable media files"
# Collected error messages
err_msgs: List[str] = []
# Collected season/episode lists
season_episodes: Dict[Tuple, List[int]] = {}
# Collected metadata
metas: Dict[Tuple, MetaBase] = {}
# Collected media info
medias: Dict[Tuple, MediaInfo] = {}
# Collected transfer info
transfers: Dict[Tuple, TransferInfo] = {}
# Custom episode format, if any
formaterHandler = FormatParser(eformat=epformat.format,
details=epformat.detail,
part=epformat.part,
offset=epformat.offset) if epformat else None
# Start the progress
self.progress.start(ProgressKey.FileTransfer)
# Total count
transfer_files = SystemUtils.list_files(directory=path,
extensions=settings.RMT_MEDIAEXT,
min_filesize=min_filesize)
if formaterHandler:
# Custom episode format: filter the files
transfer_files = [f for f in transfer_files if formaterHandler.match(f.name)]
# Total count
total_num = len(transfer_files)
# Processed count
processed_num = 0
self.progress.update(value=0,
text=f"Starting transfer of {path}, {total_num} files in total ...",
key=ProgressKey.FileTransfer)
# Transfer exclude words
transfer_exclude_words = self.systemconfig.get(SystemConfigKey.TransferExcludeWords)
# Process all paths to transfer; by default one path or file maps to a single media item
for trans_path in trans_paths:
# 如果是目录且不是⼀蓝光原盘,获取所有文件并转移
if (not trans_path.is_file()
and not SystemUtils.is_bluray_dir(trans_path)):
# 遍历获取下载目录所有文件
file_paths = SystemUtils.list_files(directory=trans_path,
extensions=settings.RMT_MEDIAEXT,
min_filesize=min_filesize)
else:
file_paths = [trans_path]
if formaterHandler:
# 有集自定义格式,过滤文件
file_paths = [f for f in file_paths if formaterHandler.match(f.name)]
# 转移所有文件
for file_path in file_paths:
# 回收站及隐藏的文件不处理
file_path_str = str(file_path)
if file_path_str.find('/@Recycle/') != -1 \
or file_path_str.find('/#recycle/') != -1 \
or file_path_str.find('/.') != -1 \
or file_path_str.find('/@eaDir') != -1:
logger.debug(f"{file_path_str} 是回收站或隐藏的文件")
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if re.findall(keyword, file_path_str):
logger.info(f"{file_path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
continue
if not meta:
# 上级目录元数据
dir_meta = MetaInfo(title=file_path.parent.name)
# 文件元数据,不包含后缀
file_meta = MetaInfo(title=file_path.stem)
# 合并元数据
file_meta.merge(dir_meta)
else:
file_meta = meta
# 合并季
if season:
file_meta.begin_season = season
if not file_meta:
logger.error(f"{file_path} 无法识别有效信息")
err_msgs.append(f"{file_path} 无法识别有效信息")
continue
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_path.stem)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
if not mediainfo:
# 识别媒体信息
file_mediainfo = self.recognize_media(meta=file_meta)
else:
file_mediainfo = mediainfo
if not file_mediainfo:
logger.warn(f'{file_path} 未识别到媒体信息')
# 新增转移失败历史记录
his = self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
meta=file_meta,
download_hash=download_hash
)
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 未识别到媒体信息,无法入库!\n"
f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别转移。"
))
continue
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 电视剧没有集无法转移
if file_mediainfo.type == MediaType.TV and not file_meta.episode:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:未识别到集数")
err_msgs.append(f"{file_path.name} 未识别到集数")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo
)
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 入库失败!",
text=f"原因:未识别到集数",
image=file_mediainfo.get_message_image()
))
continue
# 更新媒体图片
self.obtain_images(mediainfo=file_mediainfo)
if not download_hash:
download_file = self.downloadhis.get_file_by_fullpath(file_path_str)
if download_file:
download_hash = download_file.download_hash
# 执行转移
transferinfo: TransferInfo = self.transfer(meta=file_meta,
mediainfo=file_mediainfo,
path=file_path,
transfer_type=transfer_type,
target=target)
if not transferinfo:
logger.error("文件转移模块运行失败")
continue
if not transferinfo.target_path:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
err_msgs.append(f"{file_path.name} {transferinfo.message}")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_mediainfo.title_year} {file_meta.season_episode} 入库失败!",
text=f"原因:{transferinfo.message or '未知'}",
image=file_mediainfo.get_message_image()
))
continue
# 汇总信息
mkey = (file_mediainfo.tmdb_id, file_meta.begin_season)
if mkey not in medias:
# 新增信息
metas[mkey] = file_meta
medias[mkey] = file_mediainfo
season_episodes[mkey] = file_meta.episode_list
transfers[mkey] = transferinfo
else:
# 合并季集清单
season_episodes[mkey] = list(set(season_episodes[mkey] + file_meta.episode_list))
# 合并转移数据
transfers[mkey].file_count += transferinfo.file_count
transfers[mkey].total_size += transferinfo.total_size
transfers[mkey].file_list.extend(transferinfo.file_list)
transfers[mkey].file_list_new.extend(transferinfo.file_list_new)
transfers[mkey].fail_list.extend(transferinfo.fail_list)
# 新增转移成功历史记录
self.transferhis.add_success(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
'mediainfo': file_mediainfo,
'transferinfo': transferinfo
})
# 计数
processed_num += 1
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"{file_path.name} 转移完成",
key=ProgressKey.FileTransfer)
# 目录或文件转移完成
for mkey, media in medias.items():
meta = metas[mkey]
transferinfo = transfers[mkey]
# 刷新媒体库
self.refresh_mediaserver(mediainfo=media, file_path=transferinfo.target_path)
# 刮削
self.scrape_metadata(path=transferinfo.target_path, mediainfo=media)
# 发送通知
se_str = None
if media.type == MediaType.TV:
se_str = f"{meta.season} {StringUtils.format_ep(season_episodes[mkey])}"
self.send_transfer_message(meta=meta,
mediainfo=media,
transferinfo=transferinfo,
season_episode=se_str)
# 结束进度
logger.info(f"{path} 转移完成,共 {total_num} 个文件,"
f"成功 {total_num - len(err_msgs)} 个,失败 {len(err_msgs)}")
self.progress.end(ProgressKey.FileTransfer)
return True, "\n".join(err_msgs)
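The loop above groups transferred files by `(tmdb_id, begin_season)` and merges episode lists and counters per key, so one notification covers all files of the same season. A minimal standalone sketch of that merge (the record fields here are hypothetical simplifications, not the project's real `TransferInfo`/`MetaBase` classes):

```python
from typing import Dict, List, Tuple


def merge_transfers(files: List[dict]) -> Dict[Tuple[int, int], dict]:
    """Group per-file transfer results by (tmdb_id, season)."""
    merged: Dict[Tuple[int, int], dict] = {}
    for f in files:
        key = (f["tmdb_id"], f["season"])
        if key not in merged:
            merged[key] = {"episodes": list(f["episodes"]),
                           "file_count": f["file_count"]}
        else:
            # de-duplicate episode numbers, accumulate counters
            merged[key]["episodes"] = sorted(set(merged[key]["episodes"]
                                                 + f["episodes"]))
            merged[key]["file_count"] += f["file_count"]
    return merged
```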
@staticmethod
def __get_trans_paths(directory: Path):
"""
获取转移目录列表
"""
if not directory.exists():
logger.warn(f"目录不存在:{directory}")
return []
# 单文件
if directory.is_file():
return [directory]
# 蓝光原盘
if SystemUtils.is_bluray_dir(directory):
return [directory]
# 需要转移的路径列表
trans_paths = []
# 先检查当前目录的下级目录,以支持合集的情况
for sub_dir in SystemUtils.list_sub_directory(directory):
# 如果是蓝光原盘
if SystemUtils.is_bluray_dir(sub_dir):
trans_paths.append(sub_dir)
# 没有媒体文件的目录跳过
elif SystemUtils.list_files(sub_dir, extensions=settings.RMT_MEDIAEXT):
trans_paths.append(sub_dir)
if not trans_paths:
# 没有有效子目录,直接转移当前目录
trans_paths.append(directory)
else:
# 有子目录时,把当前目录的文件添加到转移任务中
trans_paths.extend(
SystemUtils.list_sub_files(directory, extensions=settings.RMT_MEDIAEXT)
)
return trans_paths
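`__get_trans_paths` resolves candidates in priority order: a single file, a blu-ray disc directory, media-bearing subdirectories (plus loose media files at the top level), and finally the directory itself as a fallback. A hedged sketch of the same priority with plain `pathlib`; `is_bluray` and `MEDIA_EXTS` are stand-ins for `SystemUtils.is_bluray_dir` and `settings.RMT_MEDIAEXT`:

```python
from pathlib import Path
from typing import List

MEDIA_EXTS = {".mkv", ".mp4"}  # stand-in for settings.RMT_MEDIAEXT


def is_bluray(d: Path) -> bool:
    # stand-in for SystemUtils.is_bluray_dir: a BDMV folder marks a disc
    return (d / "BDMV").exists()


def get_trans_paths(directory: Path) -> List[Path]:
    if not directory.exists():
        return []
    # single file or blu-ray disc: transfer as one unit
    if directory.is_file() or is_bluray(directory):
        return [directory]
    paths = []
    for sub in sorted(p for p in directory.iterdir() if p.is_dir()):
        if is_bluray(sub) or any(f.suffix.lower() in MEDIA_EXTS
                                 for f in sub.rglob("*")):
            paths.append(sub)
    if not paths:
        # no useful subdirectories: transfer the directory itself
        return [directory]
    # pick up loose media files sitting next to the subdirectories
    paths.extend(f for f in sorted(directory.iterdir())
                 if f.is_file() and f.suffix.lower() in MEDIA_EXTS)
    return paths
```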
def remote_transfer(self, arg_str: str, channel: MessageChannel, userid: Union[str, int] = None):
"""
远程重新转移,参数 历史记录ID TMDBID|类型
"""
def args_error():
self.post_message(Notification(channel=channel,
title="请输入正确的命令格式:/redo [id] [tmdbid]|[类型]"
@@ -210,7 +441,7 @@ class TransferChain(ChainBase):
def re_transfer(self, logid: int, mtype: MediaType, tmdbid: int) -> Tuple[bool, str]:
"""
根据历史记录,重新识别转移,只处理对应的src目录
:param logid: 历史记录ID
:param mtype: 媒体类型
:param tmdbid: TMDB ID
@@ -220,22 +451,10 @@ class TransferChain(ChainBase):
if not history:
logger.error(f"历史记录不存在,ID:{logid}")
return False, "历史记录不存在"
# 没有下载记录,按源目录路径重新转移
src_path = Path(history.src)
if not src_path.exists():
return False, f"源目录不存在:{src_path}"
# 查询媒体信息
mediainfo = self.recognize_media(mtype=mtype, tmdbid=tmdbid)
if not mediainfo:
@@ -245,118 +464,82 @@ class TransferChain(ChainBase):
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 删除旧的已整理文件
if history.dest:
self.delete_files(Path(history.dest))
# 转移
state, errmsg = self.do_transfer(path=src_path,
mediainfo=mediainfo,
download_hash=history.download_hash)
if not state:
return False, errmsg
# 删除旧历史记录
self.transferhis.delete(logid)
return True, ""
def manual_transfer(self, in_path: Path,
target: Path = None,
tmdbid: int = None,
mtype: MediaType = None,
season: int = None,
transfer_type: str = None,
epformat: EpisodeFormat = None,
min_filesize: int = 0) -> Tuple[bool, Union[str, list]]:
"""
手动转移
:param in_path: 源文件路径
:param target: 目标路径
:param tmdbid: TMDB ID
:param mtype: 媒体类型
:param season: 季度
:param transfer_type: 转移类型
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
"""
logger.info(f"手动转移:{in_path} ...")
if tmdbid:
# 有输入TMDBID时单个识别
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, mtype=mtype)
if not mediainfo:
return False, f"媒体信息识别失败,tmdbid: {tmdbid}, type: {mtype.value}"
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
self.progress.update(value=0,
text=f"开始转移 {in_path} ...",
key=ProgressKey.FileTransfer)
# 开始转移
state, errmsg = self.do_transfer(
path=in_path,
mediainfo=mediainfo,
target=target,
season=season,
epformat=epformat,
min_filesize=min_filesize
)
if not state:
return False, errmsg
self.progress.end(ProgressKey.FileTransfer)
logger.info(f"{in_path} 转移完成")
return True, ""
else:
# 没有输入TMDBID时,按文件识别
state, errmsg = self.do_transfer(path=in_path,
target=target,
transfer_type=transfer_type,
season=season,
epformat=epformat,
min_filesize=min_filesize)
return state, errmsg
def send_transfer_message(self, meta: MetaBase, mediainfo: MediaInfo,
transferinfo: TransferInfo, season_episode: str = None):
"""
发送入库成功的消息
"""
msg_title = f"{mediainfo.title_year} {meta.season_episode if not season_episode else season_episode} 已入库"
if mediainfo.vote_average:
msg_str = f"评分:{mediainfo.vote_average},类型:{mediainfo.type.value}"
else:
@@ -389,8 +572,8 @@ class TransferChain(ChainBase):
logger.warn(f"文件 {path} 已删除")
# 判断目录是否为空, 为空则删除
if str(path.parent.parent) != str(path.root):
# 父目录非根目录,才删除父目录
files = SystemUtils.list_files(path.parent, settings.RMT_MEDIAEXT)
if not files:
shutil.rmtree(path.parent)
logger.warn(f"目录 {path.parent} 已删除")


@@ -8,10 +8,12 @@ from app.chain.download import DownloadChain
from app.chain.mediaserver import MediaServerChain
from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
from app.core.event import Event as ManagerEvent
from app.core.event import eventmanager, EventManager
from app.core.plugin import PluginManager
from app.db import ScopedSession
from app.log import logger
from app.schemas.types import EventType, MessageChannel
from app.utils.object import ObjectUtils
@@ -38,76 +40,85 @@ class Command(metaclass=Singleton):
_event = Event()
def __init__(self):
# 数据库连接
self._db = ScopedSession()
# 事件管理器
self.eventmanager = EventManager()
# 插件管理器
self.pluginmanager = PluginManager()
# 处理链
self.chain = CommandChian(self._db)
# 内置命令
self._commands = {
"/cookiecloud": {
"func": CookieCloudChain(self._db).remote_sync,
"description": "同步站点",
"data": {}
},
"/sites": {
"func": SiteChain(self._db).remote_list,
"description": "查询站点",
"data": {}
},
"/site_cookie": {
"func": SiteChain(self._db).remote_cookie,
"description": "更新站点Cookie",
"data": {}
},
"/site_enable": {
"func": SiteChain(self._db).remote_enable,
"description": "启用站点",
"data": {}
},
"/site_disable": {
"func": SiteChain(self._db).remote_disable,
"description": "禁用站点",
"data": {}
},
"/mediaserver_sync": {
"func": MediaServerChain(self._db).remote_sync,
"description": "同步媒体服务器",
"data": {}
},
"/subscribes": {
"func": SubscribeChain(self._db).remote_list,
"description": "查询订阅",
"data": {}
},
"/subscribe_refresh": {
"func": SubscribeChain(self._db).remote_refresh,
"description": "刷新订阅",
"data": {}
},
"/subscribe_search": {
"func": SubscribeChain(self._db).remote_search,
"description": "搜索订阅",
"data": {}
},
"/subscribe_delete": {
"func": SubscribeChain(self._db).remote_delete,
"description": "删除订阅",
"data": {}
},
"/downloading": {
"func": DownloadChain(self._db).remote_downloading,
"description": "正在下载",
"data": {}
},
"/transfer": {
"func": TransferChain(self._db).process,
"description": "下载文件整理",
"data": {}
},
"/redo": {
"func": TransferChain(self._db).remote_transfer,
"description": "手动整理",
"data": {}
},
"/clear_cache": {
"func": SystemChain(self._db).remote_clear_cache,
"description": "清理缓存",
"data": {}
}
}
# 汇总插件命令
@@ -122,8 +133,6 @@ class Command(metaclass=Singleton):
'data': command.get('data')
}
)
# 广播注册命令菜单
self.chain.register_commands(commands=self.get_commands())
# 消息处理线程


@@ -63,8 +63,12 @@ class Settings(BaseSettings):
RMT_AUDIO_TRACK_EXT: list = ['.mka']
# 索引器
INDEXER: str = "builtin"
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 用户认证站点 hhclub/audiences/hddolby/zmpt/freefarm/hdfans/wintersakura/leaves/1ptba/icc2022/iyuu
AUTH_SITE: str = ""
# 交互搜索自动下载用户ID使用,分割
AUTO_DOWNLOAD_USER: str = None
# 消息通知渠道 telegram/wechat/slack
MESSAGER: str = "telegram"
# WeChat企业ID
@@ -105,6 +109,8 @@ class Settings(BaseSettings):
QB_USER: str = None
# Qbittorrent密码
QB_PASSWORD: str = None
# Qbittorrent分类自动管理
QB_CATEGORY: bool = False
# Transmission地址IP:PORT
TR_HOST: str = None
# Transmission用户名
@@ -119,6 +125,8 @@ class Settings(BaseSettings):
DOWNLOAD_MOVIE_PATH: str = None
# 电视剧下载保存目录,容器内映射路径需要一致
DOWNLOAD_TV_PATH: str = None
# 动漫下载保存目录,容器内映射路径需要一致
DOWNLOAD_ANIME_PATH: str = None
# 下载目录二级分类
DOWNLOAD_CATEGORY: bool = False
# 下载站点字幕
@@ -144,13 +152,15 @@ class Settings(BaseSettings):
# 转移方式 link/copy/move/softlink
TRANSFER_TYPE: str = "copy"
# CookieCloud服务器地址
COOKIECLOUD_HOST: str = "https://movie-pilot.org/cookiecloud"
# CookieCloud用户KEY
COOKIECLOUD_KEY: str = None
# CookieCloud端对端加密密码
COOKIECLOUD_PASSWORD: str = None
# CookieCloud同步间隔分钟
COOKIECLOUD_INTERVAL: int = 60 * 24
# OCR服务器地址
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录
@@ -159,8 +169,12 @@ class Settings(BaseSettings):
LIBRARY_MOVIE_NAME: str = None
# 电视剧媒体库目录名,默认"电视剧"
LIBRARY_TV_NAME: str = None
# 动漫媒体库目录名,默认"电视剧/动漫"
LIBRARY_ANIME_NAME: str = None
# 二级分类
LIBRARY_CATEGORY: bool = True
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS = [16]
# 电影重命名格式
MOVIE_RENAME_FORMAT: str = "{{title}}{% if year %} ({{year}}){% endif %}" \
"/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}" \


@@ -148,6 +148,8 @@ class MediaInfo:
vote_average: int = 0
# 描述
overview: str = None
# 风格ID
genre_ids: list = field(default_factory=list)
# 所有别名和译名
names: list = field(default_factory=list)
# 各季的剧集清单信息
@@ -250,6 +252,15 @@ class MediaInfo:
"""
setattr(self, f"{name}_path", image)
def get_image(self, name: str):
"""
获取图片地址
"""
try:
return getattr(self, f"{name}_path")
except AttributeError:
return None
def set_category(self, cat: str):
"""
设置二级分类
@@ -338,6 +349,8 @@ class MediaInfo:
self.vote_average = round(float(info.get('vote_average')), 1) if info.get('vote_average') else 0
# 描述
self.overview = info.get('overview')
# 风格
self.genre_ids = info.get('genre_ids') or []
# 原语种
self.original_language = info.get('original_language')
if self.type == MediaType.MOVIE:
@@ -442,6 +455,8 @@ class MediaInfo:
self.poster_path = info.get("pic", {}).get("large")
if not self.poster_path and info.get("cover_url"):
self.poster_path = info.get("cover_url")
if not self.poster_path and info.get("cover"):
self.poster_path = info.get("cover").get("url")
# 简介
if not self.overview:
self.overview = info.get("intro") or info.get("card_subtitle") or ""
@@ -549,7 +564,6 @@ class MediaInfo:
dicts["type"] = self.type.value if self.type else None
dicts["detail_link"] = self.detail_link
dicts["title_year"] = self.title_year
return dicts
def clear(self):


@@ -243,7 +243,7 @@ class MetaBase(object):
else:
return [self.begin_season]
@property
def episode(self) -> str:
"""
返回开始集、结束集字符串
@@ -440,9 +440,21 @@ class MetaBase(object):
elif len(ep) > 1 and str(ep[0]).isdigit() and str(ep[-1]).isdigit():
self.begin_episode = int(ep[0])
self.end_episode = int(ep[-1])
self.total_episode = (self.end_episode - self.begin_episode) + 1
elif str(ep).isdigit():
self.begin_episode = int(ep)
self.end_episode = None
def set_episodes(self, begin: int, end: int):
"""
设置开始集结束集
"""
if begin:
self.begin_episode = begin
if end:
self.end_episode = end
if self.begin_episode and self.end_episode:
self.total_episode = (self.end_episode - self.begin_episode) + 1
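`set_episodes` only derives `total_episode` once both endpoints are known, and the range is inclusive. A standalone sketch of that bookkeeping (the `EpisodeRange` class is a simplified stand-in for `MetaBase`):

```python
class EpisodeRange:
    """Simplified stand-in for MetaBase's episode-range bookkeeping."""

    def __init__(self):
        self.begin = self.end = self.total = None

    def set_episodes(self, begin: int = None, end: int = None):
        if begin:
            self.begin = begin
        if end:
            self.end = end
        if self.begin and self.end:
            # inclusive range: E03-E06 covers four episodes
            self.total = (self.end - self.begin) + 1
```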
def merge(self, meta: Self):
"""


@@ -371,6 +371,8 @@ class MetaVideo(MetaBase):
self.type = MediaType.TV
elif token.upper() == "SEASON" and self.begin_season is None:
self._last_token_type = "SEASON"
elif self.type == MediaType.TV and self.begin_season is None:
self.begin_season = 1
def __init_episode(self, token: str):
re_res = re.findall(r"%s" % self._episode_re, token, re.IGNORECASE)


@@ -37,6 +37,7 @@ class PluginManager(metaclass=Singleton):
"""
启动加载插件
"""
# 扫描插件目录
plugins = ModuleHelper.load(
"app.plugins",
@@ -80,8 +81,12 @@ class PluginManager(metaclass=Singleton):
"""
# 停止所有插件
for plugin in self._running_plugins.values():
# 关闭插件
if hasattr(plugin, "stop_service"):
plugin.stop_service()
# 清空对象
self._plugins = {}
self._running_plugins = {}
def get_plugin_config(self, pid: str) -> dict:
"""


@@ -1,5 +1,5 @@
from sqlalchemy import create_engine, QueuePool
from sqlalchemy.orm import sessionmaker, Session, scoped_session
from app.core.config import settings
@@ -11,8 +11,11 @@ Engine = create_engine(f"sqlite:///{settings.CONFIG_PATH}/user.db",
pool_size=1000,
pool_recycle=60 * 10,
max_overflow=0)
# 会话工厂
SessionFactory = sessionmaker(autocommit=False, autoflush=False, bind=Engine)
# 多线程全局使用的数据库会话
ScopedSession = scoped_session(SessionFactory)
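The distinction above is that `SessionFactory` creates a fresh session per call, while `ScopedSession` hands each thread its own reused session, which is why long-lived chains can safely share the global handle. The thread-local behavior `scoped_session` provides can be approximated with the standard library alone (this `ScopedFactory` is an illustrative stand-in, not SQLAlchemy's implementation):

```python
import threading


class ScopedFactory:
    """Stdlib approximation of sqlalchemy.orm.scoped_session:
    one object per thread, reused within that thread."""

    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def __call__(self):
        if not hasattr(self._local, "obj"):
            self._local.obj = self._factory()
        return self._local.obj


make_session = ScopedFactory(object)  # `object` stands in for SessionFactory
assert make_session() is make_session()  # same thread -> same instance

seen = {}
t = threading.Thread(target=lambda: seen.setdefault("s", make_session()))
t.start(); t.join()
assert seen["s"] is not make_session()  # another thread -> its own instance
```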
def get_db():
@@ -22,7 +25,7 @@ def get_db():
"""
db = None
try:
db = SessionFactory()
yield db
finally:
if db:
@@ -37,8 +40,4 @@ class DbOper:
if db:
self._db = db
else:
self._db = ScopedSession()


@@ -1,8 +1,8 @@
from pathlib import Path
from typing import List
from app.db import DbOper
from app.db.models.downloadhistory import DownloadHistory, DownloadFiles
class DownloadHistoryOper(DbOper):
@@ -10,28 +10,65 @@ class DownloadHistoryOper(DbOper):
下载历史管理
"""
def get_by_path(self, path: Path) -> DownloadHistory:
"""
按路径查询下载记录
:param path: 数据key
"""
return DownloadHistory.get_by_path(self._db, str(path))
def get_by_hash(self, download_hash: str) -> DownloadHistory:
"""
按Hash查询下载记录
:param download_hash: 数据key
"""
return DownloadHistory.get_by_hash(self._db, download_hash)
def add(self, **kwargs) -> DownloadHistory:
"""
新增下载历史
"""
downloadhistory = DownloadHistory(**kwargs)
return downloadhistory.create(self._db)
def add_files(self, file_items: List[dict]):
"""
新增下载历史文件
"""
for file_item in file_items:
downloadfile = DownloadFiles(**file_item)
downloadfile.create(self._db)
def get_files_by_hash(self, download_hash: str, state: int = None) -> List[DownloadFiles]:
"""
按Hash查询下载文件记录
:param download_hash: 数据key
:param state: 删除状态
"""
return DownloadFiles.get_by_hash(self._db, download_hash, state)
def get_file_by_fullpath(self, fullpath: str) -> DownloadFiles:
"""
按fullpath查询下载文件记录
:param fullpath: 数据key
"""
return DownloadFiles.get_by_fullpath(self._db, fullpath)
def get_files_by_savepath(self, fullpath: str) -> List[DownloadFiles]:
"""
按savepath查询下载文件记录
:param fullpath: 数据key
"""
return DownloadFiles.get_by_savepath(self._db, fullpath)
def delete_file_by_fullpath(self, fullpath: str):
"""
按fullpath删除下载文件记录
:param fullpath: 数据key
"""
DownloadFiles.delete_by_fullpath(self._db, fullpath)
def list_by_page(self, page: int = 1, count: int = 30) -> List[DownloadHistory]:
"""
分页查询下载历史
"""
@@ -44,7 +81,7 @@ class DownloadHistoryOper(DbOper):
DownloadHistory.truncate(self._db)
def get_last_by(self, mtype=None, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid=None) -> List[DownloadHistory]:
"""
按类型、标题、年份、季集查询下载记录
"""


@@ -6,7 +6,7 @@ from alembic.config import Config
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import Engine, ScopedSession
from app.db.models import Base
from app.db.models.user import User
from app.log import logger
@@ -22,15 +22,16 @@ def init_db():
# 全量建表
Base.metadata.create_all(bind=Engine)
# 初始化超级管理员
db = ScopedSession()
user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not user:
user = User(
name=settings.SUPERUSER,
hashed_password=get_password_hash(settings.SUPERUSER_PASSWORD),
is_superuser=True,
)
user.create(db)
db.close()
def update_db():


@@ -58,29 +58,29 @@ class DownloadHistory(Base):
"""
if tmdbid and not season and not episode:
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid).order_by(
DownloadHistory.id.desc()).all()
if tmdbid and season and not episode:
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
if tmdbid and season and episode:
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧所有季集|电影
if not season and not episode:
return db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
if season and not episode:
return db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季某集
if season and episode:
return db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
@@ -88,4 +88,51 @@ class DownloadHistory(Base):
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
class DownloadFiles(Base):
"""
下载文件记录
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 下载任务Hash
download_hash = Column(String, index=True)
# 下载器
downloader = Column(String)
# 完整路径
fullpath = Column(String, index=True)
# 保存路径
savepath = Column(String, index=True)
# 文件相对路径/名称
filepath = Column(String)
# 种子名称
torrentname = Column(String)
# 状态 0-已删除 1-正常
state = Column(Integer, nullable=False, default=1)
@staticmethod
def get_by_hash(db: Session, download_hash: str, state: int = None):
if state:
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()
else:
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash).all()
@staticmethod
def get_by_fullpath(db: Session, fullpath: str):
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath).order_by(
DownloadFiles.id.desc()).first()
@staticmethod
def get_by_savepath(db: Session, savepath: str):
return db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
@staticmethod
def delete_by_fullpath(db: Session, fullpath: str):
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,
DownloadFiles.state == 1).update(
{
"state": 0
}
)
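Note that `DownloadFiles.delete_by_fullpath` never physically removes a row: it flips `state` from 1 to 0, so `get_by_hash(..., state=1)` sees only live files while history stays queryable. The same soft-delete pattern in plain Python (the record dicts below are hypothetical, not the ORM model):

```python
# Hypothetical in-memory stand-in for the DownloadFiles table.
records = [
    {"fullpath": "/downloads/a.mkv", "state": 1},
    {"fullpath": "/downloads/b.mkv", "state": 1},
]


def delete_by_fullpath(fullpath: str):
    # soft delete: mark rather than remove, keeping history queryable
    for r in records:
        if r["fullpath"] == fullpath and r["state"] == 1:
            r["state"] = 0


def live_files():
    # equivalent of querying with state == 1
    return [r["fullpath"] for r in records if r["state"] == 1]


delete_by_fullpath("/downloads/a.mkv")
assert live_files() == ["/downloads/b.mkv"]
```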


@@ -49,6 +49,8 @@ class Subscribe(Base):
state = Column(String, nullable=False, index=True, default='N')
# 最后更新时间
last_update = Column(String)
# 创建时间
date = Column(String)
# 订阅用户
username = Column(String)
# 订阅站点


@@ -85,7 +85,7 @@ class TransferHistory(Base):
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
@staticmethod
def list_by(db: Session, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
"""
根据tmdbid、season、season_episode查询转移记录
@@ -101,19 +101,24 @@ class TransferHistory(Base):
TransferHistory.episodes == episode).all()
# 电视剧所有季集|电影
if not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
# 电视剧某季
if season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
# 电视剧某季某集
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
@staticmethod
def update_download_hash(db: Session, historyid: int = None, download_hash: str = None):
db.query(TransferHistory).filter(TransferHistory.id == historyid).update(
{
"download_hash": download_hash
}
)


@@ -2,7 +2,6 @@ import json
from typing import Any
from app.db import DbOper
from app.db.models.plugin import PluginData
from app.utils.object import ObjectUtils
@@ -12,7 +11,7 @@ class PluginDataOper(DbOper):
插件数据管理
"""
def save(self, plugin_id: str, key: str, value: Any) -> PluginData:
"""
保存插件数据
:param plugin_id: 插件id


@@ -19,7 +19,7 @@ class SiteOper(DbOper):
return True, "新增站点成功"
return False, "站点已存在"
def get(self, sid: int) -> Site:
"""
查询单个站点
"""
@@ -31,7 +31,7 @@ class SiteOper(DbOper):
"""
return Site.list(self._db)
def list_active(self) -> List[Site]:
"""
按状态获取站点列表
"""
@@ -41,9 +41,9 @@ class SiteOper(DbOper):
"""
删除站点
"""
Site.delete(self._db, sid)
def update(self, sid: int, payload: dict) -> Site:
"""
更新站点
"""


@@ -1,3 +1,4 @@
import time
from typing import Tuple, List
from app.core.context import MediaInfo
@@ -26,13 +27,14 @@ class SubscribeOper(DbOper):
backdrop=mediainfo.get_backdrop_image(),
vote=mediainfo.vote_average,
description=mediainfo.overview,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
**kwargs)
subscribe.create(self._db)
return subscribe.id, "新增订阅成功"
else:
return subscribe.id, "订阅已存在"
def exists(self, tmdbid: int, season: int) -> bool:
"""
判断是否存在
"""
@@ -61,7 +63,7 @@ class SubscribeOper(DbOper):
"""
Subscribe.delete(self._db, rid=sid)
def update(self, sid: int, payload: dict):
def update(self, sid: int, payload: dict) -> Subscribe:
"""
更新订阅
"""


@@ -35,18 +35,20 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
self.__SYSTEMCONF[key] = value
# 写入数据库
if ObjectUtils.is_obj(value):
if value is not None:
value = json.dumps(value)
else:
value = ''
value = json.dumps(value)
elif value is None:
value = ''
conf = SystemConfig.get_by_key(self._db, key)
if conf:
conf.update(self._db, {"value": value})
if value:
conf.update(self._db, {"value": value})
else:
conf.delete(self._db, conf.id)
else:
conf = SystemConfig(key=key, value=value)
conf.create(self._db)
def get(self, key: Union[str, SystemConfigKey] = None):
def get(self, key: Union[str, SystemConfigKey] = None) -> Any:
"""
获取系统设置
"""


@@ -1,8 +1,13 @@
import json
import time
from typing import Any
from pathlib import Path
from typing import Any, List
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.db import DbOper
from app.db.models.transferhistory import TransferHistory
from app.schemas import TransferInfo
class TransferHistoryOper(DbOper):
@@ -10,53 +15,48 @@ class TransferHistoryOper(DbOper):
转移历史管理
"""
def get(self, historyid: int) -> Any:
def get(self, historyid: int) -> TransferHistory:
"""
获取转移历史
:param historyid: 转移历史id
"""
return TransferHistory.get(self._db, historyid)
def get_by_title(self, title: str) -> Any:
def get_by_title(self, title: str) -> List[TransferHistory]:
"""
按标题查询转移记录
:param title: 数据key
"""
return TransferHistory.list_by_title(self._db, title)
def get_by_src(self, src: str) -> Any:
def get_by_src(self, src: str) -> TransferHistory:
"""
按源查询转移记录
:param src: 数据key
"""
return TransferHistory.get_by_src(self._db, src)
def add(self, **kwargs):
def add(self, **kwargs) -> TransferHistory:
"""
新增转移历史
"""
if kwargs.get("download_hash"):
transferhistory = TransferHistory.get_by_hash(self._db, kwargs.get("download_hash"))
if transferhistory:
transferhistory.delete(self._db, transferhistory.id)
kwargs.update({
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
})
return TransferHistory(**kwargs).create(self._db)
def statistic(self, days: int = 7):
def statistic(self, days: int = 7) -> List[Any]:
"""
统计最近days天的下载历史数量
"""
return TransferHistory.statistic(self._db, days)
def get_by(self, mtype: str = None, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid: str = None) -> Any:
def get_by(self, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid: str = None) -> List[TransferHistory]:
"""
按类型、标题、年份、季集查询转移记录
"""
return TransferHistory.list_by(db=self._db,
mtype=mtype,
title=title,
year=year,
season=season,
@@ -75,8 +75,87 @@ class TransferHistoryOper(DbOper):
"""
TransferHistory.truncate(self._db)
def add_force(self, **kwargs):
def add_force(self, **kwargs) -> TransferHistory:
"""
新增转移历史
新增转移历史,相同源目录的记录会被删除
"""
return TransferHistory(**kwargs).create(self._db)
if kwargs.get("src"):
transferhistory = TransferHistory.get_by_src(self._db, kwargs.get("src"))
if transferhistory:
transferhistory.delete(self._db, transferhistory.id)
kwargs.update({
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
})
return TransferHistory(**kwargs).create(self._db)
def update_download_hash(self, historyid, download_hash):
"""
补充转移记录download_hash
"""
TransferHistory.update_download_hash(self._db, historyid, download_hash)
def add_success(self, src_path: Path, mode: str, meta: MetaBase,
mediainfo: MediaInfo, transferinfo: TransferInfo,
download_hash: str = None):
"""
新增转移成功历史记录
"""
self.add_force(
src=str(src_path),
dest=str(transferinfo.target_path),
mode=mode,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=1,
files=json.dumps(transferinfo.file_list)
)
def add_fail(self, src_path: Path, mode: str, meta: MetaBase, mediainfo: MediaInfo = None,
transferinfo: TransferInfo = None, download_hash: str = None):
"""
新增转移失败历史记录
"""
if mediainfo and transferinfo:
his = self.add_force(
src=str(src_path),
dest=str(transferinfo.target_path),
mode=mode,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title or meta.name,
year=mediainfo.year or meta.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=0,
errmsg=transferinfo.message or '未知错误',
files=json.dumps(transferinfo.file_list)
)
else:
his = self.add_force(
title=meta.name,
year=meta.year,
src=str(src_path),
mode=mode,
seasons=meta.season,
episodes=meta.episode,
download_hash=download_hash,
status=0,
errmsg="未识别到媒体信息"
)
return his

app/helper/format.py Normal file

@@ -0,0 +1,108 @@
import re
from typing import Tuple, Optional
import parse
class FormatParser(object):
_key = ""
_split_chars = r"\.|\s+|\(|\)|\[|]|-|\+|【|】|/||;|&|\||#|_|「|」|~"
def __init__(self, eformat: str, details: str = None, part: str = None,
offset: int = None, key: str = "ep"):
"""
:param eformat: 格式化字符串
:param details: 格式化详情
:param part: 分集
:param offset: 偏移量
:param key: EP关键字
"""
self._format = eformat
self._start_ep = None
self._end_ep = None
self._part = None
if part:
self._part = part
if details:
if re.compile("\\d{1,4}-\\d{1,4}").match(details):
self._start_ep = details
self._end_ep = details
else:
tmp = details.split(",")
if len(tmp) > 1:
self._start_ep = int(tmp[0])
self._end_ep = int(tmp[0]) if int(tmp[0]) > int(tmp[1]) else int(tmp[1])
else:
self._start_ep = self._end_ep = int(tmp[0])
self.__offset = int(offset) if offset else 0
self._key = key
@property
def format(self):
return self._format
@property
def start_ep(self):
return self._start_ep
@property
def end_ep(self):
return self._end_ep
@property
def part(self):
return self._part
@property
def offset(self):
return self.__offset
def match(self, file: str) -> bool:
if not self._format:
return True
s, e = self.__handle_single(file)
if not s:
return False
if self._start_ep is None:
return True
if self._start_ep <= s <= self._end_ep:
return True
return False
def split_episode(self, file_name: str) -> Tuple[Optional[int], Optional[int], Optional[str]]:
"""
拆分集数,返回开始集数、结束集数、Part信息
"""
# 指定的具体集数,直接返回
if self._start_ep is not None and self._start_ep == self._end_ep:
if isinstance(self._start_ep, str):
s, e = self._start_ep.split("-")
if int(s) == int(e):
return int(s) + self.__offset, None, self.part
return int(s) + self.__offset, int(e) + self.__offset, self.part
return self._start_ep + self.__offset, None, self.part
if not self._format:
return None, None, None
s, e = self.__handle_single(file_name)
return s + self.__offset if s is not None else None, \
e + self.__offset if e is not None else None, self.part
def __handle_single(self, file: str) -> Tuple[Optional[int], Optional[int]]:
"""
处理单集,返回单集的开始和结束集数
"""
if not self._format:
return None, None
ret = parse.parse(self._format, file)
if not ret or not ret.__contains__(self._key):
return None, None
episodes = ret.__getitem__(self._key)
if not re.compile(r"^(EP)?(\d{1,4})(-(EP)?(\d{1,4}))?$", re.IGNORECASE).match(episodes):
return None, None
episode_splits = list(filter(lambda x: re.compile(r'[a-zA-Z]*\d{1,4}', re.IGNORECASE).match(x),
re.split(r'%s' % self._split_chars, episodes)))
if len(episode_splits) == 1:
return int(re.compile(r'[a-zA-Z]*', re.IGNORECASE).sub("", episode_splits[0])), None
else:
return int(re.compile(r'[a-zA-Z]*', re.IGNORECASE).sub("", episode_splits[0])), int(
re.compile(r'[a-zA-Z]*', re.IGNORECASE).sub("", episode_splits[1]))
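A condensed, stdlib-only sketch of the episode-token handling at the end of `__handle_single` (the split characters are reduced to a small subset here for illustration):

```python
import re

SPLIT_CHARS = r"\.|\s+|-"  # simplified subset of _split_chars
EP_TOKEN = re.compile(r"^(EP)?(\d{1,4})(-(EP)?(\d{1,4}))?$", re.IGNORECASE)

def split_episodes(token: str):
    """Validate an episode token such as 'EP01-EP02' and return (start, end)."""
    if not EP_TOKEN.match(token):
        return None, None
    parts = [p for p in re.split(SPLIT_CHARS, token)
             if re.match(r"[a-zA-Z]*\d{1,4}", p)]

    def strip_prefix(s: str) -> int:
        # Drop an optional "EP"/letter prefix before converting to int.
        return int(re.sub(r"[a-zA-Z]*", "", s))

    if len(parts) == 1:
        return strip_prefix(parts[0]), None
    return strip_prefix(parts[0]), strip_prefix(parts[1])
```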


@@ -1,11 +1,12 @@
import base64
from app.core.config import settings
from app.utils.http import RequestUtils
class OcrHelper:
_ocr_b64_url = "https://nastool.org/captcha/base64"
_ocr_b64_url = f"{settings.OCR_HOST}/captcha/base64"
def get_captcha_text(self, image_url=None, image_b64=None, cookie=None, ua=None):
"""


@@ -130,19 +130,24 @@ class TorrentHelper:
"""
获取种子文件的文件夹名和文件清单
:param torrent_path: 种子文件路径
:return: 文件夹名、文件清单
:return: 文件夹名、文件清单,单文件种子返回空文件夹名
"""
if not torrent_path or not torrent_path.exists():
return "", []
try:
torrentinfo = Torrent.from_file(torrent_path)
# 获取目录名
folder_name = torrentinfo.name
# 获取文件清单
if len(torrentinfo.files) <= 1:
if (not torrentinfo.files
or (len(torrentinfo.files) == 1
and torrentinfo.files[0].name == torrentinfo.name)):
# 单文件种子目录名返回空
folder_name = ""
# 单文件种子
file_list = [torrentinfo.name]
else:
# 目录名
folder_name = torrentinfo.name
# 文件清单
file_list = [fileinfo.name for fileinfo in torrentinfo.files]
logger.debug(f"{torrent_path.stem} -> 目录:{folder_name},文件清单:{file_list}")
return folder_name, file_list
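The fix treats a torrent as single-file when the file list is empty or when the only entry carries the torrent's own name; a minimal sketch of that decision, with plain `name`/`files` arguments standing in for torrentool's `Torrent` object:

```python
from typing import List, Tuple

def folder_and_files(name: str, files: List[str]) -> Tuple[str, List[str]]:
    # Single-file torrent: no file entries, or the only entry is the
    # torrent name itself -> return an empty folder name.
    if not files or (len(files) == 1 and files[0] == name):
        return "", [name]
    # Multi-file torrent: the torrent name is the folder.
    return name, files
```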
@@ -188,7 +193,12 @@ class TorrentHelper:
# 季数
_season_len = str(len(_meta.season_list)).rjust(2, '0')
# 集数
_episode_len = str(9999 - len(_meta.episode_list)).rjust(4, '0')
if not _meta.episode_list:
# 无集数的排最前面
_episode_len = "9999"
else:
# 集数越多的排越前面
_episode_len = str(len(_meta.episode_list)).rjust(4, '0')
# 优先规则
priority = self.system_config.get(SystemConfigKey.TorrentsPriority)
if priority != "site":


@@ -1,6 +1,8 @@
import logging
from logging.handlers import RotatingFileHandler
import click
from app.core.config import settings
# logger
@@ -21,12 +23,31 @@ file_handler = RotatingFileHandler(filename=settings.LOG_PATH / 'moviepilot.log'
backupCount=3,
encoding='utf-8')
file_handler.setLevel(logging.INFO)
level_name_colors = {
logging.DEBUG: lambda level_name: click.style(str(level_name), fg="cyan"),
logging.INFO: lambda level_name: click.style(str(level_name), fg="green"),
logging.WARNING: lambda level_name: click.style(str(level_name), fg="yellow"),
logging.ERROR: lambda level_name: click.style(str(level_name), fg="red"),
logging.CRITICAL: lambda level_name: click.style(
str(level_name), fg="bright_red"
),
}
# 定义日志输出格式
formatter = logging.Formatter("%(asctime)s - %(filename)s -【%(levelname)s】%(message)s")
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)
class CustomFormatter(logging.Formatter):
def format(self, record):
separator = " " * (8 - len(record.levelname))
record.leveltext = level_name_colors[record.levelno](record.levelname + ":") + separator
return super().format(record)
# 将Handler添加到Logger
# 终端日志
console_formatter = CustomFormatter("%(leveltext)s%(filename)s - %(message)s")
console_handler.setFormatter(console_formatter)
logger.addHandler(console_handler)
# 文件日志
file_formatter = CustomFormatter("【%(levelname)s】%(asctime)s - %(filename)s - %(message)s")
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)


@@ -166,22 +166,42 @@ class DoubanModule(_ModuleBase):
"""
if settings.SCRAP_SOURCE != "douban":
return None
# 目录下的所有文件
for file in SystemUtils.list_files_with_extensions(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
try:
meta = MetaInfo(file.stem)
if not meta.name:
if SystemUtils.is_bluray_dir(path):
# 蓝光原盘
logger.info(f"开始刮削蓝光原盘:{path} ...")
meta = MetaInfo(path.stem)
if not meta.name:
return
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title, year=mediainfo.year, season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
return
scrape_path = path / path.name
self.scraper.gen_scraper_files(meta=meta,
mediainfo=MediaInfo(douban_info=doubaninfo),
file_path=scrape_path)
else:
# 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title, year=mediainfo.year, season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
break
# 刮削
self.scraper.gen_scraper_files(meta, MediaInfo(douban_info=doubaninfo), file)
except Exception as e:
logger.error(f"刮削文件 {file} 失败,原因:{e}")
logger.info(f"{file} 刮削完成")
logger.info(f"开始刮削媒体库文件:{file} ...")
try:
meta = MetaInfo(file.stem)
if not meta.name:
continue
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title,
year=mediainfo.year,
season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
break
# 刮削
self.scraper.gen_scraper_files(meta=meta,
mediainfo=MediaInfo(douban_info=doubaninfo),
file_path=file)
except Exception as e:
logger.error(f"刮削文件 {file} 失败,原因:{e}")
logger.info(f"{path} 刮削完成")


@@ -17,7 +17,7 @@ class DoubanScraper:
生成刮削文件
:param meta: 元数据
:param mediainfo: 媒体信息
:param file_path: 文件路径
:param file_path: 文件路径或者目录路径
"""
try:


@@ -23,6 +23,14 @@ class EmbyModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "emby"
def scheduler_job(self) -> None:
"""
定时任务,每10分钟调用一次
"""
# 定时重连
if self.emby.is_inactive():
self.emby = Emby()
def user_authenticate(self, name: str, password: str) -> Optional[str]:
"""
使用Emby用户辅助完成用户认证


@@ -24,8 +24,16 @@ class Emby(metaclass=Singleton):
if not self._host.startswith("http"):
self._host = "http://" + self._host
self._apikey = settings.EMBY_API_KEY
self._user = self.get_user()
self._folders = self.get_emby_folders()
self.user = self.get_user()
self.folders = self.get_emby_folders()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._apikey:
return False
return True if not self.user else False
def get_emby_folders(self) -> List[dict]:
"""
@@ -51,7 +59,7 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return []
req_url = f"{self._host}emby/Users/{self._user}/Views?api_key={self._apikey}"
req_url = f"{self._host}emby/Users/{self.user}/Views?api_key={self._apikey}"
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -318,7 +326,7 @@ class Emby(metaclass=Singleton):
if not item_id:
return {}
# 验证tmdbid是否相同
item_tmdbid = self.get_iteminfo(item_id).get("ProviderIds", {}).get("Tmdb")
item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
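The `(… or {})` change matters because `dict.get(key, default)` only falls back when the key is absent; a key that is present with a `None` value still returns `None`:

```python
item = {"ProviderIds": None}  # key present, but the server sent null

# Old form: the default is ignored because the key exists.
old_value = item.get("ProviderIds", {})  # None -> chaining .get() would raise
# New form: normalize None to an empty dict before chaining.
new_value = (item.get("ProviderIds") or {}).get("Tmdb")  # safe, yields None
```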
@@ -452,7 +460,7 @@ class Emby(metaclass=Singleton):
return None
# 查找需要刷新的媒体库ID
item_path = Path(item.target_path)
for folder in self._folders:
for folder in self.folders:
# 找同级路径最多的媒体库(要求容器内映射路径与实际一致)
max_comm_path = ""
match_num = 0
@@ -494,7 +502,7 @@ class Emby(metaclass=Singleton):
return {}
if not self._host or not self._apikey:
return {}
req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self._user, itemid, self._apikey)
req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self.user, itemid, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -511,7 +519,7 @@ class Emby(metaclass=Singleton):
yield {}
if not self._host or not self._apikey:
yield {}
req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self._user, parent, self._apikey)
req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -849,9 +857,9 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host)\
.replace("{APIKEY}", self._apikey)\
.replace("{USER}", self._user)
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
try:
return RequestUtils().get_res(url=url)
except Exception as e:


@@ -47,7 +47,10 @@ class FanartModule(_ModuleBase):
continue
# 按欢迎程度倒排
images.sort(key=lambda x: int(x.get('likes', 0)), reverse=True)
mediainfo.set_image(self.__name(name), images[0].get('url'))
# 图片属性xx_path
image_name = self.__name(name)
if not mediainfo.get_image(image_name):
mediainfo.set_image(image_name, images[0].get('url'))
return mediainfo
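The guard above makes Fanart a fallback rather than an overwrite; the idea in isolation (attribute names and URLs here are hypothetical):

```python
def set_if_missing(images: dict, name: str, url: str) -> None:
    # Fanart now only fills image slots that an earlier data source
    # (e.g. TMDB) has not already populated.
    if not images.get(name):
        images[name] = url

images = {"poster_path": "tmdb.jpg"}
set_if_missing(images, "poster_path", "fanart.jpg")    # kept as tmdb.jpg
set_if_missing(images, "backdrop_path", "fanart.jpg")  # filled by fanart
```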


@@ -5,15 +5,15 @@ from typing import Optional, List, Tuple, Union
from jinja2 import Template
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.schemas import TransferInfo
from app.utils.system import SystemUtils
from app.schemas.types import MediaType
from app.utils.system import SystemUtils
lock = Lock()
@@ -29,15 +29,15 @@ class FileTransferModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def transfer(self, path: Path, mediainfo: MediaInfo,
transfer_type: str, target: Path = None, meta: MetaBase = None) -> TransferInfo:
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None) -> TransferInfo:
"""
文件转移
:param path: 文件路径
:param meta: 预识别的元数据,仅单文件转移时传递
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移方式
:param target: 目标路径
:param meta: 预识别的元数据,仅单文件转移时传递
:return: {path, target_path, message}
"""
# 获取目标路径
@@ -48,10 +48,10 @@ class FileTransferModule(_ModuleBase):
return TransferInfo(message="未找到媒体库目录,无法转移文件")
# 转移
return self.transfer_media(in_path=path,
in_meta=meta,
mediainfo=mediainfo,
transfer_type=transfer_type,
target_dir=target,
in_meta=meta)
target_dir=target)
@staticmethod
def __transfer_command(file_item: Path, target_file: Path, transfer_type: str) -> int:
@@ -121,7 +121,7 @@ class FileTransferModule(_ModuleBase):
# 比对文件名并转移字幕
org_dir: Path = org_path.parent
file_list: List[Path] = SystemUtils.list_files_with_extensions(org_dir, settings.RMT_SUBEXT)
file_list: List[Path] = SystemUtils.list_files(org_dir, settings.RMT_SUBEXT)
if len(file_list) == 0:
logger.debug(f"{org_dir} 目录下没有找到字幕文件...")
else:
@@ -207,7 +207,7 @@ class FileTransferModule(_ModuleBase):
"""
dir_name = org_path.parent
file_name = org_path.name
file_list: List[Path] = SystemUtils.list_files_with_extensions(dir_name, ['.mka'])
file_list: List[Path] = SystemUtils.list_files(dir_name, ['.mka'])
pending_file_list: List[Path] = [file for file in file_list if org_path.stem == file.stem]
if len(pending_file_list) == 0:
logger.debug(f"{dir_name} 目录下没有找到匹配的音轨文件")
@@ -236,9 +236,9 @@ class FileTransferModule(_ModuleBase):
logger.error(f"音轨文件 {file_name} {transfer_type}失败:{reason}")
return 0
def __transfer_bluray_dir(self, file_path: Path, new_path: Path, transfer_type: str) -> int:
def __transfer_dir(self, file_path: Path, new_path: Path, transfer_type: str) -> int:
"""
转移蓝光文件夹
转移整个文件夹
:param file_path: 原路径
:param new_path: 新路径
:param transfer_type: RmtMode转移方式
@@ -257,14 +257,18 @@ class FileTransferModule(_ModuleBase):
def __transfer_dir_files(self, src_dir: Path, target_dir: Path, transfer_type: str) -> int:
"""
按目录结构转移所有文件
按目录结构转移目录下所有文件
:param src_dir: 原路径
:param target_dir: 新路径
:param transfer_type: RmtMode转移方式
"""
retcode = 0
for file in src_dir.glob("**/*"):
new_file = target_dir.with_name(src_dir.name)
# 过滤掉目录
if file.is_dir():
continue
# 使用target_dir的父目录作为新的父目录
new_file = target_dir.joinpath(file.relative_to(src_dir))
if new_file.exists():
logger.warn(f"{new_file} 文件已存在")
continue
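The old code computed one fixed target with `with_name` outside the per-file view; the fix rebuilds each file's path relative to the source root, which preserves nested sub-directories (paths below are made up for illustration):

```python
from pathlib import Path

src_dir = Path("/downloads/Show")        # hypothetical source root
target_dir = Path("/library/Show")       # hypothetical destination root
file = Path("/downloads/Show/Season 1/e01.mkv")

# Rebuild the file's path relative to the source root so the
# sub-directory layout survives under the new root.
new_file = target_dir.joinpath(file.relative_to(src_dir))
```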
@@ -310,33 +314,20 @@ class FileTransferModule(_ModuleBase):
transfer_type=transfer_type,
over_flag=over_flag)
@staticmethod
def __is_bluray_dir(dir_path: Path) -> bool:
"""
判断是否为蓝光原盘目录
"""
# 蓝光原盘目录必备的文件或文件夹
required_files = ['BDMV', 'CERTIFICATE']
# 检查目录下是否存在所需文件或文件夹
for item in required_files:
if (dir_path / item).exists():
return True
return False
def transfer_media(self,
in_path: Path,
in_meta: MetaBase,
mediainfo: MediaInfo,
transfer_type: str,
target_dir: Path = None,
in_meta: MetaBase = None
target_dir: Path,
) -> TransferInfo:
"""
识别并转移一个文件、多个文件或者目录
识别并转移一个文件或者一个目录下的所有文件
:param in_path: 转移的路径,可能是一个文件也可以是一个目录
:param in_meta: 预识别元数据,仅单文件转移时传递
:param mediainfo: 媒体信息
:param target_dir: 目的文件夹,非空的转移到该文件夹,为空时则按类型转移到配置文件中的媒体库文件夹
:param transfer_type: 文件转移方式
:param in_meta: 预识别元数据,为空则重新识别
:return: TransferInfo、错误信息
"""
# 检查目录路径
@@ -347,6 +338,7 @@ class FileTransferModule(_ModuleBase):
return TransferInfo(message=f"{target_dir} 目标路径不存在")
if mediainfo.type == MediaType.MOVIE:
# 电影
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
@@ -354,7 +346,14 @@ class FileTransferModule(_ModuleBase):
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
if settings.LIBRARY_TV_NAME:
# 电视剧
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# 电视剧
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
@@ -364,134 +363,84 @@ class FileTransferModule(_ModuleBase):
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 总大小
total_filesize = 0
# 处理文件清单
file_list = []
# 目标文件清单
file_list_new = []
# 失败文件清单
fail_list = []
# 错误信息
err_msgs = []
# 判断是否为蓝光原盘
bluray_flag = self.__is_bluray_dir(in_path)
if bluray_flag:
# 识别目录名称,不包括后缀
meta = MetaInfo(in_path.stem)
# 判断是否为文件夹
if in_path.is_dir():
# 转移整个目录
# 是否蓝光原盘
bluray_flag = SystemUtils.is_bluray_dir(in_path)
if bluray_flag:
logger.info(f"{in_path} 是蓝光原盘文件夹")
# 目的路径
new_path = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=meta,
rename_dict=self.__get_naming_dict(meta=in_meta,
mediainfo=mediainfo)
).parent
# 转移蓝光原盘
retcode = self.__transfer_bluray_dir(file_path=in_path,
new_path=new_path,
transfer_type=transfer_type)
retcode = self.__transfer_dir(file_path=in_path,
new_path=new_path,
transfer_type=transfer_type)
if retcode != 0:
return TransferInfo(message=f"{retcode},蓝光原盘转移失败")
else:
# 计算大小
total_filesize += in_path.stat().st_size
# 返回转移后的路径
return TransferInfo(path=in_path,
target_path=new_path,
total_size=total_filesize,
is_bluray=bluray_flag,
file_list=[],
file_list_new=[])
else:
# 获取文件清单
transfer_files: List[Path] = SystemUtils.list_files_with_extensions(in_path, settings.RMT_MEDIAEXT)
if len(transfer_files) == 0:
return TransferInfo(message=f"{in_path} 目录下没有找到可转移的文件")
if not in_meta:
# 识别目录名称,不包括后缀
meta = MetaInfo(in_path.stem)
else:
meta = in_meta
# 目的路径
new_path = target_dir / (self.get_rename_path(
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=meta,
mediainfo=mediainfo)).parents[-2].name)
# 转移所有文件
for transfer_file in transfer_files:
try:
if not in_meta:
# 识别文件元数据,不包含后缀
file_meta = MetaInfo(transfer_file.stem)
# 合并元数据
file_meta.merge(meta)
else:
file_meta = in_meta
# 文件结束季为空
file_meta.end_season = None
# 文件总季数为1
if file_meta.total_season:
file_meta.total_season = 1
# 文件不可能有多集
if file_meta.total_episode > 2:
file_meta.total_episode = 1
file_meta.end_episode = None
# 目的文件名
new_file = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=file_meta,
mediainfo=mediainfo,
file_ext=transfer_file.suffix)
)
# 判断是否要覆盖
overflag = False
if new_file.exists():
if new_file.stat().st_size < transfer_file.stat().st_size:
logger.info(f"目标文件已存在,但文件大小更小,将覆盖:{new_file}")
overflag = True
# 转移文件
retcode = self.__transfer_file(file_item=transfer_file,
new_file=new_file,
transfer_type=transfer_type,
over_flag=overflag)
if retcode != 0:
logger.error(f"{transfer_file} 转移文件失败,错误码:{retcode}")
err_msgs.append(f"{transfer_file.name}:错误码 {retcode}")
fail_list.append(transfer_file)
continue
# 源文件清单
file_list.append(str(transfer_file))
# 目的文件清单
file_list_new.append(str(new_file))
# 计算总大小
total_filesize += transfer_file.stat().st_size
except Exception as err:
err_msgs.append(f"{transfer_file.name}{err}")
logger.error(f"{transfer_file}转移失败:{err}")
fail_list.append(transfer_file)
if not file_list:
# 没有成功的
return TransferInfo(message="\n".join(err_msgs))
logger.error(f"文件夹 {in_path} 转移失败,错误码:{retcode}")
return TransferInfo(message=f"文件夹 {in_path} 转移失败,错误码:{retcode}")
logger.info(f"文件夹 {in_path} 转移成功")
# 返回转移后的路径
return TransferInfo(path=in_path,
target_path=new_path,
message="\n".join(err_msgs),
file_count=len(file_list),
total_size=total_filesize,
fail_list=fail_list,
is_bluray=bluray_flag,
file_list=file_list,
file_list_new=file_list_new)
total_size=new_path.stat().st_size,
is_bluray=bluray_flag)
else:
# 转移单个文件
# 文件结束季为空
in_meta.end_season = None
# 文件总季数为1
if in_meta.total_season:
in_meta.total_season = 1
# 文件不可能有多集
if in_meta.total_episode > 2:
in_meta.total_episode = 1
in_meta.end_episode = None
# 目的文件名
new_file = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(
meta=in_meta,
mediainfo=mediainfo,
file_ext=in_path.suffix
)
)
# 判断是否要覆盖
overflag = False
if new_file.exists():
if new_file.stat().st_size < in_path.stat().st_size:
logger.info(f"目标文件已存在,但文件大小更小,将覆盖:{new_file}")
overflag = True
# 转移文件
retcode = self.__transfer_file(file_item=in_path,
new_file=new_file,
transfer_type=transfer_type,
over_flag=overflag)
if retcode != 0:
logger.error(f"文件 {in_path} 转移失败,错误码:{retcode}")
return TransferInfo(message=f"文件 {in_path.name} 转移失败,错误码:{retcode}",
fail_list=[str(in_path)])
logger.info(f"文件 {in_path} 转移成功")
return TransferInfo(path=in_path,
target_path=new_file,
file_count=1,
total_size=new_file.stat().st_size,
is_bluray=False,
file_list=[str(in_path)],
file_list_new=[str(new_file)])
@staticmethod
def __get_naming_dict(meta: MetaBase, mediainfo: MediaInfo, file_ext: str = None) -> dict:
@@ -505,7 +454,7 @@ class FileTransferModule(_ModuleBase):
# 标题
"title": mediainfo.title,
# 原文件名
"original_name": meta.org_string,
"original_name": f"{meta.org_string}{file_ext}",
# 原语种标题
"original_title": mediainfo.original_title,
# 识别名称
@@ -579,6 +528,7 @@ class FileTransferModule(_ModuleBase):
max_length = len(relative)
target_path = path
except Exception as e:
logger.debug(f"计算目标路径时出错:{e}")
continue
if target_path:
return Path(target_path)


@@ -15,10 +15,10 @@ class FilterModule(_ModuleBase):
# 内置规则集
rule_set: Dict[str, dict] = {
# 蓝光
# 蓝光原盘
"BLU": {
"include": [r'Blu-?Ray.+VC-?1|Blu-?Ray.+AVC|UHD.+blu-?ray.+HEVC|MiniBD'],
"exclude": []
"exclude": [r'[Hx].?264|[Hx].?265|WEB-?DL|WEB-?RIP|REMUX']
},
# 4K
"4K": {
@@ -57,12 +57,12 @@ class FilterModule(_ModuleBase):
},
# H265
"H265": {
"include": [r'[Hx].?265'],
"include": [r'[Hx].?265|HEVC'],
"exclude": []
},
# H264
"H264": {
"include": [r'[Hx].?264'],
"include": [r'[Hx].?264|AVC'],
"exclude": []
},
# 杜比
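The widened include patterns can be checked against sample release titles (titles are made up; `re.search` with `IGNORECASE` is assumed to be how the filter applies these rules):

```python
import re

# Updated built-in rules: HEVC/AVC aliases now count as H265/H264.
H265 = r'[Hx].?265|HEVC'
H264 = r'[Hx].?264|AVC'

def matches(rule: str, title: str) -> bool:
    return bool(re.search(rule, title, re.IGNORECASE))

hevc_hit = matches(H265, "Show.S01.2160p.WEB-DL.HEVC-GRP")  # new alias hit
x264_hit = matches(H264, "Show.S01.1080p.x264-GRP")
cross = matches(H265, "Show.S01.1080p.x264-GRP")            # no false positive
```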


@@ -28,13 +28,14 @@ class IndexerModule(_ModuleBase):
return "INDEXER", "builtin"
def search_torrents(self, site: CommentedMap, mediainfo: MediaInfo = None,
keyword: str = None, page: int = 0) -> List[TorrentInfo]:
keyword: str = None, page: int = 0, area: str = "title") -> List[TorrentInfo]:
"""
搜索一个站点
:param mediainfo: 识别的媒体信息
:param site: 站点
:param keyword: 搜索关键词,如有按关键词搜索,否则按媒体信息名称搜索
:param page: 页码
:param area: 搜索区域 title or imdbid
:return: 资源列表
"""
# 确认搜索的名字
@@ -52,15 +53,20 @@ class IndexerModule(_ModuleBase):
logger.warn(f"{site.get('name')} 不支持中文搜索")
return []
# 去除搜索关键字中的特殊字符
if search_word:
search_word = StringUtils.clear(search_word, replace_word=" ", allow_space=True)
# 开始索引
result_array = []
# 开始计时
start_time = datetime.now()
try:
imdbid = mediainfo.imdb_id if mediainfo and area == "imdbid" else None
if site.get('parser') == "TNodeSpider":
error_flag, result_array = TNodeSpider(site).search(
keyword=search_word,
# imdbid=mediainfo.imdb_id if mediainfo else None,
imdbid=imdbid,
page=page
)
elif site.get('parser') == "TorrentLeech":
@@ -71,7 +77,7 @@ class IndexerModule(_ModuleBase):
else:
error_flag, result_array = self.__spider_search(
keyword=search_word,
# imdbid=mediainfo.imdb_id if mediainfo else None,
imdbid=imdbid,
indexer=site,
mtype=mediainfo.type if mediainfo else None,
page=page


@@ -262,7 +262,12 @@ class TorrentSpider:
# 解码为字符串
page_source = raw_data.decode(encoding)
except Exception as e:
logger.error(f"chardet解码失败{e}")
logger.debug(f"chardet解码失败{e}")
# 探测utf-8解码
if re.search(r"charset=\"?utf-8\"?", ret.text, re.IGNORECASE):
ret.encoding = "utf-8"
else:
ret.encoding = ret.apparent_encoding
page_source = ret.text
else:
page_source = ret.text
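The fallback order added above can be sketched as a pure function (`apparent` stands in for requests' chardet-based `apparent_encoding`):

```python
import re

def pick_encoding(body_text: str, apparent: str) -> str:
    # Prefer utf-8 when the page declares it in a charset attribute,
    # otherwise trust the detector's guess.
    if re.search(r"charset=\"?utf-8\"?", body_text, re.IGNORECASE):
        return "utf-8"
    return apparent

enc = pick_encoding('<meta charset="utf-8">', "gbk")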


@@ -17,12 +17,20 @@ class JellyfinModule(_ModuleBase):
def init_module(self) -> None:
self.jellyfin = Jellyfin()
def stop(self):
pass
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "jellyfin"
def scheduler_job(self) -> None:
"""
定时任务,每10分钟调用一次
"""
# 定时重连
if self.jellyfin.is_inactive():
self.jellyfin = Jellyfin()
def stop(self):
pass
def user_authenticate(self, name: str, password: str) -> Optional[str]:
"""
使用Jellyfin用户辅助完成用户认证


@@ -22,8 +22,16 @@ class Jellyfin(metaclass=Singleton):
if not self._host.startswith("http"):
self._host = "http://" + self._host
self._apikey = settings.JELLYFIN_API_KEY
self._user = self.get_user()
self._serverid = self.get_server_id()
self.user = self.get_user()
self.serverid = self.get_server_id()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._apikey:
return False
return True if not self.user else False
def __get_jellyfin_librarys(self) -> List[dict]:
"""
@@ -31,7 +39,7 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return []
req_url = f"{self._host}Users/{self._user}/Views?api_key={self._apikey}"
req_url = f"{self._host}Users/{self.user}/Views?api_key={self._apikey}"
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -222,10 +230,10 @@ class Jellyfin(metaclass=Singleton):
"""
根据名称查询Jellyfin中剧集的SeriesId
"""
if not self._host or not self._apikey or not self._user:
if not self._host or not self._apikey or not self.user:
return None
req_url = "%sUsers/%s/Items?api_key=%s&searchTerm=%s&IncludeItemTypes=Series&Limit=10&Recursive=true" % (
self._host, self._user, self._apikey, name)
self._host, self.user, self._apikey, name)
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -247,10 +255,10 @@ class Jellyfin(metaclass=Singleton):
:param year: 年份,为空则不过滤
:return: 含title、year属性的字典列表
"""
if not self._host or not self._apikey or not self._user:
if not self._host or not self._apikey or not self.user:
return None
req_url = "%sUsers/%s/Items?api_key=%s&searchTerm=%s&IncludeItemTypes=Movie&Limit=10&Recursive=true" % (
self._host, self._user, self._apikey, title)
self._host, self.user, self._apikey, title)
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -283,7 +291,7 @@ class Jellyfin(metaclass=Singleton):
:param season: 季
:return: 集号的列表
"""
if not self._host or not self._apikey or not self._user:
if not self._host or not self._apikey or not self.user:
return None
# 查TVID
if not item_id:
@@ -293,7 +301,7 @@ class Jellyfin(metaclass=Singleton):
if not item_id:
return {}
# 验证tmdbid是否相同
item_tmdbid = self.get_iteminfo(item_id).get("ProviderIds", {}).get("Tmdb")
item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
@@ -301,7 +309,7 @@ class Jellyfin(metaclass=Singleton):
season = ""
try:
req_url = "%sShows/%s/Episodes?season=%s&&userId=%s&isMissing=false&api_key=%s" % (
self._host, item_id, season, self._user, self._apikey)
self._host, item_id, season, self.user, self._apikey)
res_json = RequestUtils().get_res(req_url)
if res_json:
res_items = res_json.json().get("Items")
@@ -400,7 +408,7 @@ class Jellyfin(metaclass=Singleton):
if not self._host or not self._apikey:
return {}
req_url = "%sUsers/%s/Items/%s?api_key=%s" % (
self._host, self._user, itemid, self._apikey)
self._host, self.user, itemid, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -417,7 +425,7 @@ class Jellyfin(metaclass=Singleton):
yield {}
if not self._host or not self._apikey:
yield {}
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self._user, parent, self._apikey)
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -454,7 +462,7 @@ class Jellyfin(metaclass=Singleton):
return None
url = url.replace("{HOST}", self._host)\
.replace("{APIKEY}", self._apikey)\
.replace("{USER}", self._user)
.replace("{USER}", self.user)
try:
return RequestUtils().get_res(url=url)
except Exception as e:


@@ -23,6 +23,14 @@ class PlexModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "plex"
def scheduler_job(self) -> None:
"""
定时任务每10分钟调用一次
"""
# 定时重连
if self.plex.is_inactive():
self.plex = Plex()
def webhook_parser(self, body: Any, form: Any, args: Any) -> WebhookEventInfo:
"""
解析Webhook报文体


@@ -30,6 +30,14 @@ class Plex(metaclass=Singleton):
self._plex = None
logger.error(f"Plex服务器连接失败{str(e)}")
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._token:
return False
return True if not self._plex else False
def get_librarys(self):
"""
获取媒体服务器所有媒体库列表
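The `is_inactive`/`scheduler_job` pair added to the Plex, qBittorrent and Transmission modules is a lazy-reconnect pattern: a periodic job rebuilds the client only when configuration is present but the connection object is gone. A minimal standalone sketch, with illustrative names not taken from the project:

```python
class ReconnectingClient:
    """Lazy-reconnect pattern: a periodic job rebuilds the client
    only when configuration exists but the connection has dropped."""

    def __init__(self, host=None, token=None):
        self._host = host
        self._token = token
        # Stand-in for a real connection attempt against the server.
        self._conn = object() if (host and token) else None

    def is_inactive(self) -> bool:
        # Not configured at all -> nothing to reconnect.
        if not self._host or not self._token:
            return False
        # Configured but connection object missing -> reconnect needed.
        return self._conn is None


def scheduler_job(client: ReconnectingClient) -> ReconnectingClient:
    # Called periodically (e.g. every 10 minutes); replaces the client
    # instance when it reports itself inactive.
    if client.is_inactive():
        return ReconnectingClient(client._host, client._token)
    return client
```

Because the modules hold a single client instance, replacing it in the scheduled job is enough; all later calls go through the fresh connection.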


@@ -1,3 +1,4 @@
import shutil
from pathlib import Path
from typing import Set, Tuple, Optional, Union, List
@@ -9,9 +10,10 @@ from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.qbittorrent.qbittorrent import Qbittorrent
from app.schemas import TransferInfo, TransferTorrent, DownloadingTorrent
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class QbittorrentModule(_ModuleBase):
@@ -26,14 +28,23 @@ class QbittorrentModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "DOWNLOADER", "qbittorrent"
def scheduler_job(self) -> None:
"""
定时任务每10分钟调用一次
"""
# 定时重连
if self.qbittorrent.is_inactive():
self.qbittorrent = Qbittorrent()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
episodes: Set[int] = None) -> Optional[Tuple[Optional[str], str]]:
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类
:return: 种子Hash,错误信息
"""
if not torrent_path or not torrent_path.exists():
@@ -51,7 +62,8 @@ class QbittorrentModule(_ModuleBase):
download_dir=str(download_dir),
is_paused=is_paused,
tag=tags,
cookie=cookie)
cookie=cookie,
category=category)
if not state:
return None, f"添加种子任务失败:{torrent_path}"
else:
@@ -153,17 +165,24 @@ class QbittorrentModule(_ModuleBase):
return None
return ret_torrents
def transfer_completed(self, hashs: Union[str, list], transinfo: TransferInfo) -> None:
def transfer_completed(self, hashs: Union[str, list],
path: Path = None) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param transinfo: 转移信息
:param path: 源目录
"""
self.qbittorrent.set_torrents_tag(ids=hashs, tags=['已整理'])
# 移动模式删除种子
if settings.TRANSFER_TYPE == "move":
if self.remove_torrents(hashs):
logger.info(f"移动模式删除种子成功:{hashs} ")
# 删除残留文件
if path and path.exists():
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
if not files:
logger.warn(f"删除残留文件夹:{path}")
shutil.rmtree(path, ignore_errors=True)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
"""
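The residual-file cleanup added to `transfer_completed` deletes the source folder after a move-mode transfer only when no media files remain in it. A hedged sketch of that decision, with a plain suffix scan standing in for `SystemUtils.list_files` (whose exact behavior is assumed):

```python
import shutil
from pathlib import Path

# Stand-in for settings.RMT_MEDIAEXT (illustrative extensions).
MEDIA_EXTS = {".mkv", ".mp4", ".ts"}


def list_media_files(path: Path, exts) -> list:
    # Simplified stand-in for SystemUtils.list_files: recursive scan by suffix.
    return [p for p in path.rglob("*") if p.suffix.lower() in exts]


def cleanup_after_move(path: Path) -> bool:
    """Delete the leftover download folder only if it holds no media files.
    Returns True when the folder was removed."""
    if not path or not path.exists():
        return False
    if list_media_files(path, MEDIA_EXTS):
        return False  # media still present, keep the folder
    shutil.rmtree(path, ignore_errors=True)
    return True
```

This keeps a folder that still contains media (e.g. when only some episodes were transferred) and removes one that only holds leftovers such as `.nfo` or sample files.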


@@ -24,9 +24,17 @@ class Qbittorrent(metaclass=Singleton):
self._host, self._port = StringUtils.get_domain_address(address=settings.QB_HOST, prefix=True)
self._username = settings.QB_USER
self._password = settings.QB_PASSWORD
if self._host and self._port and self._username and self._password:
if self._host and self._port:
self.qbc = self.__login_qbittorrent()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._port:
return False
return True if not self.qbc else False
def __login_qbittorrent(self) -> Optional[Client]:
"""
连接qbittorrent
@@ -83,7 +91,8 @@ class Qbittorrent(metaclass=Singleton):
"""
if not self.qbc:
return None
torrents, error = self.get_torrents(status=["completed"], ids=ids, tags=tags)
# completed会包含移动状态 改为获取seeding状态 包含活动上传, 正在做种, 及强制做种
torrents, error = self.get_torrents(status=["seeding"], ids=ids, tags=tags)
return None if error else torrents or []
def get_downloading_torrents(self, ids: Union[str, list] = None,
@@ -176,6 +185,7 @@ class Qbittorrent(metaclass=Singleton):
is_paused: bool = False,
download_dir: str = None,
tag: Union[str, list] = None,
category: str = None,
cookie=None
) -> bool:
"""
@@ -183,6 +193,7 @@ class Qbittorrent(metaclass=Singleton):
:param content: 种子urls或文件内容
:param is_paused: 添加后暂停
:param tag: 标签
:param category: 种子分类
:param download_dir: 下载路径
:param cookie: 站点Cookie用于辅助下载种子
:return: bool
@@ -190,6 +201,7 @@ class Qbittorrent(metaclass=Singleton):
if not self.qbc or not content:
return False
# 下载内容
if isinstance(content, str):
urls = content
torrent_files = None
@@ -197,20 +209,26 @@ class Qbittorrent(metaclass=Singleton):
urls = None
torrent_files = content
# 保存目录
if download_dir:
save_path = download_dir
is_auto = False
else:
save_path = None
is_auto = None
# 标签
if tag:
tags = tag
else:
tags = None
try:
# 分类自动管理
if category and settings.QB_CATEGORY:
is_auto = True
else:
is_auto = False
category = None
try:
# 添加下载
qbc_ret = self.qbc.torrents_add(urls=urls,
torrent_files=torrent_files,
@@ -218,7 +236,9 @@ class Qbittorrent(metaclass=Singleton):
is_paused=is_paused,
tags=tags,
use_auto_torrent_management=is_auto,
cookie=cookie)
is_sequential_download=True,
cookie=cookie,
category=category)
return True if qbc_ret and str(qbc_ret).find("Ok") != -1 else False
except Exception as err:
logger.error(f"添加种子出错:{err}")
@@ -334,3 +354,15 @@ class Qbittorrent(metaclass=Singleton):
except Exception as err:
logger.error(f"重新校验种子出错:{err}")
return False
def add_trackers(self, ids: Union[str, list], trackers: list):
"""
添加tracker
"""
if not self.qbc:
return False
try:
return self.qbc.torrents_add_trackers(torrent_hashes=ids, urls=trackers)
except Exception as err:
logger.error(f"添加tracker出错:{err}")
return False
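The new category handling enables qBittorrent's automatic torrent management only when both a category is supplied and the `QB_CATEGORY` setting is on; otherwise the category is dropped entirely. A small sketch of that decision (treating `QB_CATEGORY` as a boolean flag, which is an assumption):

```python
from typing import Optional, Tuple


def resolve_category(category: Optional[str],
                     qb_category_enabled: bool) -> Tuple[bool, Optional[str]]:
    """Mirror of the is_auto/category logic in Qbittorrent.add_torrent:
    auto torrent management is used only when a category is given AND
    the QB_CATEGORY setting is enabled; otherwise no category is sent."""
    if category and qb_category_enabled:
        return True, category
    return False, None
```

The `(is_auto, category)` pair then maps onto `use_auto_torrent_management` and `category` in the `torrents_add` call.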


@@ -182,14 +182,14 @@ class SlackModule(_ModuleBase):
return None
@checkMessage(MessageChannel.Slack)
def post_message(self, message: Notification) -> Optional[bool]:
def post_message(self, message: Notification) -> None:
"""
发送消息
:param message: 消息
:return: 成功或失败
"""
return self.slack.send_msg(title=message.title, text=message.text,
image=message.image, userid=message.userid)
self.slack.send_msg(title=message.title, text=message.text,
image=message.image, userid=message.userid)
@checkMessage(MessageChannel.Slack)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:


@@ -146,7 +146,7 @@ class Slack:
# 发送
result = self._client.chat_postMessage(
channel=channel,
text=message_text,
text=message_text[:1000],
blocks=blocks,
mrkdwn=True
)


@@ -50,21 +50,16 @@ class SubtitleModule(_ModuleBase):
logger.info("开始从站点下载字幕:%s" % torrent.page_url)
# 获取种子信息
folder_name, _ = TorrentHelper.get_torrent_info(torrent_path)
# 下载目录,也可能是文件名
download_dir = download_dir / (folder_name or "")
# 等待文件或者目录存在
# 文件保存目录,如果是单文件种子则folder_name是空,此时文件保存目录就是下载目录
download_dir = download_dir / folder_name
# 等待目录存在
for _ in range(30):
if download_dir.exists():
break
time.sleep(1)
# 目录仍然不存在,且是目录则创建目录
if not download_dir.exists() \
and download_dir.suffix not in settings.RMT_MEDIAEXT:
# 目录仍然不存在,且有文件夹名,则创建目录
if not download_dir.exists() and folder_name:
download_dir.mkdir(parents=True, exist_ok=True)
# 不是目录说明是单文件种子,直接使用下载目录
if download_dir.is_file() \
or download_dir.suffix in settings.RMT_MEDIAEXT:
download_dir = download_dir.parent
# 读取网站代码
request = RequestUtils(cookies=torrent.site_cookie, ua=torrent.site_ua)
res = request.get_res(torrent.page_url)
@@ -108,7 +103,7 @@ class SubtitleModule(_ModuleBase):
# 解压文件
shutil.unpack_archive(zip_file, zip_path, format='zip')
# 遍历转移文件
for sub_file in SystemUtils.list_files_with_extensions(zip_path, settings.RMT_SUBEXT):
for sub_file in SystemUtils.list_files(zip_path, settings.RMT_SUBEXT):
target_sub_file = download_dir / sub_file.name
if target_sub_file.exists():
logger.info(f"字幕文件已存在:{target_sub_file}")
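The simplified subtitle path logic relies on a pathlib detail: joining an empty string leaves the path unchanged, so a single-file torrent (empty `folder_name`) keeps subtitles in the download directory itself. A sketch:

```python
from pathlib import Path


def subtitle_save_dir(download_dir: Path, folder_name: str) -> Path:
    """Multi-file torrents get a subfolder named after the torrent;
    single-file torrents have an empty folder_name, and Path / ""
    leaves the path unchanged, so subtitles land in the download dir."""
    return download_dir / (folder_name or "")
```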


@@ -78,28 +78,26 @@ class TelegramModule(_ModuleBase):
and str(user_id) not in settings.TELEGRAM_ADMINS.split(',') \
and str(user_id) != settings.TELEGRAM_CHAT_ID:
self.telegram.send_msg(title="只有管理员才有权限执行此命令", userid=user_id)
return CommingMessage(channel=MessageChannel.Wechat,
userid=user_id, username=user_id, text="")
return None
else:
if settings.TELEGRAM_USERS \
and not str(user_id) in settings.TELEGRAM_USERS.split(','):
logger.info(f"用户{user_id}不在用户白名单中,无法使用此机器人")
self.telegram.send_msg(title="你不在用户白名单中,无法使用此机器人", userid=user_id)
return CommingMessage(channel=MessageChannel.Wechat,
userid=user_id, username=user_id, text="")
return None
return CommingMessage(channel=MessageChannel.Telegram,
userid=user_id, username=user_id, text=text)
userid=user_id, username=user_name, text=text)
return None
@checkMessage(MessageChannel.Telegram)
def post_message(self, message: Notification) -> Optional[bool]:
def post_message(self, message: Notification) -> None:
"""
发送消息
:param message: 消息体
:return: 成功或失败
"""
return self.telegram.send_msg(title=message.title, text=message.text,
image=message.image, userid=message.userid)
self.telegram.send_msg(title=message.title, text=message.text,
image=message.image, userid=message.userid)
@checkMessage(MessageChannel.Telegram)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:


@@ -89,7 +89,7 @@ class TheMovieDbModule(_ModuleBase):
year=meta.year,
mtype=MediaType.TV)
if not info:
# 非严格模式下去掉年份和类型再查一次
# 去掉年份和类型再查一次
info = self.tmdb.match_multi(name=meta.name)
if not info:
@@ -163,10 +163,8 @@ class TheMovieDbModule(_ModuleBase):
results = self.tmdb.search_multiis(meta.name)
else:
if meta.type == MediaType.UNKNOWN:
results = list(
set(self.tmdb.search_movies(meta.name, meta.year))
.union(set(self.tmdb.search_tv_tmdbinfos(meta.name, meta.year)))
)
results = self.tmdb.search_movies(meta.name, meta.year)
results.extend(self.tmdb.search_tvs(meta.name, meta.year))
# 组合结果的情况下要排序
results = sorted(
results,
@@ -176,7 +174,7 @@ class TheMovieDbModule(_ModuleBase):
elif meta.type == MediaType.MOVIE:
results = self.tmdb.search_movies(meta.name, meta.year)
else:
results = self.tmdb.search_tv_tmdbinfos(meta.name, meta.year)
results = self.tmdb.search_tvs(meta.name, meta.year)
return [MediaInfo(tmdb_info=info) for info in results]
@@ -189,14 +187,22 @@ class TheMovieDbModule(_ModuleBase):
"""
if settings.SCRAP_SOURCE != "themoviedb":
return None
# 目录下的所有文件
for file in SystemUtils.list_files_with_extensions(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
if SystemUtils.is_bluray_dir(path):
# 蓝光原盘
logger.info(f"开始刮削蓝光原盘:{path} ...")
scrape_path = path / path.name
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=file)
logger.info(f"{file} 刮削完成")
file_path=scrape_path)
else:
# 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=file)
logger.info(f"{path} 刮削完成")
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str, with_original_language: str,
page: int = 1) -> Optional[List[dict]]:
@@ -389,3 +395,10 @@ class TheMovieDbModule(_ModuleBase):
:param page: 页码
"""
return self.tmdb.get_person_credits(person_id=person_id, page=page)
def clear_cache(self):
"""
清除缓存
"""
self.tmdb.clear_cache()
self.cache.clear()


@@ -2,11 +2,14 @@ import time
from pathlib import Path
from xml.dom import minidom
from requests import RequestException
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.log import logger
from app.schemas.types import MediaType
from app.utils.common import retry
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
@@ -22,7 +25,7 @@ class TmdbScraper:
"""
生成刮削文件
:param mediainfo: 媒体信息
:param file_path: 文件路径
:param file_path: 文件路径或者目录路径
"""
def __get_episode_detail(_seasoninfo: dict, _episode: int):
@@ -37,7 +40,7 @@ class TmdbScraper:
try:
# 电影
if mediainfo.type == MediaType.MOVIE:
# 强制或者不已存在时才处理
# 不已存在时才处理
if not file_path.with_name("movie.nfo").exists() \
and not file_path.with_suffix(".nfo").exists():
# 生成电影描述文件
@@ -271,11 +274,11 @@ class TmdbScraper:
# 添加时间
DomUtils.add_node(doc, root, "dateadded", time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
# TMDBID
uniqueid = DomUtils.add_node(doc, root, "uniqueid", tmdbid)
uniqueid = DomUtils.add_node(doc, root, "uniqueid", str(tmdbid))
uniqueid.setAttribute("type", "tmdb")
uniqueid.setAttribute("default", "true")
# tmdbid
DomUtils.add_node(doc, root, "tmdbid", tmdbid)
DomUtils.add_node(doc, root, "tmdbid", str(tmdbid))
# 标题
DomUtils.add_node(doc, root, "title", episodeinfo.get("name") or "%s" % episode)
# 简介
@@ -312,6 +315,7 @@ class TmdbScraper:
self.__save_nfo(doc, file_path.with_suffix(".nfo"))
@staticmethod
@retry(RequestException, logger=logger)
def __save_image(url: str, file_path: Path):
"""
下载图片并保存
@@ -320,7 +324,7 @@ class TmdbScraper:
return
try:
logger.info(f"正在下载{file_path.stem}图片:{url} ...")
r = RequestUtils().get_res(url=url)
r = RequestUtils().get_res(url=url, raise_exception=True)
if r:
file_path.write_bytes(r.content)
logger.info(f"图片已保存:{file_path}")
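Setting `raise_exception=True` makes a failed download raise instead of returning `None`, which is what lets the new `@retry(RequestException, ...)` decorator re-attempt it. A minimal sketch of such a decorator; the counts and delay are illustrative and the project's `app.utils.common.retry` may differ:

```python
import functools
import time


def retry(exc_type, tries: int = 3, delay: float = 0, logger=None):
    """Re-run the wrapped function up to `tries` times when exc_type is
    raised; the final failure is logged and swallowed (sketch only)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except exc_type as err:
                    if logger:
                        logger.warning(f"attempt {attempt} failed: {err}")
                    if attempt == tries:
                        return None
                    if delay:
                        time.sleep(delay)
        return wrapper
    return decorator
```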


@@ -78,7 +78,7 @@ class TmdbHelper:
ret_infos.append(movie)
return ret_infos
def search_tv_tmdbinfos(self, title: str, year: str) -> List[dict]:
def search_tvs(self, title: str, year: str) -> List[dict]:
"""
查询模糊匹配的所有电视剧TMDB信息
"""
@@ -191,8 +191,7 @@ class TmdbHelper:
if not info:
logger.debug(
f"正在识别{mtype.value}{name}, 年份={year} ...")
info = self.__search_tv_by_name(name,
year)
info = self.__search_tv_by_name(name, year)
if info:
info['media_type'] = MediaType.TV
# 返回
@@ -222,37 +221,28 @@ class TmdbHelper:
logger.debug(f"{name} 未找到相关电影信息!")
return {}
else:
# 匹配标题、原标题
if year:
for movie in movies:
if movie.get('release_date'):
if self.__compare_names(name, movie.get('title')) \
and movie.get('release_date')[0:4] == str(year):
return movie
if self.__compare_names(name, movie.get('original_title')) \
and movie.get('release_date')[0:4] == str(year):
return movie
else:
for movie in movies:
if self.__compare_names(name, movie.get('title')) \
or self.__compare_names(name, movie.get('original_title')):
return movie
# 匹配别名、译名
index = 0
# 按年份降序排列
movies = sorted(
movies,
key=lambda x: x.get('release_date') or '0000-00-00',
reverse=True
)
for movie in movies:
# 年份先过滤
if year:
if not movie.get('release_date'):
continue
if movie.get('release_date')[0:4] != str(year):
continue
index += 1
# 年份
movie_year = movie.get('release_date')[0:4] if movie.get('release_date') else None
if year and movie_year != year:
# 年份不匹配
continue
# 匹配标题、原标题
if self.__compare_names(name, movie.get('title')):
return movie
if self.__compare_names(name, movie.get('original_title')):
return movie
# 匹配别名、译名
if not movie.get("names"):
movie = self.get_info(mtype=MediaType.MOVIE, tmdbid=movie.get("id"))
if movie and self.__compare_names(name, movie.get("names")):
return movie
if index > 5:
break
return {}
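The rewritten matcher sorts candidates by release year (newest first), filters by year when one is given, compares title and original title, then lazily fetches alias/translation names, giving up after a handful of candidates. A simplified, self-contained sketch of that flow, with exact string equality standing in for `__compare_names`:

```python
def match_movie(name: str, year, movies: list, get_names=None) -> dict:
    """Simplified version of the year-sorted TMDB movie matching loop."""
    movies = sorted(movies,
                    key=lambda m: m.get("release_date") or "0000-00-00",
                    reverse=True)
    checked = 0
    for movie in movies:
        movie_year = (movie.get("release_date") or "")[0:4] or None
        if year and movie_year != str(year):
            continue  # a year was given and this candidate does not match
        # Title / original title first.
        if name in (movie.get("title"), movie.get("original_title")):
            return movie
        # Fall back to aliases/translations, fetched lazily per candidate.
        if get_names and name in (get_names(movie.get("id")) or []):
            return movie
        checked += 1
        if checked > 5:
            break  # limit expensive per-candidate lookups
    return {}
```

The same structure is mirrored in the TV variant with `first_air_date`, `name` and `original_name`.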
def __search_tv_by_name(self, name: str, year: str) -> Optional[dict]:
@@ -279,37 +269,27 @@ class TmdbHelper:
logger.debug(f"{name} 未找到相关剧集信息!")
return {}
else:
# 匹配标题、原标题
if year:
for tv in tvs:
if tv.get('first_air_date'):
if self.__compare_names(name, tv.get('name')) \
and tv.get('first_air_date')[0:4] == str(year):
return tv
if self.__compare_names(name, tv.get('original_name')) \
and tv.get('first_air_date')[0:4] == str(year):
return tv
else:
for tv in tvs:
if self.__compare_names(name, tv.get('name')) \
or self.__compare_names(name, tv.get('original_name')):
return tv
# 匹配别名、译名
index = 0
# 按年份降序排列
tvs = sorted(
tvs,
key=lambda x: x.get('first_air_date') or '0000-00-00',
reverse=True
)
for tv in tvs:
# 有年份先过滤
if year:
if not tv.get('first_air_date'):
continue
if tv.get('first_air_date')[0:4] != str(year):
continue
index += 1
tv_year = tv.get('first_air_date')[0:4] if tv.get('first_air_date') else None
if year and tv_year != year:
# 年份不匹配
continue
# 匹配标题、原标题
if self.__compare_names(name, tv.get('name')):
return tv
if self.__compare_names(name, tv.get('original_name')):
return tv
# 匹配别名、译名
if not tv.get("names"):
tv = self.get_info(mtype=MediaType.TV, tmdbid=tv.get("id"))
if tv and self.__compare_names(name, tv.get("names")):
return tv
if index > 5:
break
return {}
def __search_tv_by_season(self, name: str, season_year: str, season_number: int) -> Optional[dict]:
@@ -351,14 +331,20 @@ class TmdbHelper:
logger.debug("%s 未找到季%s相关信息!" % (name, season_number))
return {}
else:
# 匹配标题、原标题
# 按年份降序排列
tvs = sorted(
tvs,
key=lambda x: x.get('first_air_date') or '0000-00-00',
reverse=True
)
for tv in tvs:
# 年份
tv_year = tv.get('first_air_date')[0:4] if tv.get('first_air_date') else None
if (self.__compare_names(name, tv.get('name'))
or self.__compare_names(name, tv.get('original_name'))) \
and (tv.get('first_air_date') and tv.get('first_air_date')[0:4] == str(season_year)):
and (tv_year == str(season_year)):
return tv
# 匹配别名、译名
for tv in tvs[:5]:
# 匹配别名、译名
if not tv.get("names"):
tv = self.get_info(mtype=MediaType.TV, tmdbid=tv.get("id"))
if not tv or not self.__compare_names(name, tv.get("names")):
@@ -407,7 +393,7 @@ class TmdbHelper:
def match_multi(self, name: str) -> Optional[dict]:
"""
根据名称同时查询电影和电视剧,不带年份
根据名称同时查询电影和电视剧,没有类型也没有年份时使用
:param name: 识别的文件名或种子名
:return: 匹配的媒体信息
"""
@@ -421,32 +407,50 @@ class TmdbHelper:
print(traceback.print_exc())
return None
logger.debug(f"API返回{str(self.search.total_results)}")
# 返回结果
ret_info = {}
if len(multis) == 0:
logger.debug(f"{name} 未找到相关媒体信息!")
return {}
else:
# 匹配标题、原标题
# 按年份降序排列,电影在前面
multis = sorted(
multis,
key=lambda x: ("1"
if x.get("media_type") == "movie"
else "0") + (x.get('release_date')
or x.get('first_air_date')
or '0000-00-00'),
reverse=True
)
for multi in multis:
if multi.get("media_type") == "movie":
if self.__compare_names(name, multi.get('title')) \
or self.__compare_names(name, multi.get('original_title')):
return multi
elif multi.get("media_type") == "tv":
if self.__compare_names(name, multi.get('name')) \
or self.__compare_names(name, multi.get('original_name')):
return multi
# 匹配别名、译名
for multi in multis[:5]:
if multi.get("media_type") == "movie":
ret_info = multi
break
# 匹配别名、译名
if not multi.get("names"):
multi = self.get_info(mtype=MediaType.MOVIE, tmdbid=multi.get("id"))
if multi and self.__compare_names(name, multi.get("names")):
return multi
ret_info = multi
break
elif multi.get("media_type") == "tv":
if self.__compare_names(name, multi.get('name')) \
or self.__compare_names(name, multi.get('original_name')):
ret_info = multi
break
# 匹配别名、译名
if not multi.get("names"):
multi = self.get_info(mtype=MediaType.TV, tmdbid=multi.get("id"))
if multi and self.__compare_names(name, multi.get("names")):
return multi
return {}
ret_info = multi
break
# 类型变更
if ret_info:
ret_info['media_type'] = MediaType.MOVIE if ret_info.get("media_type") == "movie" else MediaType.TV
return ret_info
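The `match_multi` ordering leans on plain string comparison: a "1" prefix for movies and "0" for TV, followed by the zero-padded date, so a reverse sort puts movies before TV and newer releases first within each group. A sketch of that key:

```python
def multi_sort_key(item: dict) -> str:
    """Movies before TV shows, then newest release first, via
    lexicographic comparison of a composed string (as in match_multi)."""
    kind = "1" if item.get("media_type") == "movie" else "0"
    date = (item.get("release_date")
            or item.get("first_air_date")
            or "0000-00-00")
    return kind + date


def sort_multis(multis: list) -> list:
    return sorted(multis, key=multi_sort_key, reverse=True)
```

This works because ISO dates (`YYYY-MM-DD`) compare chronologically as strings, so no date parsing is needed.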
@lru_cache(maxsize=settings.CACHE_CONF.get('tmdb'))
def match_web(self, name: str, mtype: MediaType) -> Optional[dict]:
@@ -1157,3 +1161,9 @@ class TmdbHelper:
except Exception as e:
print(str(e))
return []
def clear_cache(self):
"""
清除缓存
"""
self.tmdb.cache_clear()
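The new `clear_cache` hooks rely on the fact that functions wrapped with `functools.lru_cache` (as `match_web` is above) expose a `cache_clear()` method on the wrapper. A small illustration:

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def lookup(tmdbid: int) -> dict:
    # Stand-in for an expensive TMDB API call; counts real executions.
    lookup.calls += 1
    return {"id": tmdbid}


lookup.calls = 0
```

Repeated calls with the same argument are served from the cache; after `lookup.cache_clear()` the underlying function runs again.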


@@ -1,18 +1,21 @@
from typing import Optional, Tuple, Union
import tvdb_api
from app.core.config import settings
from app.log import logger
from app.modules import _ModuleBase
from app.modules.thetvdb import tvdbapi
class TheTvDbModule(_ModuleBase):
tvdb: tvdb_api.Tvdb = None
tvdb: tvdbapi.Tvdb = None
def init_module(self) -> None:
self.tvdb = tvdb_api.Tvdb(apikey=settings.TVDB_API_KEY, cache=False, select_first=True)
self.tvdb = tvdbapi.Tvdb(apikey=settings.TVDB_API_KEY,
cache=False,
select_first=True,
proxies=settings.PROXY)
def stop(self):
pass

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
import shutil
from pathlib import Path
from typing import Set, Tuple, Optional, Union, List
@@ -9,9 +10,10 @@ from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.transmission.transmission import Transmission
from app.schemas import TransferInfo, TransferTorrent, DownloadingTorrent
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class TransmissionModule(_ModuleBase):
@@ -26,14 +28,23 @@ class TransmissionModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "DOWNLOADER", "transmission"
def scheduler_job(self) -> None:
"""
定时任务每10分钟调用一次
"""
# 定时重连
if self.transmission.is_inactive():
self.transmission = Transmission()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
episodes: Set[int] = None) -> Optional[Tuple[Optional[str], str]]:
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类,TR中未使用
:return: 种子Hash
"""
# 如果要选择文件则先暂停
@@ -120,6 +131,8 @@ class TransmissionModule(_ModuleBase):
torrents = self.transmission.get_downloading_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
meta = MetaInfo(torrent.name)
dlspeed = torrent.rate_download if hasattr(torrent, "rate_download") else torrent.rateDownload
upspeed = torrent.rate_upload if hasattr(torrent, "rate_upload") else torrent.rateUpload
ret_torrents.append(DownloadingTorrent(
hash=torrent.hashString,
title=torrent.name,
@@ -129,18 +142,19 @@ class TransmissionModule(_ModuleBase):
progress=torrent.progress,
size=torrent.total_size,
state="paused" if torrent.status == "stopped" else "downloading",
dlspeed=StringUtils.str_filesize(torrent.download_speed),
ulspeed=StringUtils.str_filesize(torrent.upload_speed),
dlspeed=StringUtils.str_filesize(dlspeed),
upspeed=StringUtils.str_filesize(upspeed),
))
else:
return None
return ret_torrents
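The `hasattr` fallback above smooths over transmission-rpc renaming the camelCase torrent fields (`rateDownload`/`rateUpload`) to snake_case (`rate_download`/`rate_upload`) between library versions. A generic helper for that kind of compatibility shim:

```python
def compat_attr(obj, *names, default=None):
    """Return the first attribute found among `names`, e.g. the snake_case
    field from newer transmission-rpc or the camelCase one from older builds."""
    for name in names:
        if hasattr(obj, name):
            return getattr(obj, name)
    return default


class OldTorrent:
    rateDownload = 1024


class NewTorrent:
    rate_download = 2048
```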
def transfer_completed(self, hashs: Union[str, list], transinfo: TransferInfo) -> None:
def transfer_completed(self, hashs: Union[str, list],
path: Path = None) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param transinfo: 转移信息
:param path: 源目录
:return: None
"""
self.transmission.set_torrent_tag(ids=hashs, tags=['已整理'])
@@ -148,6 +162,12 @@ class TransmissionModule(_ModuleBase):
if settings.TRANSFER_TYPE == "move":
if self.remove_torrents(hashs):
logger.info(f"移动模式删除种子成功:{hashs} ")
# 删除残留文件
if path and path.exists():
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
if not files:
logger.warn(f"删除残留文件夹:{path}")
shutil.rmtree(path, ignore_errors=True)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
"""


@@ -28,7 +28,7 @@ class Transmission(metaclass=Singleton):
self._host, self._port = StringUtils.get_domain_address(address=settings.TR_HOST, prefix=False)
self._username = settings.TR_USER
self._password = settings.TR_PASSWORD
if self._host and self._port and self._username and self._password:
if self._host and self._port:
self.trc = self.__login_transmission()
def __login_transmission(self) -> Optional[Client]:
@@ -48,6 +48,14 @@ class Transmission(metaclass=Singleton):
logger.error(f"transmission 连接出错:{err}")
return None
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._port:
return False
return True if not self.trc else False
def get_torrents(self, ids: Union[str, list] = None, status: Union[str, list] = None,
tags: Union[str, list] = None) -> Tuple[List[Torrent], bool]:
"""
@@ -244,7 +252,6 @@ class Transmission(metaclass=Singleton):
if not self.trc:
return
try:
session = self.trc.get_session()
download_limit_enabled = True if download_limit else False
upload_limit_enabled = True if upload_limit else False
self.trc.set_session(
@@ -268,3 +275,15 @@ class Transmission(metaclass=Singleton):
except Exception as err:
logger.error(f"重新校验种子出错:{err}")
return False
def add_trackers(self, ids: Union[str, list], trackers: list):
"""
添加Tracker
"""
if not self.trc:
return False
try:
return self.trc.change_torrent(ids=ids, tracker_list=[trackers])
except Exception as err:
logger.error(f"添加Tracker出错:{err}")
return False


@@ -100,8 +100,7 @@ class WechatModule(_ModuleBase):
if wechat_admins and not any(
user_id == admin_user for admin_user in wechat_admins):
self.wechat.send_msg(title="用户无权限执行菜单命令", userid=user_id)
return CommingMessage(channel=MessageChannel.Wechat,
userid=user_id, username=user_id, text="")
return None
elif msg_type == "text":
# 文本消息
content = DomUtils.tag_value(root_node, "Content", default="")
@@ -115,14 +114,14 @@ class WechatModule(_ModuleBase):
return None
@checkMessage(MessageChannel.Wechat)
def post_message(self, message: Notification) -> Optional[bool]:
def post_message(self, message: Notification) -> None:
"""
发送消息
:param message: 消息内容
:return: 成功或失败
"""
return self.wechat.send_msg(title=message.title, text=message.text,
image=message.image, userid=message.userid)
self.wechat.send_msg(title=message.title, text=message.text,
image=message.image, userid=message.userid)
@checkMessage(MessageChannel.Wechat)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:


@@ -5,6 +5,7 @@ from typing import Any, List, Dict, Tuple
from app.chain import ChainBase
from app.core.config import settings
from app.core.event import EventManager
from app.db import ScopedSession
from app.db.models import Base
from app.db.plugindata_oper import PluginDataOper
from app.db.systemconfig_oper import SystemConfigOper
@@ -16,9 +17,7 @@ class PluginChian(ChainBase):
"""
插件处理链
"""
def process(self, *args, **kwargs):
pass
pass
class _PluginBase(metaclass=ABCMeta):
@@ -37,10 +36,12 @@ class _PluginBase(metaclass=ABCMeta):
plugin_desc: str = ""
def __init__(self):
# 数据库连接
self.db = ScopedSession()
# 插件数据
self.plugindata = PluginDataOper()
self.plugindata = PluginDataOper(self.db)
# 处理链
self.chain = PluginChian()
self.chain = PluginChian(self.db)
# 系统配置
self.systemconfig = SystemConfigOper()
# 系统消息

View File

@@ -74,10 +74,10 @@ class AutoBackup(_PluginBase):
logger.error(f"定时任务配置错误:{err}")
if self._onlyonce:
logger.info(f"Cloudflare CDN优选服务启动,立即运行一次")
logger.info(f"自动备份服务启动,立即运行一次")
self._scheduler.add_job(func=self.__backup, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name="Cloudflare优选")
name="自动备份")
# 关闭一次性开关
self._onlyonce = False
self.update_config({


@@ -1,3 +1,4 @@
import re
import traceback
from datetime import datetime, timedelta
from multiprocessing.dummy import Pool as ThreadPool
@@ -5,6 +6,7 @@ from multiprocessing.pool import ThreadPool
from typing import Any, List, Dict, Tuple, Optional
from urllib.parse import urljoin
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from ruamel.yaml import CommentedMap
@@ -29,7 +31,7 @@ class AutoSignIn(_PluginBase):
# 插件名称
plugin_name = "站点自动签到"
# 插件描述
plugin_desc = "自动模拟登录站点签到。"
plugin_desc = "自动模拟登录站点签到。"
# 插件图标
plugin_icon = "signin.png"
# 主题色
@@ -59,10 +61,16 @@ class AutoSignIn(_PluginBase):
# 配置属性
_enabled: bool = False
_cron: str = ""
_sign_type: str = ""
_onlyonce: bool = False
_notify: bool = False
_queue_cnt: int = 5
_sign_sites: list = []
_retry_keyword = None
_clean: bool = False
_start_time: int = None
_end_time: int = None
_action: str = ""
def init_plugin(self, config: dict = None):
self.sites = SitesHelper()
@@ -79,6 +87,10 @@ class AutoSignIn(_PluginBase):
self._notify = config.get("notify")
self._queue_cnt = config.get("queue_cnt") or 5
self._sign_sites = config.get("sign_sites")
self._retry_keyword = config.get("retry_keyword")
self._clean = config.get("clean")
self._sign_type = config.get("sign_type") or "sign"
self._action = "签到" if str(self._sign_type) == "sign" else "模拟登陆"
# 加载模块
if self._enabled or self._onlyonce:
@@ -89,41 +101,82 @@ class AutoSignIn(_PluginBase):
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
try:
self._scheduler.add_job(func=self.sign_in,
trigger=CronTrigger.from_crontab(self._cron),
name="站点自动签到")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误:{err}")
else:
# 随机时间
triggers = TimerUtils.random_scheduler(num_executions=2,
begin_hour=9,
end_hour=23,
max_interval=12 * 60,
min_interval=6 * 60)
for trigger in triggers:
self._scheduler.add_job(self.sign_in, "cron",
hour=trigger.hour, minute=trigger.minute,
name="站点自动签到")
# 立即运行一次
if self._onlyonce:
logger.info(f"站点自动{self._action}服务启动,立即运行一次")
self._scheduler.add_job(func=self.sign_in, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name=f"站点自动{self._action}")
# 关闭一次性开关
self._onlyonce = False
# 保存配置
self.update_config(
{
"enabled": self._enabled,
"notify": self._notify,
"cron": self._cron,
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": self._sign_sites
}
)
self.__update_config()
# 周期运行
if self._enabled:
if self._cron:
try:
if self._cron.strip().count(" ") == 4:
self._scheduler.add_job(func=self.sign_in,
trigger=CronTrigger.from_crontab(self._cron),
name=f"站点自动{self._action}")
logger.info(f"站点自动{self._action}服务启动,执行周期 {self._cron}")
else:
# 2.3/9-23
crons = self._cron.strip().split("/")
if len(crons) == 2:
# 2.3
cron = crons[0]
# 9-23
times = crons[1].split("-")
if len(times) == 2:
# 9
self._start_time = int(times[0])
# 23
self._end_time = int(times[1])
if self._start_time and self._end_time:
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(cron.strip()),
name=f"站点自动{self._action}")
logger.info(
f"站点自动{self._action}服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{cron}小时执行一次")
else:
logger.error(f"站点自动{self._action}服务启动失败,周期格式错误")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误")
self._cron = ""
self._enabled = False
self.__update_config()
else:
# 默认0-24 按照周期运行
self._start_time = 0
self._end_time = 24
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(self._cron.strip()),
name=f"站点自动{self._action}")
logger.info(
f"站点自动{self._action}服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误:{err}")
self._cron = ""
self._enabled = False
self.__update_config()
else:
# 随机时间
triggers = TimerUtils.random_scheduler(num_executions=2,
begin_hour=9,
end_hour=23,
max_interval=12 * 60,
min_interval=6 * 60)
for trigger in triggers:
self._scheduler.add_job(self.sign_in, "cron",
hour=trigger.hour, minute=trigger.minute,
name=f"站点自动{self._action}")
# 启动任务
if self._scheduler.get_jobs():
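The new period syntax accepts either a 5-field cron expression (detected by its four spaces) or an `hours/start-end` form such as `2.3/9-23`, meaning every 2.3 hours between 09:00 and 23:00, with a bare number defaulting to the whole day. A sketch of a parser for the interval form; the return shape is illustrative, not the plugin's actual API:

```python
from typing import Optional, Tuple


def parse_interval_cron(expr: str) -> Optional[Tuple[float, int, int]]:
    """Parse '2.3/9-23' style periods into (hours, start_hour, end_hour).
    A bare number like '2.3' covers the whole day (0-24).
    Returns None for 5-field cron expressions or malformed input."""
    expr = expr.strip()
    if expr.count(" ") == 4:
        return None  # looks like a 5-field cron expression, handled elsewhere
    parts = expr.split("/")
    try:
        if len(parts) == 2:
            hours = float(parts[0])
            start_s, end_s = parts[1].split("-")
            return hours, int(start_s), int(end_s)
        if len(parts) == 1:
            return float(parts[0]), 0, 24
    except ValueError:
        return None
    return None
```

The `(hours, start, end)` triple then feeds an APScheduler `interval` trigger, with the start/end hours enforced inside `sign_in` itself.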
@@ -133,6 +186,22 @@ class AutoSignIn(_PluginBase):
def get_state(self) -> bool:
return self._enabled
def __update_config(self):
# 保存配置
self.update_config(
{
"enabled": self._enabled,
"notify": self._notify,
"cron": self._cron,
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": self._sign_sites,
"retry_keyword": self._retry_keyword,
"clean": self._clean,
"sign_type": self._sign_type,
}
)
@staticmethod
def get_command() -> List[Dict[str, Any]]:
"""
@@ -182,7 +251,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 3
},
'content': [
{
@@ -198,7 +267,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 3
},
'content': [
{
@@ -214,7 +283,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 3
},
'content': [
{
@@ -225,12 +294,48 @@ class AutoSignIn(_PluginBase):
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 3
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clean',
'label': '清理本日已签到',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sign_type',
'label': '签到方式',
'items': [
{'title': '签到', 'value': 'sign'},
{'title': '登录', 'value': 'login'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
@@ -263,6 +368,23 @@ class AutoSignIn(_PluginBase):
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'retry_keyword',
'label': '重试关键词',
'placeholder': '支持正则表达式,命中才重签'
}
}
]
}
]
},
@@ -285,16 +407,42 @@ class AutoSignIn(_PluginBase):
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '执行周期支持:'
'1、5位cron表达式;'
'2、配置间隔(小时),如2.3/9-23(9-23点之间每隔2.3小时执行一次);'
'3、周期不填默认9-23点随机执行2次。'
'每天首次全量执行,其余执行命中重试关键词的站点。'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"sign_type": "sign",
"cron": "",
"onlyonce": False,
"clean": False,
"queue_cnt": 5,
"sign_sites": []
"sign_sites": [],
"retry_keyword": "错误|失败"
}
def get_page(self) -> List[dict]:
@@ -400,30 +548,77 @@ class AutoSignIn(_PluginBase):
@eventmanager.register(EventType.SiteSignin)
def sign_in(self, event: Event = None):
"""
自动签到
自动签到|模拟登陆
"""
# 日期
today = datetime.today()
if self._start_time and self._end_time:
if int(datetime.today().hour) < self._start_time or int(datetime.today().hour) > self._end_time:
logger.error(
f"当前时间 {int(datetime.today().hour)} 不在 {self._start_time}-{self._end_time} 范围内,暂不{self._action}")
return
if event:
logger.info("收到命令,开始站点签到 ...")
logger.info(f"收到命令,开始站点{self._action} ...")
self.post_message(channel=event.event_data.get("channel"),
title="开始站点签到 ...",
title=f"开始站点{self._action} ...",
userid=event.event_data.get("user"))
yesterday = today - timedelta(days=1)
yesterday_str = yesterday.strftime('%Y-%m-%d')
# 删除昨天历史
self.del_data(key=yesterday_str)
# 查看今天有没有签到历史
today = today.strftime('%Y-%m-%d')
today_history = self.get_data(key=today)
# 查询签到站点
sign_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
# 过滤掉没有选中的站点
if self._sign_sites:
sign_sites = [site for site in sign_sites if site.get("id") in self._sign_sites]
# 今日没数据
if not today_history or self._clean:
logger.info(f"今日 {today} 未{self._action},开始{self._action}已选站点")
# 过滤删除的站点
self._sign_sites = [site.get("id") for site in sign_sites if site]
if self._clean:
# 关闭开关
self._clean = False
else:
# 今天已签到需要重签站点
retry_sites = today_history.get("retry")
# 今天已签到站点
already_sign_sites = today_history.get("sign")
# 今日未签站点
no_sign_sites = [site for site in sign_sites if
site.get("id") not in already_sign_sites or site.get("id") in retry_sites]
if not no_sign_sites:
logger.info(f"今日 {today} 已{self._action},无重新{self._action}站点,本次任务结束")
return
# 签到站点 = 需要重签+今日未签
sign_sites = no_sign_sites
logger.info(f"今日 {today} 已{self._action},开始重试命中关键词站点")
if not sign_sites:
logger.info("没有需要签到的站点")
logger.info(f"没有需要{self._action}的站点")
return
# 执行签到
logger.info("开始执行签到任务 ...")
with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
status = p.map(self.signin_site, sign_sites)
logger.info(f"开始执行{self._action}任务 ...")
if str(self._sign_type) == "sign":
with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
status = p.map(self.signin_site, sign_sites)
else:
with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
status = p.map(self.login_site, sign_sites)
if status:
logger.info("站点签到任务完成!")
logger.info(f"站点{self._action}任务完成!")
# 获取今天的日期
key = f"{datetime.now().month}{datetime.now().day}"
# 保存数据
@@ -431,19 +626,88 @@ class AutoSignIn(_PluginBase):
"site": s[0],
"status": s[1]
} for s in status])
# 命中重试词的站点id
retry_sites = []
# 命中重试词的站点签到msg
retry_msg = []
# 登录成功
login_success_msg = []
# 签到成功
sign_success_msg = []
# 已签到
already_sign_msg = []
# 仿真签到成功
fz_sign_msg = []
# 失败|错误
failed_msg = []
sites = {site.get('name'): site.get("id") for site in self.sites.get_indexers() if not site.get("public")}
for s in status:
site_name = s[0]
site_id = None
if site_name:
site_id = sites.get(site_name)
# 记录本次命中重试关键词的站点
if self._retry_keyword:
if site_id:
match = re.search(self._retry_keyword, s[1])
if match:
logger.debug(f"站点 {site_name} 命中重试关键词 {self._retry_keyword}")
retry_sites.append(site_id)
# 命中的站点
retry_msg.append(s)
continue
if "登录成功" in s:
login_success_msg.append(s)
elif "仿真签到成功" in s:
fz_sign_msg.append(s)
continue
elif "签到成功" in s:
sign_success_msg.append(s)
elif '已签到' in s:
already_sign_msg.append(s)
else:
failed_msg.append(s)
if not self._retry_keyword:
# 没设置重试关键词则重试已选站点
retry_sites = self._sign_sites
logger.debug(f"下次{self._action}重试站点 {retry_sites}")
# 存入历史
self.save_data(key=today,
value={
"sign": self._sign_sites,
"retry": retry_sites
})
# 发送通知
if self._notify:
self.post_message(title="站点自动签到",
# 签到详细信息 登录成功、签到成功、已签到、仿真签到成功、失败--命中重试
signin_message = login_success_msg + sign_success_msg + already_sign_msg + fz_sign_msg + failed_msg
if len(retry_msg) > 0:
signin_message += retry_msg
signin_message = "\n".join([f'{s[0]}{s[1]}' for s in signin_message if s])
self.post_message(title=f"站点自动{self._action}",
mtype=NotificationType.SiteMessage,
text="\n".join([f'{s[0]}{s[1]}' for s in status if s]))
text=f"全部{self._action}数量: {len(list(self._sign_sites))} \n"
f"本次{self._action}数量: {len(sign_sites)} \n"
f"下次{self._action}数量: {len(retry_sites) if self._retry_keyword else 0} \n"
f"{signin_message}"
)
if event:
self.post_message(channel=event.event_data.get("channel"),
title="站点签到完成!", userid=event.event_data.get("user"))
title=f"站点{self._action}完成!", userid=event.event_data.get("user"))
else:
logger.error("站点签到任务失败!")
logger.error(f"站点{self._action}任务失败!")
if event:
self.post_message(channel=event.event_data.get("channel"),
title="站点签到任务失败!", userid=event.event_data.get("user"))
title=f"站点{self._action}任务失败!", userid=event.event_data.get("user"))
# 保存配置
self.__update_config()
def __build_class(self, url) -> Any:
for site_schema in self._site_schema:
@@ -523,6 +787,8 @@ class AutoSignIn(_PluginBase):
if under_challenge(page_source):
return f"无法通过Cloudflare"
return f"仿真登录失败Cookie已失效"
else:
return "仿真签到成功"
else:
res = RequestUtils(cookies=site_cookie,
ua=ua,
@@ -559,6 +825,77 @@ class AutoSignIn(_PluginBase):
traceback.print_exc()
return f"签到失败:{str(e)}"
def login_site(self, site_info: CommentedMap) -> Tuple[str, str]:
"""
模拟登陆一个站点
"""
return site_info.get("name"), self.__login_base(site_info)
@staticmethod
def __login_base(site_info: CommentedMap) -> str:
"""
模拟登陆通用处理
:param site_info: 站点信息
:return: 登录结果信息
"""
if not site_info:
return ""
site = site_info.get("name")
site_url = site_info.get("url")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
render = site_info.get("render")
proxies = settings.PROXY if site_info.get("proxy") else None
proxy_server = settings.PROXY_SERVER if site_info.get("proxy") else None
if not site_url or not site_cookie:
logger.warn(f"未配置 {site} 的站点地址或Cookie,无法登录")
return ""
# 模拟登录
try:
# 访问链接
site_url = str(site_url).replace("attendance.php", "")
logger.info(f"开始站点模拟登陆:{site},地址:{site_url}...")
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
if not SiteUtils.is_logged_in(page_source):
if under_challenge(page_source):
return f"无法通过Cloudflare"
return f"仿真登录失败Cookie已失效"
else:
return "模拟登陆成功"
else:
res = RequestUtils(cookies=site_cookie,
ua=ua,
proxies=proxies
).get_res(url=site_url)
# 判断登录状态
if res and res.status_code in [200, 500, 403]:
if not SiteUtils.is_logged_in(res.text):
if under_challenge(res.text):
msg = "站点被Cloudflare防护,请打开站点浏览器仿真"
elif res.status_code == 200:
msg = "Cookie已失效"
else:
msg = f"状态码:{res.status_code}"
logger.warn(f"{site} 模拟登陆失败,{msg}")
return f"模拟登陆失败,{msg}"
else:
logger.info(f"{site} 模拟登陆成功")
return f"模拟登陆成功"
elif res is not None:
logger.warn(f"{site} 模拟登陆失败,状态码:{res.status_code}")
return f"模拟登陆失败,状态码:{res.status_code}"
else:
logger.warn(f"{site} 模拟登陆失败,无法打开网站")
return f"模拟登陆失败,无法打开网站!"
except Exception as e:
logger.warn("%s 模拟登陆失败:%s" % (site, str(e)))
traceback.print_exc()
return f"模拟登陆失败:{str(e)}"
def stop_service(self):
"""
退出插件
@@ -571,3 +908,39 @@ class AutoSignIn(_PluginBase):
self._scheduler = None
except Exception as e:
logger.error("退出插件失败:%s" % str(e))
@eventmanager.register(EventType.SiteDeleted)
def site_deleted(self, event):
"""
删除对应站点选中
"""
site_id = event.event_data.get("site_id")
config = self.get_config()
if config:
sign_sites = config.get("sign_sites")
if sign_sites:
if isinstance(sign_sites, str):
sign_sites = [sign_sites]
# 删除对应站点
if site_id:
sign_sites = [site for site in sign_sites if int(site) != int(site_id)]
else:
# 清空
sign_sites = []
# 若无站点,则停止
if len(sign_sites) == 0:
self._enabled = False
# 保存配置
self.update_config(
{
"enabled": self._enabled,
"notify": self._notify,
"cron": self._cron,
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": sign_sites
}
)
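The AutoSignIn changes above gate retries on a configurable regex (`retry_keyword`): after a run, only sites whose result message matches the pattern are queued for the next execution, and when no keyword is set every selected site is retried. A minimal standalone sketch of that matching step (the site names, ids and messages below are made up for illustration):

```python
import re

def pick_retry_sites(status, site_ids, retry_keyword):
    """Return the ids of sites whose result message matches retry_keyword.

    status: list of (site_name, message) tuples, as produced by signin_site.
    site_ids: mapping of site name -> site id.
    With no keyword configured, every selected site is retried, mirroring
    the plugin's fallback behaviour.
    """
    if not retry_keyword:
        return list(site_ids.values())
    retry = []
    for name, message in status:
        site_id = site_ids.get(name)
        if site_id is not None and re.search(retry_keyword, message):
            retry.append(site_id)
    return retry

# Hypothetical data for illustration
status = [("SiteA", "签到成功"), ("SiteB", "签到失败:网络错误"), ("SiteC", "已签到")]
site_ids = {"SiteA": 1, "SiteB": 2, "SiteC": 3}
print(pick_retry_sites(status, site_ids, "错误|失败"))  # [2]
```

Note that `re.search` matches anywhere in the message, which is why the default keyword `错误|失败` catches both "签到失败" and network-error results.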


@@ -62,7 +62,7 @@ class BestFilmVersion(_PluginBase):
def init_plugin(self, config: dict = None):
self._cache_path = settings.TEMP_PATH / "__best_film_version_cache__"
self.subscribechain = SubscribeChain()
self.subscribechain = SubscribeChain(self.db)
# 停止现有任务
self.stop_service()


@@ -79,7 +79,7 @@ class CloudflareSpeedTest(_PluginBase):
if self.get_state() or self._onlyonce:
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
if self.get_state() and self._cron:
logger.info(f"Cloudflare CDN优选服务启动,周期:{self._cron}")
self._scheduler.add_job(func=self.__cloudflareSpeedTest,
trigger=CronTrigger.from_crontab(self._cron),
@@ -390,7 +390,7 @@ class CloudflareSpeedTest(_PluginBase):
})
def get_state(self) -> bool:
return self._cf_ip and True if self._cron else False
return True if self._cf_ip and self._cron else False
@staticmethod
def get_command() -> List[Dict[str, Any]]:
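The one-line `get_state` fix above matters because of Python's conditional-expression precedence: the old form parses as `(self._cf_ip and True) if self._cron else False`, so when `_cf_ip` is falsy it returns the raw value (e.g. an empty string) instead of a bool, violating the `-> bool` annotation. A quick demonstration with illustrative values:

```python
def old_state(cf_ip, cron):
    # Parses as: (cf_ip and True) if cron else False
    return cf_ip and True if cron else False

def new_state(cf_ip, cron):
    # Explicit grouping: bool of (cf_ip and cron)
    return True if cf_ip and cron else False

print(old_state("", "0 0 * * *"))        # '' — falsy, but not the bool False
print(new_state("", "0 0 * * *"))        # False
print(new_state("1.1.1.1", "0 0 * * *")) # True
```

An equally idiomatic form would be `return bool(self._cf_ip and self._cron)`.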


@@ -1,9 +1,13 @@
import re
import shutil
import threading
import traceback
from datetime import datetime
from pathlib import Path
from threading import Event
from typing import List, Tuple, Dict, Any
from apscheduler.schedulers.background import BackgroundScheduler
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer
from watchdog.observers.polling import PollingObserver
@@ -17,7 +21,9 @@ from app.db.transferhistory_oper import TransferHistoryOper
from app.log import logger
from app.plugins import _PluginBase
from app.schemas import Notification, NotificationType, TransferInfo
from app.schemas.types import EventType
from app.schemas.types import EventType, MediaType, SystemConfigKey
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
lock = threading.Lock()
@@ -67,6 +73,7 @@ class DirMonitor(_PluginBase):
_synced_files = []
# 私有属性
_scheduler = None
transferhis = None
downloadhis = None
transferchian = None
@@ -81,11 +88,14 @@ class DirMonitor(_PluginBase):
_exclude_keywords = ""
# 存储源目录与目的目录关系
_dirconf: Dict[str, Path] = {}
_medias = {}
# 退出事件
_event = Event()
def init_plugin(self, config: dict = None):
self.transferhis = TransferHistoryOper()
self.downloadhis = DownloadHistoryOper()
self.transferchian = TransferChain()
self.transferhis = TransferHistoryOper(self.db)
self.downloadhis = DownloadHistoryOper(self.db)
self.transferchian = TransferChain(self.db)
# 清空配置
self._dirconf = {}
@@ -103,6 +113,8 @@ class DirMonitor(_PluginBase):
self.stop_service()
if self._enabled:
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
# 启动任务
monitor_dirs = self._monitor_dirs.split("\n")
if not monitor_dirs:
@@ -114,17 +126,20 @@ class DirMonitor(_PluginBase):
# 存储目的目录
paths = mon_path.split(":")
target_path = None
if len(paths) > 1:
mon_path = paths[0]
self._dirconf[mon_path] = Path(paths[1])
target_path = Path(paths[1])
self._dirconf[mon_path] = target_path
# 检查目录是不是媒体库目录的子目录
# 检查媒体库目录是不是下载目录的子目录
try:
if Path(mon_path).is_relative_to(settings.LIBRARY_PATH):
logger.warn(f"{mon_path} 是媒体库目录的子目录,无法监控")
self.systemmessage.put(f"{mon_path} 是媒体库目录的子目录,无法监控")
if target_path.is_relative_to(Path(mon_path)):
logger.warn(f"{target_path} 是下载目录 {mon_path} 的子目录,无法监控")
self.systemmessage.put(f"{target_path} 是下载目录 {mon_path} 的子目录,无法监控")
continue
except Exception as e:
logger.debug(str(e))
pass
try:
@@ -153,6 +168,12 @@ class DirMonitor(_PluginBase):
logger.error(f"{mon_path} 启动目录监控失败:{err_msg}")
self.systemmessage.put(f"{mon_path} 启动目录监控失败:{err_msg}")
# 追加入库消息统一发送服务
self._scheduler.add_job(self.send_msg, trigger='interval', seconds=15)
# 启动服务
self._scheduler.print_jobs()
self._scheduler.start()
def event_handler(self, event, mon_path: str, text: str, event_path: str):
"""
处理文件变化
@@ -178,13 +199,6 @@ class DirMonitor(_PluginBase):
logger.debug("文件已处理过:%s" % event_path)
return
# 命中过滤关键字不处理
if self._exclude_keywords:
for keyword in self._exclude_keywords.split("\n"):
if keyword and re.findall(keyword, event_path):
logger.debug(f"{event_path} 命中过滤关键字 {keyword}")
return
# 回收站及隐藏的文件不处理
if event_path.find('/@Recycle/') != -1 \
or event_path.find('/#recycle/') != -1 \
@@ -193,6 +207,23 @@ class DirMonitor(_PluginBase):
logger.debug(f"{event_path} 是回收站或隐藏的文件")
return
# 命中过滤关键字不处理
if self._exclude_keywords:
for keyword in self._exclude_keywords.split("\n"):
if keyword and re.findall(keyword, event_path):
logger.info(f"{event_path} 命中过滤关键字 {keyword},不处理")
return
# 整理屏蔽词不处理
transfer_exclude_words = self.systemconfig.get(SystemConfigKey.TransferExcludeWords)
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.findall(keyword, event_path):
logger.info(f"{event_path} 命中整理屏蔽词 {keyword},不处理")
return
# 不是媒体文件不处理
if file_path.suffix not in settings.RMT_MEDIAEXT:
logger.debug(f"{event_path} 不是媒体文件")
@@ -211,9 +242,12 @@ class DirMonitor(_PluginBase):
file_meta.merge(meta)
if not file_meta.name:
logger.warn(f"{file_path.name} 无法识别有效信息")
logger.error(f"{file_path.name} 无法识别有效信息")
return
# 查询转移目的目录
target: Path = self._dirconf.get(mon_path)
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(meta=file_meta)
if not mediainfo:
@@ -223,14 +257,20 @@ class DirMonitor(_PluginBase):
mtype=NotificationType.Manual,
title=f"{file_path.name} 未识别到媒体信息,无法入库!"
))
# 新增转移成功历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=self._transfer_type,
meta=file_meta
)
return
logger.info(f"{file_path.name} 识别为:{mediainfo.type.value} {mediainfo.title_year}")
# 更新媒体图片
self.chain.obtain_images(mediainfo=mediainfo)
# 查询转移目的目录
target = self._dirconf.get(mon_path)
# 获取downloadhash
download_hash = self.get_download_hash(src=str(file_path))
# 转移
transferinfo: TransferInfo = self.chain.transfer(mediainfo=mediainfo,
@@ -245,6 +285,15 @@ class DirMonitor(_PluginBase):
if not transferinfo.target_path:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=self._transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
if self._notify:
self.chain.post_message(Notification(
title=f"{mediainfo.title_year}{file_meta.season_episode} 入库失败!",
@@ -253,41 +302,80 @@ class DirMonitor(_PluginBase):
))
return
# 获取downloadhash
downloadHis = self.downloadhis.get_last_by(mtype=mediainfo.type.value,
title=mediainfo.title,
year=mediainfo.year,
season=file_meta.season,
episode=file_meta.episode,
tmdbid=mediainfo.tmdb_id)
# 新增转移成功历史记录
self.transferhis.add_force(
src=event_path,
dest=str(transferinfo.target_path),
mode=settings.TRANSFER_TYPE,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=file_meta.season,
episodes=file_meta.episode,
image=mediainfo.get_poster_image(),
download_hash=downloadHis.download_hash if downloadHis else None,
status=1
self.transferhis.add_success(
src_path=file_path,
mode=self._transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
# 刮削元数据
self.chain.scrape_metadata(path=transferinfo.target_path, mediainfo=mediainfo)
# 刷新媒体库
"""
{
"title_year season": {
"files": [
{
"path":,
"mediainfo":,
"file_meta":,
"transferinfo":
}
],
"time": "2023-08-24 23:23:23.332"
}
}
"""
# 发送消息汇总
media_list = self._medias.get(mediainfo.title_year + " " + meta.season) or {}
if media_list:
media_files = media_list.get("files") or []
if media_files:
file_exists = False
for file in media_files:
if str(event_path) == file.get("path"):
file_exists = True
break
if not file_exists:
media_files.append({
"path": event_path,
"mediainfo": mediainfo,
"file_meta": file_meta,
"transferinfo": transferinfo
})
else:
media_files = [
{
"path": event_path,
"mediainfo": mediainfo,
"file_meta": file_meta,
"transferinfo": transferinfo
}
]
media_list = {
"files": media_files,
"time": datetime.now()
}
else:
media_list = {
"files": [
{
"path": event_path,
"mediainfo": mediainfo,
"file_meta": file_meta,
"transferinfo": transferinfo
}
],
"time": datetime.now()
}
self._medias[mediainfo.title_year + " " + meta.season] = media_list
# 汇总刷新媒体库
self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 发送通知
if self._notify:
self.transferchian.send_transfer_message(meta=file_meta, mediainfo=mediainfo,
transferinfo=transferinfo)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
@@ -295,9 +383,92 @@ class DirMonitor(_PluginBase):
'transferinfo': transferinfo
})
# 移动模式删除空目录
if self._transfer_type == "move":
for file_dir in file_path.parents:
if len(str(file_dir)) <= len(str(Path(mon_path))):
# 重要,删除到监控目录为止
break
files = SystemUtils.list_files(file_dir, settings.RMT_MEDIAEXT)
if not files:
logger.warn(f"移动模式,删除空目录:{file_dir}")
shutil.rmtree(file_dir, ignore_errors=True)
except Exception as e:
logger.error("目录监控发生错误:%s - %s" % (str(e), traceback.format_exc()))
def send_msg(self):
"""
定时检查是否有媒体处理完,发送统一消息
"""
if not self._medias or not self._medias.keys():
return
# 遍历检查是否已刮削完,发送消息
for medis_title_year_season in list(self._medias.keys()):
media_list = self._medias.get(medis_title_year_season)
logger.info(f"开始处理媒体 {medis_title_year_season} 消息")
if not media_list:
continue
# 获取最后更新时间
last_update_time = media_list.get("time")
media_files = media_list.get("files")
if not last_update_time or not media_files:
continue
transferinfo = media_files[0].get("transferinfo")
file_meta = media_files[0].get("file_meta")
mediainfo = media_files[0].get("mediainfo")
# 判断最后更新时间距现在是否已超过5秒,超过则发送消息
if (datetime.now() - last_update_time).total_seconds() > 5:
# 发送通知
if self._notify:
# 汇总处理文件总大小
total_size = 0
file_count = 0
# 剧集汇总
episodes = []
for file in media_files:
transferinfo = file.get("transferinfo")
total_size += transferinfo.total_size
file_count += 1
file_meta = file.get("file_meta")
if file_meta and file_meta.begin_episode:
episodes.append(file_meta.begin_episode)
transferinfo.total_size = total_size
# 汇总处理文件数量
transferinfo.file_count = file_count
# 剧集季集信息 S01 E01-E04 || S01 E01、E02、E04
season_episode = None
# 处理文件多,说明是剧集,显示季入库消息
if mediainfo.type == MediaType.TV:
# 季集文本
season_episode = f"{file_meta.season} {StringUtils.format_ep(episodes)}"
# 发送消息
self.transferchian.send_transfer_message(meta=file_meta,
mediainfo=mediainfo,
transferinfo=transferinfo,
season_episode=season_episode)
# 发送完消息移出key
del self._medias[medis_title_year_season]
continue
def get_download_hash(self, src: str):
"""
从表中获取download_hash避免连接下载器
"""
downloadHis = self.downloadhis.get_file_by_fullpath(src)
if downloadHis:
return downloadHis.download_hash
return None
def get_state(self) -> bool:
return self._enabled
@@ -310,149 +481,149 @@ class DirMonitor(_PluginBase):
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '监控模式',
'items': [
{'title': '兼容模式', 'value': 'compatibility'},
{'title': '性能模式', 'value': 'fast'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'transfer_type',
'label': '转移方式',
'items': [
{'title': '移动', 'value': 'move'},
{'title': '复制', 'value': 'copy'},
{'title': '硬链接', 'value': 'link'},
{'title': '软链接', 'value': 'softlink'}
]
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'monitor_dirs',
'label': '监控目录',
'rows': 5,
'placeholder': '每一行一个目录,支持两种配置方式:\n'
'监控目录\n'
'监控目录:转移目的目录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'exclude_keywords',
'label': '排除关键词',
'rows': 2,
'placeholder': '每一行一个关键词'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": False,
"mode": "fast",
"transfer_type": settings.TRANSFER_TYPE,
"monitor_dirs": "",
"exclude_keywords": ""
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '监控模式',
'items': [
{'title': '兼容模式', 'value': 'compatibility'},
{'title': '性能模式', 'value': 'fast'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'transfer_type',
'label': '转移方式',
'items': [
{'title': '移动', 'value': 'move'},
{'title': '复制', 'value': 'copy'},
{'title': '硬链接', 'value': 'link'},
{'title': '软链接', 'value': 'softlink'}
]
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'monitor_dirs',
'label': '监控目录',
'rows': 5,
'placeholder': '每一行一个目录,支持两种配置方式:\n'
'监控目录\n'
'监控目录:转移目的目录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'exclude_keywords',
'label': '排除关键词',
'rows': 2,
'placeholder': '每一行一个关键词'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": False,
"mode": "fast",
"transfer_type": settings.TRANSFER_TYPE,
"monitor_dirs": "",
"exclude_keywords": ""
}
def get_page(self) -> List[dict]:
pass
@@ -469,3 +640,10 @@ class DirMonitor(_PluginBase):
except Exception as e:
print(str(e))
self._observer = []
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._event.set()
self._scheduler.shutdown()
self._event.clear()
self._scheduler = None
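The DirMonitor changes above batch per-media file events into the `_medias` dict and let a 15-second scheduler job (`send_msg`) flush any group that has been quiet for more than 5 seconds, so a season of episodes produces one aggregated notification instead of one per file. A simplified sketch of that buffer-and-flush pattern, assuming entries carry only a path (real entries also carry `mediainfo`, `file_meta` and `transferinfo`):

```python
from datetime import datetime, timedelta

class MediaAggregator:
    """Collect per-media file events; flush groups quiet for N seconds."""

    def __init__(self, quiet_seconds=5):
        self.quiet = timedelta(seconds=quiet_seconds)
        # key: "title_year season" -> {"files": [...], "time": last update}
        self.buffer = {}

    def add(self, key, path, now=None):
        now = now or datetime.now()
        entry = self.buffer.setdefault(key, {"files": [], "time": now})
        if path not in entry["files"]:  # de-duplicate repeated events
            entry["files"].append(path)
        entry["time"] = now

    def flush(self, now=None):
        """Return and remove all groups whose last update is old enough."""
        now = now or datetime.now()
        done = []
        for key in list(self.buffer):
            if now - self.buffer[key]["time"] > self.quiet:
                done.append((key, self.buffer.pop(key)["files"]))
        return done
```

Calling `flush` from a periodic job, as the plugin does, trades a few seconds of latency for one consolidated "S01 E01-E04" style message.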


@@ -65,8 +65,8 @@ class DoubanRank(_PluginBase):
_clearflag = False
def init_plugin(self, config: dict = None):
self.downloadchain = DownloadChain()
self.subscribechain = SubscribeChain()
self.downloadchain = DownloadChain(self.db)
self.subscribechain = SubscribeChain(self.db)
if config:
self._enabled = config.get("enabled")


@@ -66,9 +66,9 @@ class DoubanSync(_PluginBase):
def init_plugin(self, config: dict = None):
self.rsshelper = RssHelper()
self.downloadchain = DownloadChain()
self.searchchain = SearchChain()
self.subscribechain = SubscribeChain()
self.downloadchain = DownloadChain(self.db)
self.searchchain = SearchChain(self.db)
self.subscribechain = SubscribeChain(self.db)
# 停止现有任务
self.stop_service()


@@ -258,7 +258,7 @@ class LibraryScraper(_PluginBase):
"""
exclude_paths = self._exclude_paths.split("\n")
# 查找目录下所有的文件
files = SystemUtils.list_files_with_extensions(path, settings.RMT_MEDIAEXT)
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
for file in files:
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")


@@ -12,12 +12,15 @@ from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.transferhistory import TransferHistory
from app.db.transferhistory_oper import TransferHistoryOper
from app.log import logger
from app.modules.emby import Emby
from app.modules.jellyfin import Jellyfin
from app.modules.qbittorrent import Qbittorrent
from app.modules.themoviedb.tmdbv3api import Episode
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.schemas.types import NotificationType, EventType
from app.utils.path_utils import PathUtils
@@ -27,7 +30,7 @@ class MediaSyncDel(_PluginBase):
# 插件名称
plugin_name = "媒体库同步删除"
# 插件描述
plugin_desc = "媒体库删除媒体后同步删除历史记录、源文件。"
plugin_desc = "媒体库删除媒体后同步删除历史记录、源文件和下载任务。"
# 插件图标
plugin_icon = "mediasyncdel.png"
# 主题色
@@ -55,10 +58,16 @@ class MediaSyncDel(_PluginBase):
_del_source = False
_exclude_path = None
_transferhis = None
_downloadhis = None
qb = None
tr = None
def init_plugin(self, config: dict = None):
self._transferhis = TransferHistoryOper()
self._transferhis = TransferHistoryOper(self.db)
self._downloadhis = DownloadHistoryOper(self.db)
self.episode = Episode()
self.qb = Qbittorrent()
self.tr = Transmission()
# 停止现有任务
self.stop_service()
@@ -487,14 +496,17 @@ class MediaSyncDel(_PluginBase):
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}')
# 删除剧集S02E02
elif media_type == "Episode":
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}',
episode=f'E{episode_num}')
else:
return
@@ -507,9 +519,16 @@ class MediaSyncDel(_PluginBase):
# 开始删除
image = 'https://emby.media/notificationicon.png'
year = None
del_cnt = 0
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
image = transferhis.image
year = transferhis.year
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
# 删除种子任务
if self._del_source:
# 1、直接删除源文件
@@ -518,14 +537,20 @@ class MediaSyncDel(_PluginBase):
source_path = str(transferhis.src).replace(source_name, "")
self.delete_media_file(filedir=source_path,
filename=source_name)
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
self.handle_torrent(history_id=transferhis.id,
src=transferhis.src,
torrent_hash=transferhis.download_hash)
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, success_flag = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += 1
else:
stop_cnt += 1
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
logger.info(f"同步删除 {msg} 完成!")
@@ -544,6 +569,9 @@ class MediaSyncDel(_PluginBase):
title="媒体库同步删除任务完成",
image=image,
text=f"{msg}\n"
f"删除{del_cnt}\n"
f"暂停{stop_cnt}\n"
f"错误{error_cnt}\n"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}"
)
@@ -613,30 +641,29 @@ class MediaSyncDel(_PluginBase):
if media_type == "Movie":
msg = f'电影 {media_name}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(
mtype="电影",
title=media_name,
year=media_year)
# 删除电视剧
elif media_type == "Series":
msg = f'剧集 {media_name}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(
mtype="电视剧",
title=media_name,
year=media_year)
# 删除季 S02
elif media_type == "Season":
msg = f'剧集 {media_name} {media_season}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(
mtype="电视剧",
title=media_name,
year=media_year)
year=media_year,
season=media_season)
# 删除剧集S02E02
elif media_type == "Episode":
msg = f'剧集 {media_name} {media_season}{media_episode}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(
mtype="电视剧",
title=media_name,
year=media_year)
year=media_year,
season=media_season,
episode=media_episode)
else:
continue
@@ -650,9 +677,14 @@ class MediaSyncDel(_PluginBase):
# 开始删除
image = 'https://emby.media/notificationicon.png'
del_cnt = 0
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
image = transferhis.image
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
# 删除种子任务
if self._del_source:
# 1、直接删除源文件
@@ -661,14 +693,20 @@ class MediaSyncDel(_PluginBase):
source_path = str(transferhis.src).replace(source_name, "")
self.delete_media_file(filedir=source_path,
filename=source_name)
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
self.handle_torrent(history_id=transferhis.id,
src=transferhis.src,
torrent_hash=transferhis.download_hash)
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, success_flag = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += 1
else:
stop_cnt += 1
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
logger.info(f"同步删除 {msg} 完成!")
@@ -678,7 +716,9 @@ class MediaSyncDel(_PluginBase):
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
text=f"{msg}\n"
f"数量 {len(transfer_history)}\n"
f"删除{del_cnt}\n"
f"暂停{stop_cnt}\n"
f"错误{error_cnt}\n"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}",
image=image)
@@ -698,7 +738,7 @@ class MediaSyncDel(_PluginBase):
self.save_data("last_time", datetime.datetime.now())
def handle_torrent(self, history_id: int, src: str, torrent_hash: str):
def handle_torrent(self, src: str, torrent_hash: str):
"""
判断种子是否局部删除
局部删除则暂停种子
@@ -712,91 +752,77 @@ class MediaSyncDel(_PluginBase):
plugin_id=plugin_id)
logger.info(f"查询到 {history_key} 转种历史 {transfer_history}")
# 删除历史标志
del_history = False
# 删除种子标志
delete_flag = True
try:
# 删除本次种子记录
self._downloadhis.delete_file_by_fullpath(fullpath=src)
# 是否需要暂停源下载器种子
stop_from = False
# 根据种子hash查询剩余未删除的记录
downloadHisNoDel = self._downloadhis.get_files_by_hash(download_hash=torrent_hash, state=1)
if downloadHisNoDel and len(downloadHisNoDel) > 0:
logger.info(
f"查询种子任务 {torrent_hash} 存在 {len(downloadHisNoDel)} 个未删除文件,执行暂停种子操作")
delete_flag = False
else:
logger.info(
f"查询种子任务 {torrent_hash} 文件已全部删除,执行删除种子操作")
delete_flag = True
# 如果有转种记录,则删除转种后的下载任务
if transfer_history and isinstance(transfer_history, dict):
download = transfer_history['to_download']
download_id = transfer_history['to_download_id']
delete_source = transfer_history['delete_source']
del_history = True
# 如果有转种记录,则删除转种后的下载任务
if transfer_history and isinstance(transfer_history, dict):
download = transfer_history['to_download']
download_id = transfer_history['to_download_id']
delete_source = transfer_history['delete_source']
# 转种后未删除源种时,同步删除源种
if not delete_source:
logger.info(f"{history_key} 转种时未删除源下载任务,开始删除源下载任务…")
try:
dl_files = self.chain.torrent_files(tid=torrent_hash)
if not dl_files:
logger.info(f"未获取到 {settings.DOWNLOADER} - {torrent_hash} 种子文件,种子已被删除")
else:
for dl_file in dl_files:
dl_file_name = dl_file.get("name")
torrent_file = os.path.join(src, os.path.basename(dl_file_name))
if Path(torrent_file).exists():
logger.warn(f"种子有文件被删除,种子文件{torrent_file}暂未删除,暂停种子")
delete_flag = False
stop_from = True
break
if delete_flag:
logger.info(f"删除下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.remove_torrents(torrent_hash)
except Exception as e:
logger.error(f"删除源下载任务 {history_key} 失败: {str(e)}")
# 如果是 False,则说明种子文件没有完全被删除,需暂停种子,此处暂不处理
if delete_flag:
try:
dl_files = self.chain.torrent_files(tid=download_id)
if not dl_files:
logger.info(f"未获取到 {download} - {download_id} 种子文件,种子已被删除")
else:
for dl_file in dl_files:
dl_file_name = dl_file.get("name")
if not stop_from:
torrent_file = os.path.join(src, os.path.basename(dl_file_name))
if Path(torrent_file).exists():
logger.info(f"种子有文件被删除,种子文件{torrent_file}暂未删除,暂停种子")
delete_flag = False
break
# 删除种子
if delete_flag:
# 删除源下载任务或转种后下载任务
logger.info(f"删除下载任务:{download} - {download_id}")
self.chain.remove_torrents(download_id)
# 删除转移记录
self._transferhis.delete(history_id)
# 删除转种记录
if del_history:
self.del_data(key=history_key, plugin_id=plugin_id)
self.del_data(key=history_key, plugin_id=plugin_id)
# 处理辅种
self.__del_seed(download=download, download_id=download_id, action_flag="del")
except Exception as e:
logger.error(f"删除转种辅种下载任务失败: {str(e)}")
# 转种后未删除源种时,同步删除源种
if not delete_source:
logger.info(f"{history_key} 转种时未删除源下载任务,开始删除源下载任务…")
# 判断是否暂停
if not delete_flag:
logger.error("开始暂停种子")
# 暂停种子
if stop_from:
# 暂停源种
self.chain.stop_torrents(torrent_hash)
logger.info(f"种子:{settings.DOWNLOADER} - {torrent_hash} 暂停")
# 删除源种子
logger.info(f"删除源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.remove_torrents(torrent_hash)
# 转种
self.chain.stop_torrents(download_id)
logger.info(f"转种:{download} - {download_id} 暂停")
# 删除转种后任务
logger.info(f"删除转种后下载任务:{download} - {download_id}")
# 删除转种后下载任务
if download == "transmission":
self.tr.delete_torrents(delete_file=True,
ids=download_id)
else:
self.qb.delete_torrents(delete_file=True,
ids=download_id)
else:
# 暂停种子
# 转种后未删除源种时,同步暂停源种
if not delete_source:
logger.info(f"{history_key} 转种时未删除源下载任务,开始暂停源下载任务…")
# 辅种
self.__del_seed(download=download, download_id=download_id, action_flag="stop")
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.stop_torrents(torrent_hash)
else:
# 未转种的情况
if delete_flag:
# 删除源种子
logger.info(f"删除源下载器下载任务:{download} - {download_id}")
self.chain.remove_torrents(download_id)
else:
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{download} - {download_id}")
self.chain.stop_torrents(download_id)
# 处理辅种
self.__del_seed(download=download, download_id=download_id, action_flag="del" if delete_flag else 'stop')
return delete_flag, True
except Exception as e:
logger.error(f"删种失败: {e}")
return False, False
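The core decision in `handle_torrent` above is delete-vs-pause: a torrent is only removed when none of its files remain on disk; a partial delete pauses it instead. A minimal standalone sketch of that check (hypothetical names, not the plugin's actual API):

```python
from pathlib import Path


def decide_torrent_action(src_dir: str, torrent_files: list) -> str:
    """Return 'delete' when none of the torrent's files remain under src_dir,
    otherwise 'pause' (a partial delete means the torrent still has data)."""
    for name in torrent_files:
        if (Path(src_dir) / Path(name).name).exists():
            return "pause"  # at least one file survives: pause, don't delete
    return "delete"
```

The plugin additionally walks transfer-seeding ("转种") and cross-seeding ("辅种") records so the same action propagates to the downstream download tasks.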
def __del_seed(self, download, download_id, action_flag):
"""
@@ -822,15 +848,26 @@ class MediaSyncDel(_PluginBase):
# 删除辅种历史中与本下载器相同的辅种记录
if int(downloader) == download:
for torrent in torrents:
# 删除辅种
if action_flag == "del":
logger.info(f"删除辅种:{downloader} - {torrent}")
self.chain.remove_torrents(torrent)
# 暂停辅种
if action_flag == "stop":
self.chain.stop_torrents(torrent)
logger.info(f"辅种:{downloader} - {torrent} 暂停")
if download == "qbittorrent":
# 删除辅种
if action_flag == "del":
logger.info(f"删除辅种:{downloader} - {torrent}")
self.qb.delete_torrents(delete_file=True,
ids=torrent)
# 暂停辅种
if action_flag == "stop":
self.qb.stop_torrents(torrent)
logger.info(f"辅种:{downloader} - {torrent} 暂停")
else:
# 删除辅种
if action_flag == "del":
logger.info(f"删除辅种:{downloader} - {torrent}")
self.tr.delete_torrents(delete_file=True,
ids=torrent)
# 暂停辅种
if action_flag == "stop":
self.tr.stop_torrents(torrent)
logger.info(f"辅种:{downloader} - {torrent} 暂停")
# 删除本下载器辅种历史
if action_flag == "del":
del history


@@ -46,13 +46,14 @@ class NAStoolSync(_PluginBase):
_site = None
_downloader = None
_supp = False
_transfer = False
qb = None
tr = None
def init_plugin(self, config: dict = None):
self._transferhistory = TransferHistoryOper()
self._plugindata = PluginDataOper()
self._downloadhistory = DownloadHistoryOper()
self._transferhistory = TransferHistoryOper(self.db)
self._plugindata = PluginDataOper(self.db)
self._downloadhistory = DownloadHistoryOper(self.db)
if config:
self._clear = config.get("clear")
@@ -61,13 +62,30 @@ class NAStoolSync(_PluginBase):
self._site = config.get("site")
self._downloader = config.get("downloader")
self._supp = config.get("supp")
self._transfer = config.get("transfer")
if self._nt_db_path:
if self._nt_db_path and self._transfer:
self.qb = Qbittorrent()
self.tr = Transmission()
# 读取sqlite数据
gradedb = sqlite3.connect(self._nt_db_path)
try:
gradedb = sqlite3.connect(self._nt_db_path)
except Exception as e:
self.update_config(
{
"transfer": False,
"clear": False,
"nt_db_path": None,
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp,
}
)
logger.error(f"无法打开数据库文件 {self._nt_db_path},请检查路径是否正确:{e}")
return
# 创建游标cursor来执行execute语句
cursor = gradedb.cursor()
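The change above wraps `sqlite3.connect` in try/except so that a bad `user.db` path resets the config and logs an error instead of crashing plugin init. The pattern in isolation (a sketch, hypothetical helper name):

```python
import sqlite3


def open_nt_db(db_path: str):
    """Return a sqlite3 connection, or None when the file cannot be opened."""
    try:
        return sqlite3.connect(db_path)
    except sqlite3.Error:
        # e.g. OperationalError: unable to open database file
        return None
```

Note that `sqlite3.connect` happily creates a new empty file in an existing directory, so a fully robust check would also verify the path exists beforehand.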
@@ -92,12 +110,13 @@ class NAStoolSync(_PluginBase):
self.update_config(
{
"transfer": False,
"clear": False,
"nt_db_path": "",
"nt_db_path": self._nt_db_path,
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp
"supp": self._supp,
}
)
@@ -236,29 +255,32 @@ class NAStoolSync(_PluginBase):
# 转种后种子hash
transfer_hash = []
qb_torrents = []
tr_torrents = []
tr_torrents_all = []
if self._supp:
# 获取所有的转种数据
transfer_datas = self._plugindata.get_data_all("TorrentTransfer")
if transfer_datas:
if not isinstance(transfer_datas, list):
transfer_datas = [transfer_datas]
# 获取所有的转种数据
transfer_datas = self._plugindata.get_data_all("TorrentTransfer")
if transfer_datas:
if not isinstance(transfer_datas, list):
transfer_datas = [transfer_datas]
for transfer_data in transfer_datas:
if not transfer_data or not isinstance(transfer_data, PluginData):
continue
# 转移后种子hash
transfer_value = transfer_data.value
transfer_value = json.loads(transfer_value)
if not isinstance(transfer_value, dict):
for transfer_data in transfer_datas:
if not transfer_data or not isinstance(transfer_data, PluginData):
continue
# 转移后种子hash
transfer_value = transfer_data.value
transfer_value = json.loads(transfer_value)
to_hash = transfer_value.get("to_download_id")
# 转移前种子hash
transfer_hash.append(to_hash)
if not isinstance(transfer_value, dict):
transfer_value = json.loads(transfer_value)
to_hash = transfer_value.get("to_download_id")
# 转移前种子hash
transfer_hash.append(to_hash)
# 获取tr、qb所有种子
qb_torrents, _ = self.qb.get_torrents()
tr_torrents, _ = self.tr.get_torrents(ids=transfer_hash)
tr_torrents_all, _ = self.tr.get_torrents()
# 获取tr、qb所有种子
qb_torrents, _ = self.qb.get_torrents()
tr_torrents, _ = self.tr.get_torrents(ids=transfer_hash)
tr_torrents_all, _ = self.tr.get_torrents()
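The extra `isinstance`/`json.loads` pass above handles TorrentTransfer plugin-data values that arrive double-encoded: parsing once yields a JSON string rather than a dict, so the code decodes again. A sketch of the same idea (hypothetical helper name):

```python
import json


def load_plugin_value(raw: str) -> dict:
    """Decode a plugin-data value that may be single- or double-encoded JSON."""
    value = json.loads(raw)
    if not isinstance(value, dict):
        value = json.loads(value)  # second pass for double-encoded payloads
    return value
```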
# 处理数据存入mp数据库
for history in transfer_history:
@@ -517,163 +539,180 @@ class NAStoolSync(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clear',
'label': '清空记录'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'supp',
'label': '补充数据'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'nt_db_path',
'label': 'NAStool数据库user.db路径',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'path',
'rows': '2',
'label': '历史记录路径映射',
'placeholder': 'NAStool路径:MoviePilot路径一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'downloader',
'rows': '2',
'label': '插件数据下载器映射',
'placeholder': 'NAStool下载器id:qbittorrent|transmission一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'site',
'label': '下载历史站点映射',
'placeholder': 'NAStool站点名:MoviePilot站点名一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '开启清空记录时会在导入历史数据之前删除MoviePilot之前的记录。'
'如果转移记录很多同步时间可能会长3-10分钟'
'所以点击确定后页面没反应是正常现象,后台正在处理。'
'如果开启补充数据会获取tr、qb种子补充转移记录中download_hash缺失的情况同步删除需要'
}
}
]
}
]
}
]
}
], {
"clear": False,
"supp": False,
"nt_db_path": "",
"path": "",
"downloader": "",
"site": "",
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'transfer',
'label': '同步记录'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clear',
'label': '清空记录'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'supp',
'label': '补充数据'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'nt_db_path',
'label': 'NAStool数据库user.db路径',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'path',
'rows': '2',
'label': '历史记录路径映射',
'placeholder': 'NAStool路径:MoviePilot路径一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'downloader',
'rows': '2',
'label': '插件数据下载器映射',
'placeholder': 'NAStool下载器id:qbittorrent|transmission一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'site',
'label': '下载历史站点映射',
'placeholder': 'NAStool站点名:MoviePilot站点名一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '开启清空记录时会在导入历史数据之前删除MoviePilot之前的记录。'
'如果转移记录很多同步时间可能会长3-10分钟'
'所以点击确定后页面没反应是正常现象,后台正在处理。'
'如果开启补充数据会获取tr、qb种子补充转移记录中download_hash缺失的情况同步删除需要'
}
}
]
}
]
}
]
}
], {
"transfer": False,
"clear": False,
"supp": False,
"nt_db_path": "",
"path": "",
"downloader": "",
"site": "",
}
def get_page(self) -> List[dict]:
pass


@@ -1,6 +1,5 @@
import datetime
import re
import time
from pathlib import Path
from threading import Lock
from typing import Optional, Any, List, Dict, Tuple
@@ -70,9 +69,9 @@ class RssSubscribe(_PluginBase):
def init_plugin(self, config: dict = None):
self.rsshelper = RssHelper()
self.downloadchain = DownloadChain()
self.searchchain = SearchChain()
self.subscribechain = SubscribeChain()
self.downloadchain = DownloadChain(self.db)
self.searchchain = SearchChain(self.db)
self.subscribechain = SubscribeChain(self.db)
# 停止现有任务
self.stop_service()
@@ -559,7 +558,7 @@ class RssSubscribe(_PluginBase):
enclosure = result.get("enclosure")
link = result.get("link")
sise = result.get("sise")
pubdate = result.get("pubdate")
pubdate: datetime.datetime = result.get("pubdate")
# 检查是否处理过
if not title or title in [h.get("key") for h in history]:
continue
@@ -588,7 +587,7 @@ class RssSubscribe(_PluginBase):
enclosure=enclosure,
page_url=link,
size=sise,
pubdate=time.strftime("%Y-%m-%d %H:%M:%S", pubdate) if pubdate else None,
pubdate=pubdate.strftime("%Y-%m-%d %H:%M:%S") if pubdate else None,
)
# 过滤种子
if self._filter:
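The pubdate fix above matters because `time.strftime` expects a `struct_time`, while the RSS helper now yields a `datetime.datetime`; calling the object's own `strftime` is the correct formatting path. In isolation (hypothetical helper name):

```python
import datetime


def format_pubdate(pubdate):
    """Format an optional datetime as 'YYYY-MM-DD HH:MM:SS', else None."""
    return pubdate.strftime("%Y-%m-%d %H:%M:%S") if pubdate else None
```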


@@ -1,3 +1,4 @@
import re
import warnings
from datetime import datetime, timedelta
from multiprocessing.dummy import Pool as ThreadPool
@@ -87,14 +88,31 @@ class SiteStatistic(_PluginBase):
# 加载模块
self._site_schema = ModuleHelper.load('app.plugins.sitestatistic.siteuserinfo',
filter_func=lambda _, obj: hasattr(obj, 'schema'))
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
self._site_schema.sort(key=lambda x: x.order)
# 站点上一次更新时间
self._last_update_time = None
# 站点数据
self._sites_data = {}
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
# 立即运行一次
if self._onlyonce:
logger.info(f"站点数据统计服务启动,立即运行一次")
self._scheduler.add_job(self.refresh_all_site_data, 'date',
run_date=datetime.now(
tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3)
)
# 关闭一次性开关
self._onlyonce = False
# 保存配置
self.__update_config()
# 周期运行
if self._enabled and self._cron:
try:
self._scheduler.add_job(func=self.refresh_all_site_data,
trigger=CronTrigger.from_crontab(self._cron),
@@ -113,17 +131,6 @@ class SiteStatistic(_PluginBase):
self._scheduler.add_job(self.refresh_all_site_data, "cron",
hour=trigger.hour, minute=trigger.minute,
name="站点数据统计")
if self._onlyonce:
logger.info(f"站点数据统计服务启动,立即运行一次")
self._scheduler.add_job(self.refresh_all_site_data, 'date',
run_date=datetime.now(
tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3)
)
# 关闭一次性开关
self._onlyonce = False
# 保存配置
self.__update_config()
# 启动任务
if self._scheduler.get_jobs():
@@ -370,7 +377,7 @@ class SiteStatistic(_PluginBase):
{
'component': 'td',
'props': {
'class': 'whitespace-nowrap break-keep'
'class': 'whitespace-nowrap break-keep text-high-emphasis'
},
'text': site
},
@@ -384,10 +391,16 @@ class SiteStatistic(_PluginBase):
},
{
'component': 'td',
'props': {
'class': 'text-success'
},
'text': StringUtils.str_filesize(data.get("upload"))
},
{
'component': 'td',
'props': {
'class': 'text-error'
},
'text': StringUtils.str_filesize(data.get("download"))
},
{
@@ -396,7 +409,7 @@ class SiteStatistic(_PluginBase):
},
{
'component': 'td',
'text': data.get('bonus')
'text': '{:,.1f}'.format(data.get('bonus') or 0)
},
{
'component': 'td',
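The `'{:,.1f}'.format(data.get('bonus') or 0)` change above renders the site bonus with a thousands separator and one decimal place, with `or 0` guarding against a missing value. In isolation:

```python
def format_bonus(bonus):
    """Render a site bonus value like '1,234,567.9'; None becomes '0.0'."""
    return '{:,.1f}'.format(bonus or 0)
```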
@@ -587,7 +600,7 @@ class SiteStatistic(_PluginBase):
{
'component': 'VImg',
'props': {
'src': '/plugin/cloud.png'
'src': '/plugin/seed.png'
}
}
]
@@ -657,7 +670,7 @@ class SiteStatistic(_PluginBase):
{
'component': 'VImg',
'props': {
'src': '/plugin/seed_size.png'
'src': '/plugin/database.png'
}
}
]
@@ -841,8 +854,8 @@ class SiteStatistic(_PluginBase):
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
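The regex above replaces the two literal `charset=utf-8` / `charset=UTF-8` checks so that quoted declarations such as `charset="utf-8"` are also detected, case-insensitively. The same pattern in isolation (hypothetical function name):

```python
import re


def is_utf8_page(html_text: str) -> bool:
    """True when the page declares a UTF-8 charset in any common spelling."""
    return bool(re.search(r"charset=\"?utf-8\"?", html_text, re.IGNORECASE))
```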
@@ -881,8 +894,8 @@ class SiteStatistic(_PluginBase):
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
@@ -1028,6 +1041,9 @@ class SiteStatistic(_PluginBase):
refresh_sites = [site for site in self.sites.get_indexers() if
site.get("id") in self._statistic_sites]
# 过滤掉已删除的站点
self._statistic_sites = [site.get("id") for site in refresh_sites if site]
self.__update_config()
if not refresh_sites:
return
@@ -1099,3 +1115,31 @@ class SiteStatistic(_PluginBase):
"statistic_type": self._statistic_type,
"statistic_sites": self._statistic_sites,
})
@eventmanager.register(EventType.SiteDeleted)
def site_deleted(self, event):
"""
删除对应站点的选中状态
"""
site_id = event.event_data.get("site_id")
config = self.get_config()
if config:
statistic_sites = config.get("statistic_sites")
if statistic_sites:
if isinstance(statistic_sites, str):
statistic_sites = [statistic_sites]
# 删除对应站点
if site_id:
statistic_sites = [site for site in statistic_sites if int(site) != int(site_id)]
else:
# 清空
statistic_sites = []
# 若无站点,则停止
if len(statistic_sites) == 0:
self._enabled = False
self._statistic_sites = statistic_sites
# 保存配置
self.__update_config()
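The new `SiteDeleted` handler above drops the removed site id from the configured selection and disables the plugin when nothing is left. The filtering step as a standalone sketch (hypothetical function name):

```python
def remove_site(statistic_sites, site_id):
    """Drop site_id from the selection; return (new_list, still_enabled)."""
    if isinstance(statistic_sites, str):
        statistic_sites = [statistic_sites]
    if site_id:
        statistic_sites = [s for s in statistic_sites if int(s) != int(site_id)]
    else:
        statistic_sites = []  # no site_id given: clear the selection
    return statistic_sites, len(statistic_sites) > 0
```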


@@ -26,6 +26,7 @@ class SiteSchema(Enum):
NexusPhp = "NexusPhp"
NexusProject = "NexusProject"
NexusRabbit = "NexusRabbit"
NexusHhanclub = "NexusHhanclub"
SmallHorse = "Small Horse"
Unit3d = "Unit3d"
TorrentLeech = "TorrentLeech"
@@ -246,8 +247,8 @@ class ISiteUserInfo(metaclass=ABCMeta):
logger.warn(
f"{self.site_name} 检测到Cloudflare请更新Cookie和UA")
return ""
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
return res.text


@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
import re
from lxml import etree
from app.plugins.sitestatistic.siteuserinfo import SITE_BASE_ORDER, SiteSchema
from app.plugins.sitestatistic.siteuserinfo.nexus_php import NexusPhpSiteUserInfo
from app.utils.string import StringUtils
class NexusHhanclubSiteUserInfo(NexusPhpSiteUserInfo):
schema = SiteSchema.NexusHhanclub
order = SITE_BASE_ORDER + 20
@classmethod
def match(cls, html_text: str) -> bool:
return 'hhanclub.top' in html_text
def _parse_user_traffic_info(self, html_text):
super()._parse_user_traffic_info(html_text)
html_text = self._prepare_html_text(html_text)
html = etree.HTML(html_text)
# 上传、下载、分享率
upload_match = re.search(r"[_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+[KMGTPI]*B)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[2]/div[4]/text()')[0])
download_match = re.search(r"[_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+[KMGTPI]*B)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[2]/div[5]/text()')[0])
ratio_match = re.search(r"分享率][:_<>/a-zA-Z-=\"'\s#;]+([\d,.\s]+)",
html.xpath('//*[@id="user-info-panel"]/div[2]/div[1]/div[1]/div/text()')[0])
# 计算分享率
self.upload = StringUtils.num_filesize(upload_match.group(1).strip()) if upload_match else 0
self.download = StringUtils.num_filesize(download_match.group(1).strip()) if download_match else 0
# 优先使用页面上的分享率
calc_ratio = 0.0 if self.download <= 0.0 else round(self.upload / self.download, 3)
self.ratio = StringUtils.str_float(ratio_match.group(1)) if (
ratio_match and ratio_match.group(1).strip()) else calc_ratio
def _parse_user_detail_info(self, html_text: str):
"""
解析用户额外信息,加入时间,等级
:param html_text:
:return:
"""
super()._parse_user_detail_info(html_text)
html = etree.HTML(html_text)
if not html:
return
# 加入时间
join_at_text = html.xpath('//*[@id="mainContent"]/div/div[2]/div[4]/div[3]/span[2]/text()[1]')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0].strip())
def _get_user_level(self, html):
super()._get_user_level(html)
self.user_level = html.xpath('//*[@id="mainContent"]/div/div[2]/div[2]/div[4]/img/@title')[0]
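The hhanclub parser above pulls size tokens out of mixed label text with a regex, then prefers the share ratio shown on the page and falls back to a computed one. A simplified sketch using plain floats instead of the project's `StringUtils` (hypothetical function names):

```python
import re


def parse_size_field(text: str):
    """Pull a size token such as '1.5 TB' out of a mixed label string."""
    m = re.search(r"([\d,.\s]+[KMGTPI]*B)", text)
    return m.group(1).strip() if m else None


def share_ratio(upload: float, download: float, page_ratio=None):
    """Prefer the ratio shown on the page; otherwise compute upload/download,
    guarding the zero-download case."""
    if page_ratio is not None:
        return page_ratio
    return 0.0 if download <= 0.0 else round(upload / download, 3)
```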

Some files were not shown because too many files have changed in this diff.