Compare commits

...

281 Commits

Author SHA1 Message Date
jxxghp
b2fe86c744 v1.2.3
- Priority rules can now be set separately for subscriptions and searches
- The Chinese-subtitle filter rule now only applies when the original language is not Chinese
2023-09-20 06:52:31 +08:00
jxxghp
600e32d3e4 Update __init__.py 2023-09-19 23:29:35 +08:00
jxxghp
3ad733bab4 Merge remote-tracking branch 'origin/main' 2023-09-19 21:40:52 +08:00
jxxghp
1799b63abb feat: split priority rules into subscription and search rules 2023-09-19 21:40:36 +08:00
jxxghp
d71dc13e32 Merge pull request #621 from developer-wlj/wlj0909 2023-09-19 18:21:47 +08:00
mayun110
f4633788e9 Merge remote-tracking branch 'origin/wlj0909' into wlj0909 2023-09-19 18:14:47 +08:00
jxxghp
2250e7db39 Merge remote-tracking branch 'origin/main' 2023-09-19 17:15:26 +08:00
jxxghp
b1bb0ced7a fix #608 2023-09-19 17:15:16 +08:00
jxxghp
28aecd79c6 Merge pull request #612 from thsrite/main
fix #553: fix slow resource deletion on unraid
2023-09-19 17:08:24 +08:00
thsrite
d097ef45eb fix: delete when the current path contains no media files 2023-09-19 16:44:20 +08:00
thsrite
dac718edc8 fix 7a5d2101 2023-09-19 16:15:05 +08:00
mayun110
598ab23a2c Improve the Cloudflare IP optimizer plugin on Windows 2023-09-19 13:39:41 +08:00
jxxghp
8be6e28933 feat: the Chinese-subtitle filter rule only applies when the original language is not Chinese 2023-09-19 12:42:10 +08:00
mayun110
bd6805be58 Improve the Cloudflare IP optimizer plugin on Windows 2023-09-19 11:45:06 +08:00
thsrite
c147d36cb2 fix: add the downloading user to the resource download message 2023-09-19 11:15:14 +08:00
thsrite
7a5d210167 fix #553: fix slow resource deletion on unraid 2023-09-19 09:17:48 +08:00
mayun110
ef335f2b8e Add Windows support to the Cloudflare IP optimizer 2023-09-19 00:02:59 +08:00
jxxghp
19eca11d17 Merge pull request #616 from thsrite/fix 2023-09-18 18:33:42 +08:00
thsrite
ab99bd356a fix iyuuautoseed 2023-09-18 18:32:19 +08:00
jxxghp
70f2d72532 Merge pull request #615 from thsrite/fix 2023-09-18 18:29:43 +08:00
thsrite
0ca995da0f fix #613 2023-09-18 18:25:52 +08:00
jxxghp
2a67abe62d v1.2.2
- Fixed subscriptions not refreshing when subscription sites are specified in RSS mode
- The Recommendations page now remembers the browsing position when navigating back
- Subscriptions and searches support global include and exclude rules
2023-09-18 17:13:51 +08:00
jxxghp
03a07ac7bf fix: subscriptions not refreshing when subscription sites are specified in RSS mode 2023-09-18 17:05:08 +08:00
jxxghp
f104c903ec Merge pull request #611 from thsrite/main 2023-09-18 11:38:00 +08:00
thsrite
6b74a8e266 fix: plugin site sorting and deletion 2023-09-18 10:30:28 +08:00
thsrite
cadd885dbf fix #592 2023-09-18 10:29:27 +08:00
jxxghp
7e0cad8491 fix 2023-09-17 19:49:21 +08:00
jxxghp
4c05e9fb2b Merge pull request #609 from WithdewHua/subscribe 2023-09-17 18:59:42 +08:00
WithdewHua
42311f0118 feat: subscriptions and searches support default include and exclude rules 2023-09-17 18:35:31 +08:00
WithdewHua
951be74a21 fix: function naming 2023-09-17 18:35:31 +08:00
jxxghp
c86a21d11d Merge pull request #604 from WithdewHua/subscribe 2023-09-16 20:31:42 +08:00
WithdewHua
3fb02f6490 feat: add an API to update subscription tmdb info 2023-09-16 19:36:49 +08:00
WithdewHua
ca2c0392bb fix: adjust API order to avoid mismatches 2023-09-16 18:43:33 +08:00
WithdewHua
b8663ee735 fix: also update movie subscription info; fix typo 2023-09-16 16:16:39 +08:00
WithdewHua
4ab60423c1 feat: query the media server (plex) by original title 2023-09-16 15:48:22 +08:00
jxxghp
1ea80e6870 Update README.md 2023-09-16 10:58:33 +08:00
jxxghp
6f1d4754be Merge pull request #600 from DDS-Derek/main 2023-09-16 08:28:56 +08:00
DDSRem
52288d98c0 bump: action jobs version
docker/metadata-action@v5
docker/setup-qemu-action@v3
docker/setup-buildx-action@v3
docker/login-action@v3
docker/build-push-action@v5

Co-Authored-By: DDSDerek <108336573+DDSDerek@users.noreply.github.com>
Co-Authored-By: DDSTomo <142158217+ddstomo@users.noreply.github.com>
2023-09-15 20:18:28 +08:00
jxxghp
d1368c4f84 fix bug 2023-09-15 17:28:35 +08:00
jxxghp
4367c53bb0 fix bug 2023-09-15 17:24:22 +08:00
jxxghp
d87f69da35 fix azusa 2023-09-15 16:07:01 +08:00
jxxghp
5ece44090e fix 2023-09-15 15:38:30 +08:00
jxxghp
01be4f9549 need test 2023-09-15 15:37:05 +08:00
jxxghp
94077917f3 Merge remote-tracking branch 'origin/main' 2023-09-15 15:22:19 +08:00
jxxghp
8af981738c fix README.md 2023-09-15 15:22:11 +08:00
jxxghp
4d7982803e Merge pull request #596 from thsrite/main
fix: add excluded paths to the cross-seeding plugin
2023-09-15 15:15:55 +08:00
thsrite
a1bba6da4a fix: add excluded paths to the cross-seeding plugin 2023-09-15 15:08:15 +08:00
jxxghp
4eb3e16b37 v1.2.1
- Fixed the menu bar requiring two taps on iOS
- Fixed duplicate downloads when re-downloading better movie versions
- Added site support for ptlsp and azusa
- Added ptlsp as an authentication site
- Emulated sign-in now checks the sign-in status
2023-09-15 15:04:18 +08:00
jxxghp
1f0b40fe05 support ptlsp 2023-09-15 14:29:15 +08:00
jxxghp
29e92a17e7 support azusa 2023-09-15 14:01:12 +08:00
jxxghp
8cc4469282 fix #591 2023-09-15 10:59:46 +08:00
jxxghp
a5e66071ba support PTLSP 2023-09-15 10:46:54 +08:00
jxxghp
fb4e817993 fix #594 2023-09-15 10:38:15 +08:00
jxxghp
8f26110e65 Merge pull request #590 from thsrite/main 2023-09-14 16:19:46 +08:00
thsrite
9f65a088c0 fix: add a channel field to plugin interactive commands 2023-09-14 16:09:56 +08:00
jxxghp
15c15388b6 Merge pull request #589 from thsrite/main 2023-09-14 15:34:49 +08:00
thsrite
950a43e001 fix: daily sign-in record storage bug 2023-09-14 15:28:06 +08:00
jxxghp
9a28f8c365 Merge pull request #588 from thsrite/main 2023-09-14 15:18:43 +08:00
thsrite
32cb96fc44 fix: emulated sign-in checks whether already signed in 2023-09-14 15:17:30 +08:00
jxxghp
f7982e3e43 fix build 2023-09-14 11:36:37 +08:00
jxxghp
d13602827c fix build 2023-09-14 11:30:22 +08:00
jxxghp
182adc77b6 v1.2.0
- Fixed trackers being lost when migrating torrents from QB 4.5+ to TR 3.0
- Added site support for byr, hdcity and okpt
- RSS subscription mode auto-detects expired feeds and updates the link address
- The custom subscription plugin supports magnet-link downloads
- Added a supported custom recognition-word format: replaced-word => replacement && front-anchor <> back-anchor >> episode-offset
- The media-library sync-delete plugin supports multi-version file handling
2023-09-14 11:17:10 +08:00
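The recognition-word format introduced in this release (`replaced-word => replacement && front-anchor <> back-anchor >> episode-offset`) splits into four optional parts around fixed separators. A rough parsing sketch — the function and field names here are illustrative, not MoviePilot's actual implementation:

```python
def parse_rule(rule: str) -> dict:
    """Parse a rule of the form:
    replaced => replacement && front <> back >> offset
    Everything after the replacement is optional."""
    result = {"replaced": None, "replacement": None,
              "front": None, "back": None, "offset": None}
    # Split off the episode offset first
    if ">>" in rule:
        rule, offset = rule.rsplit(">>", 1)
        result["offset"] = offset.strip()
    # Then the locator (anchor) words
    if "&&" in rule:
        rule, locators = rule.split("&&", 1)
        front, _, back = locators.partition("<>")
        result["front"] = front.strip()
        result["back"] = back.strip()
    # Finally the replaced-word / replacement pair
    replaced, _, replacement = rule.partition("=>")
    result["replaced"] = replaced.strip()
    result["replacement"] = replacement.strip() or None
    return result
```

For example, `parse_rule("Show.S02 => Show && EP <> 1080p >> 12")` yields the replaced word, replacement, both anchors, and the offset `"12"`.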
jxxghp
ef4cdb41c8 fix release 2023-09-14 10:07:20 +08:00
jxxghp
9a60121914 fix #579: change the module used for torrent migration 2023-09-14 09:46:51 +08:00
jxxghp
6fb0c92183 fix message content 2023-09-14 09:18:11 +08:00
jxxghp
96c4e0ba2f Merge remote-tracking branch 'origin/main' 2023-09-14 09:09:43 +08:00
jxxghp
7afe82480c fix brush 2023-09-14 09:08:57 +08:00
jxxghp
c37c8e7318 Merge pull request #583 from thsrite/main 2023-09-13 21:41:48 +08:00
thsrite
3d10ca4c8b fix: sign-in count 2023-09-13 20:19:06 +08:00
jxxghp
4e515ec442 fix #516: support magnet-link downloads 2023-09-13 17:56:57 +08:00
jxxghp
5eb37b5d28 fix sites 2023-09-13 16:58:05 +08:00
jxxghp
7f95bab0d5 fix #578 2023-09-13 16:12:57 +08:00
jxxghp
3fc267bcfa Merge pull request #578 from thsrite/main
fix: subscription refresh only handles the sites selected by the subscription (all configured subscription sites when none are selected)
2023-09-13 15:58:04 +08:00
jxxghp
648f0b6ec1 add byr, hdcity, okpt 2023-09-13 15:53:58 +08:00
thsrite
be3c3ef37f fix: subscription refresh sites 2023-09-13 15:52:32 +08:00
jxxghp
a47f382c21 fix download message 2023-09-13 15:18:23 +08:00
jxxghp
61c59b4405 fix #572 2023-09-13 14:58:33 +08:00
jxxghp
8ee391688d Merge pull request #575 from thsrite/main 2023-09-13 13:26:09 +08:00
thsrite
68c7bf0a96 Revert "fix"
This reverts commit 7c3c6ee999.
2023-09-13 13:10:11 +08:00
thsrite
6dd517a490 fix: spaces in custom recognition words 2023-09-13 12:45:44 +08:00
jxxghp
9baa5e1d35 Merge pull request #574 from thsrite/main 2023-09-13 12:36:13 +08:00
thsrite
e675e4358a fix: sync-delete plugin 2023-09-13 12:35:09 +08:00
jxxghp
c9a6081a57 fix log 2023-09-13 12:30:45 +08:00
jxxghp
2de20f601b fix: switch placement 2023-09-13 12:23:32 +08:00
jxxghp
79c708c30e Merge pull request #564 from thsrite/main 2023-09-13 12:04:09 +08:00
thsrite
f38defb515 Revert "fix: delete plugin config when uninstalling a plugin"
This reverts commit dd7803c90a.
2023-09-13 11:58:37 +08:00
thsrite
ac11d4eb30 Revert "fix dd7803c9"
This reverts commit 08560fc7c3.
2023-09-13 11:58:30 +08:00
thsrite
221c31f481 fix: add a custom recognition-word rule: replaced-word => replacement && front-anchor <> back-anchor >> episode-offset 2023-09-13 10:37:13 +08:00
thsrite
7c3c6ee999 fix 2023-09-13 09:52:05 +08:00
thsrite
08560fc7c3 fix dd7803c9 2023-09-13 09:29:52 +08:00
thsrite
4659e7367f fix d8afa339 function parameter names 2023-09-13 09:24:16 +08:00
thsrite
2fa11a4796 Merge remote-tracking branch 'origin/main' 2023-09-13 09:22:29 +08:00
thsrite
01a153902e Revert "fix: add a recognition button to the custom subscription plugin"
This reverts commit 1b2f09b95f.
2023-09-13 09:21:56 +08:00
jxxghp
5eb65046f0 Merge pull request #571 from WithdewHua/media_exists 2023-09-13 06:36:22 +08:00
WithdewHua
bb64e57f7c fix: verify the TMDB ID when checking whether media files exist 2023-09-12 23:03:15 +08:00
thsrite
0cb75d689c fix: query transfer history by type and tmdbid 2023-09-12 15:28:21 +08:00
thsrite
d7310ade86 fix: sync-delete plugin handles multiple resolutions 2023-09-12 15:21:34 +08:00
thsrite
dd7803c90a fix: delete plugin config when uninstalling a plugin 2023-09-12 14:57:31 +08:00
thsrite
d8afa339de fix: the library-scraping plugin ignores the SCRAP_METADATA variable when forced scraping is enabled 2023-09-12 13:29:12 +08:00
thsrite
1b2f09b95f fix: add a recognition button to the custom subscription plugin 2023-09-12 12:45:39 +08:00
jxxghp
0414854832 Merge pull request #562 from thsrite/main 2023-09-12 11:47:57 +08:00
thsrite
9e6a7be5b1 fix #537: 天空 cross-seeding failure 2023-09-12 11:45:57 +08:00
thsrite
e3c1407b62 fix: 憨憨 user levels 2023-09-12 11:26:45 +08:00
thsrite
7a9ee954c5 fix: sub regex 2023-09-12 11:11:36 +08:00
thsrite
99a06dcba0 fix: when the RSS feed expires, try to generate a new RSS URL from the original settings 2023-09-12 10:09:17 +08:00
jxxghp
bb8fc14bc6 v1.1.9
- Fixed media misidentification in some cases
- Added site support for dajiao and ptcafe
- Added an RSS subscription mode: RSS links are fetched automatically (and can also be maintained manually); refreshing subscriptions puts little load on sites, the refresh interval is configurable, it runs around the clock, and it can be toggled with a switch.
- Removed the custom subscription feature; use RSS subscription mode or the custom subscription plugin instead.
- Manual organizing supports searching for a TMDBID by name.
2023-09-12 08:07:03 +08:00
jxxghp
50d9dcf17b fix #556 2023-09-12 07:36:29 +08:00
jxxghp
141b99d134 fix #556 2023-09-12 07:22:06 +08:00
jxxghp
18457a4de7 fix #555 2023-09-11 21:43:30 +08:00
jxxghp
a343d736ae fix #550 2023-09-11 21:25:56 +08:00
jxxghp
df5c364185 fix #550 2023-09-11 21:14:49 +08:00
jxxghp
edcec114ae fix bug 2023-09-11 19:54:38 +08:00
jxxghp
605a7486b3 fix log 2023-09-11 19:10:51 +08:00
jxxghp
efe89f59b9 feat: support dajiao, ptcafe 2023-09-11 18:10:50 +08:00
jxxghp
fdd4aef3d3 feat: integrate RSS subscription mode 2023-09-11 17:47:51 +08:00
jxxghp
08aef1f47f fix rsslink helper 2023-09-11 17:13:26 +08:00
jxxghp
c45f5e6ac4 Merge pull request #549 from thsrite/main
feat: auto-generate default site RSS URLs
2023-09-11 16:35:36 +08:00
thsrite
f239cede07 fix: speedlimit returns when not enabled 2023-09-11 16:14:15 +08:00
thsrite
b2eb952cd0 fix: use the proxy when auto-fetching RSS 2023-09-11 13:15:15 +08:00
thsrite
3a2fba0422 fix: auto-fetched RSS data 2023-09-11 12:43:27 +08:00
thsrite
1034caa9fd fix: auto-fetching RSS for ttg, zhuque, etc. 2023-09-11 12:26:23 +08:00
thsrite
8b243e23ab feat: auto-generate default RSS URLs 2023-09-11 11:39:39 +08:00
jxxghp
1f76dc1e2a Merge pull request #540 from thsrite/main 2023-09-10 20:22:25 +08:00
thsrite
ea5c2fb4cf fix: the speed-limit plugin sent a limit-cancelled message after every restart 2023-09-10 20:08:09 +08:00
thsrite
e50b56d542 fix: paginated downloads in interactive commands 2023-09-10 19:51:20 +08:00
jxxghp
2206fafda9 Merge pull request #539 from thsrite/main 2023-09-10 18:45:33 +08:00
thsrite
345b74d881 fix #538 2023-09-10 18:41:04 +08:00
jxxghp
d231d75446 v1.1.8
- Fixed Jellyfin/Plex webhook notification messages
- Fixed filter words not taking effect during manual organizing
- Improved year matching for TV series
- Improved rate limiting for site torrent indexing
- Added a notice when a site's share ratio is low
- Added a remote interactive command to restart the system
2023-09-10 17:43:04 +08:00
jxxghp
afb5874350 fix #536 2023-09-10 17:35:58 +08:00
jxxghp
1bd7b5c77e fix jellyfin webhook 2023-09-10 17:07:24 +08:00
jxxghp
ba41de61cb fix plex webhook 2023-09-10 12:57:51 +08:00
jxxghp
ae40d32115 fix bug 2023-09-10 09:15:12 +08:00
jxxghp
3fe4c9467e fix 2023-09-10 09:06:00 +08:00
jxxghp
b89512cc33 fix #526 2023-09-10 09:02:46 +08:00
jxxghp
f3b12bed20 feat: low share-ratio warning notification 2023-09-10 08:54:33 +08:00
jxxghp
08c7fff5ab fix README.md 2023-09-10 08:32:52 +08:00
jxxghp
9c20d1a270 Merge pull request #530 from thsrite/main
feat: fill in the years for all seasons of a series
2023-09-09 22:06:58 +08:00
thsrite
b7b1aee878 fix 2023-09-09 22:03:51 +08:00
jxxghp
f998b39152 fix: deleted-torrent count could not be calculated 2023-09-09 21:58:49 +08:00
jxxghp
ca01db31a9 fix LIBRARY_PATH 2023-09-09 21:41:55 +08:00
thsrite
a0b8cc6719 feat: fill in the years for all seasons of a series 2023-09-09 21:24:07 +08:00
jxxghp
66b91abe90 fix sites.cpython-311-darwin 2023-09-09 20:58:52 +08:00
jxxghp
9b17d55ac0 fix db session 2023-09-09 20:56:37 +08:00
jxxghp
a7a0889867 Merge pull request #528 from thsrite/main 2023-09-09 20:17:09 +08:00
thsrite
af6cf306c8 fix: restart via interactive command 2023-09-09 20:01:43 +08:00
jxxghp
20f35854f9 fix update 2023-09-09 19:43:02 +08:00
jxxghp
e5165c8fea fix plugin db session 2023-09-09 19:41:06 +08:00
jxxghp
0e36d003c0 fix db session 2023-09-09 19:26:56 +08:00
jxxghp
ccc249f29d Merge pull request #527 from developer-wlj/wlj0909 2023-09-09 18:31:27 +08:00
mayun110
f4edb32886 fix: directory resolution under Windows directory monitoring 2023-09-09 18:11:50 +08:00
jxxghp
475a84bfa6 Merge pull request #525 from thsrite/main 2023-09-09 17:53:29 +08:00
mayun110
3914ff4dd6 fix: directory resolution on Windows 2023-09-09 17:49:40 +08:00
jxxghp
5bcbacf3a5 feat: globally shared torrents cache 2023-09-09 17:42:31 +08:00
jxxghp
27238ac467 fix brushflow plugin 2023-09-09 16:49:15 +08:00
thsrite
019d40c17a fix: the cross-seeding plugin excludes deleted sites 2023-09-09 16:40:09 +08:00
jxxghp
fa5b92214f fix ssd 2023-09-09 16:24:53 +08:00
jxxghp
32a5f67e72 Merge pull request #524 from thsrite/main 2023-09-09 15:56:02 +08:00
thsrite
d6e9c14183 fix: qb torrent deletion 2023-09-09 15:47:51 +08:00
jxxghp
87325d5bbd Merge pull request #523 from thsrite/main 2023-09-09 15:07:53 +08:00
thsrite
67ead871c1 fix: remove the clear-cache button 2023-09-09 15:06:44 +08:00
jxxghp
691beb1186 Merge pull request #522 from DDS-Derek/main 2023-09-09 14:57:02 +08:00
jxxghp
b30d3c7dac Merge pull request #521 from WithdewHua/rsssubscribe 2023-09-09 14:55:41 +08:00
DDSRem
5e048f0150 feat: improve container id detection 2023-09-09 14:18:10 +08:00
WithdewHua
cb2cfe9d85 fix: turn off the clear-cache switch 2023-09-09 14:13:17 +08:00
jxxghp
482fca9b8c Merge pull request #520 from DDS-Derek/main
fix: failed to obtain container id
2023-09-09 12:08:50 +08:00
DDSRem
42511b95d8 fix: failed to obtain container id 2023-09-09 12:03:48 +08:00
jxxghp
b18e901fbd fix plugin ui 2023-09-09 11:37:34 +08:00
jxxghp
a30e3f49a3 v1.1.7
- Fixed file transfers being unable to overwrite
- Fixed filter rules only being deletable from the end of the list
- Improved the built-in restart to support non-root environments (requires re-pulling the image)
- The file manager supports sorting
- Improved plugin-page interaction to show plugin data first
2023-09-09 11:08:45 +08:00
jxxghp
65d202e636 fix README.md 2023-09-09 10:51:59 +08:00
jxxghp
4373c0596b Merge pull request #518 from DDS-Derek/main
fix: port conflict
2023-09-09 10:45:54 +08:00
DDSRem
0136d9fe06 fix: port conflict 2023-09-09 10:44:05 +08:00
jxxghp
933c6d838c fix #497 2023-09-09 08:27:40 +08:00
jxxghp
7ce656148f fix #508 2023-09-09 08:19:17 +08:00
jxxghp
c05ffed6df fix #514: file manager supports sorting 2023-09-09 08:00:17 +08:00
jxxghp
6770ba3a35 feat: file manager API sorting 2023-09-08 22:48:53 +08:00
jxxghp
3b73dfcdc6 fix: unable to overwrite during file transfer 2023-09-08 22:36:31 +08:00
jxxghp
100ff97017 Merge pull request #515 from thsrite/main 2023-09-08 21:49:49 +08:00
thsrite
4fe96178ee fix 2023-09-08 21:44:32 +08:00
thsrite
86d484fac0 fix 2023-09-08 21:41:30 +08:00
thsrite
db23b62fd1 fix 2023-09-08 21:31:36 +08:00
jxxghp
b84c8fd7f1 Merge pull request #512 from thsrite/main 2023-09-08 21:29:46 +08:00
jxxghp
c9f6c75069 Merge pull request #510 from DDS-Derek/main 2023-09-08 21:26:00 +08:00
thsrite
846459c244 fix wechat token 2023-09-08 21:21:43 +08:00
DDSRem
c4898d04aa docs: update 2023-09-08 20:38:07 +08:00
DDSRem
c8bc6a4618 fix: update on restart 2023-09-08 20:33:23 +08:00
DDSRem
55dce26cb8 test: restart 2023-09-08 19:55:03 +08:00
DDSRem
ae3b73a73f feat: improve restart 2023-09-08 19:49:10 +08:00
jxxghp
091df01b7c fix plugin 2023-09-08 16:48:13 +08:00
jxxghp
20c4c7d6e6 add release time 2023-09-08 16:36:02 +08:00
jxxghp
eb1e045d8f Merge remote-tracking branch 'origin/main' 2023-09-08 15:38:38 +08:00
jxxghp
678638e9f1 feat: plugin API 2023-09-08 15:38:32 +08:00
jxxghp
d8b78d3051 Merge pull request #505 from thsrite/main
fix: the clear-cache button of the message-forwarding plugin
2023-09-08 13:15:45 +08:00
thsrite
eaf0d17118 fix: the clear-cache button of the message-forwarding plugin 2023-09-08 13:13:11 +08:00
jxxghp
81bcfef6ec Merge pull request #504 from thsrite/main 2023-09-08 13:11:13 +08:00
thsrite
0997691b23 fix time format 2023-09-08 13:06:40 +08:00
jxxghp
d1f9647a63 Merge pull request #503 from thsrite/main
feat: the sign-in plugin supports configuring sign-in and login sites separately
2023-09-08 12:26:15 +08:00
thsrite
64a04ba8ed fix 2023-09-08 12:24:27 +08:00
jxxghp
726c130f1f Merge remote-tracking branch 'origin/main' 2023-09-08 12:23:29 +08:00
jxxghp
215b56b9f2 feat: log jellyfin/plex webhook payloads 2023-09-08 12:23:19 +08:00
thsrite
516bd8bc30 Merge remote-tracking branch 'origin/main' 2023-09-08 12:21:38 +08:00
thsrite
8bc6e04665 feat: the sign-in plugin supports configuring sign-in and login sites separately 2023-09-08 12:21:29 +08:00
jxxghp
94057cd5f1 Merge pull request #499 from thsrite/main 2023-09-08 11:24:49 +08:00
thsrite
2e80586436 Merge branch 'jxxghp:main' into main 2023-09-08 11:22:42 +08:00
thsrite
faa6d7dadd fix bug 2023-09-08 11:20:25 +08:00
jxxghp
071c81d52c v1.1.6
- Fixed a subscription log error when no media server is configured
- The library-scraping plugin supports overwriting existing metadata and images
- Added a switch (on by default) to keep existing scraped names, avoiding inconsistent names after organizing when TMDB info changes
- Season posters prefer TMDB images during scraping
- Added a notice when the built-in restart fails
2023-09-08 11:01:37 +08:00
jxxghp
52d4feb583 Update README.md 2023-09-08 10:45:55 +08:00
jxxghp
584e05e63e fix ui 2023-09-08 10:34:05 +08:00
jxxghp
061ff322ab fix bug 2023-09-08 10:26:00 +08:00
jxxghp
a2bcf8df9a Merge remote-tracking branch 'origin/main' 2023-09-08 10:03:23 +08:00
jxxghp
6c85040eb6 fix plugin 2023-09-08 10:03:14 +08:00
jxxghp
2e5d892120 fix plugin 2023-09-08 09:44:51 +08:00
jxxghp
43d108aea9 Merge pull request #498 from thsrite/main 2023-09-08 09:44:07 +08:00
thsrite
c46b1dd116 fix: message-forwarding plugin bug 2023-09-08 09:22:59 +08:00
jxxghp
d3fac56e9a fix 2023-09-08 08:05:54 +08:00
jxxghp
b3f5b87b02 fix 2023-09-08 08:04:09 +08:00
jxxghp
03abdf9cb4 fix 2023-09-08 07:52:05 +08:00
jxxghp
42bc354e06 fix 2023-09-08 07:39:08 +08:00
jxxghp
02e81a79b2 fix 2023-09-07 23:11:08 +08:00
jxxghp
9fa4b8dfbe fix 2023-09-07 23:04:35 +08:00
jxxghp
366f59623a fix 2023-09-07 23:00:51 +08:00
jxxghp
d4c28500b7 fix plugin 2023-09-07 22:04:07 +08:00
jxxghp
5780344c43 fix 2023-09-07 20:19:03 +08:00
jxxghp
18970efc1a add index 2023-09-07 18:23:43 +08:00
jxxghp
5725584176 add index 2023-09-07 18:23:30 +08:00
jxxghp
4e26168ab5 fix plugin 2023-09-07 17:40:09 +08:00
jxxghp
f694dee71d fix 2023-09-07 16:16:04 +08:00
jxxghp
a9db0f6bbf fix 2023-09-07 16:05:48 +08:00
jxxghp
7efcde89b9 fix 2023-09-07 14:59:32 +08:00
jxxghp
1c07b306c3 Merge pull request #489 from thsrite/main
fix: directory-monitoring processed logic && feat: add a switch for whether imported media follows TMDB info changes; when off, keep the library names
2023-09-07 14:29:08 +08:00
jxxghp
6c59a5ebb0 Merge branch 'main' into main 2023-09-07 14:29:01 +08:00
thsrite
4c7321a738 fix 2023-09-07 13:57:27 +08:00
jxxghp
f42fd023bb fix #490 2023-09-07 13:39:08 +08:00
jxxghp
9b8a4ebdd4 fix 2023-09-07 12:56:39 +08:00
jxxghp
443e2d8104 fix: reduce recognition calls during scraping 2023-09-07 12:51:49 +08:00
jxxghp
2c61d439ca feat: library scraping supports overwriting
fix: type declarations
2023-09-07 12:35:35 +08:00
thsrite
e01268222c fix 2023-09-07 12:27:04 +08:00
thsrite
27ff77b504 fix type 2023-09-07 12:25:01 +08:00
thsrite
bf8893d71b fix: bug when re-scraping the file's parent folder 2023-09-07 11:16:11 +08:00
thsrite
54b09a17c2 fix 2023-09-07 11:12:16 +08:00
thsrite
b01621049b feat: add a switch for whether imported media follows TMDB info changes; when off, keep the library names 2023-09-07 10:55:01 +08:00
thsrite
e5dc40e3c1 fix: re-acquire the token and resend the request after it expires 2023-09-07 10:24:21 +08:00
thsrite
44d4bcdd19 fix: directory-monitoring processed logic 2023-09-07 10:16:44 +08:00
jxxghp
b899b23d04 fix 2023-09-07 08:37:57 +08:00
jxxghp
fa23012adb fix #486: season images prefer TMDB's 2023-09-07 08:03:05 +08:00
jxxghp
d836b385ae fix 2023-09-07 07:20:10 +08:00
jxxghp
15a0bc6c12 fix: restart-failure notice 2023-09-06 21:48:09 +08:00
jxxghp
22791e361d Update README.md 2023-09-06 21:41:23 +08:00
jxxghp
47b7dade5d Merge pull request #484 from thsrite/main 2023-09-06 21:23:32 +08:00
thsrite
c57d13afcc fix: improve the sync-delete plugin message 2023-09-06 21:21:00 +08:00
jxxghp
8db1c2952c Merge remote-tracking branch 'origin/main' 2023-09-06 21:15:21 +08:00
jxxghp
28c19bc4e3 fix: improve the file-organizing progress display 2023-09-06 21:15:10 +08:00
jxxghp
fbef1735b0 Merge pull request #482 from thsrite/main
fix bug
2023-09-06 20:19:44 +08:00
thsrite
9869af992b fix bug 2023-09-06 20:18:43 +08:00
jxxghp
b6cb241b8a Merge pull request #480 from WPF0414/main
fix: speed-limit notification rate display
2023-09-06 20:09:02 +08:00
jxxghp
7edf8e7c30 Merge pull request #481 from thsrite/main
fix: cross-seeding deletion bug
2023-09-06 20:07:42 +08:00
thsrite
452161f1b8 fix: cross-seeding deletion bug 2023-09-06 20:05:29 +08:00
jxxghp
f75abb27b6 v1.1.5
- Fixed only the first file being scraped during batch organizing
- Fixed files being processed repeatedly when multiple download tasks share one download directory
- Support restarting from the web UI (requires mapping `/var/run/docker.sock` into the container)
2023-09-06 19:54:23 +08:00
wangpengfei
30311e8e56 fix: speed-limit notification
Fix the wrong notification when limiting speed
2023-09-06 19:50:18 +08:00
jxxghp
adff3b22e9 Merge pull request #476 from thsrite/main
fix: improve the media-library sync-delete plugin
2023-09-06 16:55:10 +08:00
thsrite
013c0dea3b fix: the NAStool sync plugin skips download_hash 2023-09-06 16:22:32 +08:00
jxxghp
c593c3ba16 fix #461: do not reprocess files already transferred successfully 2023-09-06 16:12:40 +08:00
jxxghp
61b74735de fix #464 2023-09-06 16:00:42 +08:00
thsrite
952cae50e2 fix: sync-delete plugin torrent-deletion logic 2023-09-06 15:57:46 +08:00
thsrite
7a9f89e86c fix: remove the sync-delete plugin's interactive command 2023-09-06 15:34:35 +08:00
jxxghp
f14d8bec1b fix api 2023-09-06 15:29:52 +08:00
thsrite
697d5a815b fix: prevent accidental deletion when titles do not match 2023-09-06 15:07:06 +08:00
thsrite
cfeaa2674d fix: improve the media-library sync-delete plugin 2023-09-06 14:26:24 +08:00
jxxghp
08f046f059 fix #465: only one file was scraped during batch transfer 2023-09-06 13:04:18 +08:00
jxxghp
a66912f41a fix #465: only one file was scraped during batch transfer 2023-09-06 13:01:13 +08:00
jxxghp
f244728a96 Merge remote-tracking branch 'origin/main' 2023-09-06 12:56:04 +08:00
jxxghp
576ac08a05 feat: built-in restart 2023-09-06 12:55:48 +08:00
jxxghp
e874b3f294 Merge pull request #474 from thsrite/main
fix: NAStool record sync adds progress…
2023-09-06 11:34:13 +08:00
thsrite
90ff0fc793 fix: NAStool record sync adds progress… 2023-09-06 11:32:34 +08:00
jxxghp
259e8fc2e1 fix #463 2023-09-06 11:29:47 +08:00
jxxghp
5c0be93913 Merge pull request #471 from thsrite/main 2023-09-06 10:47:21 +08:00
thsrite
e84a5c74f6 fix: sync-delete plugin prevents duplicate processing 2023-09-06 09:37:02 +08:00
jxxghp
5145527d0e fix #456 2023-09-06 08:34:04 +08:00
jxxghp
e3f7f873c0 Merge pull request #462 from WPF0414/main 2023-09-05 22:38:21 +08:00
wangpengfei
84a2db2247 Update __init__.py
Fix the ratio-based bug
2023-09-05 22:35:39 +08:00
jxxghp
4902d5ebed feat: local filesystem duplicate detection 2023-09-05 20:32:38 +08:00
jxxghp
243391ee30 fix release 2023-09-05 19:57:24 +08:00
103 changed files with 5455 additions and 2099 deletions

View File

@@ -14,13 +14,7 @@ jobs:
steps:
-
name: Checkout
-uses: actions/checkout@v3
-name: Docker meta
-id: meta
-uses: docker/metadata-action@v4
-with:
-images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
+uses: actions/checkout@v4
-
name: Release version
@@ -29,34 +23,42 @@ jobs:
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
+name: Docker meta
+id: meta
+uses: docker/metadata-action@v5
+with:
+images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
+tags: |
+type=raw,value=${{ env.app_version }}
+type=raw,value=latest
-
name: Set Up QEMU
-uses: docker/setup-qemu-action@v2
+uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
-uses: docker/setup-buildx-action@v2
+uses: docker/setup-buildx-action@v3
-
name: Login DockerHub
-uses: docker/login-action@v2
+uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
-
name: Build Image
-uses: docker/build-push-action@v4
+uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
-linux/arm64
+linux/arm64/v8
push: true
build-args: |
MOVIEPILOT_VERSION=${{ env.app_version }}
-tags: |
-${{ secrets.DOCKER_USERNAME }}/moviepilot:latest
-${{ secrets.DOCKER_USERNAME }}/moviepilot:${{ env.app_version }}
+tags: ${{ steps.meta.outputs.tags }}
+labels: ${{ steps.meta.outputs.labels }}

View File

@@ -14,7 +14,7 @@ jobs:
steps:
-
name: Checkout
-uses: actions/checkout@v3
+uses: actions/checkout@v4
-
name: Release Version
@@ -28,7 +28,7 @@ jobs:
uses: actions/create-release@latest
with:
tag_name: v${{ env.app_version }}
-name: v${{ env.app_version }}
+release_name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
draft: false
prerelease: false

View File

@@ -9,6 +9,7 @@ ENV LANG="C.UTF-8" \
UMASK=000 \
MOVIEPILOT_AUTO_UPDATE=true \
MOVIEPILOT_AUTO_UPDATE_DEV=false \
+PORT=3001 \
NGINX_PORT=3000 \
CONFIG_DIR="/config" \
API_TOKEN="moviepilot" \
@@ -48,6 +49,7 @@ RUN apt-get update \
busybox \
dumb-init \
jq \
+haproxy \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
@@ -58,11 +60,12 @@ RUN apt-get update \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
-&& mkdir -p ${HOME} \
+&& mkdir -p ${HOME} /var/lib/haproxy/server-state \
&& groupadd -r moviepilot -g 911 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 911 \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
+&& pip install Cython \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& python_ver=$(python3 -V | awk '{print $2}') \
@@ -82,5 +85,5 @@ RUN apt-get update \
/var/lib/apt/lists/* \
/var/tmp/*
EXPOSE 3000
-VOLUME ["/config"]
+VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]

View File

@@ -51,20 +51,22 @@ docker pull jxxghp/moviepilot:latest
- **PGID**: gid of the user running the program, default `0`
- **UMASK**: umask permissions, default `000`; consider setting `022`
- **MOVIEPILOT_AUTO_UPDATE**: update on restart, `true`/`false`, default `true`. **Note: if you run into network problems, configure `PROXY_HOST`; see the `PROXY_HOST` description below**
-- **NGINX_PORT**: web service port, default `3000`; can be changed, but must not be `3001`
+- **NGINX_PORT**: web service port, default `3000`; can be changed, but must not conflict with the API service port
+- **PORT**: API service port, default `3001`; can be changed, but must not conflict with the web service port
- **SUPERUSER**: superuser name, default `admin`; use this user to log in to the admin UI after installation
- **SUPERUSER_PASSWORD**: initial superuser password, default `password`; change it to a strong password
- **API_TOKEN**: API key, default `moviepilot`; media-server webhook, WeChat callback and similar URLs must append `?token=` plus this value; change it to a complex string
- **PROXY_HOST**: network proxy (optional); needed to reach themoviedb or for update-on-restart; format `http(s)://ip:port`
- **TMDB_API_DOMAIN**: TMDB API address, default `api.themoviedb.org`; `api.tmdb.org` or another relay/proxy address also works as long as it is reachable
- **DOWNLOAD_PATH**: download directory. **Note: keep the mapped paths of `moviepilot` and the `downloader` identical**, otherwise downloaded files cannot be transferred
-- **DOWNLOAD_MOVIE_PATH**: movie download directory, **must be below `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
-- **DOWNLOAD_TV_PATH**: TV download directory, **must be below `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
-- **DOWNLOAD_ANIME_PATH**: anime download directory, **must be below `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
+- **DOWNLOAD_MOVIE_PATH**: movie download directory; if unset, downloads go to `DOWNLOAD_PATH`
+- **DOWNLOAD_TV_PATH**: TV download directory; if unset, downloads go to `DOWNLOAD_PATH`
+- **DOWNLOAD_ANIME_PATH**: anime download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY**: secondary download categories, `true`/`false`, default `false`; when enabled, secondary category directories are created under the download directory per `category.yaml`
- **DOWNLOAD_SUBTITLE**: download site subtitles, `true`/`false`, default `true`
- **REFRESH_MEDIASERVER**: refresh the media library on import, `true`/`false`, default `true`
- **SCRAP_METADATA**: scrape imported media files, `true`/`false`, default `true`
+- **SCRAP_FOLLOW_TMDB**: whether already-imported media follows TMDB info changes, `true`/`false`, default `true`
- **TORRENT_TAG**: torrent tag, default `MOVIEPILOT`; when set, only downloads added by MoviePilot are processed; when empty, every task in the downloader is processed
- **LIBRARY_PATH**: media library directories, multiple directories separated by `,`
- **LIBRARY_MOVIE_NAME**: movie library directory name, default `电影`
@@ -79,6 +81,8 @@ docker pull jxxghp/moviepilot:latest
- **OCR_HOST**: OCR server address, format `http(s)://ip:port`; used to recognize site captcha images for automatic login, Cookie retrieval, etc.; defaults to the built-in server `https://movie-pilot.org`, or self-host with [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr)
- **USER_AGENT**: browser UA matching CookieCloud (optional); setting it improves the success rate of connecting to sites; it can be changed in the admin UI after sites are synced
- **AUTO_DOWNLOAD_USER**: user IDs allowed to auto-download from interactive search, separated by `,`
+- **SUBSCRIBE_MODE**: subscription mode, `rss`/`spider`, default `spider`. `rss` mode matches subscriptions by periodically refreshing RSS; the RSS URL is fetched automatically and can also be maintained manually. It puts little load on sites, supports a configurable refresh interval and runs around the clock, but subscription and download notifications cannot filter or show freeleech. `rss` mode is recommended.
+- **SUBSCRIBE_RSS_INTERVAL**: RSS subscription-mode refresh interval in minutes, default `30`, not less than 5.
- **SUBSCRIBE_SEARCH**: subscription search, `true`/`false`, default `false`; when enabled, a full search of all subscriptions runs every 24 hours to fill missing episodes. Normal subscriptions are usually sufficient; subscription search is only a fallback and increases site load, so enabling it is not recommended
- **MESSAGER**: notification channels, supports `telegram`/`wechat`/`slack`; separate multiple channels with `,`. The matching channel variables must also be configured (variables for unused channels can be removed); `telegram` is recommended
@@ -145,23 +149,24 @@ docker pull jxxghp/moviepilot:latest
### 2. **User authentication**

-- **AUTH_SITE**: authentication site, supports `hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`iyuu`
+- **AUTH_SITE**: authentication site, supports `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`ptlsp`

`MoviePilot` requires authentication before use; after setting `AUTH_SITE`, configure the corresponding site's authentication parameters from the table below.

-| Site | Parameters |
-|:--:|:-----------------------------------------------------:|
-| iyuu | `IYUU_SIGN`: IYUU login token |
-| hhclub | `HHCLUB_USERNAME`: username<br/>`HHCLUB_PASSKEY`: passkey |
-| audiences | `AUDIENCES_UID`: user ID<br/>`AUDIENCES_PASSKEY`: passkey |
-| hddolby | `HDDOLBY_ID`: user ID<br/>`HDDOLBY_PASSKEY`: passkey |
-| zmpt | `ZMPT_UID`: user ID<br/>`ZMPT_PASSKEY`: passkey |
-| freefarm | `FREEFARM_UID`: user ID<br/>`FREEFARM_PASSKEY`: passkey |
-| hdfans | `HDFANS_UID`: user ID<br/>`HDFANS_PASSKEY`: passkey |
+| Site | Parameters |
+|:------------:|:-----------------------------------------------------:|
+| iyuu         | `IYUU_SIGN`: IYUU login token |
+| hhclub       | `HHCLUB_USERNAME`: username<br/>`HHCLUB_PASSKEY`: passkey |
+| audiences    | `AUDIENCES_UID`: user ID<br/>`AUDIENCES_PASSKEY`: passkey |
+| hddolby      | `HDDOLBY_ID`: user ID<br/>`HDDOLBY_PASSKEY`: passkey |
+| zmpt         | `ZMPT_UID`: user ID<br/>`ZMPT_PASSKEY`: passkey |
+| freefarm     | `FREEFARM_UID`: user ID<br/>`FREEFARM_PASSKEY`: passkey |
+| hdfans       | `HDFANS_UID`: user ID<br/>`HDFANS_PASSKEY`: passkey |
| wintersakura | `WINTERSAKURA_UID`: user ID<br/>`WINTERSAKURA_PASSKEY`: passkey |
-| leaves | `LEAVES_UID`: user ID<br/>`LEAVES_PASSKEY`: passkey |
-| 1ptba | `1PTBA_UID`: user ID<br/>`1PTBA_PASSKEY`: passkey |
-| icc2022 | `ICC2022_UID`: user ID<br/>`ICC2022_PASSKEY`: passkey |
+| leaves       | `LEAVES_UID`: user ID<br/>`LEAVES_PASSKEY`: passkey |
+| 1ptba        | `1PTBA_UID`: user ID<br/>`1PTBA_PASSKEY`: passkey |
+| icc2022      | `ICC2022_UID`: user ID<br/>`ICC2022_PASSKEY`: passkey |
+| ptlsp        | `PTLSP_UID`: user ID<br/>`PTLSP_PASSKEY`: passkey |

### 2. **Advanced configuration**
@@ -226,6 +231,7 @@ docker pull jxxghp/moviepilot:latest
- Remote management via WeChat/Telegram/Slack. WeChat/Telegram automatically get an operations menu (WeChat limits the number of menu entries, so some may not show; WeChat also needs the callback address configured on the official page, relative path `/api/v1/message/`)
- Configure a media-server webhook to send playback notifications through MoviePilot. The webhook relative path is `/api/v1/webhook?token=moviepilot` (port `3001`), where `moviepilot` is the configured `API_TOKEN`
- Add MoviePilot as a Radarr or Sonarr server to Overseerr or Jellyseerr (port `3001`) to browse and subscribe through Overseerr/Jellyseerr
+- Map the host's docker.sock file to `/var/run/docker.sock` in the container to enable the built-in restart. Example: `-v /var/run/docker.sock:/var/run/docker.sock:ro`
**Note**
@@ -240,11 +246,24 @@ location / {
proxy_set_header X-Forwarded-Proto $scheme;
}
```
3) A newly created WeChat Work (企业微信) app needs a proxy with a fixed public IP to receive messages; add the following to the proxy:
```nginx configuration
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/message/send {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/menu/create {
proxy_pass https://qyapi.weixin.qq.com;
}
```
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/b8f0238d-847f-4f9d-b210-e905837362b9)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f2654b09-26f3-464f-a0af-1de3f97832ee)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/28219233-ec7d-479b-b184-9a901c947dd1)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/fcb87529-56dd-43df-8337-6e34b8582819)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f7df0806-668d-4c8b-ad41-133bf8f0bf73)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/bfa77c71-510a-46a6-9c1e-cf98cb101e3a)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/51cafd09-e38c-47f9-ae62-1e83ab8bf89b)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f7ea77cd-0362-4c35-967c-7f1b22dbef05)
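The environment variables and the docker.sock mapping documented above can be combined into a minimal `docker-compose` sketch. The values below are the defaults stated in the README, shown only for illustration; adjust paths and secrets for a real deployment:

```yaml
version: "3"
services:
  moviepilot:
    image: jxxghp/moviepilot:latest
    ports:
      - "3000:3000"   # NGINX_PORT (web UI)
      - "3001:3001"   # PORT (API, used for webhooks and Overseerr/Jellyseerr)
    environment:
      - NGINX_PORT=3000
      - PORT=3001
      - SUPERUSER=admin
      - SUPERUSER_PASSWORD=password   # change this
      - API_TOKEN=moviepilot          # change this
    volumes:
      - ./config:/config
      # enables the built-in restart from the web UI
      - /var/run/docker.sock:/var/run/docker.sock:ro
```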

View File

@@ -0,0 +1,40 @@
"""1.0.6
Revision ID: 232dfa044617
Revises: e734c7fe6056
Create Date: 2023-09-19 21:34:41.994617
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '232dfa044617'
down_revision = 'e734c7fe6056'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Search priority
op.execute("delete from systemconfig where key = 'SearchFilterRules';")
op.execute(
"insert into systemconfig(key, value) VALUES('SearchFilterRules', (select value from systemconfig where key= 'FilterRules'));")
# Subscription priority
op.execute("delete from systemconfig where key = 'SubscribeFilterRules';")
op.execute(
"insert into systemconfig(key, value) VALUES('SubscribeFilterRules', (select value from systemconfig where key= 'FilterRules'));")
# Best-version priority
op.execute("delete from systemconfig where key = 'BestVersionFilterRules';")
op.execute(
"insert into systemconfig(key, value) VALUES('BestVersionFilterRules', (select value from systemconfig where key= 'FilterRules2'));")
# Delete the old priority rules
op.execute("delete from systemconfig where key = 'FilterRules';")
op.execute("delete from systemconfig where key = 'FilterRules2';")
# ### end Alembic commands ###
def downgrade() -> None:
pass
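The `upgrade()` above copies the old global `FilterRules`/`FilterRules2` values into the new per-scenario keys and then drops the old rows. Its effect can be sketched against an in-memory SQLite table, with the schema reduced to the two columns the migration touches (this runs the same SQL outside Alembic, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE systemconfig (key TEXT, value TEXT)")
conn.execute("INSERT INTO systemconfig VALUES ('FilterRules', 'rules-a')")
conn.execute("INSERT INTO systemconfig VALUES ('FilterRules2', 'rules-b')")

# The same copy-then-delete statements as the migration
for new_key, old_key in [("SearchFilterRules", "FilterRules"),
                         ("SubscribeFilterRules", "FilterRules"),
                         ("BestVersionFilterRules", "FilterRules2")]:
    conn.execute(f"DELETE FROM systemconfig WHERE key = '{new_key}'")
    conn.execute(
        f"INSERT INTO systemconfig(key, value) "
        f"VALUES('{new_key}', (SELECT value FROM systemconfig WHERE key = '{old_key}'))")
conn.execute("DELETE FROM systemconfig WHERE key IN ('FilterRules', 'FilterRules2')")

rows = dict(conn.execute("SELECT key, value FROM systemconfig"))
# rows == {'SearchFilterRules': 'rules-a', 'SubscribeFilterRules': 'rules-a',
#          'BestVersionFilterRules': 'rules-b'}
```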

View File

@@ -6,8 +6,6 @@ Create Date: 2023-08-28 13:21:45.152012
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '52ab4930be04'

View File

@@ -0,0 +1,27 @@
"""1.0.5
Revision ID: e734c7fe6056
Revises: 1e169250e949
Create Date: 2023-09-07 18:19:41.250957
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = 'e734c7fe6056'
down_revision = '1e169250e949'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
try:
op.create_index('ix_transferhistory_tmdbid', 'transferhistory', ['tmdbid'], unique=False)
except Exception as err:
pass
# ### end Alembic commands ###
def downgrade() -> None:
pass

View File

@@ -1,7 +1,7 @@
from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
-    media, douban, search, plugin, tmdb, history, system, download, dashboard, rss, filebrowser, transfer
+    media, douban, search, plugin, tmdb, history, system, download, dashboard, filebrowser, transfer
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -19,6 +19,5 @@ api_router.include_router(system.router, prefix="/system", tags=["system"])
api_router.include_router(plugin.router, prefix="/plugin", tags=["plugin"])
api_router.include_router(download.router, prefix="/download", tags=["download"])
api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboard"])
api_router.include_router(rss.router, prefix="/rss", tags=["rss"])
api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])

View File

@@ -41,12 +41,7 @@ def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query storage space information
"""
if settings.LIBRARY_PATH:
total_storage, free_storage = SystemUtils.space_usage(
[Path(path) for path in settings.LIBRARY_PATH.split(",")]
)
else:
total_storage, free_storage = 0, 0
total_storage, free_storage = SystemUtils.space_usage(settings.LIBRARY_PATHS)
return schemas.Storage(
total_storage=total_storage,
used_storage=total_storage - free_storage

View File

@@ -17,7 +17,7 @@ IMAGE_TYPES = [".jpg", ".png", ".gif", ".bmp", ".jpeg", ".webp"]
@router.get("/list", summary="所有插件", response_model=List[schemas.FileItem])
def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def list_path(path: str, sort: str = 'time', _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query all directories and files under the current path
"""
@@ -55,6 +55,7 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
basename=path_obj.stem,
extension=path_obj.suffix[1:],
size=path_obj.stat().st_size,
modify_time=path_obj.stat().st_mtime,
))
return ret_items
@@ -65,6 +66,7 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
path=str(item).replace("\\", "/") + "/",
name=item.name,
basename=item.stem,
modify_time=item.stat().st_mtime,
))
# Walk all files, excluding subdirectories
@@ -80,8 +82,13 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
basename=item.stem,
extension=item.suffix[1:],
size=item.stat().st_size,
modify_time=item.stat().st_mtime,
))
# Sort
if sort == 'time':
ret_items.sort(key=lambda x: x.modify_time, reverse=True)
else:
ret_items.sort(key=lambda x: x.name, reverse=False)
return ret_items
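The new `sort` parameter in this hunk picks between newest-first and alphabetical ordering. A self-contained sketch of just that sort rule (the `FileItem` dataclass is a simplified stand-in for the `schemas.FileItem` model, keeping only the fields the sort touches):

```python
from dataclasses import dataclass

@dataclass
class FileItem:
    name: str
    modify_time: float  # mtime, as returned by Path.stat().st_mtime

def sort_items(items, sort: str = "time"):
    """Same rule as list_path(): newest first for 'time', A-Z otherwise."""
    if sort == "time":
        return sorted(items, key=lambda x: x.modify_time, reverse=True)
    return sorted(items, key=lambda x: x.name)

items = [FileItem("a.mkv", 100.0), FileItem("b.mkv", 200.0)]
by_time = [i.name for i in sort_items(items, "time")]
by_name = [i.name for i in sort_items(items, "name")]
print(by_time)  # ['b.mkv', 'a.mkv']
print(by_name)  # ['a.mkv', 'b.mkv']
```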

View File

@@ -64,14 +64,13 @@ def wechat_verify(echostr: str, msg_signature: str,
@router.get("/switchs", summary="查询通知消息渠道开关", response_model=List[NotificationSwitch])
def read_switchs(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query notification channel switches
"""
return_list = []
# Read from the database
switchs = SystemConfigOper(db).get(SystemConfigKey.NotificationChannels)
switchs = SystemConfigOper().get(SystemConfigKey.NotificationChannels)
if not switchs:
for noti in NotificationType:
return_list.append(NotificationSwitch(mtype=noti.value, wechat=True, telegram=True, slack=True))
@@ -83,7 +82,6 @@ def read_switchs(db: Session = Depends(get_db),
@router.post("/switchs", summary="设置通知消息渠道开关", response_model=schemas.Response)
def set_switchs(switchs: List[NotificationSwitch],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Set notification channel switches
@@ -92,6 +90,6 @@ def set_switchs(switchs: List[NotificationSwitch],
for switch in switchs:
switch_list.append(switch.dict())
# Save to the database
SystemConfigOper(db).set(SystemConfigKey.NotificationChannels, switch_list)
SystemConfigOper().set(SystemConfigKey.NotificationChannels, switch_list)
return schemas.Response(success=True)

View File

@@ -22,28 +22,26 @@ def all_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed_plugins(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def installed_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query the list of plugins installed by the user
"""
return SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
return SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install_plugin(plugin_id: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
安装插件
"""
# 已安装插件
install_plugins = SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# 安装插件
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
# 保存设置
SystemConfigOper(db).set(SystemConfigKey.UserInstalledPlugins, install_plugins)
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
return schemas.Response(success=True)
@@ -93,19 +91,18 @@ def set_plugin_config(plugin_id: str, conf: dict,
@router.delete("/{plugin_id}", summary="卸载插件", response_model=schemas.Response)
def uninstall_plugin(plugin_id: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
卸载插件
"""
# 删除已安装信息
install_plugins = SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
for plugin in install_plugins:
if plugin == plugin_id:
install_plugins.remove(plugin)
break
# 保存
SystemConfigOper(db).set(SystemConfigKey.UserInstalledPlugins, install_plugins)
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
return schemas.Response(success=True)
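The install/uninstall endpoints above maintain a plain list of plugin IDs in the system config. The list-maintenance logic in isolation (a dict stands in for the `SystemConfigOper`-backed store; the key name mirrors `SystemConfigKey.UserInstalledPlugins`):

```python
# Dict stand-in for the persisted system config store.
store = {}

def install_plugin(plugin_id: str):
    plugins = store.get("UserInstalledPlugins") or []
    if plugin_id not in plugins:  # same duplicate guard as the endpoint
        plugins.append(plugin_id)
    store["UserInstalledPlugins"] = plugins

def uninstall_plugin(plugin_id: str):
    plugins = store.get("UserInstalledPlugins") or []
    if plugin_id in plugins:
        plugins.remove(plugin_id)
    store["UserInstalledPlugins"] = plugins

install_plugin("CloudflareSpeedTest")
install_plugin("CloudflareSpeedTest")  # duplicate install is a no-op
install_plugin("IYUUAutoSeed")
uninstall_plugin("CloudflareSpeedTest")
installed = store["UserInstalledPlugins"]
print(installed)  # ['IYUUAutoSeed']
```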

View File

@@ -1,135 +0,0 @@
from typing import List, Any
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from starlette.background import BackgroundTasks
from app import schemas
from app.chain.rss import RssChain
from app.core.security import verify_token
from app.db import get_db
from app.db.models.rss import Rss
from app.helper.rss import RssHelper
from app.schemas import MediaType
router = APIRouter()
def start_rss_refresh(db: Session, rssid: int = None):
"""
Start a custom subscription refresh
"""
RssChain(db).refresh(rssid=rssid, manual=True)
@router.get("/", summary="所有自定义订阅", response_model=List[schemas.Rss])
def read_rsses(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query all custom subscriptions
"""
return Rss.list(db)
@router.post("/", summary="新增自定义订阅", response_model=schemas.Response)
def create_rss(
*,
rss_in: schemas.Rss,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
Add a custom subscription
"""
if rss_in.type:
mtype = MediaType(rss_in.type)
else:
mtype = None
rssid, errormsg = RssChain(db).add(
mtype=mtype,
**rss_in.dict()
)
if not rssid:
return schemas.Response(success=False, message=errormsg)
return schemas.Response(success=True, data={
"id": rssid
})
@router.put("/", summary="更新自定义订阅", response_model=schemas.Response)
def update_rss(
*,
rss_in: schemas.Rss,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
Update custom subscription info
"""
rss = Rss.get(db, rss_in.id)
if not rss:
return schemas.Response(success=False, message="自定义订阅不存在")
rss.update(db, rss_in.dict())
return schemas.Response(success=True)
@router.get("/preview/{rssid}", summary="预览自定义订阅", response_model=List[schemas.TorrentInfo])
def preview_rss(
rssid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query the RSS feed of a custom subscription by ID
"""
rssinfo: Rss = Rss.get(db, rssid)
if not rssinfo:
return []
torrents = RssHelper.parse(rssinfo.url, proxy=True if rssinfo.proxy else False) or []
return [schemas.TorrentInfo(
title=t.get("title"),
description=t.get("description"),
enclosure=t.get("enclosure"),
size=t.get("size"),
page_url=t.get("link"),
pubdate=t["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if t.get("pubdate") else None,
) for t in torrents]
@router.get("/refresh/{rssid}", summary="刷新自定义订阅", response_model=schemas.Response)
def refresh_rss(
rssid: int,
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Refresh a custom subscription by ID
"""
background_tasks.add_task(start_rss_refresh,
db=db,
rssid=rssid)
return schemas.Response(success=True)
@router.get("/{rssid}", summary="查询自定义订阅详情", response_model=schemas.Rss)
def read_rss(
rssid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query custom subscription details by ID
"""
return Rss.get(db, rssid)
@router.delete("/{rssid}", summary="删除自定义订阅", response_model=schemas.Response)
def read_rss(
rssid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Delete a custom subscription by ID
"""
Rss.delete(db, rssid)
return schemas.Response(success=True)

View File

@@ -1,6 +1,6 @@
from typing import List, Any
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas

View File

@@ -6,8 +6,8 @@ from starlette.background import BackgroundTasks
from app import schemas
from app.chain.cookiecloud import CookieCloudChain
from app.chain.search import SearchChain
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.security import verify_token
from app.db import get_db
@@ -117,9 +117,9 @@ def cookie_cloud_sync(db: Session = Depends(get_db),
Clear all site data and re-sync site info from CookieCloud
"""
Site.reset(db)
SystemConfigOper(db).set(SystemConfigKey.IndexerSites, [])
SystemConfigOper(db).set(SystemConfigKey.RssSites, [])
CookieCloudChain(db).process(manual=True)
SystemConfigOper().set(SystemConfigKey.IndexerSites, [])
SystemConfigOper().set(SystemConfigKey.RssSites, [])
CookieCloudChain().process(manual=True)
# Broadcast site deletion to plugins
EventManager().send_event(EventType.SiteDeleted,
{
@@ -191,7 +191,7 @@ def site_icon(site_id: int,
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int, keyword: str = None,
def site_resource(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -203,7 +203,7 @@ def site_resource(site_id: int, keyword: str = None,
status_code=404,
detail=f"站点 {site_id} 不存在",
)
torrents = SearchChain(db).browse(site.domain, keyword)
torrents = TorrentsChain().browse(domain=site.domain)
if not torrents:
return []
return [torrent.to_dict() for torrent in torrents]
@@ -234,14 +234,14 @@ def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
Get the site list
"""
# Selected RSS sites
rss_sites = SystemConfigOper(db).get(SystemConfigKey.RssSites)
selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
# All sites
all_site = Site.list_order_by_pri(db)
if not rss_sites or not all_site:
if not selected_sites or not all_site:
return []
# Keep only the selected RSS sites
rss_sites = [site for site in all_site if site and site.id in rss_sites]
rss_sites = [site for site in all_site if site and site.id in selected_sites]
return rss_sites
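The `read_rss_sites` change above intersects the priority-ordered site list with the selected IDs. The same filter as a standalone sketch (`Site` here is a minimal stand-in for the ORM model):

```python
from dataclasses import dataclass

@dataclass
class Site:  # minimal stand-in for the Site model
    id: int
    name: str

def read_rss_sites(all_sites, selected_ids):
    """Same logic as the endpoint: keep priority order, drop unselected."""
    selected = selected_ids or []
    if not selected or not all_sites:
        return []
    return [s for s in all_sites if s and s.id in selected]

# Already ordered by priority, as Site.list_order_by_pri(db) would return them.
sites = [Site(3, "siteA"), Site(1, "siteB"), Site(2, "siteC")]
names = [s.name for s in read_rss_sites(sites, [2, 3])]
print(names)  # ['siteA', 'siteC']
```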

View File

@@ -138,6 +138,53 @@ def subscribe_mediaid(
return result if result else Subscribe()
@router.get("/refresh", summary="刷新订阅", response_model=schemas.Response)
def refresh_subscribes(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Refresh all subscriptions
"""
SubscribeChain(db).refresh()
return schemas.Response(success=True)
@router.get("/check", summary="刷新订阅 TMDB 信息", response_model=schemas.Response)
def check_subscribes(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Refresh TMDB info for all subscriptions
"""
SubscribeChain(db).check()
return schemas.Response(success=True)
@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
def search_subscribes(
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Search all subscriptions
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=None, state='R')
return schemas.Response(success=True)
@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
def search_subscribe(
subscribe_id: int,
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Search a subscription by its ID
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=subscribe_id, state=None)
return schemas.Response(success=True)
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
subscribe_id: int,
@@ -243,39 +290,3 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
username=user_name)
return schemas.Response(success=True)
@router.get("/refresh", summary="刷新订阅", response_model=schemas.Response)
def refresh_subscribes(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Refresh all subscriptions
"""
SubscribeChain(db).refresh()
return schemas.Response(success=True)
@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
def search_subscribe(
subscribe_id: int,
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Search all subscriptions
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=subscribe_id, state=None)
return schemas.Response(success=True)
@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
def search_subscribes(
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Search all subscriptions
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=None, state='R')
return schemas.Response(success=True)

View File

@@ -18,13 +18,14 @@ from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.system import SystemUtils
from version import APP_VERSION
router = APIRouter()
@router.get("/env", summary="查询系统环境变量", response_model=schemas.Response)
def get_setting(_: schemas.TokenPayload = Depends(verify_token)):
def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
"""
Query system environment variables, including the current version number
"""
@@ -62,29 +63,27 @@ def get_progress(process_type: str, token: str):
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
Query a system setting
"""
return schemas.Response(success=True, data={
"value": SystemConfigOper(db).get(key)
"value": SystemConfigOper().get(key)
})
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, str, int] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
Update a system setting
"""
SystemConfigOper(db).set(key, value)
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
@router.get("/message", summary="实时消息")
def get_progress(token: str):
def get_message(token: str):
"""
Get system messages in real time; the response format is SSE
"""
@@ -170,31 +169,45 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
return schemas.Response(success=False)
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
@router.get("/ruletest", summary="优先级规则测试", response_model=schemas.Response)
def ruletest(title: str,
subtitle: str = None,
ruletype: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
Filter rule test; rule types: 1 - subscribe, 2 - best version
Filter rule test; rule types: 1 - subscribe, 2 - best version, 3 - search
"""
torrent = schemas.TorrentInfo(
title=title,
description=subtitle,
)
if ruletype == "2":
rule_string = SystemConfigOper(db).get(SystemConfigKey.FilterRules2)
rule_string = SystemConfigOper().get(SystemConfigKey.BestVersionFilterRules)
elif ruletype == "3":
rule_string = SystemConfigOper().get(SystemConfigKey.SearchFilterRules)
else:
rule_string = SystemConfigOper(db).get(SystemConfigKey.FilterRules)
rule_string = SystemConfigOper().get(SystemConfigKey.SubscribeFilterRules)
if not rule_string:
return schemas.Response(success=False, message="过滤规则未设置!")
return schemas.Response(success=False, message="优先级规则未设置!")
# Filter
result = SearchChain(db).filter_torrents(rule_string=rule_string,
torrent_list=[torrent])
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=False, message="不符合优先级规则!")
return schemas.Response(success=True, data={
"priority": 100 - result[0].pri_order + 1
})
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
"""
Restart the system
"""
if not SystemUtils.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# Perform the restart
ret, msg = SystemUtils.restart()
return schemas.Response(success=ret, message=msg)
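The `ruletest` changes above pick the rule set by `ruletype` and turn the filter's `pri_order` into a user-facing priority. Both rules in isolation (the rule strings are placeholders named after the `SystemConfigKey` values in the hunk; real values come from the config store):

```python
# Placeholder rule strings keyed the way ruletest() selects them.
rules = {
    "2": "BestVersionFilterRules",
    "3": "SearchFilterRules",
}

def pick_rule(ruletype: str) -> str:
    # "2" -> best-version rules, "3" -> search rules,
    # anything else -> subscribe rules (the default branch).
    return rules.get(ruletype, "SubscribeFilterRules")

def priority(pri_order: int) -> int:
    # Same formula as the endpoint: lower pri_order means higher priority.
    return 100 - pri_order + 1

print(pick_rule("3"))  # SearchFilterRules
print(priority(1))     # 100
```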

View File

@@ -132,13 +132,10 @@ def arr_rootfolder(apikey: str) -> Any:
status_code=403,
detail="认证失败!",
)
library_path = "/"
if settings.LIBRARY_PATH:
library_path = settings.LIBRARY_PATH.split(",")[0]
return [
{
"id": 1,
"path": library_path,
"path": "/" if not settings.LIBRARY_PATHS else str(settings.LIBRARY_PATHS[0]),
"accessible": True,
"freeSpace": 0,
"unmappedFolders": []

View File

@@ -197,7 +197,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_medias", meta=meta)
def search_torrents(self, site: CommentedMap,
mediainfo: Optional[MediaInfo] = None,
mediainfo: MediaInfo,
keyword: str = None,
page: int = 0,
area: str = "title") -> List[TorrentInfo]:
@@ -223,44 +223,45 @@ class ChainBase(metaclass=ABCMeta):
def filter_torrents(self, rule_string: str,
torrent_list: List[TorrentInfo],
season_episodes: Dict[int, list] = None) -> List[TorrentInfo]:
season_episodes: Dict[int, list] = None,
mediainfo: MediaInfo = None) -> List[TorrentInfo]:
"""
过滤种子资源
:param rule_string: 过滤规则
:param torrent_list: 资源列表
:param season_episodes: 季集数过滤 {season:[episodes]}
:param mediainfo: 识别的媒体信息
:return: 过滤后的资源列表,添加资源优先级
"""
return self.run_module("filter_torrents", rule_string=rule_string,
torrent_list=torrent_list, season_episodes=season_episodes)
torrent_list=torrent_list, season_episodes=season_episodes,
mediainfo=mediainfo)
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None
) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param content: 种子文件地址或者磁力链接
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 种子分类
:return: 种子Hash错误信息
"""
return self.run_module("download", torrent_path=torrent_path, download_dir=download_dir,
return self.run_module("download", content=content, download_dir=download_dir,
cookie=cookie, episodes=episodes, category=category)
def download_added(self, context: Context, torrent_path: Path, download_dir: Path) -> None:
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
"""
添加下载任务成功后,从站点下载字幕,保存到下载目录
:param context: 上下文,包括识别信息、媒体信息、种子信息
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param torrent_path: 种子文件地址
:return: None该方法可被多个模块同时处理
"""
if settings.DOWNLOAD_SUBTITLE:
return self.run_module("download_added", context=context, torrent_path=torrent_path,
download_dir=download_dir)
return None
return self.run_module("download_added", context=context, torrent_path=torrent_path,
download_dir=download_dir)
def list_torrents(self, status: TorrentStatus = None,
hashs: Union[list, str] = None) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
@@ -342,9 +343,7 @@ class ChainBase(metaclass=ABCMeta):
:param file_path: file path
:return: success or failure
"""
if settings.REFRESH_MEDIASERVER:
return self.run_module("refresh_mediaserver", mediainfo=mediainfo, file_path=file_path)
return None
return self.run_module("refresh_mediaserver", mediainfo=mediainfo, file_path=file_path)
def post_message(self, message: Notification) -> None:
"""
@@ -392,9 +391,7 @@ class ChainBase(metaclass=ABCMeta):
:param mediainfo: recognized media info
:return: success or failure
"""
if settings.SCRAP_METADATA:
return self.run_module("scrape_metadata", path=path, mediainfo=mediainfo)
return None
return self.run_module("scrape_metadata", path=path, mediainfo=mediainfo)
def register_commands(self, commands: Dict[str, dict]) -> None:
"""

View File

@@ -13,6 +13,7 @@ from app.db.siteicon_oper import SiteIconOper
from app.helper.cloudflare import under_challenge
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotificationType, MessageChannel
@@ -30,6 +31,7 @@ class CookieCloudChain(ChainBase):
self.siteoper = SiteOper(self._db)
self.siteiconoper = SiteIconOper(self._db)
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.sitechain = SiteChain(self._db)
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper(
@@ -78,8 +80,23 @@ class CookieCloudChain(ChainBase):
# Update the site cookie
if status:
logger.info(f"站点【{site_info.name}】连通性正常不同步CookieCloud数据")
# Update the site RSS address
if not site_info.public and not site_info.rss:
# Auto-generate the RSS address
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=True if site_info.proxy else False
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# Update the site cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
@@ -104,12 +121,25 @@ class CookieCloudChain(ChainBase):
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
# Get the RSS address
rss_url = None
if not indexer.get("public") and indexer.get("domain"):
# Auto-generate the RSS address
rss_url, errmsg = self.rsshelper.get_rss_link(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if errmsg:
logger.warn(errmsg)
# Insert into the database
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=indexer.get("domain"),
domain=domain,
cookie=cookie,
rss=rss_url,
public=1 if indexer.get("public") else 0)
_add_count += 1
# Save the site icon
if indexer:
site_icon = self.siteiconoper.get_by_domain(domain)

View File

@@ -48,9 +48,12 @@ class DownloadChain(ChainBase):
msg_text = f"{msg_text}\n大小:{size}"
if torrent.title:
msg_text = f"{msg_text}\n种子:{torrent.title}"
if torrent.pubdate:
msg_text = f"{msg_text}\n发布时间:{torrent.pubdate}"
if torrent.seeders:
msg_text = f"{msg_text}\n做种数:{torrent.seeders}"
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.uploadvolumefactor and torrent.downloadvolumefactor:
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.hit_and_run:
msg_text = f"{msg_text}\nHit&Run"
if torrent.description:
@@ -70,25 +73,33 @@ class DownloadChain(ChainBase):
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
userid: Union[str, int] = None) -> Tuple[Optional[Path], str, list]:
userid: Union[str, int] = None
) -> Tuple[Optional[Union[Path, str]], str, list]:
"""
Download the torrent file
Download the torrent file; for a magnet link, the magnet link itself is returned
:return: torrent path, torrent folder name, torrent file list
"""
torrent_file, _, download_folder, files, error_msg = self.torrent.download_torrent(
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
url=torrent.enclosure,
cookie=torrent.site_cookie,
ua=torrent.site_ua,
proxy=torrent.site_proxy)
if isinstance(content, str):
# Magnet link
return content, "", []
if not torrent_file:
logger.error(f"下载种子文件失败:{torrent.title} - {torrent.enclosure}")
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Manual,
title=f"{torrent.title} 种子下载失败!",
text=f"错误信息:{error_msg}\n种子链接{torrent.enclosure}",
text=f"错误信息:{error_msg}\n站点{torrent.site_name}",
userid=userid))
return None, "", []
# Return the torrent file path, torrent folder name and torrent file list
return torrent_file, download_folder, files
def download_single(self, context: Context, torrent_file: Path = None,
@@ -98,19 +109,27 @@ class DownloadChain(ChainBase):
userid: Union[str, int] = None) -> Optional[str]:
"""
Download and send a notification
:param context: resource context
:param torrent_file: torrent file path
:param episodes: episodes to download
:param channel: notification channel
:param save_path: save path
:param userid: user ID
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_folder_name = ""
if not torrent_file:
# Download the torrent file
torrent_file, _folder_name, _file_list = self.download_torrent(_torrent, userid=userid)
if not torrent_file:
# Download the torrent file; the result may be a file or a magnet link
content, _folder_name, _file_list = self.download_torrent(_torrent, userid=userid)
if not content:
return
else:
content = torrent_file
# Get the torrent's folder name and file list
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
# Download directory
if not save_path:
if settings.DOWNLOAD_CATEGORY and _media and _media.category:
@@ -149,7 +168,7 @@ class DownloadChain(ChainBase):
download_dir = Path(save_path)
# Add the download
result: Optional[tuple] = self.download(torrent_path=torrent_file,
result: Optional[tuple] = self.download(content=content,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir,
@@ -206,13 +225,12 @@ class DownloadChain(ChainBase):
self.downloadhis.add_files(files_to_add)
# Send a message
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel)
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel, userid=userid)
# Post-download handling
self.download_added(context=context, torrent_path=torrent_file, download_dir=download_dir)
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# Broadcast the event
self.eventmanager.send_event(EventType.DownloadAdded, {
"hash": _hash,
"torrent_file": torrent_file,
"context": context
})
else:
@@ -226,7 +244,6 @@ class DownloadChain(ChainBase):
% (_media.title_year, _meta.season_episode),
text=f"站点:{_torrent.site_name}\n"
f"种子名称:{_meta.org_string}\n"
f"种子链接:{_torrent.enclosure}\n"
f"错误信息:{error_msg}",
image=_media.get_message_image(),
userid=userid))
@@ -347,8 +364,11 @@ class DownloadChain(ChainBase):
if set(torrent_season).issubset(set(need_season)):
if len(torrent_season) == 1:
# A single-season torrent may be misnamed; open the torrent to verify, and only download when the actual episode count is >= the total
torrent_path, _, torrent_files = self.download_torrent(torrent)
if not torrent_path:
content, _, torrent_files = self.download_torrent(torrent)
if not content:
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法确定种子文件集数")
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{meta.org_string} 解析文件集数为 {torrent_episodes}")
@@ -366,10 +386,12 @@ class DownloadChain(ChainBase):
continue
else:
# Download
download_id = self.download_single(context=context,
torrent_file=torrent_path,
save_path=save_path,
userid=userid)
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
save_path=save_path,
userid=userid
)
else:
# Download
download_id = self.download_single(context, save_path=save_path, userid=userid)
@@ -486,8 +508,11 @@ class DownloadChain(ChainBase):
and len(meta.season_list) == 1 \
and meta.season_list[0] == need_season:
# Check the torrent for the needed episodes
torrent_path, _, torrent_files = self.download_torrent(torrent, userid=userid)
if not torrent_path:
content, _, torrent_files = self.download_torrent(torrent, userid=userid)
if not content:
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法解析种子文件集数")
continue
# All episodes in the torrent
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
@@ -499,11 +524,13 @@ class DownloadChain(ChainBase):
continue
logger.info(f"{torrent.site_name} - {torrent.title} 选中集数:{selected_episodes}")
# Add the download
download_id = self.download_single(context=context,
torrent_file=torrent_path,
episodes=selected_episodes,
save_path=save_path,
userid=userid)
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
episodes=selected_episodes,
save_path=save_path,
userid=userid
)
if not download_id:
continue
# Update the recognized episodes back into the context
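Several hunks in this file branch on whether `download_torrent` returned a real torrent file (`Path`) or a magnet link (plain `str`), since only a file can be opened to inspect its episode list. That dispatch in isolation (an illustrative helper, not code from the repo):

```python
from pathlib import Path
from typing import Optional, Union

def can_check_episodes(content: Union[Path, str]) -> bool:
    """Only a real torrent file can be parsed for its file/episode list;
    a magnet link comes back as a plain str and must be skipped."""
    return isinstance(content, Path)

def as_torrent_file(content: Union[Path, str]) -> Optional[Path]:
    """Mirrors the call sites: pass the Path through, None for magnets."""
    return content if isinstance(content, Path) else None

file_ok = can_check_episodes(Path("/tmp/x.torrent"))
magnet_ok = can_check_episodes("magnet:?xt=urn:btih:abcdef")
print(file_ok, magnet_ok)  # True False
```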

View File

@@ -214,6 +214,11 @@ class MessageChain(ChainBase):
start = _current_page * self._page_size
end = start + self._page_size
if cache_type == "Torrent":
# Update the cache
user_cache[userid] = {
"type": "Torrent",
"items": cache_list[start:end]
}
# Send torrent data
self.__post_torrents_message(channel=channel,
title=_current_media.title,
@@ -251,6 +256,11 @@ class MessageChain(ChainBase):
# Advance one page
_current_page += 1
if cache_type == "Torrent":
# Update the cache
user_cache[userid] = {
"type": "Torrent",
"items": cache_list
}
# Send torrent data
self.__post_torrents_message(channel=channel,
title=_current_media.title,
@@ -270,7 +280,7 @@ class MessageChain(ChainBase):
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
or StringUtils.count_words(text) > 15 \
or StringUtils.count_words(text) > 10 \
or text.find("继续") != -1:
# Chat
content = text
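The pagination in the message chain slices the cached list with `start = page * page_size`. A standalone sketch of that slicing (zero-based pages, as in the hunk):

```python
def page_slice(items, page: int, page_size: int):
    """Same slicing as the message chain cache: page is zero-based."""
    start = page * page_size
    return items[start:start + page_size]

cache_list = list(range(25))
first = page_slice(cache_list, 0, 10)
last = page_slice(cache_list, 2, 10)   # partial final page
print(first)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(last)   # [20, 21, 22, 23, 24]
```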

View File

@@ -1,320 +0,0 @@
import json
import re
from datetime import datetime
from typing import Tuple, Optional
from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.core.config import settings
from app.core.context import Context, TorrentInfo, MediaInfo
from app.core.metainfo import MetaInfo
from app.db.rss_oper import RssOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotExistMediaInfo
from app.schemas.types import SystemConfigKey, MediaType, NotificationType
from app.utils.string import StringUtils
class RssChain(ChainBase):
"""
RSS processing chain
"""
def __init__(self, db: Session = None):
super().__init__(db)
self.rssoper = RssOper(self._db)
self.sites = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
self.downloadchain = DownloadChain(self._db)
self.message = MessageHelper()
def add(self, title: str, year: str,
mtype: MediaType = None,
season: int = None,
**kwargs) -> Tuple[Optional[int], str]:
"""
Recognize media info and add a subscription
"""
logger.info(f'开始添加自定义订阅,标题:{title} ...')
# Recognize metadata
metainfo = MetaInfo(title)
if year:
metainfo.year = year
if mtype:
metainfo.type = mtype
if season:
metainfo.type = MediaType.TV
metainfo.begin_season = season
# Recognize media info
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
if not mediainfo:
logger.warn(f'{title} 未识别到媒体信息')
return None, "未识别到媒体信息"
# Update media images
self.obtain_images(mediainfo=mediainfo)
# Total episode count
if mediainfo.type == MediaType.TV:
if not season:
season = 1
# Total episode count
if not kwargs.get('total_episode'):
if not mediainfo.seasons:
# Supplement media info
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return None, "媒体信息识别失败"
if not mediainfo.seasons:
logger.error(f"{title} 媒体信息中没有季集信息")
return None, "媒体信息中没有季集信息"
total_episode = len(mediainfo.seasons.get(season) or [])
if not total_episode:
logger.error(f'{title} 未获取到总集数')
return None, "未获取到总集数"
kwargs.update({
'total_episode': total_episode
})
# Check whether it already exists
if self.rssoper.exists(tmdbid=mediainfo.tmdb_id, season=season):
logger.warn(f'{mediainfo.title} 已存在')
return None, f'{mediainfo.title} 自定义订阅已存在'
if not kwargs.get("name"):
kwargs.update({
"name": mediainfo.title
})
kwargs.update({
"tmdbid": mediainfo.tmdb_id,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"vote": mediainfo.vote_average,
"description": mediainfo.overview,
})
# Add the subscription
sid = self.rssoper.add(title=title, year=year, season=season, **kwargs)
if not sid:
logger.error(f'{mediainfo.title_year} 添加自定义订阅失败')
return None, "添加自定义订阅失败"
else:
logger.info(f'{mediainfo.title_year} {metainfo.season} 添加订阅成功')
# Return the result
return sid, ""
def refresh(self, rssid: int = None, manual: bool = False):
"""
Refresh RSS subscription data
"""
# All RSS subscriptions
logger.info("开始刷新RSS订阅数据 ...")
rss_tasks = self.rssoper.list(rssid) or []
for rss_task in rss_tasks:
if not rss_task:
continue
if not rss_task.url:
continue
# Download the RSS feed
items = RssHelper.parse(rss_task.url, True if rss_task.proxy else False)
if not items:
logger.error(f"RSS未下载到数据{rss_task.url}")
logger.info(f"{rss_task.name} RSS下载到数据{len(items)}")
# Check the site
domain = StringUtils.get_url_domain(rss_task.url)
site_info = self.sites.get_indexer(domain) or {}
# Filter rules
if rss_task.best_version:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
else:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
# 处理RSS条目
matched_contexts = []
# 处理过的title
processed_data = json.loads(rss_task.note) if rss_task.note else {
"titles": [],
"season_episodes": []
}
for item in items:
if not item.get("title"):
continue
# 标题是否已处理过
if item.get("title") in processed_data.get('titles'):
logger.info(f"{item.get('title')} 已处理过")
continue
# 基本要素匹配
if rss_task.include \
and not re.search(r"%s" % rss_task.include, item.get("title")):
logger.info(f"{item.get('title')} 未包含 {rss_task.include}")
continue
if rss_task.exclude \
and re.search(r"%s" % rss_task.exclude, item.get("title")):
logger.info(f"{item.get('title')} 包含 {rss_task.exclude}")
continue
# 识别媒体信息
meta = MetaInfo(title=item.get("title"), subtitle=item.get("description"))
if not meta.name:
logger.error(f"{item.get('title')} 未识别到有效信息")
continue
mediainfo = self.recognize_media(meta=meta)
if not mediainfo:
logger.error(f"{item.get('title')} 未识别到TMDB媒体信息")
continue
if mediainfo.tmdb_id != rss_task.tmdbid:
logger.error(f"{item.get('title')} 不匹配")
continue
# 季集是否已处理过
if meta.season_episode in processed_data.get('season_episodes'):
logger.info(f"{meta.org_string} {meta.season_episode} 已处理过")
continue
# 种子
torrentinfo = TorrentInfo(
site=site_info.get("id"),
site_name=site_info.get("name"),
site_cookie=site_info.get("cookie"),
site_ua=site_info.get("ua") or settings.USER_AGENT,
site_proxy=site_info.get("proxy") or rss_task.proxy,
site_order=site_info.get("pri"),
title=item.get("title"),
description=item.get("description"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
# 过滤种子
if rss_task.filter:
result = self.filter_torrents(
rule_string=filter_rule,
torrent_list=[torrentinfo]
)
if not result:
logger.info(f"{rss_task.name} 不匹配过滤规则")
continue
# 清除多余数据
mediainfo.clear()
# 匹配到的数据
matched_contexts.append(Context(
meta_info=meta,
media_info=mediainfo,
torrent_info=torrentinfo
))
# 匹配结果
if not matched_contexts:
logger.info(f"{rss_task.name} 未匹配到数据")
continue
logger.info(f"{rss_task.name} 匹配到 {len(matched_contexts)} 条数据")
# 查询本地存在情况
if not rss_task.best_version:
# 查询缺失的媒体信息
rss_meta = MetaInfo(title=rss_task.title)
rss_meta.year = rss_task.year
rss_meta.begin_season = rss_task.season
rss_meta.type = MediaType(rss_task.type)
# 每季总集数
totals = {}
if rss_task.season and rss_task.total_episode:
totals = {
rss_task.season: rss_task.total_episode
}
# 检查缺失
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=rss_meta,
mediainfo=MediaInfo(
title=rss_task.title,
year=rss_task.year,
tmdb_id=rss_task.tmdbid,
season=rss_task.season
),
totals=totals
)
if exist_flag:
logger.info(f'{rss_task.name} 媒体库中已存在,完成订阅')
self.rssoper.delete(rss_task.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'自定义订阅 {rss_task.name} 已完成',
image=rss_task.backdrop))
continue
elif rss_meta.type == MediaType.TV.value:
# 打印缺失集信息
if no_exists and no_exists.get(rss_task.tmdbid):
no_exists_info = no_exists.get(rss_task.tmdbid).get(rss_task.season)
if no_exists_info:
logger.info(f'订阅 {rss_task.name} 缺失集:{no_exists_info.episodes}')
else:
if rss_task.type == MediaType.TV.value:
no_exists = {
rss_task.season: NotExistMediaInfo(
season=rss_task.season,
episodes=[],
total_episode=rss_task.total_episode,
start_episode=1)
}
else:
no_exists = {}
# 开始下载
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
no_exists=no_exists,
save_path=rss_task.save_path)
if downloads and not lefts:
if not rss_task.best_version:
# 非洗版结束订阅
self.rssoper.delete(rss_task.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'自定义订阅 {rss_task.name} 已完成',
image=rss_task.backdrop))
else:
# 未完成下载
logger.info(f'{rss_task.name} 未下载完整,继续订阅 ...')
if downloads:
for download in downloads:
meta = download.meta_info
# 更新已处理数据
processed_data['titles'].append(meta.org_string)
processed_data['season_episodes'].append(meta.season_episode)
# 更新已处理过的数据
self.rssoper.update(rssid=rss_task.id, note=json.dumps(processed_data))
# 更新最后更新时间和已处理数量
self.rssoper.update(rssid=rss_task.id,
processed=(rss_task.processed or 0) + len(downloads),
last_update=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
logger.info("刷新RSS订阅数据完成")
if manual:
if len(rss_tasks) == 1:
self.message.put(f"{rss_tasks[0].name} 自定义订阅刷新完成")
else:
self.message.put(f"自定义订阅刷新完成")
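The include/exclude checks inside the refresh loop reduce to two regex tests against the item title. A standalone sketch of that rule (the function name is mine, not from the codebase):

```python
import re

def matches_rules(title: str, include: str = None, exclude: str = None) -> bool:
    """A title passes only when it matches the include pattern (if any) and
    does not match the exclude pattern (if any) -- the same shape as the
    checks in refresh()."""
    if include and not re.search(include, title):
        return False
    if exclude and re.search(exclude, title):
        return False
    return True
```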


@@ -29,7 +29,7 @@ class SearchChain(ChainBase):
super().__init__(db)
self.siteshelper = SitesHelper()
self.progress = ProgressHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
def search_by_tmdbid(self, tmdbid: int, mtype: MediaType = None, area: str = "title") -> List[Context]:
@@ -76,22 +76,6 @@ class SearchChain(ChainBase):
print(str(e))
return []
def browse(self, domain: str, keyword: str = None) -> List[TorrentInfo]:
"""
浏览站点首页内容
:param domain: 站点域名
:param keyword: 关键词,有值时为搜索
"""
if not keyword:
logger.info(f'开始浏览站点首页内容,站点:{domain} ...')
else:
logger.info(f'开始搜索资源,关键词:{keyword},站点:{domain} ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.search_torrents(site=site, keyword=keyword)
def process(self, mediainfo: MediaInfo,
keyword: str = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
@@ -104,7 +88,7 @@ class SearchChain(ChainBase):
:param keyword: 搜索关键词
:param no_exists: 缺失的媒体信息
:param sites: 站点ID列表,为空时搜索所有站点
:param filter_rule: 过滤规则,为空是使用默认过滤规则
:param filter_rule: 过滤规则,为空是使用默认搜索过滤规则
:param area: 搜索范围title or imdbid
"""
logger.info(f'开始搜索资源,关键词:{keyword or mediainfo.title} ...')
@@ -146,12 +130,13 @@ class SearchChain(ChainBase):
# 过滤种子
if filter_rule is None:
# 取默认过滤规则
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
filter_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
if filter_rule:
logger.info(f'开始过滤资源,当前规则:{filter_rule} ...')
result: List[TorrentInfo] = self.filter_torrents(rule_string=filter_rule,
torrent_list=torrents,
season_episodes=season_episodes)
season_episodes=season_episodes,
mediainfo=mediainfo)
if result is not None:
torrents = result
if not torrents:
@@ -201,19 +186,30 @@ class SearchChain(ChainBase):
str(int(mediainfo.year) + 1)]:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
continue
# 比对标题
# 比对标题和原语种标题
meta_name = StringUtils.clear_upper(torrent_meta.name)
if meta_name in [
StringUtils.clear_upper(mediainfo.title),
StringUtils.clear_upper(mediainfo.original_title)
]:
logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
continue
# 在副标题中判断是否存在标题与原语种标题
if torrent.description:
subtitle = torrent.description.split()
if (StringUtils.is_chinese(mediainfo.title)
and str(mediainfo.title) in subtitle) \
or (StringUtils.is_chinese(mediainfo.original_title)
and str(mediainfo.original_title) in subtitle):
logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
f'副标题:{torrent.description}')
_match_torrents.append(torrent)
continue
# 比对别名和译名
for name in mediainfo.names:
if StringUtils.clear_upper(name) == meta_name:
logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
logger.info(f'{mediainfo.title} 通过别名或译名匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
break
else:
@@ -252,14 +248,14 @@ class SearchChain(ChainBase):
"""
# 未开启的站点不搜索
indexer_sites = []
# 配置的索引站点
if sites:
config_indexers = [str(sid) for sid in sites]
else:
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.IndexerSites) or []]
if not sites:
sites = self.systemconfig.get(SystemConfigKey.IndexerSites) or []
for indexer in self.siteshelper.get_indexers():
# 检查站点索引开关
if not config_indexers or str(indexer.get("id")) in config_indexers:
if not sites or indexer.get("id") in sites:
# 站点流控
state, msg = self.siteshelper.check(indexer.get("domain"))
if state:
@@ -269,6 +265,7 @@ class SearchChain(ChainBase):
if not indexer_sites:
logger.warn('未开启任何有效站点,无法搜索资源')
return []
# 开始进度
self.progress.start(ProgressKey.Search)
# 开始计时
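The title comparison above relies on StringUtils.clear_upper to normalize both sides before an exact match. A rough stand-in showing the idea (clear_upper here is a simplified assumption, not the project's implementation):

```python
def clear_upper(s: str) -> str:
    """Hypothetical stand-in for StringUtils.clear_upper: drop everything but
    letters/digits and uppercase, so titles compare loosely."""
    return ''.join(ch for ch in s if ch.isalnum()).upper()

def title_matches(torrent_name: str, titles: list) -> bool:
    """Mirror of the exact-match step: normalize both sides first."""
    name = clear_upper(torrent_name)
    return any(clear_upper(t) == name for t in titles)
```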


@@ -33,7 +33,7 @@ class SubscribeChain(ChainBase):
self.subscribeoper = SubscribeOper(self._db)
self.torrentschain = TorrentsChain()
self.message = MessageHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
def add(self, title: str, year: str,
mtype: MediaType = None,
@@ -264,9 +264,9 @@ class SubscribeChain(ChainBase):
sites = None
# 过滤规则
if subscribe.best_version:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
filter_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
else:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
filter_rule = self.systemconfig.get(SystemConfigKey.SubscribeFilterRules)
# 搜索,同时电视剧会过滤掉不需要的剧集
contexts = self.searchchain.process(mediainfo=mediainfo,
keyword=subscribe.keyword,
@@ -277,7 +277,7 @@ class SubscribeChain(ChainBase):
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
if meta.type == MediaType.TV:
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
continue
# 过滤
matched_contexts = []
@@ -285,15 +285,21 @@ class SubscribeChain(ChainBase):
torrent_meta = context.meta_info
torrent_info = context.torrent_info
torrent_mediainfo = context.media_info
# 包含与排除规则
default_include_exclude = self.systemconfig.get(SystemConfigKey.DefaultIncludeExcludeFilter) or {}
include = subscribe.include or default_include_exclude.get("include")
exclude = subscribe.exclude or default_include_exclude.get("exclude")
# 包含
if subscribe.include:
if not re.search(r"%s" % subscribe.include,
if include:
if not re.search(r"%s" % include,
f"{torrent_info.title} {torrent_info.description}", re.I):
logger.info(f"{torrent_info.title} 不匹配包含规则 {include}")
continue
# 排除
if subscribe.exclude:
if re.search(r"%s" % subscribe.exclude,
if exclude:
if re.search(r"%s" % exclude,
f"{torrent_info.title} {torrent_info.description}", re.I):
logger.info(f"{torrent_info.title} 匹配排除规则 {exclude}")
continue
# 非洗版
if not subscribe.best_version:
@@ -308,12 +314,17 @@ class SubscribeChain(ChainBase):
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 优先级小于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order < subscribe.current_priority:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于已下载优先级')
continue
matched_contexts.append(context)
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
# 非洗版未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
if meta.type == MediaType.TV and not subscribe.best_version:
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
continue
# 自动下载
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
@@ -330,18 +341,18 @@ class SubscribeChain(ChainBase):
mediainfo=mediainfo, downloads=downloads)
else:
# 未完成下载
logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
if meta.type == MediaType.TV and not subscribe.best_version:
# 更新订阅剩余集数和时间
update_date = True if downloads else False
self.__upate_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=update_date)
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=update_date)
# 手动触发时发送系统消息
if manual:
if sid:
self.message.put(f'订阅 {subscribes[0].name} 搜索完成!')
else:
self.message.put(f'所有订阅搜索完成!')
self.message.put('所有订阅搜索完成!')
def finish_subscribe_or_not(self, subscribe: Subscribe, meta: MetaInfo,
mediainfo: MediaInfo, downloads: List[Context]):
@@ -375,17 +386,40 @@ class SubscribeChain(ChainBase):
def refresh(self):
"""
刷新订阅
订阅刷新
"""
# 触发刷新站点资源,从缓存中匹配订阅
sites = self.get_subscribed_sites()
if sites is None:
return
self.match(
self.torrentschain.refresh(sites=sites)
)
def get_subscribed_sites(self) -> Optional[List[int]]:
"""
获取订阅中涉及的所有站点清单(节约资源)
:return: 返回[]代表命中所有站点,返回None代表没有订阅
"""
# 查询所有订阅
subscribes = self.subscribeoper.list('R')
if not subscribes:
# 没有订阅不运行
return
# 刷新站点资源,从缓存中匹配订阅
self.match(
self.torrentschain.refresh()
)
return None
ret_sites = []
# 刷新订阅选中的Rss站点
for subscribe in subscribes:
# 如果有一个订阅没有选择站点,则刷新所有订阅站点
if not subscribe.sites:
return []
# 刷新选中的站点
sub_sites = json.loads(subscribe.sites)
if sub_sites:
ret_sites.extend(sub_sites)
# 去重
if ret_sites:
ret_sites = list(set(ret_sites))
return ret_sites
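get_subscribed_sites() encodes three outcomes in its return value. A self-contained sketch of that contract, with plain dicts standing in for Subscribe rows (an assumption, not the real ORM model):

```python
import json

def subscribed_sites(subscribes: list):
    """None = no subscriptions at all; [] = at least one subscription wants
    every site; otherwise a de-duplicated list of site IDs."""
    if not subscribes:
        return None
    sites = []
    for sub in subscribes:
        if not sub.get("sites"):
            # One subscription without a site selection forces all sites
            return []
        sites.extend(json.loads(sub["sites"]))
    return list(set(sites))
```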
def match(self, torrents: Dict[str, List[Context]]):
"""
@@ -473,19 +507,22 @@ class SubscribeChain(ChainBase):
continue
# 过滤规则
if subscribe.best_version:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
filter_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
else:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
filter_rule = self.systemconfig.get(SystemConfigKey.SubscribeFilterRules)
result: List[TorrentInfo] = self.filter_torrents(
rule_string=filter_rule,
torrent_list=[torrent_info])
torrent_list=[torrent_info],
mediainfo=torrent_mediainfo)
if result is not None and not result:
# 不符合过滤规则
logger.info(f"{torrent_info.title} 不匹配当前过滤规则")
continue
# 不在订阅站点范围的不处理
if subscribe.sites:
sub_sites = json.loads(subscribe.sites)
if sub_sites and torrent_info.site not in sub_sites:
logger.info(f"{torrent_info.title} 不符合 {torrent_mediainfo.title_year} 订阅站点要求")
continue
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
@@ -527,15 +564,21 @@ class SubscribeChain(ChainBase):
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 包含与排除规则
default_include_exclude = self.systemconfig.get(SystemConfigKey.DefaultIncludeExcludeFilter) or {}
include = subscribe.include or default_include_exclude.get("include")
exclude = subscribe.exclude or default_include_exclude.get("exclude")
# 包含
if subscribe.include:
if not re.search(r"%s" % subscribe.include,
if include:
if not re.search(r"%s" % include,
f"{torrent_info.title} {torrent_info.description}", re.I):
logger.info(f"{torrent_info.title} 不匹配包含规则 {include}")
continue
# 排除
if subscribe.exclude:
if re.search(r"%s" % subscribe.exclude,
if exclude:
if re.search(r"%s" % exclude,
f"{torrent_info.title} {torrent_info.description}", re.I):
logger.info(f"{torrent_info.title} 匹配排除规则 {exclude}")
continue
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
@@ -557,12 +600,12 @@ class SubscribeChain(ChainBase):
if meta.type == MediaType.TV and not subscribe.best_version:
update_date = True if downloads else False
# 未完成下载,计算剩余集数
self.__upate_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=update_date)
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=update_date)
else:
if meta.type == MediaType.TV:
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
def check(self):
"""
@@ -586,18 +629,16 @@ class SubscribeChain(ChainBase):
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid}')
continue
if not mediainfo.seasons:
continue
# 获取当前季的总集数
# 对于电视剧,获取当前季的总集数
episodes = mediainfo.seasons.get(subscribe.season) or []
if len(episodes) > subscribe.total_episode or 0:
if len(episodes) > (subscribe.total_episode or 0):
total_episode = len(episodes)
lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
logger.info(f'订阅 {subscribe.name} 总集数变化,更新总集数为{total_episode},缺失集数为{lack_episode} ...')
logger.info(
f'订阅 {subscribe.name} 总集数变化,更新总集数为{total_episode},缺失集数为{lack_episode} ...')
else:
total_episode = subscribe.total_episode
lack_episode = subscribe.lack_episode
logger.info(f'订阅 {subscribe.name} 总集数未变化')
# 更新TMDB信息
self.subscribeoper.update(subscribe.id, {
"name": mediainfo.title,
@@ -654,10 +695,10 @@ class SubscribeChain(ChainBase):
return True
return False
def __upate_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
subscribe: Subscribe,
mediainfo: MediaInfo,
update_date: bool = False):
def __update_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
subscribe: Subscribe,
mediainfo: MediaInfo,
update_date: bool = False):
"""
更新订阅剩余集数
"""
@@ -742,7 +783,7 @@ class SubscribeChain(ChainBase):
total_episode: int,
start_episode: int):
"""
根据订阅开始集数和总集数,结合TMDB信息计算当前订阅的缺失集数
:param no_exists: 缺失季集列表
:param tmdb_id: TMDB ID
:param begin_season: 开始季
@@ -782,8 +823,9 @@ class SubscribeChain(ChainBase):
# 没有自定义总集数
total_episode = total
# 新的集列表
episodes = list(range(max(start_episode, start), total_episode + 1))
new_episodes = list(range(max(start_episode, start), total_episode + 1))
# 与原集列表取交集
episodes = list(set(episode_list).intersection(set(new_episodes)))
# 更新集合
no_exists[tmdb_id][begin_season] = NotExistMediaInfo(
season=begin_season,
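The fix above intersects the newly computed episode range with the episodes previously recorded as missing, so entries outside the old list are not re-added. A sketch of just that step (names are illustrative):

```python
def lacking_episodes(episode_list, start_episode, start, total_episode):
    """Build the wanted range from the later of the subscription's start
    episode and TMDB's first episode, then intersect with the previously
    recorded missing episodes."""
    new_episodes = list(range(max(start_episode, start), total_episode + 1))
    return sorted(set(episode_list) & set(new_episodes))
```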


@@ -1,33 +1,39 @@
from datetime import datetime
import re
from typing import Dict, List, Union
from requests import Session
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.metainfo import MetaInfo
from app.db import SessionFactory
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
from app.utils.timer import TimerUtils
class TorrentsChain(ChainBase):
class TorrentsChain(ChainBase, metaclass=Singleton):
"""
种子刷新处理链
站点首页或RSS种子处理链服务于订阅、刷流等
"""
_cache_file = "__torrents_cache__"
_last_refresh_time = None
_spider_file = "__torrents_cache__"
_rss_file = "__rss_cache__"
def __init__(self, db: Session = None):
super().__init__(db)
def __init__(self):
self._db = SessionFactory()
super().__init__(self._db)
self.siteshelper = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
self.siteoper = SiteOper(self._db)
self.rsshelper = RssHelper()
self.systemconfig = SystemConfigOper()
def remote_refresh(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -39,40 +45,109 @@ class TorrentsChain(ChainBase):
self.post_message(Notification(channel=channel,
title=f"种子刷新完成!", userid=userid))
def get_torrents(self) -> Dict[str, List[Context]]:
def get_torrents(self, stype: str = None) -> Dict[str, List[Context]]:
"""
获取当前缓存的种子
:param stype: 强制指定缓存类型,spider:爬虫缓存,rss:rss缓存
"""
if not stype:
stype = settings.SUBSCRIBE_MODE
# 读取缓存
return self.load_cache(self._cache_file) or {}
if stype == 'spider':
return self.load_cache(self._spider_file) or {}
else:
return self.load_cache(self._rss_file) or {}
def refresh(self) -> Dict[str, List[Context]]:
@cached(cache=TTLCache(maxsize=128, ttl=600))
def browse(self, domain: str) -> List[TorrentInfo]:
"""
刷新站点最新资源
浏览站点首页内容返回种子清单TTL缓存10分钟
:param domain: 站点域名
"""
# 控制刷新频率不能小于10分钟
if self._last_refresh_time and TimerUtils.diff_minutes(self._last_refresh_time) < 10:
logger.warn(f'种子刷新频率过快,跳过本次刷新')
return self.get_torrents()
logger.info(f'开始获取站点 {domain} 最新种子 ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.refresh_torrents(site=site)
# 记录刷新时间
self._last_refresh_time = datetime.now()
@cached(cache=TTLCache(maxsize=128, ttl=300))
def rss(self, domain: str) -> List[TorrentInfo]:
"""
获取站点RSS内容返回种子清单TTL缓存5分钟
:param domain: 站点域名
"""
logger.info(f'开始获取站点 {domain} RSS ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
if not site.get("rss"):
logger.error(f'站点 {domain} 未配置RSS地址')
return []
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False)
if rss_items is None:
# rss过期尝试保留原配置生成新的rss
self.__renew_rss_url(domain=domain, site=site)
return []
if not rss_items:
logger.error(f'站点 {domain} 未获取到RSS数据')
return []
# 组装种子
ret_torrents: List[TorrentInfo] = []
for item in rss_items:
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
ret_torrents.append(torrentinfo)
return ret_torrents
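browse() and rss() lean on cachetools' TTLCache so repeat calls within the window are served from memory. A dependency-free sketch of the same idea using only the standard library (ttl_cached is my stand-in, not the cachetools API):

```python
import time

def ttl_cached(ttl: float):
    """Per-argument results are reused until they are older than ttl seconds."""
    def decorator(func):
        cache = {}
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit and now - hit[0] < ttl:
                return hit[1]
            value = func(*args)
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cached(ttl=600)
def fetch(domain: str) -> str:
    """Pretend site fetch; records each real (non-cached) invocation."""
    calls.append(domain)
    return f"torrents from {domain}"
```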
def refresh(self, stype: str = None, sites: List[int] = None) -> Dict[str, List[Context]]:
"""
刷新站点最新资源,识别并缓存起来
:param stype: 强制指定缓存类型,spider:爬虫缓存,rss:rss缓存
:param sites: 强制指定站点ID列表,为空则读取设置的订阅站点
"""
# 刷新类型
if not stype:
stype = settings.SUBSCRIBE_MODE
# 刷新站点
if not sites:
sites = self.systemconfig.get(SystemConfigKey.RssSites) or []
# 读取缓存
torrents_cache = self.get_torrents()
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 配置的Rss站点
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.RssSites) or []]
# 遍历站点缓存资源
for indexer in indexers:
# 未开启的站点不搜索
if config_indexers and str(indexer.get("id")) not in config_indexers:
# 未开启的站点不刷新
if sites and indexer.get("id") not in sites:
continue
logger.info(f'开始刷新 {indexer.get("name")} 最新种子 ...')
domain = StringUtils.get_url_domain(indexer.get("domain"))
torrents: List[TorrentInfo] = self.refresh_torrents(site=indexer)
if stype == "spider":
# 刷新首页种子
torrents: List[TorrentInfo] = self.browse(domain=domain)
else:
# 刷新RSS种子
torrents: List[TorrentInfo] = self.rss(domain=domain)
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条
@@ -114,7 +189,46 @@ class TorrentsChain(ChainBase):
del torrents
else:
logger.info(f'{indexer.get("name")} 没有获取到种子')
# 保存缓存到本地
self.save_cache(torrents_cache, self._cache_file)
if stype == "spider":
self.save_cache(torrents_cache, self._spider_file)
else:
self.save_cache(torrents_cache, self._rss_file)
# 返回
return torrents_cache
def __renew_rss_url(self, domain: str, site: dict):
"""
保留原配置生成新的rss地址
"""
try:
# RSS链接过期
logger.error(f"站点 {domain} RSS链接已过期,正在尝试自动获取")
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site.get("url"),
cookie=site.get("cookie"),
ua=site.get("ua") or settings.USER_AGENT,
proxy=True if site.get("proxy") else False
)
if rss_url:
# 获取新的日期的passkey
match = re.search(r'passkey=([a-zA-Z0-9]+)', rss_url)
if match:
new_passkey = match.group(1)
# 获取过期rss除去passkey部分
new_rss = re.sub(r'&passkey=([a-zA-Z0-9]+)', f'&passkey={new_passkey}', site.get("rss"))
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=new_rss)
else:
# 发送消息
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
else:
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
except Exception as e:
print(str(e))
self.post_message(Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
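__renew_rss_url() splices the freshly obtained passkey into the stored RSS address, so every other query parameter of the old link survives. A compact sketch of that substitution (the URLs are made up):

```python
import re

def renew_passkey(old_rss: str, new_rss_url: str):
    """Extract the passkey from a newly generated RSS link and substitute it
    into the expired address; returns None when no passkey is found."""
    match = re.search(r'passkey=([a-zA-Z0-9]+)', new_rss_url)
    if not match:
        return None
    return re.sub(r'passkey=[a-zA-Z0-9]+', f'passkey={match.group(1)}', old_rss)
```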


@@ -1,3 +1,4 @@
import glob
import re
import shutil
import threading
@@ -85,7 +86,7 @@ class TransferChain(ChainBase):
mediainfo: MediaInfo = None, download_hash: str = None,
target: Path = None, transfer_type: str = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0) -> Tuple[bool, str]:
min_filesize: int = 0, force: bool = False) -> Tuple[bool, str]:
"""
执行一个复杂目录的转移操作
:param path: 待转移目录或文件
@@ -97,6 +98,7 @@ class TransferChain(ChainBase):
:param season: 季
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param force: 是否强制转移
返回:成功标识,错误信息
"""
if not transfer_type:
@@ -174,13 +176,30 @@ class TransferChain(ChainBase):
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.findall(keyword, file_path_str):
if keyword and re.search(r"%s" % keyword, file_path_str, re.IGNORECASE):
logger.info(f"{file_path} 命中整理屏蔽词 {keyword},不处理")
continue
is_blocked = True
break
if is_blocked:
err_msgs.append(f"{file_path.name} 命中整理屏蔽词")
continue
# 转移成功的不再处理
if not force:
transferd = self.transferhis.get_by_src(file_path_str)
if transferd and transferd.status:
logger.info(f"{file_path} 已成功转移过,如需重新处理,请删除历史记录。")
continue
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"正在转移 {processed_num + 1}/{total_num}{file_path.name} ...",
key=ProgressKey.FileTransfer)
if not meta:
# 上级目录元数据
@@ -193,7 +212,7 @@ class TransferChain(ChainBase):
file_meta = meta
# 合并季
if season:
if season is not None:
file_meta.begin_season = season
if not file_meta:
@@ -233,6 +252,13 @@ class TransferChain(ChainBase):
))
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化,则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=file_mediainfo.tmdb_id,
mtype=file_mediainfo.type.value)
if transfer_history:
file_mediainfo.title = transfer_history.title
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 电视剧没有集无法转移
@@ -323,6 +349,9 @@ class TransferChain(ChainBase):
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 刮削单个文件
if settings.SCRAP_METADATA:
self.scrape_metadata(path=transferinfo.target_path, mediainfo=file_mediainfo)
# 更新进度
processed_num += 1
self.progress.update(value=processed_num / total_num * 100,
@@ -333,13 +362,17 @@ class TransferChain(ChainBase):
self.progress.update(value=100,
text=f"所有文件转移完成,正在执行后续处理 ...",
key=ProgressKey.FileTransfer)
# 执行后续处理
for mkey, media in medias.items():
transfer_meta = metas[mkey]
transfer_info = transfers[mkey]
# 刮削
self.scrape_metadata(path=transfer_info.target_path, mediainfo=media)
# 刷新媒体库
self.refresh_mediaserver(mediainfo=media, file_path=transfer_info.target_path)
# 媒体目录
if transfer_info.target_path.is_file():
transfer_info.target_path = transfer_info.target_path.parent
# 刷新媒体库,根目录或季目录
if settings.REFRESH_MEDIASERVER:
self.refresh_mediaserver(mediainfo=media, file_path=transfer_info.target_path)
# 发送通知
se_str = None
if media.type == MediaType.TV:
@@ -470,10 +503,11 @@ class TransferChain(ChainBase):
if history.dest:
self.delete_files(Path(history.dest))
# 转移
# 强制转移
state, errmsg = self.do_transfer(path=src_path,
mediainfo=mediainfo,
download_hash=history.download_hash)
download_hash=history.download_hash,
force=True)
if not state:
return False, errmsg
@@ -566,22 +600,42 @@ class TransferChain(ChainBase):
"""
logger.info(f"开始删除文件以及空目录:{path} ...")
if not path.exists():
logger.error(f"{path} 不存在")
return
elif path.is_file():
# 删除文件
path.unlink()
if path.is_file():
# 删除文件、nfo、jpg
files = glob.glob(f"{Path(path.parent).joinpath(path.stem)}*")
for file in files:
Path(file).unlink()
logger.warn(f"文件 {path} 已删除")
# 判断目录是否为空, 为空则删除
if str(path.parent.parent) != str(path.root):
# 父目录非根目录,删除父目录
files = SystemUtils.list_files(path.parent, settings.RMT_MEDIAEXT)
if not files:
shutil.rmtree(path.parent)
logger.warn(f"目录 {path.parent} 已删除")
# 需要删除父目录
elif str(path.parent) == str(path.root):
# 根目录,删除
logger.warn(f"根目录 {path} 不能删除!")
return
else:
if str(path.parent) != str(path.root):
# 父目录非根目录,才删除目录
shutil.rmtree(path)
# 删除目录
logger.warn(f"目录 {path} 已删除")
# 非根目录,才删除目录
shutil.rmtree(path)
# 删除目录
logger.warn(f"目录 {path} 已删除")
# 需要删除父目录
# 判断当前媒体父路径下是否有媒体文件,如有则无需遍历父级
if not SystemUtils.exits_files(path.parent, settings.RMT_MEDIAEXT):
# 媒体库二级分类根路径
library_root_names = [
settings.LIBRARY_MOVIE_NAME or '电影',
settings.LIBRARY_TV_NAME or '电视剧',
settings.LIBRARY_ANIME_NAME or '动漫',
]
# 判断父目录是否为空, 为空则删除
for parent_path in path.parents:
# 遍历父目录到媒体库二级分类根路径
if str(parent_path.name) in library_root_names:
break
if str(parent_path.parent) != str(path.root):
# 父目录非根目录,才删除父目录
if not SystemUtils.exits_files(path.parent, settings.RMT_MEDIAEXT):
# 当前路径下没有媒体文件则删除
shutil.rmtree(parent_path)
logger.warn(f"目录 {parent_path} 已删除")
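The parent-directory sweep above walks upward from the deleted file until it hits a library category root, the filesystem root, or a directory that still holds media files. A sketch of which directories would qualify for removal (has_media is a hypothetical stand-in for SystemUtils.exits_files; paths use PurePosixPath for determinism):

```python
from pathlib import PurePosixPath

def dirs_to_remove(path, library_root_names, has_media):
    """Collect parents of a deleted file that could be removed, stopping at a
    library category root, the filesystem root, or the first directory that
    still contains media files."""
    removable = []
    for parent in path.parents:
        if parent.name in library_root_names:
            break  # never delete the library category root itself
        if str(parent.parent) == str(path.root):
            break  # never touch a first-level directory under root
        if has_media(parent):
            break  # media files remain, stop the sweep
        removable.append(parent)
    return removable
```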


@@ -13,11 +13,12 @@ from app.chain.transfer import TransferChain
from app.core.event import Event as ManagerEvent
from app.core.event import eventmanager, EventManager
from app.core.plugin import PluginManager
from app.db import ScopedSession
from app.db import SessionFactory
from app.log import logger
from app.schemas.types import EventType, MessageChannel
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
class CommandChian(ChainBase):
@@ -41,7 +42,7 @@ class Command(metaclass=Singleton):
def __init__(self):
# 数据库连接
self._db = ScopedSession()
self._db = SessionFactory()
# 事件管理器
self.eventmanager = EventManager()
# 插件管理器
@@ -128,6 +129,12 @@ class Command(metaclass=Singleton):
"description": "清理缓存",
"category": "管理",
"data": {}
},
"/restart": {
"func": SystemUtils.restart,
"description": "重启系统",
"category": "管理",
"data": {}
}
}
# 汇总插件命令
@@ -174,6 +181,8 @@ class Command(metaclass=Singleton):
"""
self._event.set()
self._thread.join()
if self._db:
self._db.close()
def get_commands(self):
"""
@@ -213,6 +222,10 @@ class Command(metaclass=Singleton):
if args_num > 0:
if cmd_data:
# 有内置参数直接使用内置参数
data = cmd_data.get("data") or {}
data['channel'] = channel
data['user'] = userid
cmd_data['data'] = data
command['func'](**cmd_data)
elif args_num == 2:
# 没有输入参数只输入渠道和用户ID


@@ -1,5 +1,6 @@
import secrets
from pathlib import Path
from typing import List
from pydantic import BaseSettings
@@ -39,6 +40,8 @@ class Settings(BaseSettings):
SEARCH_SOURCE: str = "themoviedb"
# 刮削入库的媒体文件
SCRAP_METADATA: bool = True
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# 刮削来源
SCRAP_SOURCE: str = "themoviedb"
# TMDB图片地址
@@ -63,9 +66,13 @@ class Settings(BaseSettings):
RMT_AUDIO_TRACK_EXT: list = ['.mka']
# 索引器
INDEXER: str = "builtin"
# 订阅模式
SUBSCRIBE_MODE: str = "spider"
# RSS订阅模式刷新时间间隔(分钟)
SUBSCRIBE_RSS_INTERVAL: int = 30
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 用户认证站点 hhclub/audiences/hddolby/zmpt/freefarm/hdfans/wintersakura/leaves/1ptba/icc2022/iyuu
# 用户认证站点
AUTH_SITE: str = ""
# 交互搜索自动下载用户ID使用,分割
AUTO_DOWNLOAD_USER: str = None
@@ -163,7 +170,7 @@ class Settings(BaseSettings):
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录
# 媒体库目录,多个目录使用,分隔
LIBRARY_PATH: str = None
# 电影媒体库目录名,默认"电影"
LIBRARY_MOVIE_NAME: str = None
@@ -249,6 +256,12 @@ class Settings(BaseSettings):
"server": self.PROXY_HOST
}
@property
def LIBRARY_PATHS(self) -> List[Path]:
if self.LIBRARY_PATH:
return [Path(path) for path in self.LIBRARY_PATH.split(",")]
return []
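The LIBRARY_PATHS property simply splits the comma-separated setting into Path objects. As a standalone function (assuming commas carry no escaping, matching the property above):

```python
from pathlib import Path

def library_paths(library_path: str):
    """Split a comma-separated LIBRARY_PATH value into Path objects,
    returning an empty list when the setting is unset."""
    if library_path:
        return [Path(p) for p in library_path.split(",")]
    return []
```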
def __init__(self):
super().__init__()
with self.CONFIG_PATH as p:


@@ -15,9 +15,9 @@ class MetaBase(object):
"""
# 是否处理的文件
isfile: bool = False
# 原标题字符串
# 原标题字符串(未经过识别词处理)
title: str = ""
# 识别用字符串
# 识别用字符串(经过识别词处理后)
org_string: Optional[str] = None
# 副标题
subtitle: Optional[str] = None


@@ -28,7 +28,23 @@ class WordsMatcher(metaclass=Singleton):
if not word:
continue
try:
if word.count(" => "):
if word.count(" => ") and word.count(" && ") and word.count(" >> ") and word.count(" <> "):
# Word to be replaced
thc = str(re.findall(r'(.*?)\s*=>', word)[0]).strip()
# Replacement word
bthc = str(re.findall(r'=>\s*(.*?)\s*&&', word)[0]).strip()
# Front anchor field for the episode offset
pyq = str(re.findall(r'&&\s*(.*?)\s*<>', word)[0]).strip()
# Back anchor field for the episode offset
pyh = str(re.findall(r'<>(.*?)\s*>>', word)[0]).strip()
# Episode offset
offsets = str(re.findall(r'>>\s*(.*?)$', word)[0]).strip()
# Apply the replacement
title, message, state = self.__replace_regex(title, thc, bthc)
if state:
# Apply the episode offset only after the replacement succeeds
title, message, state = self.__episode_offset(title, pyq, pyh, offsets)
elif word.count(" => "):
# Replacement only
strings = word.split(" => ")
title, message, state = self.__replace_regex(title, strings[0], strings[1])
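The combined rule parsed above has the shape `old => new && front <> back >> offset`. A small sketch of the same regex extraction on a made-up rule (the rule text is hypothetical):

```python
import re

# Hypothetical custom word: replace "Show.1080p" with "Show",
# then offset episodes between anchors "EP" and "END" by +1
word = "Show.1080p => Show && EP <> END >> +1"

old = str(re.findall(r'(.*?)\s*=>', word)[0]).strip()
new = str(re.findall(r'=>\s*(.*?)\s*&&', word)[0]).strip()
front = str(re.findall(r'&&\s*(.*?)\s*<>', word)[0]).strip()
back = str(re.findall(r'<>(.*?)\s*>>', word)[0]).strip()
offset = str(re.findall(r'>>\s*(.*?)$', word)[0]).strip()

print(old, new, front, back, offset)  # Show.1080p Show EP END +1
```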


@@ -3,6 +3,7 @@ from typing import List, Any, Dict, Tuple
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.module import ModuleHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
@@ -23,6 +24,7 @@ class PluginManager(metaclass=Singleton):
_config_key: str = "plugin.%s"
def __init__(self):
self.siteshelper = SitesHelper()
self.init_config()
def init_config(self):
@@ -81,6 +83,9 @@ class PluginManager(metaclass=Singleton):
"""
# Stop all plugins
for plugin in self._running_plugins.values():
# Close the plugin's database connection
if hasattr(plugin, "close"):
plugin.close()
# Stop the plugin service
if hasattr(plugin, "stop_service"):
plugin.stop_service()
@@ -181,6 +186,8 @@ class PluginManager(metaclass=Singleton):
# Installed plugins
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
for pid, plugin in self._plugins.items():
# Running plugin instance
plugin_obj = self._running_plugins.get(pid)
# Basic attributes
conf = {}
# ID
@@ -191,11 +198,20 @@ class PluginManager(metaclass=Singleton):
else:
conf.update({"installed": False})
# Running state
if pid in self._running_plugins.keys() and hasattr(plugin, "get_state"):
plugin_obj = self._running_plugins.get(pid)
if plugin_obj and hasattr(plugin, "get_state"):
conf.update({"state": plugin_obj.get_state()})
else:
conf.update({"state": False})
# Whether the plugin has a detail page
if hasattr(plugin, "get_page"):
if ObjectUtils.check_method(plugin.get_page):
conf.update({"has_page": True})
else:
conf.update({"has_page": False})
# Permission level
if hasattr(plugin, "auth_level"):
if self.siteshelper.auth_level < plugin.auth_level:
continue
# Name
if hasattr(plugin, "plugin_name"):
conf.update({"plugin_name": plugin.plugin_name})


@@ -39,6 +39,12 @@ class DownloadHistoryOper(DbOper):
downloadfile = DownloadFiles(**file_item)
downloadfile.create(self._db)
def truncate_files(self):
"""
清空下载历史文件记录
"""
DownloadFiles.truncate(self._db)
def get_files_by_hash(self, download_hash: str, state: int = None) -> List[DownloadFiles]:
"""
按Hash查询下载文件记录


@@ -6,7 +6,7 @@ from alembic.config import Config
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import Engine, ScopedSession
from app.db import Engine, SessionFactory
from app.db.models import Base
from app.db.models.user import User
from app.log import logger
@@ -22,7 +22,7 @@ def init_db():
# Create all tables
Base.metadata.create_all(bind=Engine)
# Initialize the superuser
db = ScopedSession()
db = SessionFactory()
user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not user:
user = User(


@@ -52,7 +52,7 @@ class DownloadHistory(Base):
@staticmethod
def get_last_by(db: Session, mtype: str = None, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
episode: str = None, tmdbid: int = None):
"""
据tmdbid、season、season_episode查询转移记录
"""


@@ -1,66 +0,0 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db.models import Base
class Rss(Base):
"""
RSS订阅
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 名称
name = Column(String, nullable=False)
# RSS地址
url = Column(String, nullable=False)
# 类型
type = Column(String)
# 标题
title = Column(String)
# 年份
year = Column(String)
# TMDBID
tmdbid = Column(Integer, index=True)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分
vote = Column(Integer)
# 简介
description = Column(String)
# 总集数
total_episode = Column(Integer)
# 包含
include = Column(String)
# 排除
exclude = Column(String)
# 洗版
best_version = Column(Integer)
# 是否使用代理服务器
proxy = Column(Integer)
# 是否使用过滤规则
filter = Column(Integer)
# 保存路径
save_path = Column(String)
# 已处理数量
processed = Column(Integer)
# 附加信息,已处理数据
note = Column(String)
# 最后更新时间
last_update = Column(String)
# 状态 0-停用1-启用
state = Column(Integer, default=1)
@staticmethod
def get_by_tmdbid(db: Session, tmdbid: int, season: int = None):
if season:
return db.query(Rss).filter(Rss.tmdbid == tmdbid,
Rss.season == season).all()
return db.query(Rss).filter(Rss.tmdbid == tmdbid).all()
@staticmethod
def get_by_title(db: Session, title: str):
return db.query(Rss).filter(Rss.title == title).first()


@@ -25,7 +25,7 @@ class TransferHistory(Base):
title = Column(String, index=True)
# 年份
year = Column(String)
tmdbid = Column(Integer)
tmdbid = Column(Integer, index=True)
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String)
@@ -85,35 +85,69 @@ class TransferHistory(Base):
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
@staticmethod
def list_by(db: Session, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
def list_by(db: Session, mtype: str = None, title: str = None, year: str = None, season: str = None,
episode: str = None, tmdbid: int = None, dest: str = None):
"""
Query transfer records by tmdbid, season and season_episode
Either tmdbid + mtype or title + year is required
"""
if tmdbid and not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid).all()
if tmdbid and season and not episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.seasons == season).all()
if tmdbid and season and episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# All seasons and episodes of a TV show | movie
if not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
# A specific TV season
if season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
# A specific episode of a TV season
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# TMDBID + type
if tmdbid and mtype:
# A specific episode of a TV season
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
# A specific TV season
elif season:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season).all()
else:
if dest:
# Movie
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.dest == dest).all()
else:
# All seasons and episodes of a TV show
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).all()
# Title + year
elif title and year:
# A specific episode of a TV season
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
# A specific TV season
elif season:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
else:
if dest:
# Movie
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.dest == dest).all()
else:
# All seasons and episodes of a TV show
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
return []
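The rewritten `list_by` dispatches on whichever identifier pair is supplied: `tmdbid + mtype` takes precedence over `title + year`, with `season`, `episode` and `dest` narrowing further. A hypothetical pure-Python sketch of just that dispatch (no database involved; the function name is illustrative):

```python
def pick_filter(tmdbid=None, mtype=None, title=None, year=None,
                season=None, episode=None, dest=None):
    """Return the WHERE-clause fields list_by would filter on, as a dict."""
    if tmdbid and mtype:
        flt = {"tmdbid": tmdbid, "type": mtype}
    elif title and year:
        flt = {"title": title, "year": year}
    else:
        return None  # neither key pair supplied -> empty result
    if season and episode:
        flt.update({"seasons": season, "episodes": episode, "dest": dest})
    elif season:
        flt["seasons"] = season
    elif dest:
        flt["dest"] = dest  # movie case
    return flt


print(pick_filter(tmdbid=550, mtype="电影", dest="/media"))
```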
@staticmethod
def get_by_type_tmdbid(db: Session, mtype: str = None, tmdbid: int = None):
"""
据tmdbid、type查询转移记录
"""
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).first()
@staticmethod
def update_download_hash(db: Session, historyid: int = None, download_hash: str = None):


@@ -1,57 +0,0 @@
from typing import List
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db.models.rss import Rss
class RssOper(DbOper):
"""
RSS订阅数据管理
"""
def __init__(self, db: Session = None):
super().__init__(db)
def add(self, **kwargs) -> bool:
"""
Add an RSS subscription
"""
item = Rss(**kwargs)
item.create(self._db)
return True
def exists(self, tmdbid: int, season: int = None):
"""
Check whether a subscription exists
"""
return Rss.get_by_tmdbid(self._db, tmdbid, season)
def list(self, rssid: int = None) -> List[Rss]:
"""
List all RSS subscriptions
"""
if rssid:
return [Rss.get(self._db, rssid)]
return Rss.list(self._db)
def delete(self, rssid: int) -> bool:
"""
Delete an RSS subscription
"""
item = Rss.get(self._db, rssid)
if item:
item.delete(self._db)
return True
return False
def update(self, rssid: int, **kwargs) -> bool:
"""
Update an RSS subscription
"""
item = Rss.get(self._db, rssid)
if item:
item.update(self._db, kwargs)
return True
return False


@@ -74,3 +74,15 @@ class SiteOper(DbOper):
"cookie": cookies
})
return True, "更新站点Cookie成功"
def update_rss(self, domain: str, rss: str) -> Tuple[bool, str]:
"""
更新站点rss
"""
site = Site.get_by_domain(self._db, domain)
if not site:
return False, "站点不存在"
site.update(self._db, {
"rss": rss
})
return True, "更新站点RSS地址成功"


@@ -1,9 +1,7 @@
import json
from typing import Any, Union
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db import DbOper, SessionFactory
from app.db.models.systemconfig import SystemConfig
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
@@ -14,11 +12,12 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
# Config object
__SYSTEMCONF: dict = {}
def __init__(self, db: Session = None):
def __init__(self):
"""
加载配置到内存
"""
super().__init__(db)
self._db = SessionFactory()
super().__init__(self._db)
for item in SystemConfig.list(self._db):
if ObjectUtils.is_obj(item.value):
self.__SYSTEMCONF[item.key] = json.loads(item.value)
@@ -57,3 +56,7 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
if not key:
return self.__SYSTEMCONF
return self.__SYSTEMCONF.get(key)
def __del__(self):
if self._db:
self._db.close()


@@ -51,18 +51,28 @@ class TransferHistoryOper(DbOper):
"""
return TransferHistory.statistic(self._db, days)
def get_by(self, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid: str = None) -> List[TransferHistory]:
def get_by(self, title: str = None, year: str = None, mtype: str = None,
season: str = None, episode: str = None, tmdbid: int = None, dest: str = None) -> List[TransferHistory]:
"""
按类型、标题、年份、季集查询转移记录
"""
return TransferHistory.list_by(db=self._db,
mtype=mtype,
title=title,
dest=dest,
year=year,
season=season,
episode=episode,
tmdbid=tmdbid)
def get_by_type_tmdbid(self, mtype: str = None, tmdbid: int = None) -> TransferHistory:
"""
按类型、tmdb查询转移记录
"""
return TransferHistory.get_by_type_tmdbid(db=self._db,
mtype=mtype,
tmdbid=tmdbid)
def delete(self, historyid):
"""
删除转移记录


@@ -1,16 +1,227 @@
import xml.dom.minidom
from typing import List
from typing import List, Tuple, Union
from urllib.parse import urljoin
from lxml import etree
from app.core.config import settings
from app.helper.browser import PlaywrightHelper
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
class RssHelper:
"""
RSS帮助类解析RSS报文、获取RSS地址等
"""
# 各站点RSS链接获取配置
rss_link_conf = {
"default": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"hares.top": {
"xpath": "//*[@id='layui-layer100001']/div[2]/div/p[4]/a/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"et8.org": {
"xpath": "//*[@id='outer']/table/tbody/tr/td/table/tbody/tr/td/a[2]/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"pttime.org": {
"xpath": "//*[@id='outer']/table/tbody/tr/td/table/tbody/tr/td/text()[5]",
"url": "getrss.php",
"params": {
"showrows": 10,
"inclbookmarked": 0,
"itemsmalldescr": 1
}
},
"ourbits.club": {
"xpath": "//a[@class='gen_rsslink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"totheglory.im": {
"xpath": "//textarea/text()",
"url": "rsstools.php?c51=51&c52=52&c53=53&c54=54&c108=108&c109=109&c62=62&c63=63&c67=67&c69=69&c70=70&c73=73&c76=76&c75=75&c74=74&c87=87&c88=88&c99=99&c90=90&c58=58&c103=103&c101=101&c60=60",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"monikadesign.uk": {
"xpath": "//a/@href",
"url": "rss",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"zhuque.in": {
"xpath": "//a/@href",
"url": "user/rss",
"render": True,
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"hdchina.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"rsscart": 0
}
},
"audiences.me": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"torrent_type": 1,
"exp": 180
}
},
"shadowflow.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"paid": 0,
"search_mode": 0,
"showrows": 30
}
},
"hddolby.com": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"exp": 180
}
},
"hdhome.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"exp": 180
}
},
"pthome.net": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"exp": 180
}
},
"ptsbao.club": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"size": 0
}
},
"leaves.red": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 0,
"paid": 2
}
},
"hdtime.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 0,
}
},
"m-team.io": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"showrows": 50,
"inclbookmarked": 0,
"itemsmalldescr": 1,
"https": 1
}
},
"u2.dmhy.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"inclautochecked": 1,
"trackerssl": 1
}
},
}
@staticmethod
def parse(url, proxy: bool = False) -> List[dict]:
def parse(url, proxy: bool = False) -> Union[List[dict], None]:
"""
解析RSS订阅URL获取RSS中的种子信息
:param url: RSS地址
@@ -77,4 +288,61 @@ class RssHelper:
continue
except Exception as e2:
print(str(e2))
# Expired-RSS notices, e.g. audiences: "RSS 链接已过期,您需要获得一个新的!"; pthome: "RSS Link has expired, You need to get a new one!"
_rss_expired_msg = [
"RSS 链接已过期, 您需要获得一个新的!",
"RSS Link has expired, You need to get a new one!",
"RSS Link has expired, You need to get new!"
]
if ret_xml in _rss_expired_msg:
return None
return ret_array
def get_rss_link(self, url: str, cookie: str, ua: str, proxy: bool = False) -> Tuple[str, str]:
"""
获取站点rss地址
:param url: 站点地址
:param cookie: 站点cookie
:param ua: 站点ua
:param proxy: 是否使用代理
:return: rss地址、错误信息
"""
try:
# Site domain
domain = StringUtils.get_url_domain(url)
# Site config
site_conf = self.rss_link_conf.get(domain) or self.rss_link_conf.get("default")
# RSS URL
rss_url = urljoin(url, site_conf.get("url"))
# RSS request parameters
rss_params = site_conf.get("params")
# Request the RSS page
if site_conf.get("render"):
html_text = PlaywrightHelper().get_page_source(
url=rss_url,
cookies=cookie,
ua=ua,
proxies=settings.PROXY if proxy else None
)
else:
res = RequestUtils(
cookies=cookie,
timeout=60,
ua=ua,
proxies=settings.PROXY if proxy else None
).post_res(url=rss_url, data=rss_params)
if res:
html_text = res.text
elif res is not None:
return "", f"获取 {url} RSS链接失败错误码{res.status_code},错误原因:{res.reason}"
else:
return "", f"获取RSS链接失败无法连接 {url} "
# Parse the HTML
html = etree.HTML(html_text)
if html:
rss_link = html.xpath(site_conf.get("xpath"))
if rss_link:
return str(rss_link[-1]), ""
return "", f"获取RSS链接失败{url}"
except Exception as e:
return "", f"获取 {url} RSS链接失败{str(e)}"



@@ -68,7 +68,7 @@ class EmbyModule(_ModuleBase):
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.emby.get_movies(title=mediainfo.title, year=mediainfo.year)
movies = self.emby.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None


@@ -272,11 +272,15 @@ class Emby(metaclass=Singleton):
return None
return ""
def get_movies(self, title: str, year: str = None) -> Optional[List[dict]]:
def get_movies(self,
title: str,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
"""
根据标题和年份检查电影是否在Emby中存在存在则返回列表
:param title: 标题
:param year: 年份,可以为空,为空时不按年份过滤
:param tmdb_id: TMDB ID
:return: 含title、year属性的字典列表
"""
if not self._host or not self._apikey:
@@ -291,11 +295,19 @@ class Emby(metaclass=Singleton):
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
continue
if res_item.get('Name') == title and (
not year or str(res_item.get('ProductionYear')) == str(year)):
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
return ret_movies
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
@@ -326,10 +338,12 @@ class Emby(metaclass=Singleton):
if not item_id:
return {}
# Verify that the tmdbid matches
item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
item_info = self.get_iteminfo(item_id)
if item_info:
item_tmdbid = (item_info.get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
# /Shows/Id/Episodes queries episode info
if not season:
season = ""
@@ -461,31 +475,15 @@ class Emby(metaclass=Singleton):
# Find the media library ID that needs refreshing
item_path = Path(item.target_path)
for folder in self.folders:
# Find the library sharing the most path levels (container path mappings must match the real paths)
max_comm_path = ""
match_num = 0
match_id = None
# Match subfolders
for subfolder in folder.get("SubFolders"):
try:
# Find the longest common path
# Match the subfolder
subfolder_path = Path(subfolder.get("Path"))
item_path_parents = list(item_path.parents)
subfolder_path_parents = list(subfolder_path.parents)
common_path = next(p1 for p1, p2 in zip(reversed(item_path_parents),
reversed(subfolder_path_parents)
) if p1 == p2)
if len(common_path) > len(max_comm_path):
max_comm_path = common_path
match_id = subfolder.get("Id")
match_num += 1
except StopIteration:
continue
if item_path.is_relative_to(subfolder_path):
return subfolder.get("Id")
except Exception as err:
print(str(err))
# Check the match result
if match_id:
return match_id if match_num == 1 else folder.get("Id")
# If nothing matched, hit as soon as the path contains the category folder name
for subfolder in folder.get("SubFolders"):
if subfolder.get("Path") and re.search(r"[/\\]%s" % item.category,
@@ -786,6 +784,7 @@ class Emby(metaclass=Singleton):
}
"""
message = json.loads(message_str)
logger.info(f"接收到emby webhook{message}")
eventItem = WebhookEventInfo(event=message.get('Event', ''), channel="emby")
if message.get('Item'):
if message.get('Item', {}).get('Type') == 'Episode':
@@ -814,9 +813,9 @@ class Emby(metaclass=Singleton):
eventItem.item_type = "MOV"
eventItem.item_name = "%s %s" % (
message.get('Item', {}).get('Name'), "(" + str(message.get('Item', {}).get('ProductionYear')) + ")")
eventItem.item_path = message.get('Item', {}).get('Path')
eventItem.item_id = message.get('Item', {}).get('Id')
eventItem.item_path = message.get('Item', {}).get('Path')
eventItem.tmdb_id = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if message.get('Item', {}).get('Overview') and len(message.get('Item', {}).get('Overview')) > 100:
eventItem.overview = str(message.get('Item', {}).get('Overview'))[:100] + "..."


@@ -11,6 +11,299 @@ from app.schemas.types import MediaType
class FanartModule(_ModuleBase):
"""
{
"name": "The Wheel of Time",
"thetvdb_id": "355730",
"tvposter": [
{
"id": "174068",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64b009de9548d.jpg",
"lang": "en",
"likes": "3"
},
{
"id": "176424",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64de44fe42073.jpg",
"lang": "00",
"likes": "3"
},
{
"id": "176407",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64dde63c7c941.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "177321",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64eda10599c3d.jpg",
"lang": "cz",
"likes": "0"
},
{
"id": "155050",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-6313adbd1fd58.jpg",
"lang": "pl",
"likes": "0"
},
{
"id": "140198",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-61a0d7b11952e.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "140034",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-619e65b73871d.jpg",
"lang": "en",
"likes": "0"
}
],
"hdtvlogo": [
{
"id": "139835",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6197d9392faba.png",
"lang": "en",
"likes": "3"
},
{
"id": "140039",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-619e87941a128.png",
"lang": "pt",
"likes": "3"
},
{
"id": "140092",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-619fa2347bada.png",
"lang": "en",
"likes": "3"
},
{
"id": "164312",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-63c8185cb8824.png",
"lang": "hu",
"likes": "1"
},
{
"id": "139827",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6197539658a9e.png",
"lang": "en",
"likes": "1"
},
{
"id": "177214",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-64ebae44c23a6.png",
"lang": "cz",
"likes": "0"
},
{
"id": "177215",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-64ebae472deef.png",
"lang": "cz",
"likes": "0"
},
{
"id": "156163",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-63316bef1ff9d.png",
"lang": "cz",
"likes": "0"
},
{
"id": "155051",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6313add04ca92.png",
"lang": "pl",
"likes": "0"
},
{
"id": "152668",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-62ced3775a40a.png",
"lang": "pl",
"likes": "0"
},
{
"id": "142266",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-61ccd93eeac2b.png",
"lang": "de",
"likes": "0"
}
],
"hdclearart": [
{
"id": "164313",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-63c81871c982c.png",
"lang": "en",
"likes": "3"
},
{
"id": "140284",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61a2128ed1df2.png",
"lang": "pt",
"likes": "3"
},
{
"id": "139828",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61975401e894c.png",
"lang": "en",
"likes": "1"
},
{
"id": "164314",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-63c8188488a5f.png",
"lang": "hu",
"likes": "1"
},
{
"id": "177322",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-64eda135933b6.png",
"lang": "cz",
"likes": "0"
},
{
"id": "142267",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61ccda9918c5c.png",
"lang": "de",
"likes": "0"
}
],
"seasonposter": [
{
"id": "140199",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-61a0d7c2976de.jpg",
"lang": "en",
"likes": "1",
"season": "1"
},
{
"id": "176395",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-64dd80b3d79a9.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "140035",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-619e65c4d5357.jpg",
"lang": "en",
"likes": "0",
"season": "1"
}
],
"tvthumb": [
{
"id": "140242",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-61a1813035506.jpg",
"lang": "en",
"likes": "1"
},
{
"id": "177323",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-64eda15b6dce6.jpg",
"lang": "cz",
"likes": "0"
},
{
"id": "176399",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-64dd85c9b618c.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "152669",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-62ced53d16574.jpg",
"lang": "pl",
"likes": "0"
},
{
"id": "141983",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-61c6d04a6d701.jpg",
"lang": "en",
"likes": "0"
}
],
"showbackground": [
{
"id": "177324",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-64eda1833ccb1.jpg",
"lang": "",
"likes": "0",
"season": "all"
},
{
"id": "141986",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-61c6d08f7c7e2.jpg",
"lang": "",
"likes": "0",
"season": "all"
},
{
"id": "139868",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-6198ce358b98a.jpg",
"lang": "",
"likes": "0",
"season": "all"
}
],
"seasonthumb": [
{
"id": "176396",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonthumb/the-wheel-of-time-64dd80c8593f9.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "176400",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonthumb/the-wheel-of-time-64dd85da7c5e9.jpg",
"lang": "en",
"likes": "0",
"season": "0"
}
],
"tvbanner": [
{
"id": "176397",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-64dd80da9a255.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "176401",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-64dd85e8904ea.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "141988",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-61c6d34bceb5f.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "141984",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-61c6d06c1c21c.jpg",
"lang": "en",
"likes": "0"
}
],
"seasonbanner": [
{
"id": "176398",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonbanner/the-wheel-of-time-64dd80e7dbd9f.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "176402",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonbanner/the-wheel-of-time-64dd85fb4f1b1.jpg",
"lang": "en",
"likes": "0",
"season": "0"
}
]
}
"""
# Proxy
_proxies: dict = settings.PROXY
@@ -40,6 +333,7 @@ class FanartModule(_ModuleBase):
if not result or result.get('status') == 'error':
logger.warn(f"没有获取到 {mediainfo.title_year} 的Fanart图片数据")
return
# Iterate over all image groups
for name, images in result.items():
if not images:
continue
@@ -47,10 +341,17 @@ class FanartModule(_ModuleBase):
continue
# Sort by likes, descending
images.sort(key=lambda x: int(x.get('likes', 0)), reverse=True)
# Take the first image
image_obj = images[0]
# Image attribute name (xx_path)
image_name = self.__name(name)
image_season = image_obj.get('season')
# Set the image
if image_name.startswith("season") and image_season:
# Season image format: seasonXX-poster
image_name = f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"
if not mediainfo.get_image(image_name):
mediainfo.set_image(image_name, images[0].get('url'))
mediainfo.set_image(image_name, image_obj.get('url'))
return mediainfo
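Season-scoped Fanart images are renamed from e.g. `seasonposter` plus season `1` to `season01-poster`. A standalone sketch of that naming rule:

```python
def season_image_name(name: str, season: str) -> str:
    # "seasonposter" + season "1" -> "season01-poster"
    return f"season{str(season).rjust(2, '0')}-{name[6:]}"


print(season_image_name("seasonposter", "1"))   # season01-poster
print(season_image_name("seasonthumb", "12"))   # season12-thumb
```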


@@ -1,7 +1,7 @@
import re
from pathlib import Path
from threading import Lock
from typing import Optional, List, Tuple, Union
from typing import Optional, List, Tuple, Union, Dict
from jinja2 import Template
@@ -11,7 +11,7 @@ from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.schemas import TransferInfo
from app.schemas import TransferInfo, ExistMediaInfo
from app.schemas.types import MediaType
from app.utils.system import SystemUtils
@@ -283,7 +283,7 @@ class FileTransferModule(_ModuleBase):
return retcode
def __transfer_file(self, file_item: Path, new_file: Path, transfer_type: str,
over_flag: bool = False, old_file: Path = None) -> int:
over_flag: bool = False) -> int:
"""
转移一个文件,同时处理其他相关文件
:param file_item: 原文件路径
@@ -291,12 +291,13 @@ class FileTransferModule(_ModuleBase):
:param transfer_type: transfer mode (RmtMode)
:param over_flag: whether to overwrite; when True, delete the existing file first, then transfer
"""
if not over_flag and new_file.exists():
logger.warn(f"文件已存在:{new_file}")
return 0
if over_flag and old_file and old_file.exists():
logger.info(f"正在删除已存在的文件:{old_file}")
old_file.unlink()
if new_file.exists():
if not over_flag:
logger.warn(f"文件已存在:{new_file}")
return 0
else:
logger.info(f"正在删除已存在的文件:{new_file}")
new_file.unlink()
logger.info(f"正在转移文件:{file_item}{new_file}")
# Create the parent directory
new_file.parent.mkdir(parents=True, exist_ok=True)
@@ -314,6 +315,34 @@ class FileTransferModule(_ModuleBase):
transfer_type=transfer_type,
over_flag=over_flag)
@staticmethod
def __get_library_dir(mediainfo: MediaInfo, target_dir: Path) -> Path:
"""
根据设置并装媒体库目录
"""
if mediainfo.type == MediaType.MOVIE:
# Movie
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# Append type and secondary category to the target directory
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
# TV show
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# Anime
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# TV show
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# Append type and secondary category to the target directory
target_dir = target_dir / mediainfo.type.value / mediainfo.category
return target_dir
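The extracted `__get_library_dir` reduces to: use the configured library folder name when set, otherwise fall back to `<type>/<category>`. A simplified stand-in (the function and parameter names here are illustrative, not the real settings API):

```python
from pathlib import Path


def library_dir(base: Path, type_name: str, category: str,
                custom_name: str = None) -> Path:
    # Configured folder name wins; otherwise <type>/<category>
    if custom_name:
        return base / custom_name / category
    return base / type_name / category


print(library_dir(Path("/media"), "电影", "外语电影"))
print(library_dir(Path("/media"), "电视剧", "国产剧", custom_name="TV"))
```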
def transfer_media(self,
in_path: Path,
in_meta: MetaBase,
@@ -337,27 +366,8 @@ class FileTransferModule(_ModuleBase):
if not target_dir.exists():
return TransferInfo(message=f"{target_dir} 目标路径不存在")
if mediainfo.type == MediaType.MOVIE:
# Movie
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# Append type and secondary category to the target directory
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
# TV show
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# Anime
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# TV show
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# Append type and secondary category to the target directory
target_dir = target_dir / mediainfo.type.value / mediainfo.category
# Media library directory
target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
# Rename format
rename_format = settings.TV_RENAME_FORMAT \
@@ -510,13 +520,13 @@ class FileTransferModule(_ModuleBase):
Compute the best target directory: with in_path, pick the one sharing its path; without in_path, take the first one meeting the size requirement; with neither in_path nor size, return the first
:param in_path: source directory
"""
if not settings.LIBRARY_PATH:
if not settings.LIBRARY_PATHS:
return None
# 目的路径,多路径以,分隔
dest_paths = str(settings.LIBRARY_PATH).split(",")
dest_paths = settings.LIBRARY_PATHS
# 只有一个路径,直接返回
if len(dest_paths) == 1:
return Path(dest_paths[0])
return dest_paths[0]
# 匹配有最长共同上级路径的目录
max_length = 0
target_path = None
@@ -531,12 +541,77 @@ class FileTransferModule(_ModuleBase):
logger.debug(f"计算目标路径时出错:{e}")
continue
if target_path:
return Path(target_path)
return target_path
# 顺序匹配第1个满足空间存储要求的目录
if in_path.exists():
file_size = in_path.stat().st_size
for path in dest_paths:
if SystemUtils.free_space(Path(path)) > file_size:
return Path(path)
if SystemUtils.free_space(path) > file_size:
return path
# 默认返回第1个
return Path(dest_paths[0])
return dest_paths[0]
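The selection strategy above can be condensed into a pure function. `free_space` is injected as a callable so the logic is testable; these names are assumptions for illustration, not the module's interface:

```python
import os
from pathlib import Path
from typing import Callable, List, Optional


def pick_dest_path(dest_paths: List[Path],
                   in_path: Optional[Path] = None,
                   file_size: Optional[int] = None,
                   free_space: Optional[Callable[[Path], int]] = None) -> Optional[Path]:
    """Pick the best destination among LIBRARY_PATHS.

    Prefer the destination sharing the longest common parent with
    in_path, then the first one with enough free space, then simply the
    first entry.
    """
    if not dest_paths:
        return None
    if len(dest_paths) == 1:
        return dest_paths[0]
    if in_path:
        best, best_len = None, 0
        for path in dest_paths:
            common = os.path.commonpath([str(path), str(in_path)])
            if len(common) > best_len:
                best, best_len = path, len(common)
        if best_len > 1:  # 共同上级不只是根目录
            return best
    if file_size and free_space:
        for path in dest_paths:
            if free_space(path) > file_size:
                return path
    # 默认返回第1个
    return dest_paths[0]
```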
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在于本地文件系统
:param mediainfo: 识别的媒体信息
:param itemid: 媒体服务器ItemID
:return: 如不存在返回None,存在时返回信息,包括每季已存在的所有集 {type: movie/tv, seasons: {season: [episodes]}}
"""
if not settings.LIBRARY_PATHS:
return None
# 目的路径
dest_paths = settings.LIBRARY_PATHS
# 检查每一个媒体库目录
for dest_path in dest_paths:
# 媒体库路径
target_dir = self.get_target_path(dest_path)
if not target_dir:
continue
# 媒体分类路径
target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 相对路径
rel_path = self.get_rename_path(
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
mediainfo=mediainfo)
)
# 取相对路径的第1层目录
if rel_path.parts:
media_path = target_dir / rel_path.parts[0]
else:
continue
# 检查媒体文件夹是否存在
if not media_path.exists():
continue
# 检索媒体文件
media_files = SystemUtils.list_files(directory=media_path, extensions=settings.RMT_MEDIAEXT)
if not media_files:
continue
if mediainfo.type == MediaType.MOVIE:
# 电影存在任何文件为存在
logger.info(f"文件系统已存在:{mediainfo.title_year}")
return ExistMediaInfo(type=MediaType.MOVIE)
else:
# 电视剧检索集数
seasons: Dict[int, list] = {}
for media_file in media_files:
file_meta = MetaInfo(media_file.stem)
season_index = file_meta.begin_season or 1
episode_index = file_meta.begin_episode
if not episode_index:
continue
if season_index not in seasons:
seasons[season_index] = []
seasons[season_index].append(episode_index)
# 返回剧集情况
logger.info(f"{mediainfo.title_year} 文件系统已存在:{seasons}")
return ExistMediaInfo(type=MediaType.TV, seasons=seasons)
# 不存在
return None
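The episode-aggregation loop at the end of `media_exists` can be illustrated with a simplified stand-in for `MetaInfo`, using an SxxExx regex instead of the real parser (an assumption for brevity):

```python
import re
from typing import Dict, List


def scan_episodes(filenames: List[str]) -> Dict[int, list]:
    """Aggregate SxxExx markers in file names into {season: [episodes]}.

    Files without an episode number are skipped, and a missing season
    number defaults to season 1, mirroring the begin_season/begin_episode
    handling above.
    """
    seasons: Dict[int, list] = {}
    for name in filenames:
        m = re.search(r"(?:S(\d{1,2}))?E(\d{1,3})", name, re.IGNORECASE)
        if not m:
            continue  # 无集数信息,跳过
        season = int(m.group(1)) if m.group(1) else 1
        seasons.setdefault(season, []).append(int(m.group(2)))
    return seasons
```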

View File

@@ -1,7 +1,7 @@
import re
from typing import List, Tuple, Union, Dict, Optional
from app.core.context import TorrentInfo
from app.core.context import TorrentInfo, MediaInfo
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
@@ -9,9 +9,10 @@ from app.modules.filter.RuleParser import RuleParser
class FilterModule(_ModuleBase):
# 规则解析器
parser: RuleParser = None
# 媒体信息
media: MediaInfo = None
# 内置规则集
rule_set: Dict[str, dict] = {
@@ -37,8 +38,12 @@ class FilterModule(_ModuleBase):
},
# 中字
"CNSUB": {
"include": [r'[中国國繁简](/|\s|\\|\|)?[繁简英粤]|[英简繁](/|\s|\\|\|)?[中繁简]|繁體|简体|[中国國][字配]|国语|國語|中文|中字'],
"exclude": []
"include": [
r'[中国國繁简](/|\s|\\|\|)?[繁简英粤]|[英简繁](/|\s|\\|\|)?[中繁简]|繁體|简体|[中国國][字配]|国语|國語|中文|中字'],
"exclude": [],
"tmdb": {
"original_language": "zh,cn"
}
},
# 特效字幕
"SPECSUB": {
@@ -67,7 +72,7 @@ class FilterModule(_ModuleBase):
},
# 杜比
"DOLBY": {
"include": [r"DOLBY|DOVI|[\s.]+DV[\s.]+|杜比"],
"include": [r"Dolby[\s.]+Vision|DOVI|[\s.]+DV[\s.]+|杜比视界"],
"exclude": []
},
# HDR
@@ -107,16 +112,19 @@ class FilterModule(_ModuleBase):
def filter_torrents(self, rule_string: str,
torrent_list: List[TorrentInfo],
season_episodes: Dict[int, list] = None) -> List[TorrentInfo]:
season_episodes: Dict[int, list] = None,
mediainfo: MediaInfo = None) -> List[TorrentInfo]:
"""
过滤种子资源
:param rule_string: 过滤规则
:param torrent_list: 资源列表
:param season_episodes: 季集数过滤 {season:[episodes]}
:param mediainfo: 媒体信息
:return: 过滤后的资源列表,添加资源优先级
"""
if not rule_string:
return torrent_list
self.media = mediainfo
# 返回种子列表
ret_torrents = []
for torrent in torrent_list:
@@ -215,6 +223,11 @@ class FilterModule(_ModuleBase):
if not self.rule_set.get(rule_name):
# 规则不存在
return False
# TMDB规则
tmdb = self.rule_set[rule_name].get("tmdb")
# 符合TMDB规则的直接返回True即不过滤
if tmdb and self.__match_tmdb(tmdb):
return True
# 包含规则项
includes = self.rule_set[rule_name].get("include") or []
# 排除规则项
@@ -236,3 +249,44 @@ class FilterModule(_ModuleBase):
# FREE规则不匹配
return False
return True
def __match_tmdb(self, tmdb: dict) -> bool:
"""
判断种子是否匹配TMDB规则
"""
def __get_media_value(key: str):
try:
return getattr(self.media, key)
except AttributeError:
return ""
if not self.media:
return False
for attr, value in tmdb.items():
if not value:
continue
# 获取media信息的值
info_value = __get_media_value(attr)
if not info_value:
# 没有该值,不匹配
return False
elif attr == "production_countries":
# 国家信息
info_values = [str(val.get("iso_3166_1")).upper() for val in info_value]
else:
# media信息转化为数组
if isinstance(info_value, list):
info_values = [str(val).upper() for val in info_value]
else:
info_values = [str(info_value).upper()]
# 过滤值转化为数组
if value.find(",") != -1:
values = [str(val).upper() for val in value.split(",")]
else:
values = [str(value).upper()]
# 没有交集为不匹配
if not set(values).intersection(set(info_values)):
return False
return True
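The comparison semantics of `__match_tmdb` — comma-separated rule values, case-insensitive, matched by set intersection — can be sketched over plain dicts (the real method reads attributes off a `MediaInfo` object):

```python
def match_tmdb_rule(media: dict, rule: dict) -> bool:
    """Return True when every attribute the rule names shares at least
    one value with the media info, compared case-insensitively.

    Each rule value is a comma-separated list, e.g. the CNSUB rule's
    {"original_language": "zh,cn"}.
    """
    for attr, value in rule.items():
        if not value:
            continue
        info_value = media.get(attr)
        if not info_value:
            # 没有该值,不匹配
            return False
        if isinstance(info_value, list):
            info_values = {str(v).upper() for v in info_value}
        else:
            info_values = {str(info_value).upper()}
        values = {v.upper() for v in str(value).split(",")}
        # 没有交集为不匹配
        if not values & info_values:
            return False
    return True
```

Under this rule, a title whose TMDB `original_language` is `zh` skips the CNSUB include/exclude filtering entirely, which is how the "中文字幕过滤规则只针对原语种为非中文" change takes effect.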

View File

@@ -64,7 +64,7 @@ class JellyfinModule(_ModuleBase):
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.jellyfin.get_movies(title=mediainfo.title, year=mediainfo.year)
movies = self.jellyfin.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None

View File

@@ -248,11 +248,15 @@ class Jellyfin(metaclass=Singleton):
return None
return ""
def get_movies(self, title: str, year: str = None) -> Optional[List[dict]]:
def get_movies(self,
title: str,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
"""
根据标题和年份检查电影是否在Jellyfin中存在,存在则返回列表
:param title: 标题
:param year: 年份,为空则不过滤
:param tmdb_id: TMDB ID
:return: 含title、year属性的字典列表
"""
if not self._host or not self._apikey or not self.user:
@@ -266,11 +270,19 @@ class Jellyfin(metaclass=Singleton):
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
continue
if res_item.get('Name') == title and (
not year or str(res_item.get('ProductionYear')) == str(year)):
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
return ret_movies
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
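The matching rule in the rewritten `get_movies` loop — TMDB id decides alone when both sides carry one, otherwise fall back to title plus optional year — can be isolated as a predicate over a Jellyfin item dict:

```python
from typing import Optional


def movie_matches(item: dict, title: str,
                  year: Optional[str] = None,
                  tmdb_id: Optional[int] = None) -> bool:
    """Decide whether a Jellyfin item matches the requested movie.

    `item` uses Jellyfin's field names (Name, ProductionYear,
    ProviderIds.Tmdb); when both a requested and a stored TMDB id exist,
    that comparison alone decides the match.
    """
    item_tmdbid = item.get("ProviderIds", {}).get("Tmdb")
    if tmdb_id and item_tmdbid:
        return str(item_tmdbid) == str(tmdb_id)
    if item.get("Name") != title:
        return False
    return not year or str(item.get("ProductionYear")) == str(year)
```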
@@ -378,24 +390,100 @@ class Jellyfin(metaclass=Singleton):
def get_webhook_message(self, message: dict) -> WebhookEventInfo:
"""
解析Jellyfin报文
{
"ServerId": "d79d3a6261614419a114595a585xxxxx",
"ServerName": "nyanmisaka-jellyfin1",
"ServerVersion": "10.8.10",
"ServerUrl": "http://xxxxxxxx:8098",
"NotificationType": "PlaybackStart",
"Timestamp": "2023-09-10T08:35:25.3996506+00:00",
"UtcTimestamp": "2023-09-10T08:35:25.3996527Z",
"Name": "慕灼华逃婚离开",
"Overview": "慕灼华假装在读书,她害怕大娘子说她不务正业。",
"Tagline": "",
"ItemId": "4b92551344f53b560fb55cd6700xxxxx",
"ItemType": "Episode",
"RunTimeTicks": 27074985984,
"RunTime": "00:45:07",
"Year": 2023,
"SeriesName": "灼灼风流",
"SeasonNumber": 1,
"SeasonNumber00": "01",
"SeasonNumber000": "001",
"EpisodeNumber": 1,
"EpisodeNumber00": "01",
"EpisodeNumber000": "001",
"Provider_tmdb": "229210",
"Video_0_Title": "4K HEVC SDR",
"Video_0_Type": "Video",
"Video_0_Codec": "hevc",
"Video_0_Profile": "Main",
"Video_0_Level": 150,
"Video_0_Height": 2160,
"Video_0_Width": 3840,
"Video_0_AspectRatio": "16:9",
"Video_0_Interlaced": false,
"Video_0_FrameRate": 25,
"Video_0_VideoRange": "SDR",
"Video_0_ColorSpace": "bt709",
"Video_0_ColorTransfer": "bt709",
"Video_0_ColorPrimaries": "bt709",
"Video_0_PixelFormat": "yuv420p",
"Video_0_RefFrames": 1,
"Audio_0_Title": "AAC - Stereo - Default",
"Audio_0_Type": "Audio",
"Audio_0_Language": "und",
"Audio_0_Codec": "aac",
"Audio_0_Channels": 2,
"Audio_0_Bitrate": 125360,
"Audio_0_SampleRate": 48000,
"Audio_0_Default": true,
"PlaybackPositionTicks": 1000000,
"PlaybackPosition": "00:00:00",
"MediaSourceId": "4b92551344f53b560fb55cd6700ebc86",
"IsPaused": false,
"IsAutomated": false,
"DeviceId": "TW96aWxsxxxxxjA",
"DeviceName": "Edge Chromium",
"ClientName": "Jellyfin Web",
"NotificationUsername": "Jeaven",
"UserId": "9783d2432b0d40a8a716b6aa46xxxxx"
}
"""
logger.info(f"接收到jellyfin webhook:{message}")
eventItem = WebhookEventInfo(
event=message.get('NotificationType', ''),
item_id=message.get('ItemId'),
item_name=message.get('Name'),
item_type=message.get('ItemType'),
item_favorite=message.get('Favorite'),
save_reason=message.get('SaveReason'),
tmdb_id=message.get('Provider_tmdb'),
user_name=message.get('NotificationUsername'),
channel="jellyfin"
)
eventItem.item_id = message.get('ItemId')
eventItem.tmdb_id = message.get('Provider_tmdb')
eventItem.overview = message.get('Overview')
eventItem.device_name = message.get('DeviceName')
eventItem.user_name = message.get('NotificationUsername')
eventItem.client = message.get('ClientName')
if message.get("ItemType") == "Episode":
# 剧集
eventItem.item_type = "TV"
eventItem.season_id = message.get('SeasonNumber')
eventItem.episode_id = message.get('EpisodeNumber')
eventItem.item_name = "%s %s%s %s" % (
message.get('SeriesName'),
"S" + str(eventItem.season_id),
"E" + str(eventItem.episode_id),
message.get('Name'))
else:
# 电影
eventItem.item_type = "MOV"
eventItem.item_name = "%s %s" % (
message.get('Name'), "(" + str(message.get('Year')) + ")")
# 获取消息图片
if eventItem.item_id:
# 根据返回的item_id去调用媒体服务器获取
eventItem.image_url = self.get_remote_image_by_id(item_id=eventItem.item_id,
image_type="Backdrop")
eventItem.image_url = self.get_remote_image_by_id(
item_id=eventItem.item_id,
image_type="Backdrop"
)
return eventItem
@@ -460,8 +548,8 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host)\
.replace("{APIKEY}", self._apikey)\
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
try:
return RequestUtils().get_res(url=url)

View File

@@ -54,7 +54,10 @@ class PlexModule(_ModuleBase):
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.plex.get_movies(title=mediainfo.title, year=mediainfo.year)
movies = self.plex.get_movies(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
@@ -63,7 +66,9 @@ class PlexModule(_ModuleBase):
return ExistMediaInfo(type=MediaType.MOVIE)
else:
tvs = self.plex.get_tv_episodes(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
if not tvs:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")

View File

@@ -128,11 +128,17 @@ class Plex(metaclass=Singleton):
"EpisodeCount": EpisodeCount
}
def get_movies(self, title: str, year: str = None) -> Optional[List[dict]]:
def get_movies(self,
title: str,
original_title: str = None,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
"""
根据标题和年份检查电影是否在Plex中存在,存在则返回列表
:param title: 标题
:param original_title: 原产地标题
:param year: 年份,为空则不过滤
:param tmdb_id: TMDB ID
:return: 含title、year属性的字典列表
"""
if not self._plex:
@@ -140,22 +146,35 @@ class Plex(metaclass=Singleton):
ret_movies = []
if year:
movies = self._plex.library.search(title=title, year=year, libtype="movie")
# 根据原标题再查一遍
if original_title and str(original_title) != str(title):
movies.extend(self._plex.library.search(title=original_title, year=year, libtype="movie"))
else:
movies = self._plex.library.search(title=title, libtype="movie")
for movie in movies:
if original_title and str(original_title) != str(title):
movies.extend(self._plex.library.search(title=original_title, year=year, libtype="movie"))
for movie in set(movies):
movie_tmdbid = self.__get_ids(movie.guids).get("tmdb_id")
if tmdb_id and movie_tmdbid:
if str(movie_tmdbid) != str(tmdb_id):
continue
ret_movies.append({'title': movie.title, 'year': movie.year})
return ret_movies
def get_tv_episodes(self,
item_id: str = None,
title: str = None,
original_title: str = None,
year: str = None,
tmdb_id: int = None,
season: int = None) -> Optional[Dict[int, list]]:
"""
根据标题、年份、季查询电视剧所有集信息
:param item_id: 媒体ID
:param title: 标题
:param original_title: 原产地标题
:param year: 年份,可以为空,为空时不按年份过滤
:param tmdb_id: TMDB ID
:param season: 季号,数字
:return: 所有集的列表
"""
@@ -164,13 +183,19 @@ class Plex(metaclass=Singleton):
if item_id:
videos = self._plex.fetchItem(item_id)
else:
# 根据标题和年份模糊搜索,该结果不够准确
videos = self._plex.library.search(title=title, year=year, libtype="show")
if not videos and original_title and str(original_title) != str(title):
videos = self._plex.library.search(title=original_title, year=year, libtype="show")
if not videos:
return {}
if isinstance(videos, list):
episodes = videos[0].episodes()
else:
episodes = videos.episodes()
videos = videos[0]
video_tmdbid = self.__get_ids(videos.guids).get('tmdb_id')
if tmdb_id and video_tmdbid:
if str(video_tmdbid) != str(tmdb_id):
return {}
episodes = videos.episodes()
season_episodes = {}
for episode in episodes:
if season and episode.seasonNumber != int(season):
@@ -337,9 +362,104 @@ class Plex(metaclass=Singleton):
item_name TV:琅琊榜 S1E6 剖心明志 虎口脱险
MOV:猪猪侠大冒险(2001)
overview 剧情描述
{
"event": "media.scrobble",
"user": false,
"owner": true,
"Account": {
"id": 31646104,
"thumb": "https://plex.tv/users/xx",
"title": "播放"
},
"Server": {
"title": "Media-Server",
"uuid": "xxxx"
},
"Player": {
"local": false,
"publicAddress": "xx.xx.xx.xx",
"title": "MagicBook",
"uuid": "wu0uoa1ujfq90t0c5p9f7fw0"
},
"Metadata": {
"librarySectionType": "show",
"ratingKey": "40294",
"key": "/library/metadata/40294",
"parentRatingKey": "40291",
"grandparentRatingKey": "40275",
"guid": "plex://episode/615580a9fa828e7f1a0caabd",
"parentGuid": "plex://season/615580a9fa828e7f1a0caab8",
"grandparentGuid": "plex://show/60e81fd8d8000e002d7d2976",
"type": "episode",
"title": "The World's Strongest Senior",
"titleSort": "World's Strongest Senior",
"grandparentKey": "/library/metadata/40275",
"parentKey": "/library/metadata/40291",
"librarySectionTitle": "动漫剧集",
"librarySectionID": 7,
"librarySectionKey": "/library/sections/7",
"grandparentTitle": "范马刃牙",
"parentTitle": "Combat Shadow Fighting Saga / Great Prison Battle Saga",
"originalTitle": "Baki Hanma",
"contentRating": "TV-MA",
"summary": "The world is shaken by news of a man taking down a monstrous elephant with his bare hands. Back in Japan, Baki is confronted by a knife-wielding child.",
"index": 1,
"parentIndex": 1,
"audienceRating": 8.5,
"viewCount": 1,
"lastViewedAt": 1694320444,
"year": 2021,
"thumb": "/library/metadata/40294/thumb/1693544504",
"art": "/library/metadata/40275/art/1693952979",
"parentThumb": "/library/metadata/40291/thumb/1691115271",
"grandparentThumb": "/library/metadata/40275/thumb/1693952979",
"grandparentArt": "/library/metadata/40275/art/1693952979",
"duration": 1500000,
"originallyAvailableAt": "2021-09-30",
"addedAt": 1691115281,
"updatedAt": 1693544504,
"audienceRatingImage": "themoviedb://image.rating",
"Guid": [
{
"id": "imdb://tt14765720"
},
{
"id": "tmdb://3087250"
},
{
"id": "tvdb://8530933"
}
],
"Rating": [
{
"image": "themoviedb://image.rating",
"value": 8.5,
"type": "audience"
}
],
"Director": [
{
"id": 115144,
"filter": "director=115144",
"tag": "Keiya Saito",
"tagKey": "5f401c8d04a86500409ea6c1"
}
],
"Writer": [
{
"id": 115135,
"filter": "writer=115135",
"tag": "Tatsuhiko Urahata",
"tagKey": "5d7768e07a53e9001e6db1ce",
"thumb": "https://metadata-static.plex.tv/f/people/f6f90dc89fa87d459f85d40a09720c05.jpg"
}
]
}
}
"""
message = json.loads(message_str)
eventItem = WebhookEventInfo(event=message.get('Event', ''), channel="plex")
logger.info(f"接收到plex webhook:{message}")
eventItem = WebhookEventInfo(event=message.get('event', ''), channel="plex")
if message.get('Metadata'):
if message.get('Metadata', {}).get('type') == 'episode':
eventItem.item_type = "TV"

View File

@@ -36,19 +36,22 @@ class QbittorrentModule(_ModuleBase):
if self.qbittorrent.is_inactive():
self.qbittorrent = Qbittorrent()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param content: 种子文件地址或者磁力链接
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类
:return: 种子Hash错误信息
"""
if not torrent_path or not torrent_path.exists():
return None, f"种子文件不存在:{torrent_path}"
if not content:
return
if isinstance(content, Path) and not content.exists():
return None, f"种子文件不存在:{content}"
# 生成随机Tag
tag = StringUtils.generate_random_str(10)
if settings.TORRENT_TAG:
@@ -58,19 +61,21 @@ class QbittorrentModule(_ModuleBase):
# 如果要选择文件则先暂停
is_paused = True if episodes else False
# 添加任务
state = self.qbittorrent.add_torrent(content=torrent_path.read_bytes(),
download_dir=str(download_dir),
is_paused=is_paused,
tag=tags,
cookie=cookie,
category=category)
state = self.qbittorrent.add_torrent(
content=content.read_bytes() if isinstance(content, Path) else content,
download_dir=str(download_dir),
is_paused=is_paused,
tag=tags,
cookie=cookie,
category=category
)
if not state:
return None, f"添加种子任务失败:{torrent_path}"
return None, f"添加种子任务失败:{content}"
else:
# 获取种子Hash
torrent_hash = self.qbittorrent.get_torrent_id_by_tag(tags=tag)
if not torrent_hash:
return None, f"获取种子Hash失败{torrent_path}"
return None, f"获取种子Hash失败{content}"
else:
if is_paused:
# 种子文件
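The `content: Union[Path, str]` handling shared by the qBittorrent and Transmission modules can be factored into one normalizer — a sketch, not the modules' actual API:

```python
from pathlib import Path
from typing import Union


def torrent_content(content: Union[Path, str]) -> Union[bytes, str]:
    """Normalize download() input for the client API.

    A Path is read as raw .torrent bytes; a string passes through
    unchanged as a magnet link or URL, matching
    `content.read_bytes() if isinstance(content, Path) else content`.
    """
    if isinstance(content, Path):
        if not content.exists():
            raise FileNotFoundError(f"种子文件不存在:{content}")
        return content.read_bytes()
    return content
```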

View File

@@ -186,7 +186,8 @@ class Qbittorrent(metaclass=Singleton):
download_dir: str = None,
tag: Union[str, list] = None,
category: str = None,
cookie=None
cookie=None,
**kwargs
) -> bool:
"""
添加种子
@@ -238,7 +239,8 @@ class Qbittorrent(metaclass=Singleton):
use_auto_torrent_management=is_auto,
is_sequential_download=True,
cookie=cookie,
category=category)
category=category,
**kwargs)
return True if qbc_ret and str(qbc_ret).find("Ok") != -1 else False
except Exception as err:
logger.error(f"添加种子出错:{err}")

View File

@@ -272,6 +272,7 @@ class Slack:
link = torrent.page_url
title = f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group}"
title = re.sub(r"\s+", " ", title).strip()
free = torrent.volume_factor

View File

@@ -34,15 +34,22 @@ class SubtitleModule(_ModuleBase):
def stop(self) -> None:
pass
def download_added(self, context: Context, torrent_path: Path, download_dir: Path) -> None:
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
"""
添加下载任务成功后,从站点下载字幕,保存到下载目录
:param context: 上下文,包括识别信息、媒体信息、种子信息
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param torrent_path: 种子文件地址
:return: None该方法可被多个模块同时处理
"""
# 种子信息
if not settings.DOWNLOAD_SUBTITLE:
return None
# 没有种子文件不处理
if not torrent_path:
return
# 没有详情页不处理
torrent = context.torrent_info
if not torrent.page_url:
return

View File

@@ -153,6 +153,7 @@ class Telegram(metaclass=Singleton):
link = torrent.page_url
title = f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group}"
title = re.sub(r"\s+", " ", title).strip()
free = torrent.volume_factor

View File

@@ -74,8 +74,8 @@ class TheMovieDbModule(_ModuleBase):
mtype=meta.type,
season_year=meta.year,
season_number=meta.begin_season)
if meta.year:
# 非严格模式下去掉年份再查一次
if not info:
# 去掉年份再查一次
info = self.tmdb.match(name=meta.name,
mtype=meta.type)
else:
@@ -132,6 +132,12 @@ class TheMovieDbModule(_ModuleBase):
else:
logger.info(f"{tmdbid} 识别结果:{mediainfo.type.value} "
f"{mediainfo.title_year}")
# 补充剧集年份
if mediainfo.type == MediaType.TV:
episode_years = self.tmdb.get_tv_episode_years(info.get("id"))
if episode_years:
mediainfo.season_years = episode_years
return mediainfo
else:
logger.info(f"{meta.name if meta else tmdbid} 未匹配到媒体信息")
@@ -194,12 +200,17 @@ class TheMovieDbModule(_ModuleBase):
scrape_path = path / path.name
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=scrape_path)
elif path.is_file():
# 单个文件
logger.info(f"开始刮削媒体库文件:{path} ...")
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=path)
else:
# 目录下的所有文件
logger.info(f"开始刮削目录:{path} ...")
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=file)
logger.info(f"{path} 刮削完成")

View File

@@ -15,7 +15,6 @@ from app.utils.http import RequestUtils
class TmdbScraper:
tmdb = None
def __init__(self, tmdb):
@@ -23,7 +22,7 @@ class TmdbScraper:
def gen_scraper_files(self, mediainfo: MediaInfo, file_path: Path):
"""
生成刮削文件
生成刮削文件,包括NFO和图片,传入路径为文件路径
:param mediainfo: 媒体信息
:param file_path: 文件路径或者目录路径
"""
@@ -38,7 +37,7 @@ class TmdbScraper:
return {}
try:
# 电影
# 电影,路径为文件名 名称/名称.xxx 或者蓝光原盘目录 名称/名称
if mediainfo.type == MediaType.MOVIE:
# 不存在时才处理
if not file_path.with_name("movie.nfo").exists() \
@@ -56,11 +55,11 @@ class TmdbScraper:
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
self.__save_image(url=attr_value,
file_path=file_path.with_name(image_name))
# 电视剧
# 电视剧,路径为每一季的文件名 名称/Season xx/名称 SxxExx.xxx
else:
# 识别
meta = MetaInfo(file_path.stem)
# 不存在时才处理
# 根目录不存在时才处理
if not file_path.parent.with_name("tvshow.nfo").exists():
# 根目录描述文件
self.__gen_tv_nfo_file(mediainfo=mediainfo,
@@ -84,19 +83,25 @@ class TmdbScraper:
self.__gen_tv_season_nfo_file(seasoninfo=seasoninfo,
season=meta.begin_season,
season_path=file_path.parent)
# 季的图片
# TMDB季poster图片
sea_seq = str(meta.begin_season).rjust(2, '0')
if seasoninfo.get("poster_path"):
# 后缀
ext = Path(seasoninfo.get('poster_path')).suffix
# URL
url = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{seasoninfo.get('poster_path')}"
self.__save_image(url, file_path.parent.with_name(f"season{sea_seq}-poster{ext}"))
# 季的其它图片
for attr_name, attr_value in vars(mediainfo).items():
if attr_value \
and attr_name.startswith("season") \
and not attr_name.endswith("poster_path") \
and attr_value \
and isinstance(attr_value, str) \
and attr_value.startswith("http"):
image_name = attr_name.replace("_path",
"").replace("season",
f"{str(meta.begin_season).rjust(2, '0')}-") \
+ Path(attr_value).suffix
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
self.__save_image(url=attr_value,
file_path=file_path.parent.with_name(f"season{image_name}"))
file_path=file_path.parent.with_name(image_name))
# 查询集详情
episodeinfo = __get_episode_detail(seasoninfo, meta.begin_episode)
if episodeinfo:
@@ -108,10 +113,11 @@ class TmdbScraper:
episode=meta.begin_episode,
file_path=file_path)
# 集的图片
if episodeinfo.get('still_path'):
episode_image = episodeinfo.get("still_path")
if episode_image:
self.__save_image(
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{episodeinfo.get('still_path')}",
file_path.with_suffix(Path(episodeinfo.get('still_path')).suffix))
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{episode_image}",
file_path.with_suffix(Path(episode_image).suffix))
except Exception as e:
logger.error(f"{file_path} 刮削失败:{e}")

View File

@@ -447,7 +447,8 @@ class TmdbHelper:
ret_info = multi
break
# 类型变更
if ret_info:
if (ret_info
and not isinstance(ret_info.get("media_type"), MediaType)):
ret_info['media_type'] = MediaType.MOVIE if ret_info.get("media_type") == "movie" else MediaType.TV
return ret_info
@@ -1167,3 +1168,35 @@ class TmdbHelper:
清除缓存
"""
self.tmdb.cache_clear()
def get_tv_episode_years(self, tv_id: int):
"""
查询剧集组年份
"""
try:
episode_groups = self.tv.episode_groups(tv_id)
if not episode_groups:
return {}
episode_years = {}
for episode_group in episode_groups:
logger.info(f"正在获取剧集组年份:{episode_group.get('id')}...")
if episode_group.get('type') != 6:
# 只处理剧集部分
continue
group_episodes = self.tv.group_episodes(episode_group.get('id'))
if not group_episodes:
continue
for group_episode in group_episodes:
order = group_episode.get('order')
episodes = group_episode.get('episodes')
if not episodes or not order:
continue
# 当前季第一季时间
first_date = episodes[0].get("air_date")
if not first_date or len(str(first_date).split("-")) != 3:
continue
episode_years[order] = str(first_date).split("-")[0]
return episode_years
except Exception as e:
logger.error(f"获取剧集组年份出错:{e}")
return {}
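The inner mapping of `get_tv_episode_years` — season order to the year of its first episode's `air_date` — can be tested in isolation over plain dicts shaped like TMDB episode-group results:

```python
from typing import Dict, List


def season_years(group_episodes: List[dict]) -> Dict[int, str]:
    """Map each season order to the year of its first episode's air_date.

    Entries without episodes or an order are skipped, and an air_date is
    only used when it splits into a full YYYY-MM-DD triple.
    """
    years: Dict[int, str] = {}
    for group in group_episodes:
        order = group.get("order")
        episodes = group.get("episodes")
        if not episodes or not order:
            continue
        # 当前季第一集的播出时间
        first_date = episodes[0].get("air_date")
        if not first_date or len(str(first_date).split("-")) != 3:
            continue
        years[order] = str(first_date).split("-")[0]
    return years
```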

View File

@@ -33,6 +33,7 @@ class TV(TMDb):
"on_the_air": "/tv/on_the_air",
"popular": "/tv/popular",
"top_rated": "/tv/top_rated",
"group_episodes": "/tv/episode_group/%s",
}
def details(self, tv_id, append_to_response="videos,trailers,images,credits,translations"):
@@ -130,6 +131,17 @@ class TV(TMDb):
key="results"
)
def group_episodes(self, group_id):
"""
查询剧集组所有剧集
:param group_id: int
:return:
"""
return self._request_obj(
self._urls["group_episodes"] % group_id,
key="groups"
)
def external_ids(self, tv_id):
"""
Get the external ids for a TV show.

View File

@@ -36,17 +36,22 @@ class TransmissionModule(_ModuleBase):
if not self.transmission.is_inactive():
self.transmission = Transmission()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param content: 种子文件地址或者磁力链接
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类TR中未使用
:return: 种子Hash
"""
if not content:
return
if isinstance(content, Path) and not content.exists():
return None, f"种子文件不存在:{content}"
# 如果要选择文件则先暂停
is_paused = True if episodes else False
# 标签
@@ -55,13 +60,15 @@ class TransmissionModule(_ModuleBase):
else:
labels = None
# 添加任务
torrent = self.transmission.add_torrent(content=torrent_path.read_bytes(),
download_dir=str(download_dir),
is_paused=is_paused,
labels=labels,
cookie=cookie)
torrent = self.transmission.add_torrent(
content=content.read_bytes() if isinstance(content, Path) else content,
download_dir=str(download_dir),
is_paused=is_paused,
labels=labels,
cookie=cookie
)
if not torrent:
return None, f"添加种子任务失败:{torrent_path}"
return None, f"添加种子任务失败:{content}"
else:
torrent_hash = torrent.hashString
if is_paused:
@@ -105,7 +112,7 @@ class TransmissionModule(_ModuleBase):
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
tags=torrent.labels
tags=",".join(torrent.labels or [])
))
elif status == TorrentStatus.TRANSFER:
# 获取已完成且未整理的
@@ -124,7 +131,7 @@ class TransmissionModule(_ModuleBase):
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
tags=torrent.labels
tags=",".join(torrent.labels or [])
))
elif status == TorrentStatus.DOWNLOADING:
# 获取正在下载的任务

View File

@@ -288,3 +288,58 @@ class Transmission(metaclass=Singleton):
except Exception as err:
logger.error(f"添加Tracker出错{err}")
return False
def change_torrent(self,
hash_string: str,
upload_limit=None,
download_limit=None,
ratio_limit=None,
seeding_time_limit=None):
"""
设置种子
:param hash_string: ID
:param upload_limit: 上传限速 Kb/s
:param download_limit: 下载限速 Kb/s
:param ratio_limit: 分享率限制
:param seeding_time_limit: 做种时间限制
:return: bool
"""
if not hash_string:
return False
if upload_limit:
uploadLimited = True
uploadLimit = int(upload_limit)
else:
uploadLimited = False
uploadLimit = 0
if download_limit:
downloadLimited = True
downloadLimit = int(download_limit)
else:
downloadLimited = False
downloadLimit = 0
if ratio_limit:
seedRatioMode = 1
seedRatioLimit = round(float(ratio_limit), 2)
else:
seedRatioMode = 2
seedRatioLimit = 0
if seeding_time_limit:
seedIdleMode = 1
seedIdleLimit = int(seeding_time_limit)
else:
seedIdleMode = 2
seedIdleLimit = 0
try:
self.trc.change_torrent(ids=hash_string,
uploadLimited=uploadLimited,
uploadLimit=uploadLimit,
downloadLimited=downloadLimited,
downloadLimit=downloadLimit,
seedRatioMode=seedRatioMode,
seedRatioLimit=seedRatioLimit,
seedIdleMode=seedIdleMode,
seedIdleLimit=seedIdleLimit)
return True
except Exception as err:
logger.error(f"设置种子出错:{err}")
return False
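The limit-to-mode translation in `change_torrent` follows the Transmission RPC convention where mode 1 means "use this torrent's own limit" and mode 2 means "seed regardless". A small sketch of that mapping (names match the RPC arguments, the function itself is illustrative):

```python
def seed_limit_args(ratio_limit=None, seeding_time_limit=None) -> dict:
    """Translate optional seeding limits into transmission-rpc arguments.

    seedRatioMode / seedIdleMode: 1 = apply the per-torrent limit,
    2 = unlimited, mirroring the branches in change_torrent.
    """
    args = {}
    if ratio_limit:
        args.update(seedRatioMode=1,
                    seedRatioLimit=round(float(ratio_limit), 2))
    else:
        args.update(seedRatioMode=2, seedRatioLimit=0)
    if seeding_time_limit:
        args.update(seedIdleMode=1,
                    seedIdleLimit=int(seeding_time_limit))
    else:
        args.update(seedIdleMode=2, seedIdleLimit=0)
    return args
```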

View File

@@ -222,6 +222,7 @@ class WeChat(metaclass=Singleton):
torrent_title = f"{index}.【{torrent.site_name}" \
f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group} " \
f"{StringUtils.str_filesize(torrent.size)} " \
f"{torrent.volume_factor} " \

View File

@@ -5,7 +5,7 @@ from typing import Any, List, Dict, Tuple
from app.chain import ChainBase
from app.core.config import settings
from app.core.event import EventManager
from app.db import ScopedSession
from app.db import SessionFactory
from app.db.models import Base
from app.db.plugindata_oper import PluginDataOper
from app.db.systemconfig_oper import SystemConfigOper
@@ -37,7 +37,7 @@ class _PluginBase(metaclass=ABCMeta):
def __init__(self):
# 数据库连接
self.db = ScopedSession()
self.db = SessionFactory()
# 插件数据
self.plugindata = PluginDataOper(self.db)
# 处理链
@@ -184,3 +184,10 @@ class _PluginBase(metaclass=ABCMeta):
channel=channel, mtype=mtype, title=title, text=text,
image=image, link=link, userid=userid
))
def close(self):
"""
关闭数据库连接
"""
if self.db:
self.db.close()

View File

@@ -14,6 +14,7 @@ from ruamel.yaml import CommentedMap
from app import schemas
from app.core.config import settings
from app.core.event import EventManager, eventmanager, Event
from app.db.models.site import Site
from app.helper.browser import PlaywrightHelper
from app.helper.cloudflare import under_challenge
from app.helper.module import ModuleHelper
@@ -31,13 +32,13 @@ class AutoSignIn(_PluginBase):
# 插件名称
plugin_name = "站点自动签到"
# 插件描述
plugin_desc = "自动模拟登录站点签到。"
plugin_desc = "自动模拟登录站点签到。"
# 插件图标
plugin_icon = "signin.png"
# 主题色
plugin_color = "#4179F4"
# 插件版本
plugin_version = "1.0"
plugin_version = "1.1"
# 插件作者
plugin_author = "thsrite"
# 作者主页
@@ -61,16 +62,15 @@ class AutoSignIn(_PluginBase):
# 配置属性
_enabled: bool = False
_cron: str = ""
_sign_type: str = ""
_onlyonce: bool = False
_notify: bool = False
_queue_cnt: int = 5
_sign_sites: list = []
_login_sites: list = []
_retry_keyword = None
_clean: bool = False
_start_time: int = None
_end_time: int = None
_action: str = ""
def init_plugin(self, config: dict = None):
self.sites = SitesHelper()
@@ -86,11 +86,17 @@ class AutoSignIn(_PluginBase):
self._onlyonce = config.get("onlyonce")
self._notify = config.get("notify")
self._queue_cnt = config.get("queue_cnt") or 5
self._sign_sites = config.get("sign_sites")
self._sign_sites = config.get("sign_sites") or []
self._login_sites = config.get("login_sites") or []
self._retry_keyword = config.get("retry_keyword")
self._clean = config.get("clean")
self._sign_type = config.get("sign_type") or "sign"
self._action = "签到" if str(self._sign_type) == "sign" else "模拟登陆"
# 过滤掉已删除的站点
all_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
self._sign_sites = [site.get("id") for site in all_sites if site.get("id") in self._sign_sites]
self._login_sites = [site.get("id") for site in all_sites if site.get("id") in self._login_sites]
# 保存配置
self.__update_config()
# 加载模块
if self._enabled or self._onlyonce:
@@ -103,10 +109,10 @@ class AutoSignIn(_PluginBase):
# 立即运行一次
if self._onlyonce:
logger.info(f"站点自动{self._action}服务启动,立即运行一次")
logger.info("站点自动签到服务启动,立即运行一次")
self._scheduler.add_job(func=self.sign_in, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name=f"站点自动{self._action}")
name="站点自动签到")
# 关闭一次性开关
self._onlyonce = False
@@ -120,8 +126,8 @@ class AutoSignIn(_PluginBase):
if str(self._cron).strip().count(" ") == 4:
self._scheduler.add_job(func=self.sign_in,
trigger=CronTrigger.from_crontab(self._cron),
-                       name=f"站点自动{self._action}")
- logger.info(f"站点自动{self._action}服务启动,执行周期 {self._cron}")
+                       name="站点自动签到")
+ logger.info(f"站点自动签到服务启动,执行周期 {self._cron}")
else:
# 2.3/9-23
crons = str(self._cron).strip().split("/")
@@ -139,11 +145,11 @@ class AutoSignIn(_PluginBase):
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(str(cron).strip()),
-                       name=f"站点自动{self._action}")
+                       name="站点自动签到")
logger.info(
-     f"站点自动{self._action}服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{cron}小时执行一次")
+     f"站点自动签到服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{cron}小时执行一次")
else:
-     logger.error(f"站点自动{self._action}服务启动失败,周期格式错误")
+     logger.error("站点自动签到服务启动失败,周期格式错误")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误")
self._cron = ""
@@ -156,9 +162,9 @@ class AutoSignIn(_PluginBase):
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(str(self._cron).strip()),
-                       name=f"站点自动{self._action}")
+                       name="站点自动签到")
logger.info(
-     f"站点自动{self._action}服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
+     f"站点自动签到服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 推送实时消息
@@ -176,7 +182,7 @@ class AutoSignIn(_PluginBase):
for trigger in triggers:
self._scheduler.add_job(self.sign_in, "cron",
hour=trigger.hour, minute=trigger.minute,
-                       name=f"站点自动{self._action}")
+                       name="站点自动签到")
# 启动任务
if self._scheduler.get_jobs():
@@ -196,9 +202,9 @@ class AutoSignIn(_PluginBase):
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": self._sign_sites,
+ "login_sites": self._login_sites,
"retry_keyword": self._retry_keyword,
"clean": self._clean,
- "sign_type": self._sign_type,
}
)
@@ -239,8 +245,8 @@ class AutoSignIn(_PluginBase):
拼装插件配置页面需要返回两块数据1、页面配置2、数据结构
"""
# 站点的可选项
- site_options = [{"title": site.get("name"), "value": site.get("id")}
-                 for site in self.sites.get_indexers()]
+ site_options = [{"title": site.name, "value": site.id}
+                 for site in Site.list_order_by_pri(self.db)]
return [
{
'component': 'VForm',
@@ -307,7 +313,7 @@ class AutoSignIn(_PluginBase):
'component': 'VSwitch',
'props': {
'model': 'clean',
- 'label': '清理本日已签到',
+ 'label': '清理本日缓存',
}
}
]
@@ -321,27 +327,7 @@ class AutoSignIn(_PluginBase):
- 'component': 'VCol',
- 'props': {
-     'cols': 12,
-     'md': 6
- },
- 'content': [
-     {
-         'component': 'VSelect',
-         'props': {
-             'model': 'sign_type',
-             'label': '签到方式',
-             'items': [
-                 {'title': '签到', 'value': 'sign'},
-                 {'title': '登录', 'value': 'login'}
-             ]
-         }
-     }
- ]
- },
{
'component': 'VCol',
'props': {
'cols': 12,
- 'md': 6
+ 'md': 4
},
'content': [
{
@@ -358,7 +344,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
- 'md': 6
+ 'md': 4
},
'content': [
{
@@ -374,7 +360,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
- 'md': 6
+ 'md': 4
},
'content': [
{
@@ -409,6 +395,26 @@ class AutoSignIn(_PluginBase):
}
]
},
+ {
+     'component': 'VRow',
+     'content': [
+         {
+             'component': 'VCol',
+             'content': [
+                 {
+                     'component': 'VSelect',
+                     'props': {
+                         'chips': True,
+                         'multiple': True,
+                         'model': 'login_sites',
+                         'label': '登录站点',
+                         'items': site_options
+                     }
+                 }
+             ]
+         }
+     ]
+ },
{
'component': 'VRow',
'content': [
@@ -437,12 +443,12 @@ class AutoSignIn(_PluginBase):
], {
"enabled": False,
"notify": True,
- "sign_type": "sign",
"cron": "",
"onlyonce": False,
"clean": False,
"queue_cnt": 5,
"sign_sites": [],
+ "login_sites": [],
"retry_keyword": "错误|失败"
}
@@ -471,7 +477,7 @@ class AutoSignIn(_PluginBase):
{
'component': 'td',
'props': {
- 'class': 'whitespace-nowrap break-keep'
+ 'class': 'whitespace-nowrap break-keep text-high-emphasis'
},
'text': current_day
},
@@ -556,77 +562,98 @@ class AutoSignIn(_PluginBase):
if self._start_time and self._end_time:
if int(datetime.today().hour) < self._start_time or int(datetime.today().hour) > self._end_time:
logger.error(
-     f"当前时间 {int(datetime.today().hour)} 不在 {self._start_time}-{self._end_time} 范围内,暂不{self._action}")
+     f"当前时间 {int(datetime.today().hour)} 不在 {self._start_time}-{self._end_time} 范围内,暂不执行任务")
return
if event:
-     logger.info(f"收到命令,开始站点{self._action} ...")
+     logger.info("收到命令,开始站点签到 ...")
self.post_message(channel=event.event_data.get("channel"),
-                   title=f"开始站点{self._action} ...",
+                   title="开始站点签到 ...",
                  userid=event.event_data.get("user"))
+ if self._sign_sites:
+     self.__do(today=today, type="签到", do_sites=self._sign_sites, event=event)
+ if self._login_sites:
+     self.__do(today=today, type="登录", do_sites=self._login_sites, event=event)
+ def __do(self, today: datetime, type: str, do_sites: list, event: Event = None):
"""
签到逻辑
"""
yesterday = today - timedelta(days=1)
yesterday_str = yesterday.strftime('%Y-%m-%d')
# 删除昨天历史
- self.del_data(key=yesterday_str)
+ self.del_data(key=type + "-" + yesterday_str)
self.del_data(key=f"{yesterday.month}月{yesterday.day}日")
- # 查看今天有没有签到历史
+ # 查看今天有没有签到|登录历史
today = today.strftime('%Y-%m-%d')
- today_history = self.get_data(key=today)
+ today_history = self.get_data(key=type + "-" + today)
- # 查询签到站点
- sign_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
+ # 查询所有站点
+ all_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
# 过滤掉没有选中的站点
- if self._sign_sites:
-     sign_sites = [site for site in sign_sites if site.get("id") in self._sign_sites]
+ if do_sites:
+     do_sites = [site for site in all_sites if site.get("id") in do_sites]
+ else:
+     do_sites = all_sites
# 今日没数据
if not today_history or self._clean:
- logger.info(f"今日 {today} 未{self._action},开始{self._action}已选站点")
- # 过滤删除的站点
- self._sign_sites = [site.get("id") for site in sign_sites if site]
+ logger.info(f"今日 {today} 未{type},开始{type}已选站点")
if self._clean:
# 关闭开关
self._clean = False
else:
-     # 今天已签到,需要重试站点
-     retry_sites = today_history.get("retry")
-     # 今天已签到站点
-     already_sign_sites = today_history.get("sign")
+     # 需要重试站点
+     retry_sites = today_history.get("retry") or []
+     # 今天已签到|登录站点
+     already_sites = today_history.get("do") or []
-     # 今日未签到站点
-     no_sign_sites = [site for site in sign_sites if
-                      site.get("id") not in already_sign_sites or site.get("id") in retry_sites]
+     # 今日未签到|登录站点
+     no_sites = [site for site in do_sites if
+                 site.get("id") not in already_sites or site.get("id") in retry_sites]
-     if not no_sign_sites:
-         logger.info(f"今日 {today} 已{self._action},无重新{self._action}站点,本次任务结束")
+     if not no_sites:
+         logger.info(f"今日 {today} 已{type},无重新{type}站点,本次任务结束")
         return
-     # 签到站点 = 需要重试 + 今日未签
-     sign_sites = no_sign_sites
-     logger.info(f"今日 {today} 已{self._action},开始重试命中关键词站点")
+     # 任务站点 = 需要重试 + 今日未do
+     do_sites = no_sites
+     logger.info(f"今日 {today} 已{type},开始重试命中关键词站点")
- if not sign_sites:
-     logger.info(f"没有需要{self._action}的站点")
+ if not do_sites:
+     logger.info(f"没有需要{type}的站点")
return
# 执行签到
- logger.info(f"开始执行{self._action}任务 ...")
- if str(self._sign_type) == "sign":
-     with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
-         status = p.map(self.signin_site, sign_sites)
+ logger.info(f"开始执行{type}任务 ...")
+ if type == "签到":
+     with ThreadPool(min(len(do_sites), int(self._queue_cnt))) as p:
+         status = p.map(self.signin_site, do_sites)
else:
-     with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
-         status = p.map(self.login_site, sign_sites)
+     with ThreadPool(min(len(do_sites), int(self._queue_cnt))) as p:
+         status = p.map(self.login_site, do_sites)
if status:
- logger.info(f"站点{self._action}任务完成!")
+ logger.info(f"站点{type}任务完成!")
# 获取今天的日期
key = f"{datetime.now().month}{datetime.now().day}"
+ today_data = self.get_data(key)
+ if today_data:
+     if not isinstance(today_data, list):
+         today_data = [today_data]
+     for s in status:
+         today_data.append({
+             "site": s[0],
+             "status": s[1]
+         })
+ else:
+     today_data = [{
+         "site": s[0],
+         "status": s[1]
+     } for s in status]
# 保存数据
- self.save_data(key, [{
-     "site": s[0],
-     "status": s[1]
- } for s in status])
+ self.save_data(key, today_data)
# 命中重试词的站点id
retry_sites = []
@@ -674,13 +701,13 @@ class AutoSignIn(_PluginBase):
if not self._retry_keyword:
# 没设置重试关键词则重试已选站点
- retry_sites = self._sign_sites
- logger.debug(f"下次{self._action}重试站点 {retry_sites}")
+ retry_sites = self._sign_sites if type == "签到" else self._login_sites
+ logger.debug(f"下次{type}重试站点 {retry_sites}")
# 存入历史
- self.save_data(key=today,
+ self.save_data(key=type + "-" + today,
               value={
-                   "sign": self._sign_sites,
+                   "do": self._sign_sites if type == "签到" else self._login_sites,
"retry": retry_sites
})
@@ -692,21 +719,21 @@ class AutoSignIn(_PluginBase):
signin_message += retry_msg
signin_message = "\n".join([f'{s[0]}{s[1]}' for s in signin_message if s])
- self.post_message(title=f"站点自动{self._action}",
+ self.post_message(title=f"站点自动{type}",
                  mtype=NotificationType.SiteMessage,
-                   text=f"全部{self._action}数量: {len(list(self._sign_sites))} \n"
-                        f"本次{self._action}数量: {len(sign_sites)} \n"
-                        f"下次{self._action}数量: {len(retry_sites) if self._retry_keyword else 0} \n"
+                   text=f"全部{type}数量: {len(self._sign_sites if type == '签到' else self._login_sites)} \n"
+                        f"本次{type}数量: {len(do_sites)} \n"
+                        f"下次{type}数量: {len(retry_sites) if self._retry_keyword else 0} \n"
f"{signin_message}"
)
if event:
self.post_message(channel=event.event_data.get("channel"),
-                   title=f"站点{self._action}完成!", userid=event.event_data.get("user"))
+                   title=f"站点{type}完成!", userid=event.event_data.get("user"))
else:
-     logger.error(f"站点{self._action}任务失败!")
+     logger.error(f"站点{type}任务失败!")
    if event:
        self.post_message(channel=event.event_data.get("channel"),
-                         title=f"站点{self._action}任务失败!", userid=event.event_data.get("user"))
+                         title=f"站点{type}任务失败!", userid=event.event_data.get("user"))
# 保存配置
self.__update_config()
@@ -789,6 +816,10 @@ class AutoSignIn(_PluginBase):
return f"无法通过Cloudflare"
return f"仿真登录失败Cookie已失效"
else:
+ # 判断是否已签到
+ if re.search(r'已签|签到已得', page_source, re.IGNORECASE) \
+         or SiteUtils.is_checkin(page_source):
+     return f"签到成功"
return "仿真签到成功"
else:
res = RequestUtils(cookies=site_cookie,
@@ -918,30 +949,25 @@ class AutoSignIn(_PluginBase):
site_id = event.event_data.get("site_id")
config = self.get_config()
if config:
- sign_sites = config.get("sign_sites")
- if sign_sites:
-     if isinstance(sign_sites, str):
-         sign_sites = [sign_sites]
+ self._sign_sites = self.__remove_site_id(config.get("sign_sites") or [], site_id)
+ self._login_sites = self.__remove_site_id(config.get("login_sites") or [], site_id)
+ # 保存配置
+ self.__update_config()
-         # 删除对应站点
-         if site_id:
-             sign_sites = [site for site in sign_sites if int(site) != int(site_id)]
-         else:
-             # 清空
-             sign_sites = []
+ def __remove_site_id(self, do_sites, site_id):
+     if do_sites:
+         if isinstance(do_sites, str):
+             do_sites = [do_sites]
-         # 若无站点,则停止
-         if len(sign_sites) == 0:
-             self._enabled = False
+         # 删除对应站点
+         if site_id:
+             do_sites = [site for site in do_sites if int(site) != int(site_id)]
+         else:
+             # 清空
+             do_sites = []
-         # 保存配置
-         self.update_config(
-             {
-                 "enabled": self._enabled,
-                 "notify": self._notify,
-                 "cron": self._cron,
-                 "onlyonce": self._onlyonce,
-                 "queue_cnt": self._queue_cnt,
-                 "sign_sites": sign_sites
-             }
-         )
+     # 若无站点,则停止
+     if len(do_sites) == 0:
+         self._enabled = False
+     return do_sites
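The reworked flow above stores a per-day history record under a `"{任务}-{date}"` key with `do` and `retry` lists, and re-runs only sites that were never processed or whose result hit a retry keyword. A minimal standalone sketch of that selection rule (the `pick_sites` helper is hypothetical, not part of the plugin):

```python
def pick_sites(selected, history):
    """Return the site ids still to process today.

    selected: site ids configured for this run
    history: today's saved record, e.g. {"do": [...], "retry": [...]}, or None
    """
    if not history:
        # no record for today yet: process everything selected
        return list(selected)
    done = history.get("do") or []
    retry = history.get("retry") or []
    # redo sites never done, plus sites whose result matched a retry keyword
    return [s for s in selected if s not in done or s in retry]
```

With no history everything runs; once a record exists, only unfinished and retry-flagged sites are returned.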


@@ -1,12 +1,17 @@
import os
import subprocess
import time
import zipfile
from datetime import datetime, timedelta
from pathlib import Path
from typing import List, Tuple, Dict, Any
import pytz
import requests
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from python_hosts import Hosts, HostsEntry
from requests import Response
from app.core.config import settings
from app.log import logger
@@ -142,13 +147,35 @@ class CloudflareSpeedTest(_PluginBase):
if err_flag:
logger.info("正在进行CLoudflare CDN优选请耐心等待")
# 执行优选命令,-dd不测速
- cf_command = f'cd {self._cf_path} && chmod a+x {self._binary_name} && ./{self._binary_name} {self._additional_args} -o {self._result_file}' + (
-     f' -f {self._cf_ipv4}' if self._ipv4 else '') + (f' -f {self._cf_ipv6}' if self._ipv6 else '')
+ if SystemUtils.is_windows():
+     cf_command = f'cd \"{self._cf_path}\" && CloudflareST {self._additional_args} -o \"{self._result_file}\"' + (
+         f' -f \"{self._cf_ipv4}\"' if self._ipv4 else '') + (f' -f \"{self._cf_ipv6}\"' if self._ipv6 else '')
+ else:
+     cf_command = f'cd {self._cf_path} && chmod a+x {self._binary_name} && ./{self._binary_name} {self._additional_args} -o {self._result_file}' + (
+         f' -f {self._cf_ipv4}' if self._ipv4 else '') + (f' -f {self._cf_ipv6}' if self._ipv6 else '')
logger.info(f'正在执行优选命令 {cf_command}')
- os.system(cf_command)
+ if SystemUtils.is_windows():
+     process = subprocess.Popen(cf_command, shell=True)
+     # 执行命令后无法退出,采用异步和设置超时方案
+     # 设置超时时间为120秒
+     if cf_command.__contains__("-dd"):
+         time.sleep(120)
+     else:
+         time.sleep(600)
+     # 如果没有在超时时间内完成任务,那么杀死该进程
+     if process.poll() is None:
+         os.system('taskkill /F /IM CloudflareST.exe')
+ else:
+     os.system(cf_command)
# 获取优选后最优ip
- best_ip = SystemUtils.execute("sed -n '2,1p' " + self._result_file + " | awk -F, '{print $1}'")
+ if SystemUtils.is_windows():
+     powershell_command = f"powershell.exe -Command \"Get-Content \'{self._result_file}\' | Select-Object -Skip 1 -First 1 | Write-Output\""
+     logger.info(f'正在执行powershell命令 {powershell_command}')
+     best_ip = SystemUtils.execute(powershell_command)
+     best_ip = best_ip.split(',')[0]
+ else:
+     best_ip = SystemUtils.execute("sed -n '2,1p' " + self._result_file + " | awk -F, '{print $1}'")
logger.info(f"\n获取到最优ip==>[{best_ip}]")
# 替换自定义Hosts插件数据库hosts
@@ -246,7 +273,10 @@ class CloudflareSpeedTest(_PluginBase):
# 是否重新安装
if self._re_install:
install_flag = True
- os.system(f'rm -rf {self._cf_path}')
+ if SystemUtils.is_windows():
+     os.system(f'rd /s /q \"{self._cf_path}\"')
+ else:
+     os.system(f'rm -rf {self._cf_path}')
logger.info(f'删除CloudflareSpeedTest目录 {self._cf_path},开始重新安装')
# 判断目录是否存在
@@ -277,7 +307,8 @@ class CloudflareSpeedTest(_PluginBase):
# 重装后数据库有版本数据,但是本地没有则重装
if not install_flag and release_version == self._version and not Path(
-     f'{self._cf_path}/{self._binary_name}').exists():
+     f'{self._cf_path}/{self._binary_name}').exists() and not Path(
+     f'{self._cf_path}/CloudflareST.exe').exists():
logger.warn(f"未检测到CloudflareSpeedTest本地版本重新安装")
install_flag = True
@@ -287,9 +318,11 @@ class CloudflareSpeedTest(_PluginBase):
# 检查环境、安装
if SystemUtils.is_windows():
- # todo
- logger.error(f"CloudflareSpeedTest暂不支持windows平台")
- return False, None
+ # windows
+ cf_file_name = 'CloudflareST_windows_amd64.zip'
+ download_url = f'{self._release_prefix}/{release_version}/{cf_file_name}'
+ return self.__os_install(download_url, cf_file_name, release_version,
+                          f"ditto -V -x -k --sequesterRsrc {self._cf_path}/{cf_file_name} {self._cf_path}")
elif SystemUtils.is_macos():
# mac
uname = SystemUtils.execute('uname -m')
@@ -317,14 +350,31 @@ class CloudflareSpeedTest(_PluginBase):
proxies = settings.PROXY
https_proxy = proxies.get("https") if proxies and proxies.get("https") else None
if https_proxy:
- os.system(
-     f'wget -P {self._cf_path} --no-check-certificate -e use_proxy=yes -e https_proxy={https_proxy} {download_url}')
+ if SystemUtils.is_windows():
+     self.__get_windows_cloudflarest(download_url, proxies)
+ else:
+     os.system(
+         f'wget -P {self._cf_path} --no-check-certificate -e use_proxy=yes -e https_proxy={https_proxy} {download_url}')
else:
-     os.system(f'wget -P {self._cf_path} https://ghproxy.com/{download_url}')
+     if SystemUtils.is_windows():
+         self.__get_windows_cloudflarest(download_url, proxies)
+     else:
+         os.system(f'wget -P {self._cf_path} https://ghproxy.com/{download_url}')
# 判断是否下载好安装包
if Path(f'{self._cf_path}/{cf_file_name}').exists():
try:
+ if SystemUtils.is_windows():
+     with zipfile.ZipFile(f'{self._cf_path}/{cf_file_name}', 'r') as zip_ref:
+         # 解压ZIP文件中的所有文件到指定目录
+         zip_ref.extractall(self._cf_path)
+     if Path(f'{self._cf_path}\\CloudflareST.exe').exists():
+         logger.info(f"CloudflareSpeedTest安装成功,当前版本:{release_version}")
+         return True, release_version
+     else:
+         logger.error(f"CloudflareSpeedTest安装失败,请检查")
+         os.system(f'rd /s /q \"{self._cf_path}\"')
+         return False, None
# 解压
os.system(f'{unzip_command}')
# 删除压缩包
@@ -338,23 +388,42 @@ class CloudflareSpeedTest(_PluginBase):
return False, None
except Exception as err:
# 如果升级失败但是有可执行文件CloudflareST则可继续运行反之停止
- if Path(f'{self._cf_path}/{self._binary_name}').exists():
+ if Path(f'{self._cf_path}/{self._binary_name}').exists() or \
+         Path(f'{self._cf_path}\\CloudflareST.exe').exists():
logger.error(f"CloudflareSpeedTest安装失败{str(err)},继续使用现版本运行")
return True, None
else:
logger.error(f"CloudflareSpeedTest安装失败{str(err)},无可用版本,停止运行")
- os.removedirs(self._cf_path)
+ if SystemUtils.is_windows():
+     os.system(f'rd /s /q \"{self._cf_path}\"')
+ else:
+     os.removedirs(self._cf_path)
return False, None
else:
# 如果升级失败但是有可执行文件CloudflareST则可继续运行反之停止
- if Path(f'{self._cf_path}/{self._binary_name}').exists():
+ if Path(f'{self._cf_path}/{self._binary_name}').exists() or \
+         Path(f'{self._cf_path}\\CloudflareST.exe').exists():
logger.warn(f"CloudflareSpeedTest安装失败存在可执行版本继续运行")
return True, None
else:
logger.error(f"CloudflareSpeedTest安装失败无可用版本停止运行")
- os.removedirs(self._cf_path)
+ if SystemUtils.is_windows():
+     os.system(f'rd /s /q \"{self._cf_path}\"')
+ else:
+     os.removedirs(self._cf_path)
return False, None
+ def __get_windows_cloudflarest(self, download_url, proxies):
+     response = Response()
+     try:
+         response = requests.get(download_url, stream=True, proxies=proxies if proxies else None)
+     except requests.exceptions.RequestException as e:
+         logger.error(f"CloudflareSpeedTest下载失败:{str(e)}")
+     if response.status_code == 200:
+         with open(f'{self._cf_path}\\CloudflareST_windows_amd64.zip', 'wb') as file:
+             for chunk in response.iter_content(chunk_size=8192):
+                 file.write(chunk)
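The diff extracts the best IP with `sed`/`awk` on Linux and a PowerShell pipeline on Windows. The same "second CSV line, first column" rule can be done portably in pure Python; a sketch under that assumption (the helper name is illustrative, not part of the plugin):

```python
import csv


def best_ip_from_result(result_file: str) -> str:
    """Return the first column of the first data row of a CloudflareST
    result CSV, skipping the header line; empty string if no data."""
    with open(result_file, newline='', encoding='utf-8') as f:
        rows = csv.reader(f)
        next(rows, None)          # skip the header line
        first = next(rows, None)  # first result row (the best IP)
        return first[0] if first else ""
```

Using the csv module also stays correct if a field ever contains a quoted comma, which the `awk -F,` split would mishandle.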
@staticmethod
def __get_release_version():
"""


@@ -69,9 +69,6 @@ class DirMonitor(_PluginBase):
# 可使用的用户级别
auth_level = 1
- # 已处理的文件清单
- _synced_files = []
# 私有属性
_scheduler = None
transferhis = None
@@ -125,7 +122,14 @@ class DirMonitor(_PluginBase):
continue
# 存储目的目录
- paths = mon_path.split(":")
+ if SystemUtils.is_windows():
+     if mon_path.count(":") > 1:
+         paths = [mon_path.split(":")[0] + ":" + mon_path.split(":")[1],
+                  mon_path.split(":")[2] + ":" + mon_path.split(":")[3]]
+     else:
+         paths = [mon_path]
+ else:
+     paths = mon_path.split(":")
target_path = None
if len(paths) > 1:
mon_path = paths[0]
@@ -193,9 +197,8 @@ class DirMonitor(_PluginBase):
# 全程加锁
with lock:
- if event_path not in self._synced_files:
-     self._synced_files.append(event_path)
- else:
+ transfer_history = self.transferhis.get_by_src(event_path)
+ if transfer_history:
logger.debug("文件已处理过:%s" % event_path)
return
@@ -220,7 +223,7 @@ class DirMonitor(_PluginBase):
for keyword in transfer_exclude_words:
if not keyword:
continue
- if keyword and re.findall(keyword, event_path):
+ if keyword and re.search(r"%s" % keyword, event_path, re.IGNORECASE):
logger.info(f"{event_path} 命中整理屏蔽词 {keyword},不处理")
return
@@ -264,6 +267,13 @@ class DirMonitor(_PluginBase):
meta=file_meta
)
return
+ # 如果未开启新增已入库媒体是否跟随TMDB信息变化,则根据tmdbid查询之前的title
+ if not settings.SCRAP_FOLLOW_TMDB:
+     transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
+                                                            mtype=mediainfo.type.value)
+     if transfer_history:
+         mediainfo.title = transfer_history.title
logger.info(f"{file_path.name} 识别为:{mediainfo.type.value} {mediainfo.title_year}")
# 更新媒体图片
@@ -312,8 +322,10 @@ class DirMonitor(_PluginBase):
transferinfo=transferinfo
)
- # 刮削元数据
- self.chain.scrape_metadata(path=transferinfo.target_path, mediainfo=mediainfo)
+ # 刮削单个文件
+ if settings.SCRAP_METADATA:
+     self.chain.scrape_metadata(path=transferinfo.target_path,
+                                mediainfo=mediainfo)
"""
{
@@ -375,7 +387,8 @@ class DirMonitor(_PluginBase):
self._medias[mediainfo.title_year + " " + meta.season] = media_list
# 汇总刷新媒体库
- self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
+ if settings.REFRESH_MEDIASERVER:
+     self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
@@ -481,149 +494,149 @@ class DirMonitor(_PluginBase):
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '监控模式',
'items': [
{'title': '兼容模式', 'value': 'compatibility'},
{'title': '性能模式', 'value': 'fast'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'transfer_type',
'label': '转移方式',
'items': [
{'title': '移动', 'value': 'move'},
{'title': '复制', 'value': 'copy'},
{'title': '硬链接', 'value': 'link'},
{'title': '软链接', 'value': 'softlink'}
]
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'monitor_dirs',
'label': '监控目录',
'rows': 5,
'placeholder': '每一行一个目录,支持两种配置方式:\n'
'监控目录\n'
'监控目录:转移目的目录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'exclude_keywords',
'label': '排除关键词',
'rows': 2,
'placeholder': '每一行一个关键词'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": False,
"mode": "fast",
"transfer_type": settings.TRANSFER_TYPE,
"monitor_dirs": "",
"exclude_keywords": ""
}
def get_page(self) -> List[dict]:
pass
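The Windows branch added above splits a `监控目录:转移目的目录` line while keeping drive letters intact (`D:\src:E:\dst` contains three colons). A standalone sketch of that rule, assuming the same `monitor[:target]` convention (the helper name is illustrative):

```python
def split_monitor_conf(line: str, is_windows: bool):
    """Split a 'monitor_dir[:target_dir]' config line into paths.

    On Windows a drive letter already contains ':', so 'D:\\src:E:\\dst'
    must be re-paired into two paths instead of four fragments.
    """
    if is_windows:
        if line.count(":") > 1:
            parts = line.split(":")
            # re-pair 'D' + '\src' and 'E' + '\dst'
            return [parts[0] + ":" + parts[1], parts[2] + ":" + parts[3]]
        # a bare 'D:\src' entry: single monitored directory
        return [line]
    return line.split(":")
```

Like the plugin code, this assumes at most one target directory per line; a POSIX line simply splits on the colon.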


@@ -1,3 +1,4 @@
import os
import re
from datetime import datetime, timedelta
from threading import Event
@@ -11,6 +12,9 @@ from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.helper.sites import SitesHelper
from app.core.event import eventmanager
from app.db.models.site import Site
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.modules.qbittorrent import Qbittorrent
@@ -18,6 +22,7 @@ from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.plugins.iyuuautoseed.iyuu_helper import IyuuHelper
from app.schemas import NotificationType
from app.schemas.types import EventType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -60,6 +65,7 @@ class IYUUAutoSeed(_PluginBase):
_sites = []
_notify = False
_nolabels = None
_nopaths = None
_clearcache = False
# 退出事件
_event = Event()
@@ -98,14 +104,20 @@ class IYUUAutoSeed(_PluginBase):
self._cron = config.get("cron")
self._token = config.get("token")
self._downloaders = config.get("downloaders")
- self._sites = config.get("sites")
+ self._sites = config.get("sites") or []
self._notify = config.get("notify")
self._nolabels = config.get("nolabels")
self._nopaths = config.get("nopaths")
self._clearcache = config.get("clearcache")
self._permanent_error_caches = config.get("permanent_error_caches") or []
self._error_caches = [] if self._clearcache else config.get("error_caches") or []
self._success_caches = [] if self._clearcache else config.get("success_caches") or []
+ # 过滤掉已删除的站点
+ self._sites = [site.get("id") for site in self.sites.get_indexers() if
+                not site.get("public") and site.get("id") in self._sites]
self.__update_config()
# 停止现有任务
self.stop_service()
@@ -163,8 +175,8 @@ class IYUUAutoSeed(_PluginBase):
拼装插件配置页面需要返回两块数据1、页面配置2、数据结构
"""
# 站点的可选项
- site_options = [{"title": site.get("name"), "value": site.get("id")}
-                 for site in self.sites.get_indexers()]
+ site_options = [{"title": site.name, "value": site.id}
+                 for site in Site.list_order_by_pri(self.db)]
return [
{
'component': 'VForm',
@@ -242,22 +254,6 @@ class IYUUAutoSeed(_PluginBase):
}
]
},
- {
-     'component': 'VCol',
-     'props': {
-         'cols': 12
-     },
-     'content': [
-         {
-             'component': 'VTextField',
-             'props': {
-                 'model': 'nolabels',
-                 'label': '不辅种标签',
-                 'placeholder': '使用,分隔多个标签'
-             }
-         }
-     ]
- }
]
},
{
@@ -309,6 +305,44 @@ class IYUUAutoSeed(_PluginBase):
}
]
},
+ {
+     'component': 'VRow',
+     'content': [
+         {
+             'component': 'VCol',
+             'props': {
+                 'cols': 12
+             },
+             'content': [
+                 {
+                     'component': 'VTextField',
+                     'props': {
+                         'model': 'nolabels',
+                         'label': '不辅种标签',
+                         'placeholder': '使用,分隔多个标签'
+                     }
+                 }
+             ]
+         },
+         {
+             'component': 'VCol',
+             'props': {
+                 'cols': 12
+             },
+             'content': [
+                 {
+                     'component': 'VTextarea',
+                     'props': {
+                         'model': 'nopaths',
+                         'label': '不辅种数据文件目录',
+                         'rows': 3,
+                         'placeholder': '每一行一个目录'
+                     }
+                 }
+             ]
+         }
+     ]
+ },
{
'component': 'VRow',
'content': [
@@ -357,6 +391,7 @@ class IYUUAutoSeed(_PluginBase):
"token": "",
"downloaders": [],
"sites": [],
"nopaths": "",
"nolabels": ""
}
@@ -374,6 +409,7 @@ class IYUUAutoSeed(_PluginBase):
"sites": self._sites,
"notify": self._notify,
"nolabels": self._nolabels,
"nopaths": self._nopaths,
"success_caches": self._success_caches,
"error_caches": self._error_caches,
"permanent_error_caches": self._permanent_error_caches
@@ -397,6 +433,7 @@ class IYUUAutoSeed(_PluginBase):
if not self.iyuuhelper:
return
logger.info("开始辅种任务 ...")
# 计数器初始化
self.total = 0
self.realtotal = 0
@@ -426,13 +463,25 @@ class IYUUAutoSeed(_PluginBase):
logger.info(f"种子 {hash_str} 辅种失败且已缓存,跳过 ...")
continue
save_path = self.__get_save_path(torrent, downloader)
+ if self._nopaths and save_path:
+     # 过滤不需要辅种的路径
+     nopath_skip = False
+     for nopath in self._nopaths.split('\n'):
+         if os.path.normpath(save_path).startswith(os.path.normpath(nopath)):
+             logger.info(f"种子 {hash_str} 保存路径 {save_path} 不需要辅种,跳过 ...")
+             nopath_skip = True
+             break
+     if nopath_skip:
+         continue
# 获取种子标签
torrent_labels = self.__get_label(torrent, downloader)
if torrent_labels and self._nolabels:
is_skip = False
for label in self._nolabels.split(','):
if label in torrent_labels:
-                 logger.info(f"种子 {hash_str} 含有不转移标签 {label},跳过 ...")
+                 logger.info(f"种子 {hash_str} 含有不辅种标签 {label},跳过 ...")
is_skip = True
break
if is_skip:
@@ -671,6 +720,15 @@ class IYUUAutoSeed(_PluginBase):
"info_hash": "a444850638e7a6f6220e2efdde94099c53358159"
}
"""
+ def __is_special_site(url):
+     """
+     判断是否为特殊站点,是否需要添加https
+     """
+     if "hdsky.me" in url:
+         return False
+     return True
self.total += 1
# 获取种子站点及下载地址模板
site_url, download_page = self.iyuuhelper.get_torrent_url(seed.get("sid"))
@@ -715,10 +773,11 @@ class IYUUAutoSeed(_PluginBase):
self.cached += 1
return False
# 强制使用Https
- if "?" in torrent_url:
-     torrent_url += "&https=1"
- else:
-     torrent_url += "?https=1"
+ if __is_special_site(torrent_url):
+     if "?" in torrent_url:
+         torrent_url += "&https=1"
+     else:
+         torrent_url += "?https=1"
# 下载种子文件
_, content, _, _, error_msg = self.torrent.download_torrent(
url=torrent_url,
@@ -870,6 +929,9 @@ class IYUUAutoSeed(_PluginBase):
"""
从详情页面获取下载链接
"""
+ if not site.get('url'):
+     logger.warn(f"站点 {site.get('name')} 未获取站点地址,无法获取种子下载链接")
+     return None
try:
page_url = f"{site.get('url')}details.php?id={seed.get('torrent_id')}&hit=1"
logger.info(f"正在获取种子下载链接:{page_url} ...")
@@ -922,3 +984,31 @@ class IYUUAutoSeed(_PluginBase):
self._scheduler = None
except Exception as e:
print(str(e))
+ @eventmanager.register(EventType.SiteDeleted)
+ def site_deleted(self, event):
+     """
+     删除对应站点选中
+     """
+     site_id = event.event_data.get("site_id")
+     config = self.get_config()
+     if config:
+         sites = config.get("sites")
+         if sites:
+             if isinstance(sites, str):
+                 sites = [sites]
+             # 删除对应站点
+             if site_id:
+                 sites = [site for site in sites if int(site) != int(site_id)]
+             else:
+                 # 清空
+                 sites = []
+             # 若无站点,则停止
+             if len(sites) == 0:
+                 self._enabled = False
+             self._sites = sites
+             # 保存配置
+             self.__update_config()
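The `https=1` forcing above now skips sites that cannot take the parameter. A minimal sketch of that query-string append, with the hdsky.me exclusion mirroring the diff (the helper name is illustrative):

```python
def force_https_param(torrent_url: str) -> str:
    """Append https=1 to a torrent download URL, unless the site
    is known not to support the parameter (hdsky.me in the diff)."""
    if "hdsky.me" in torrent_url:
        return torrent_url
    # choose the separator based on whether a query string already exists
    sep = "&" if "?" in torrent_url else "?"
    return torrent_url + sep + "https=1"
```

Appending rather than parsing keeps the behavior identical to the plugin's string-level logic.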


@@ -8,8 +8,9 @@ from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.nfo import NfoReader
from app.log import logger
from app.plugins import _PluginBase
@@ -41,12 +42,14 @@ class LibraryScraper(_PluginBase):
user_level = 1
# 私有属性
transferhis = None
_scheduler = None
_scraper = None
# 限速开关
_enabled = False
_onlyonce = False
_cron = None
_mode = ""
_scraper_paths = ""
_exclude_paths = ""
# 退出事件
@@ -58,6 +61,7 @@ class LibraryScraper(_PluginBase):
self._enabled = config.get("enabled")
self._onlyonce = config.get("onlyonce")
self._cron = config.get("cron")
self._mode = config.get("mode") or ""
self._scraper_paths = config.get("scraper_paths") or ""
self._exclude_paths = config.get("exclude_paths") or ""
@@ -66,6 +70,7 @@ class LibraryScraper(_PluginBase):
# 启动定时任务 & 立即运行一次
if self._enabled or self._onlyonce:
self.transferhis = TransferHistoryOper(self.db)
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
logger.info(f"媒体库刮削服务启动,周期:{self._cron}")
@@ -92,6 +97,7 @@ class LibraryScraper(_PluginBase):
"onlyonce": False,
"enabled": self._enabled,
"cron": self._cron,
"mode": self._mode,
"scraper_paths": self._scraper_paths,
"exclude_paths": self._exclude_paths
})
@@ -155,6 +161,28 @@ class LibraryScraper(_PluginBase):
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '刮削模式',
'items': [
{'title': '仅刮削缺失元数据和图片', 'value': ''},
{'title': '覆盖所有元数据和图片', 'value': 'force_all'},
{'title': '覆盖所有元数据', 'value': 'force_nfo'},
{'title': '覆盖所有图片', 'value': 'force_image'},
]
}
}
]
},
{
'component': 'VCol',
'props': {
@@ -189,7 +217,7 @@ class LibraryScraper(_PluginBase):
'model': 'scraper_paths',
'label': '削刮路径',
'rows': 5,
- 'placeholder': '每一行一个目录'
+ 'placeholder': '每一行一个目录,需配置到媒体文件的上级目录,即开了二级分类时需要配置到二级分类目录'
}
}
]
@@ -223,6 +251,7 @@ class LibraryScraper(_PluginBase):
], {
"enabled": False,
"cron": "0 0 */7 * *",
"mode": "",
"scraper_paths": "",
"err_hosts": ""
}
@@ -236,76 +265,132 @@ class LibraryScraper(_PluginBase):
"""
if not self._scraper_paths:
return
# 排除目录
exclude_paths = self._exclude_paths.split("\n")
# 已选择的目录
paths = self._scraper_paths.split("\n")
for path in paths:
if not path:
continue
- if not Path(path).exists():
+ scraper_path = Path(path)
+ if not scraper_path.exists():
logger.warning(f"媒体库刮削路径不存在:{path}")
continue
logger.info(f"开始刮削媒体库:{path} ...")
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 刮削目录
self.__scrape_dir(Path(path))
logger.info(f"媒体库刮削完成")
# 遍历一层文件夹
for sub_path in scraper_path.iterdir():
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 排除目录
exclude_flag = False
for exclude_path in exclude_paths:
try:
if sub_path.is_relative_to(Path(exclude_path)):
exclude_flag = True
break
except Exception as err:
print(str(err))
if exclude_flag:
logger.debug(f"{sub_path} 在排除目录中,跳过 ...")
continue
# 开始刮削目录
if sub_path.is_dir():
# 判断目录是不是媒体目录
dir_meta = MetaInfo(sub_path.name)
if not dir_meta.name or not dir_meta.year:
logger.warn(f"{sub_path} 可能不是媒体目录,请检查刮削目录配置,跳过 ...")
continue
logger.info(f"开始刮削目录:{sub_path} ...")
self.__scrape_dir(path=sub_path, dir_meta=dir_meta)
logger.info(f"目录 {sub_path} 刮削完成")
logger.info(f"媒体库 {path} 刮削完成")
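The loop above skips any first-level subdirectory that falls under a configured exclude path via `Path.is_relative_to`. A standalone sketch of that check (Python ≥ 3.9; the paths are hypothetical):

```python
from pathlib import Path

def is_excluded(sub_path: Path, exclude_paths: list[str]) -> bool:
    """Return True if sub_path lives under any of the configured exclude paths."""
    for exclude in exclude_paths:
        if not exclude:
            continue  # skip blank lines produced by splitting the textarea config
        try:
            # Path.is_relative_to was added in Python 3.9
            if sub_path.is_relative_to(Path(exclude)):
                return True
        except (ValueError, OSError):
            # mirror the plugin's defensive handling of malformed entries
            continue
    return False
```

Splitting the `exclude_paths` textarea on `"\n"` yields empty strings for blank lines, which is why the sketch (and arguably the plugin) should skip them before comparing.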
def __scrape_dir(self, path: Path):
def __scrape_dir(self, path: Path, dir_meta: MetaBase):
"""
刮削一个目录
刮削一个目录,该目录必须是媒体文件目录
"""
exclude_paths = self._exclude_paths.split("\n")
# 媒体信息
mediainfo = None
# 查找目录下所有的文件
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
for file in files:
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 排除目录
exclude_flag = False
for exclude_path in exclude_paths:
if file.is_relative_to(Path(exclude_path)):
exclude_flag = True
break
if exclude_flag:
logger.debug(f"{file} 在排除目录中,跳过 ...")
continue
# 识别媒体文件
meta_info = MetaInfo(file.name)
if meta_info.type == MediaType.TV:
dir_info = MetaInfo(file.parent.parent.name)
else:
dir_info = MetaInfo(file.parent.name)
meta_info.merge(dir_info)
# 优先读取本地nfo文件
tmdbid = None
if meta_info.type == MediaType.MOVIE:
# 电影
movie_nfo = file.parent / "movie.nfo"
if movie_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(movie_nfo)
file_nfo = file.with_suffix(".nfo")
if not tmdbid and file_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(file_nfo)
else:
# 电视剧
tv_nfo = file.parent.parent / "tvshow.nfo"
if tv_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(tv_nfo)
if tmdbid:
logger.info(f"读取到本地nfo文件的tmdbid:{tmdbid}")
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(tmdbid=tmdbid, mtype=meta_info.type)
else:
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(meta=meta_info)
if not mediainfo:
logger.warn(f"未识别到媒体信息:{file}")
continue
# 开始刮削
self.chain.scrape_metadata(path=file, mediainfo=mediainfo)
# 识别元数据
meta_info = MetaInfo(file.stem)
# 合并
meta_info.merge(dir_meta)
# 是否刮削
scrap_metadata = settings.SCRAP_METADATA
# 没有媒体信息或者名字出现变化时,需要重新识别
if not mediainfo \
or meta_info.name != dir_meta.name:
# 优先读取本地nfo文件
tmdbid = None
if meta_info.type == MediaType.MOVIE:
# 电影
movie_nfo = file.parent / "movie.nfo"
if movie_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(movie_nfo)
file_nfo = file.with_suffix(".nfo")
if not tmdbid and file_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(file_nfo)
else:
# 电视剧
tv_nfo = file.parent.parent / "tvshow.nfo"
if tv_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(tv_nfo)
if tmdbid:
# 按TMDBID识别
logger.info(f"读取到本地nfo文件的tmdbid:{tmdbid}")
mediainfo = self.chain.recognize_media(tmdbid=tmdbid, mtype=meta_info.type)
else:
# 按名称识别
mediainfo = self.chain.recognize_media(meta=meta_info)
if not mediainfo:
logger.warn(f"未识别到媒体信息:{file}")
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_history:
mediainfo.title = transfer_history.title
# 覆盖模式时提前删除nfo
if self._mode in ["force_all", "force_nfo"]:
scrap_metadata = True
nfo_files = SystemUtils.list_files(path, [".nfo"])
for nfo_file in nfo_files:
try:
logger.warn(f"删除nfo文件{nfo_file}")
nfo_file.unlink()
except Exception as err:
logger.error(str(err))
# 覆盖模式时,提前删除图片文件
if self._mode in ["force_all", "force_image"]:
scrap_metadata = True
image_files = SystemUtils.list_files(path, [".jpg", ".png"])
for image_file in image_files:
if ".actors" in str(image_file):
continue
try:
logger.warn(f"删除图片文件:{image_file}")
image_file.unlink()
except Exception as err:
logger.error(str(err))
# 刮削单个文件
if scrap_metadata:
self.chain.scrape_metadata(path=file, mediainfo=mediainfo)
@staticmethod
def __get_tmdbid_from_nfo(file_path: Path):
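`__get_tmdbid_from_nfo` (truncated in this view) reads a TMDB id out of a local nfo file so the plugin can skip name-based recognition. A minimal sketch, assuming Kodi-style `<uniqueid type="tmdb">` or a plain `<tmdbid>` element — the actual tag names handled by the plugin are not visible in this diff:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def get_tmdbid_from_nfo(file_path: Path):
    """Best-effort extraction of a TMDB id from a Kodi-style nfo file."""
    try:
        root = ET.parse(file_path).getroot()
    except ET.ParseError:
        return None
    # Kodi writes <uniqueid type="tmdb">603</uniqueid>
    for uid in root.findall("uniqueid"):
        if uid.get("type") == "tmdb" and uid.text and uid.text.isdigit():
            return int(uid.text)
    # some scrapers emit a plain <tmdbid> element instead
    node = root.find("tmdbid")
    if node is not None and node.text and node.text.isdigit():
        return int(node.text)
    return None
```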


@@ -22,7 +22,7 @@ from app.modules.qbittorrent import Qbittorrent
from app.modules.themoviedb.tmdbv3api import Episode
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.schemas.types import NotificationType, EventType
from app.schemas.types import NotificationType, EventType, MediaType
from app.utils.path_utils import PathUtils
@@ -36,7 +36,7 @@ class MediaSyncDel(_PluginBase):
# 主题色
plugin_color = "#ff1a1a"
# 插件版本
plugin_version = "1.0"
plugin_version = "1.1"
# 插件作者
plugin_author = "thsrite"
# 作者主页
@@ -57,6 +57,7 @@ class MediaSyncDel(_PluginBase):
_notify = False
_del_source = False
_exclude_path = None
_library_path = None
_transferhis = None
_downloadhis = None
qb = None
@@ -80,6 +81,7 @@ class MediaSyncDel(_PluginBase):
self._notify = config.get("notify")
self._del_source = config.get("del_source")
self._exclude_path = config.get("exclude_path")
self._library_path = config.get("library_path")
if self._enabled and str(self._sync_type) == "log":
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
@@ -106,13 +108,7 @@ class MediaSyncDel(_PluginBase):
定义远程控制命令
:return: 命令关键字、事件、描述、附带数据
"""
return [{
"cmd": "/sync_del",
"event": EventType.HistoryDeleted,
"desc": "媒体库同步删除",
"category": "管理",
"data": {}
}]
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
@@ -122,153 +118,177 @@ class MediaSyncDel(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'del_source',
'label': '删除源文件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'items': [
{'title': 'webhook', 'value': 'webhook'},
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '5位cron表达式,留空自动'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'exclude_path',
'label': '排除路径'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '同步方式分为webhook、日志同步和Scripter X。'
'webhook需要Emby4.8.0.45及以上开启媒体删除的webhook。'
'日志同步需要配置执行周期默认30分钟执行一次。'
'Scripter X方式需要emby安装并配置Scripter X插件无需配置执行周期。'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"del_source": False,
"sync_type": "webhook",
"cron": "*/30 * * * *",
"exclude_path": "",
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'del_source',
'label': '删除源文件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'items': [
{'title': 'webhook', 'value': 'webhook'},
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '5位cron表达式,留空自动'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'exclude_path',
'label': '排除路径'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'library_path',
'rows': '2',
'label': '媒体库路径',
'placeholder': '媒体服务器路径:MoviePilot路径一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '同步方式分为webhook、日志同步和Scripter X。'
'webhook需要Emby4.8.0.45及以上开启媒体删除的webhook'
'(建议使用媒体库刮削插件覆盖元数据重新刮削剧集路径)。'
'日志同步需要配置执行周期默认30分钟执行一次。'
'Scripter X方式需要emby安装并配置Scripter X插件无需配置执行周期。'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"del_source": False,
"library_path": "",
"sync_type": "webhook",
"cron": "*/30 * * * *",
"exclude_path": "",
}
def get_page(self) -> List[dict]:
"""
@@ -423,40 +443,21 @@ class MediaSyncDel(_PluginBase):
]
@eventmanager.register(EventType.WebhookMessage)
def sync_del_by_plugin_or_webhook(self, event):
def sync_del_by_webhook(self, event: Event):
"""
emby删除媒体库同步删除历史记录
Scripter X插件 webhook
webhook
"""
if not self._enabled:
if not self._enabled or str(self._sync_type) != "webhook":
return
event_data = event.event_data
event_type = event_data.event
if not event_type or (str(event_type) != 'media_del' and str(event_type) != 'library.deleted'):
# Emby Webhook event_type = library.deleted
if not event_type or str(event_type) != 'library.deleted':
return
# Scripter X插件 需要是否虚拟标识
if str(event_type) == 'media_del':
item_isvirtual = event_data.item_isvirtual
if not item_isvirtual:
logger.error("item_isvirtual参数未配置为防止误删除暂停插件运行")
self.update_config({
"enabled": False,
"del_source": self._del_source,
"exclude_path": self._exclude_path,
"notify": self._notify,
"cron": self._cron,
"sync_type": self._sync_type,
})
return
# 如果是虚拟item则直接return不进行删除
if item_isvirtual == 'True':
return
# 读取历史记录
history = self.get_data('history') or []
# 媒体类型
media_type = event_data.item_type
# 媒体名称
@@ -467,13 +468,76 @@ class MediaSyncDel(_PluginBase):
tmdb_id = event_data.tmdb_id
# 季数
season_num = event_data.season_id
if season_num and str(season_num).isdigit() and int(season_num) < 10:
season_num = f'0{season_num}'
# 集数
episode_num = event_data.episode_id
if episode_num and str(episode_num).isdigit() and int(episode_num) < 10:
episode_num = f'0{episode_num}'
self.__sync_del(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
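The webhook handler above left-pads single-digit season/episode numbers with a zero (`2` → `02`) so they match the `S02E02` keys used by the transfer history; `__get_transfer_his` does the same thing with `str.rjust(2, '0')`. Both are equivalent to `str.zfill`:

```python
def pad2(num) -> str:
    """Normalize a season/episode number to two digits, e.g. 2 -> '02'."""
    s = str(num)
    # leave non-numeric values (or None rendered as 'None') untouched,
    # matching the isdigit() guard in the handler above
    return s.zfill(2) if s.isdigit() else s
```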
@eventmanager.register(EventType.WebhookMessage)
def sync_del_by_plugin(self, event):
"""
emby删除媒体库同步删除历史记录
Scripter X插件
"""
if not self._enabled or str(self._sync_type) != "plugin":
return
event_data = event.event_data
event_type = event_data.event
# Scripter X插件 event_type = media_del
if not event_type or str(event_type) != 'media_del':
return
# Scripter X插件 需要是否虚拟标识
item_isvirtual = event_data.item_isvirtual
if not item_isvirtual:
logger.error("Scripter X插件方式item_isvirtual参数未配置为防止误删除暂停插件运行")
self.update_config({
"enabled": False,
"del_source": self._del_source,
"exclude_path": self._exclude_path,
"library_path": self._library_path,
"notify": self._notify,
"cron": self._cron,
"sync_type": self._sync_type,
})
return
# 如果是虚拟item则直接return不进行删除
if item_isvirtual == 'True':
return
# 媒体类型
media_type = event_data.item_type
# 媒体名称
media_name = event_data.item_name
# 媒体路径
media_path = event_data.item_path
# tmdb_id
tmdb_id = event_data.tmdb_id
# 季数
season_num = event_data.season_id
# 集数
episode_num = event_data.episode_id
self.__sync_del(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
def __sync_del(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
"""
执行删除逻辑
"""
if not media_type:
logger.error(f"{media_name} 同步删除失败,未获取到媒体类型")
return
@@ -487,38 +551,18 @@ class MediaSyncDel(_PluginBase):
logger.info(f"媒体路径 {media_path} 已被排除,暂不处理")
return
# 删除电影
if media_type == "Movie" or media_type == "MOV":
msg = f'电影 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
# 删除电视剧
elif (media_type == "Series" or media_type == "TV") and not season_num and not episode_num:
msg = f'剧集 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
# 删除季 S02
elif (media_type == "Season" or media_type == "TV") and season_num and not episode_num:
if not season_num or not str(season_num).isdigit():
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}')
# 删除剧集S02E02
elif (media_type == "Episode" or media_type == "TV") and season_num and episode_num:
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}',
episode=f'E{episode_num}')
else:
return
# 查询转移记录
msg, transfer_history = self.__get_transfer_his(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
logger.info(f"正在同步删除{msg}")
if not transfer_history:
logger.warn(f"{media_type} {media_name} 未获取到可删除数据")
logger.warn(f"{media_type} {media_name} 未获取到可删除数据,可使用媒体库刮削插件覆盖所有元数据")
return
# 开始删除
@@ -528,6 +572,11 @@ class MediaSyncDel(_PluginBase):
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
title = transferhis.title
if title not in media_name:
logger.warn(
f"当前转移记录 {transferhis.id} {title} {transferhis.tmdbid} 与删除媒体{media_name}不符,防误删,暂不自动删除")
continue
image = transferhis.image
year = transferhis.year
@@ -545,15 +594,15 @@ class MediaSyncDel(_PluginBase):
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, success_flag = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
delete_flag, success_flag, handle_cnt = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += 1
del_cnt += handle_cnt
else:
stop_cnt += 1
stop_cnt += handle_cnt
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
@@ -568,20 +617,30 @@ class MediaSyncDel(_PluginBase):
episode_num=episode_num)
if images:
image = self.get_tmdbimage_url(images[-1].get("file_path"), prefix="original")
torrent_cnt_msg = ""
if del_cnt:
torrent_cnt_msg += f"删除种子{del_cnt}\n"
if stop_cnt:
torrent_cnt_msg += f"暂停种子{stop_cnt}\n"
if error_cnt:
torrent_cnt_msg += f"删种失败{error_cnt}\n"
# 发送通知
self.post_message(
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
image=image,
text=f"{msg}\n"
f"删除{del_cnt}\n"
f"暂停{stop_cnt}\n"
f"错误{error_cnt}\n"
f"删除记录{len(transfer_history)}\n"
f"{torrent_cnt_msg}"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}"
)
# 读取历史记录
history = self.get_data('history') or []
history.append({
"type": "电影" if media_type == "Movie" else "电视剧",
"type": "电影" if media_type == "Movie" or media_type == "MOV" else "电视剧",
"title": media_name,
"year": year,
"path": media_path,
@@ -594,6 +653,64 @@ class MediaSyncDel(_PluginBase):
# 保存历史
self.save_data("history", history)
def __get_transfer_his(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
"""
查询转移记录
"""
# 季数
if season_num:
season_num = str(season_num).rjust(2, '0')
# 集数
if episode_num:
episode_num = str(episode_num).rjust(2, '0')
# 类型
mtype = MediaType.MOVIE if media_type in ["Movie", "MOV"] else MediaType.TV
# 处理路径映射 (处理同一媒体多分辨率的情况)
if self._library_path:
paths = self._library_path.split("\n")
for path in paths:
sub_paths = path.split(":")
if len(sub_paths) < 2:
    continue
media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# 删除电影
if mtype == MediaType.MOVIE:
msg = f'电影 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
dest=media_path)
# 删除电视剧
elif mtype == MediaType.TV and not season_num and not episode_num:
msg = f'剧集 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value)
# 删除季 S02
elif mtype == MediaType.TV and season_num and not episode_num:
if not season_num or not str(season_num).isdigit():
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return "", []
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
season=f'S{season_num}')
# 删除剧集S02E02
elif mtype == MediaType.TV and season_num and episode_num:
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return "", []
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
season=f'S{season_num}',
episode=f'E{episode_num}',
dest=media_path)
else:
return "", []
return msg, transfer_history
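The new `library_path` option maps a media-server path prefix onto a MoviePilot path, one `server_prefix:local_prefix` pair per line. A sketch of that substitution (note the config splits on `:`, so this simple form cannot express Windows drive letters like `D:\` on the server side; `split(":", 1)` here is a slight hardening over indexing the raw split):

```python
def map_library_path(media_path: str, library_path_config: str) -> str:
    """Apply 'server_prefix:local_prefix' mappings, one mapping per line."""
    for line in library_path_config.split("\n"):
        if ":" not in line:
            continue  # skip blank or malformed lines
        server_prefix, local_prefix = line.split(":", 1)
        # rewrite the prefix and normalize separators, as the plugin does
        media_path = media_path.replace(server_prefix, local_prefix).replace("\\", "/")
    return media_path
```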
def sync_del_by_log(self):
"""
emby删除媒体库同步删除历史记录
@@ -641,13 +758,21 @@ class MediaSyncDel(_PluginBase):
logger.info(f"媒体路径 {media_path} 已被排除,暂不处理")
return
# 处理路径映射 (处理同一媒体多分辨率的情况)
if self._library_path:
paths = self._library_path.split("\n")
for path in paths:
sub_paths = path.split(":")
if len(sub_paths) < 2:
    continue
media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# 获取删除的记录
# 删除电影
if media_type == "Movie":
msg = f'电影 {media_name}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(
title=media_name,
year=media_year)
year=media_year,
dest=media_path)
# 删除电视剧
elif media_type == "Series":
msg = f'剧集 {media_name}'
@@ -668,7 +793,8 @@ class MediaSyncDel(_PluginBase):
title=media_name,
year=media_year,
season=media_season,
episode=media_episode)
episode=media_episode,
dest=media_path)
else:
continue
@@ -686,6 +812,11 @@ class MediaSyncDel(_PluginBase):
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
title = transferhis.title
if title not in media_name:
logger.warn(
f"当前转移记录 {transferhis.id} {title} {transferhis.tmdbid} 与删除媒体{media_name}不符,防误删,暂不自动删除")
continue
image = transferhis.image
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
@@ -701,15 +832,15 @@ class MediaSyncDel(_PluginBase):
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, success_flag = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
delete_flag, success_flag, handle_cnt = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += 1
del_cnt += handle_cnt
else:
stop_cnt += 1
stop_cnt += handle_cnt
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
@@ -717,13 +848,19 @@ class MediaSyncDel(_PluginBase):
# 发送消息
if self._notify:
torrent_cnt_msg = ""
if del_cnt:
torrent_cnt_msg += f"删除种子{del_cnt}\n"
if stop_cnt:
torrent_cnt_msg += f"暂停种子{stop_cnt}\n"
if error_cnt:
torrent_cnt_msg += f"删种失败{error_cnt}\n"
self.post_message(
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
text=f"{msg}\n"
f"删除{del_cnt}\n"
f"暂停{stop_cnt}\n"
f"错误{error_cnt}\n"
f"删除记录{len(transfer_history)}\n"
f"{torrent_cnt_msg}"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}",
image=image)
@@ -757,15 +894,27 @@ class MediaSyncDel(_PluginBase):
plugin_id=plugin_id)
logger.info(f"查询到 {history_key} 转种历史 {transfer_history}")
handle_cnt = 0
try:
# 删除本次种子记录
self._downloadhis.delete_file_by_fullpath(fullpath=src)
# 根据种子hash查询剩余未删除的记录
downloadHisNoDel = self._downloadhis.get_files_by_hash(download_hash=torrent_hash, state=1)
if downloadHisNoDel and len(downloadHisNoDel) > 0:
# 根据种子hash查询所有下载器文件记录
download_files = self._downloadhis.get_files_by_hash(download_hash=torrent_hash)
if not download_files:
logger.error(
f"未查询到种子任务 {torrent_hash} 存在文件记录,未执行下载器文件同步或该种子已被删除")
return False, False, 0
# 查询未删除数
no_del_cnt = 0
for download_file in download_files:
if download_file and download_file.state and int(download_file.state) == 1:
no_del_cnt += 1
if no_del_cnt > 0:
logger.info(
f"查询种子任务 {torrent_hash} 存在 {len(downloadHisNoDel)} 个未删除文件,执行暂停种子操作")
f"查询种子任务 {torrent_hash} 存在 {no_del_cnt} 个未删除文件,执行暂停种子操作")
delete_flag = False
else:
logger.info(
@@ -790,6 +939,7 @@ class MediaSyncDel(_PluginBase):
# 删除源种子
logger.info(f"删除源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.remove_torrents(torrent_hash)
handle_cnt += 1
# 删除转种后任务
logger.info(f"删除转种后下载任务:{download} - {download_id}")
@@ -800,6 +950,7 @@ class MediaSyncDel(_PluginBase):
else:
self.qb.delete_torrents(delete_file=True,
ids=download_id)
handle_cnt += 1
else:
# 暂停种子
# 转种后未删除源种时,同步暂停源种
@@ -809,6 +960,7 @@ class MediaSyncDel(_PluginBase):
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.stop_torrents(torrent_hash)
handle_cnt += 1
else:
# 未转种的情况
@@ -820,16 +972,20 @@ class MediaSyncDel(_PluginBase):
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{download} - {download_id}")
self.chain.stop_torrents(download_id)
handle_cnt += 1
# 处理辅种
self.__del_seed(download=download, download_id=download_id, action_flag="del" if delete_flag else 'stop')
handle_cnt = self.__del_seed(download=download,
download_id=download_id,
action_flag="del" if delete_flag else 'stop',
handle_cnt=handle_cnt)
return delete_flag, True
return delete_flag, True, handle_cnt
except Exception as e:
logger.error(f"删种失败: {e}")
return False, False
return False, False, 0
def __del_seed(self, download, download_id, action_flag):
def __del_seed(self, download, download_id, action_flag, handle_cnt):
"""
删除辅种
"""
@@ -851,9 +1007,10 @@ class MediaSyncDel(_PluginBase):
torrents = [torrents]
# 删除辅种历史中与本下载器相同的辅种记录
if int(downloader) == download:
if str(downloader) == str(download):
for torrent in torrents:
if download == "qbittorrent":
handle_cnt += 1
if str(download) == "qbittorrent":
# 删除辅种
if action_flag == "del":
logger.info(f"删除辅种:{downloader} - {torrent}")
@@ -883,6 +1040,8 @@ class MediaSyncDel(_PluginBase):
value=seed_history,
plugin_id=plugin_id)
return handle_cnt
@staticmethod
def parse_emby_log(last_time):
log_url = "{HOST}System/Logs/embyserver.txt?api_key={APIKEY}"
@@ -956,7 +1115,7 @@ class MediaSyncDel(_PluginBase):
return del_medias
@staticmethod
def parse_jellyfin_log(last_time):
def parse_jellyfin_log(last_time: datetime):
# 根据加入日期 降序排序
log_url = "{HOST}System/Logs/Log?name=log_%s.log&api_key={APIKEY}" % datetime.date.today().strftime("%Y%m%d")
log_res = Jellyfin().get_data(log_url)
@@ -1029,7 +1188,7 @@ class MediaSyncDel(_PluginBase):
return del_medias
@staticmethod
def delete_media_file(filedir, filename):
def delete_media_file(filedir: str, filename: str):
"""
删除媒体文件,空目录也会被删除
"""
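Per its docstring, `delete_media_file` removes the media file and also removes the directory once it is empty. A minimal sketch of that behaviour under those stated assumptions — not the plugin's exact implementation:

```python
import os
from pathlib import Path

def delete_media_file(filedir: str, filename: str) -> bool:
    """Delete filedir/filename, then prune the directory if it became empty."""
    target = Path(filedir) / filename
    if not target.exists():
        return False
    target.unlink()
    parent = target.parent
    # remove the directory too once nothing is left in it
    if parent.is_dir() and not any(parent.iterdir()):
        os.rmdir(parent)
    return True
```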
@@ -1097,7 +1256,7 @@ class MediaSyncDel(_PluginBase):
title="媒体库同步删除完成!", userid=event.event_data.get("user"))
@staticmethod
def get_tmdbimage_url(path, prefix="w500"):
def get_tmdbimage_url(path: str, prefix="w500"):
if not path:
return ""
tmdb_image_url = f"https://{settings.TMDB_IMAGE_DOMAIN}"


@@ -31,7 +31,7 @@ class MessageForward(_PluginBase):
# 加载顺序
plugin_order = 16
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_enabled = False
@@ -69,81 +69,81 @@ class MessageForward(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '开启转发'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'wechat',
'rows': '3',
'label': '应用配置',
'placeholder': 'appid:corpid:appsecret一行一个配置'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'pattern',
'rows': '3',
'label': '正则配置',
'placeholder': '对应上方应用配置,一行一个,一一对应'
}
}
]
}
]
},
]
}
], {
"enabled": False,
"wechat": "",
"pattern": ""
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '开启转发'
}
}
]
},
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'wechat',
'rows': '3',
'label': '应用配置',
'placeholder': 'appid:corpid:appsecret一行一个配置'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'pattern',
'rows': '3',
'label': '正则配置',
'placeholder': '对应上方应用配置,一行一个,一一对应'
}
}
]
}
]
},
]
}
], {
"enabled": False,
"wechat": "",
"pattern": ""
}
def get_page(self) -> List[dict]:
pass
@@ -169,29 +169,27 @@ class MessageForward(_PluginBase):
# 正则匹配
patterns = self._pattern.split("\n")
for i, pattern in enumerate(patterns):
for index, pattern in enumerate(patterns):
msg_match = re.search(pattern, title)
if msg_match:
access_token, appid = self.__flush_access_token(i)
access_token, appid = self.__flush_access_token(index)
if not access_token:
logger.error("未获取到有效token请检查配置")
continue
# 发送消息
if image:
self.__send_image_message(title, text, image, userid, access_token, appid, i)
self.__send_image_message(title, text, image, userid, access_token, appid, index)
else:
self.__send_message(title, text, userid, access_token, appid, i)
self.__send_message(title, text, userid, access_token, appid, index)
def __save_wechat_token(self):
"""
获取并存储wechat token
"""
# 查询历史
wechat_token_history = self.get_data("wechat_token") or {}
# 解析配置
wechats = self._wechat.split("\n")
for i, wechat in enumerate(wechats):
for index, wechat in enumerate(wechats):
wechat_config = wechat.split(":")
if len(wechat_config) != 3:
logger.error(f"{wechat} 应用配置不正确")
@@ -200,53 +198,30 @@ class MessageForward(_PluginBase):
corpid = wechat_config[1]
appsecret = wechat_config[2]
# 查询历史是否存储token
wechat_config = wechat_token_history.get(appid)
access_token = None
expires_in = None
access_token_time = None
if wechat_config:
access_token_time = wechat_config['access_token_time']
expires_in = wechat_config['expires_in']
# 判断token是否过期
if (datetime.now() - access_token_time).seconds < expires_in:
# 重新获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
# 已过期,重新获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
if not access_token:
# 获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
if access_token:
wechat_token_history[appid] = {
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": str(access_token_time),
"corpid": corpid,
"appsecret": appsecret
}
self._pattern_token[i] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": access_token_time,
}
else:
# 没有token获取token
logger.error(f"wechat配置 appid = {appid} 获取token失败请检查配置")
continue
# 保存wechat token
if wechat_token_history:
self.save_data("wechat_token", wechat_token_history)
self._pattern_token[index] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": access_token_time,
}
def __flush_access_token(self, i: int):
def __flush_access_token(self, index: int, force: bool = False):
"""
获取第i个配置wechat token
"""
wechat_token = self._pattern_token[i]
wechat_token = self._pattern_token[index]
if not wechat_token:
logger.error(f"未获取到第 {i} 条正则对应的wechat应用token请检查配置")
logger.error(f"未获取到第 {index} 条正则对应的wechat应用token请检查配置")
return None
access_token = wechat_token['access_token']
expires_in = wechat_token['expires_in']
@@ -256,7 +231,7 @@ class MessageForward(_PluginBase):
appsecret = wechat_token['appsecret']
# 判断token有效期
if (datetime.now() - access_token_time).seconds < expires_in:
if force or (datetime.now() - access_token_time).seconds >= expires_in:
# 重新获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
@@ -264,7 +239,7 @@ class MessageForward(_PluginBase):
logger.error(f"wechat配置 appid = {appid} 获取token失败请检查配置")
return None, None
self._pattern_token[i] = {
self._pattern_token[index] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
@@ -275,8 +250,7 @@ class MessageForward(_PluginBase):
return access_token, appid
def __send_message(self, title: str, text: str = None, userid: str = None, access_token: str = None,
appid: str = None, i: int = None) -> \
Optional[bool]:
appid: str = None, index: int = None) -> Optional[bool]:
"""
发送文本消息
:param title: 消息标题
@@ -284,7 +258,6 @@ class MessageForward(_PluginBase):
:param userid: 消息发送对象的ID为空则发给所有人
:return: 发送状态,错误信息
"""
message_url = self._send_msg_url % access_token
if text:
conent = "%s\n%s" % (title, text.replace("\n\n", "\n"))
else:
@@ -303,10 +276,10 @@ class MessageForward(_PluginBase):
"enable_id_trans": 0,
"enable_duplicate_check": 0
}
return self.__post_request(message_url, req_json, i, title)
return self.__post_request(access_token=access_token, req_json=req_json, index=index, title=title)
def __send_image_message(self, title: str, text: str, image_url: str, userid: str = None, access_token: str = None,
appid: str = None, i: int = None) -> Optional[bool]:
def __send_image_message(self, title: str, text: str, image_url: str, userid: str = None,
access_token: str = None, appid: str = None, index: int = None) -> Optional[bool]:
"""
发送图文消息
:param title: 消息标题
@@ -315,7 +288,6 @@ class MessageForward(_PluginBase):
:param userid: 消息发送对象的ID为空则发给所有人
:return: 发送状态,错误信息
"""
message_url = self._send_msg_url % access_token
if text:
text = text.replace("\n\n", "\n")
if not userid:
@@ -335,9 +307,10 @@ class MessageForward(_PluginBase):
]
}
}
return self.__post_request(message_url, req_json, i, title)
return self.__post_request(access_token=access_token, req_json=req_json, index=index, title=title)
def __post_request(self, message_url: str, req_json: dict, i: int, title: str) -> bool:
def __post_request(self, access_token: str, req_json: dict, index: int, title: str, retry: int = 0) -> bool:
"""
向微信发送请求
"""
message_url = self._send_msg_url % access_token
@@ -352,10 +325,21 @@ class MessageForward(_PluginBase):
logger.info(f"转发消息 {title} 成功")
return True
else:
if ret_json.get('errcode') == 42001:
# 重新获取token
self.__flush_access_token(i)
logger.error(f"转发消息 {title} 失败,错误信息:{ret_json}")
if ret_json.get('errcode') in (42001, 40014):
logger.info("token已过期正在重新刷新token重试")
# 重新获取token
access_token, appid = self.__flush_access_token(index=index,
force=True)
if access_token:
retry += 1
# 重发请求
if retry <= 3:
return self.__post_request(access_token=access_token,
req_json=req_json,
index=index,
title=title,
retry=retry)
return False
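The retry logic above refreshes the access token on WeChat Work errcodes 42001/40014 and re-sends up to three times. The same pattern, factored into a generic helper (hypothetical names; the real code re-enters `__post_request` recursively with a `retry` counter):

```python
def call_with_refresh(send, refresh, retries: int = 3) -> bool:
    """Call send(token); on an 'expired token' errcode, refresh and retry."""
    token = refresh(force=False)
    for attempt in range(retries + 1):
        errcode = send(token)
        if errcode == 0:
            return True  # errcode 0 means the message was accepted
        if errcode in (42001, 40014) and attempt < retries:
            # force a new token, mirroring force=True in __flush_access_token
            token = refresh(force=True)
            continue
        return False
    return False
```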
elif res is not None:
logger.error(f"转发消息 {title} 失败,错误码:{res.status_code},错误原因:{res.reason}")
@@ -364,10 +348,10 @@ class MessageForward(_PluginBase):
logger.error(f"转发消息 {title} 失败,未获取到返回信息")
return False
except Exception as err:
logger.error(f"转发消息 {title} 失败,错误信息:{err}")
logger.error(f"转发消息 {title} 异常,错误信息:{err}")
return False
def __get_access_token(self, corpid, appsecret):
def __get_access_token(self, corpid: str, appsecret: str):
"""
获取微信Token
:return 微信Token


@@ -4,11 +4,8 @@ import sqlite3
from datetime import datetime
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.plugin import PluginData
from app.db.plugindata_oper import PluginDataOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.modules.qbittorrent import Qbittorrent
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple
from app.log import logger
@@ -34,7 +31,7 @@ class NAStoolSync(_PluginBase):
# 加载顺序
plugin_order = 15
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_transferhistory = None
@@ -45,10 +42,7 @@ class NAStoolSync(_PluginBase):
_path = None
_site = None
_downloader = None
_supp = False
_transfer = False
qb = None
tr = None
def init_plugin(self, config: dict = None):
self._transferhistory = TransferHistoryOper(self.db)
@@ -61,13 +55,9 @@ class NAStoolSync(_PluginBase):
self._path = config.get("path")
self._site = config.get("site")
self._downloader = config.get("downloader")
self._supp = config.get("supp")
self._transfer = config.get("transfer")
if self._nt_db_path and self._transfer:
self.qb = Qbittorrent()
self.tr = Transmission()
# 读取sqlite数据
try:
gradedb = sqlite3.connect(self._nt_db_path)
@@ -80,7 +70,6 @@ class NAStoolSync(_PluginBase):
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp,
}
)
logger.error(f"无法打开数据库文件 {self._nt_db_path},请检查路径是否正确:{e}")
@@ -116,7 +105,6 @@ class NAStoolSync(_PluginBase):
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp,
}
)
@@ -144,6 +132,7 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot插件记录已清空")
self._plugindata.truncate()
cnt = 0
for history in plugin_history:
plugin_id = history[1]
plugin_key = history[2]
@@ -153,7 +142,12 @@ class NAStoolSync(_PluginBase):
if self._downloader:
downloaders = self._downloader.split("\n")
for downloader in downloaders:
if not downloader:
continue
sub_downloaders = downloader.split(":")
if not str(sub_downloaders[0]).isdigit():
logger.error(f"下载器映射配置错误NAStool下载器id 应为数字!")
continue
# 替换转种记录
if str(plugin_id) == "TorrentTransfer":
keys = str(plugin_key).split("-")
@@ -172,7 +166,11 @@ class NAStoolSync(_PluginBase):
if str(plugin_id) == "IYUUAutoSeed":
if isinstance(plugin_value, str):
plugin_value = json.loads(plugin_value)
if not isinstance(plugin_value, list):
plugin_value = [plugin_value]
for value in plugin_value:
if not str(value.get("downloader")).isdigit():
continue
if str(value.get("downloader")).isdigit() and int(value.get("downloader")) == int(
sub_downloaders[0]):
value["downloader"] = sub_downloaders[1]
@@ -180,6 +178,9 @@ class NAStoolSync(_PluginBase):
self._plugindata.save(plugin_id=plugin_id,
key=plugin_key,
value=plugin_value)
cnt += 1
if cnt % 100 == 0:
logger.info(f"插件记录同步进度 {cnt} / {len(plugin_history)}")
# 计算耗时
end_time = datetime.now()
@@ -198,6 +199,7 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot下载记录已清空")
self._downloadhistory.truncate()
cnt = 0
for history in download_history:
mpath = history[0]
mtype = history[1]
@@ -234,6 +236,9 @@ class NAStoolSync(_PluginBase):
torrent_description=mdesc,
torrent_site=msite
)
cnt += 1
if cnt % 100 == 0:
logger.info(f"下载记录同步进度 {cnt} / {len(download_history)}")
# 计算耗时
end_time = datetime.now()
@@ -253,36 +258,8 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot转移记录已清空")
self._transferhistory.truncate()
# 转种后种子hash
transfer_hash = []
qb_torrents = []
tr_torrents = []
tr_torrents_all = []
if self._supp:
# 获取所有的转种数据
transfer_datas = self._plugindata.get_data_all("TorrentTransfer")
if transfer_datas:
if not isinstance(transfer_datas, list):
transfer_datas = [transfer_datas]
for transfer_data in transfer_datas:
if not transfer_data or not isinstance(transfer_data, PluginData):
continue
# 转移后种子hash
transfer_value = transfer_data.value
transfer_value = json.loads(transfer_value)
if not isinstance(transfer_value, dict):
transfer_value = json.loads(transfer_value)
to_hash = transfer_value.get("to_download_id")
# 转移前种子hash
transfer_hash.append(to_hash)
# 获取tr、qb所有种子
qb_torrents, _ = self.qb.get_torrents()
tr_torrents, _ = self.tr.get_torrents(ids=transfer_hash)
tr_torrents_all, _ = self.tr.get_torrents()
# 处理数据存入mp数据库
cnt = 0
for history in transfer_history:
msrc_path = history[0]
msrc_filename = history[1]
@@ -297,8 +274,7 @@ class NAStoolSync(_PluginBase):
mseasons = history[10]
mepisodes = history[11]
mimage = history[12]
mdownload_hash = history[13]
mdate = history[14]
mdate = history[13]
if not msrc_path or not mdest_path:
continue
@@ -306,78 +282,6 @@ class NAStoolSync(_PluginBase):
msrc = msrc_path + "/" + msrc_filename
mdest = mdest_path + "/" + mdest_filename
# 尝试补充download_id
if self._supp and not mdownload_hash:
logger.debug(f"转移记录 {mtitle} 缺失download_hash尝试补充……")
# 种子名称
torrent_name = str(msrc_path).split("/")[-1]
torrent_name2 = str(msrc_path).split("/")[-2]
# 处理下载器
for torrent in qb_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mdownload_hash = torrent.get("hash")
torrent_name = str(torrent.get("name"))
break
# 处理辅种器
if not mdownload_hash:
for torrent in tr_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mdownload_hash = torrent.get("hashString")
torrent_name = str(torrent.get("name"))
break
# 继续补充 遍历所有种子,按照添加升序升序,第一个种子是初始种子
if not mdownload_hash:
mate_torrents = []
for torrent in tr_torrents_all:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mate_torrents.append(torrent)
# 匹配上则按照时间升序
if mate_torrents:
if len(mate_torrents) > 1:
mate_torrents = sorted(mate_torrents, key=lambda x: x.added_date)
# 最早添加的hash是下载的hash
mdownload_hash = mate_torrents[0].get("hashString")
torrent_name = str(mate_torrents[0].get("name"))
# 补充转种记录
self._plugindata.save(plugin_id="TorrentTransfer",
key=f"qbittorrent-{mdownload_hash}",
value={
"to_download": "transmission",
"to_download_id": mdownload_hash,
"delete_source": True}
)
# 补充辅种记录
if len(mate_torrents) > 1:
self._plugindata.save(plugin_id="IYUUAutoSeed",
key=mdownload_hash,
value=[{"downloader": "transmission",
"torrents": [torrent.get("hashString") for torrent in
mate_torrents[1:]]}]
)
# 补充下载历史
self._downloadhistory.add(
path=msrc_filename,
type=mtype,
title=mtitle,
year=myear,
tmdbid=mtmdbid,
seasons=mseasons,
episodes=mepisodes,
image=mimage,
download_hash=mdownload_hash,
torrent_name=torrent_name,
torrent_description="",
torrent_site=""
)
# 处理路径映射
if self._path:
paths = self._path.split("\n")
@@ -399,11 +303,14 @@ class NAStoolSync(_PluginBase):
seasons=mseasons,
episodes=mepisodes,
image=mimage,
download_hash=mdownload_hash,
date=mdate
)
logger.debug(f"{mtitle} {myear} {mtmdbid} {mseasons} {mepisodes} 已同步")
cnt += 1
if cnt % 100 == 0:
logger.info(f"转移记录同步进度 {cnt} / {len(transfer_history)}")
# 计算耗时
end_time = datetime.now()
@@ -507,7 +414,6 @@ class NAStoolSync(_PluginBase):
NULL ELSE substr( t.SEASON_EPISODE, instr ( t.SEASON_EPISODE, ' ' ) + 1 )
END AS episodes,
d.POSTER AS image,
d.DOWNLOAD_ID AS download_hash,
t.DATE AS date
FROM
TRANSFER_HISTORY t
@@ -549,7 +455,7 @@ class NAStoolSync(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 6
},
'content': [
{
@@ -565,7 +471,7 @@ class NAStoolSync(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 6
},
'content': [
{
@@ -576,22 +482,6 @@ class NAStoolSync(_PluginBase):
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'supp',
'label': '补充数据'
}
}
]
}
]
},
@@ -695,7 +585,6 @@ class NAStoolSync(_PluginBase):
'text': '开启清空记录时会在导入历史数据之前删除MoviePilot之前的记录。'
'如果转移记录很多同步时间可能会长3-10分钟'
'所以点击确定后页面没反应是正常现象,后台正在处理。'
'如果开启补充数据会获取tr、qb种子补充转移记录中download_hash缺失的情况同步删除需要'
}
}
]
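The validation added around `sub_downloaders` amounts to parsing newline-separated `id:name` downloader mappings and skipping malformed entries whose NAStool downloader id is not numeric. A sketch under that assumption (helper name hypothetical):

```python
def parse_downloader_map(raw):
    """Parse newline-separated "id:name" pairs into {id: name},
    skipping blank lines and entries whose id is not numeric."""
    mapping = {}
    for line in (raw or "").split("\n"):
        if not line:
            continue
        parts = line.split(":")
        if len(parts) < 2 or not parts[0].isdigit():
            continue  # NAStool downloader id must be numeric
        mapping[int(parts[0])] = parts[1]
    return mapping
```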


@@ -24,7 +24,7 @@ lock = Lock()
class RssSubscribe(_PluginBase):
# 插件名称
plugin_name = "RSS订阅"
plugin_name = "自定义订阅"
# 插件描述
plugin_desc = "定时刷新RSS报文识别内容后添加订阅或直接下载。"
# 插件图标
@@ -119,7 +119,7 @@ class RssSubscribe(_PluginBase):
# 记录清理缓存设置
self._clearflag = self._clear
# 关闭清理缓存开关
self._clearflag = False
self._clear = False
# 保存设置
self.__update_config()
@@ -549,7 +549,7 @@ class RssSubscribe(_PluginBase):
logger.error(f"未获取到RSS数据{url}")
return
# 过滤规则
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
filter_rule = self.systemconfig.get(SystemConfigKey.SubscribeFilterRules)
# 解析数据
for result in results:
try:
@@ -593,7 +593,8 @@ class RssSubscribe(_PluginBase):
if self._filter:
result = self.chain.filter_torrents(
rule_string=filter_rule,
torrent_list=[torrentinfo]
torrent_list=[torrentinfo],
mediainfo=mediainfo
)
if not result:
logger.info(f"{title} {description} 不匹配过滤规则")


@@ -15,6 +15,7 @@ from app import schemas
from app.core.config import settings
from app.core.event import Event
from app.core.event import eventmanager
from app.db.models.site import Site
from app.helper.browser import PlaywrightHelper
from app.helper.module import ModuleHelper
from app.helper.sites import SitesHelper
@@ -84,6 +85,11 @@ class SiteStatistic(_PluginBase):
self._statistic_type = config.get("statistic_type") or "all"
self._statistic_sites = config.get("statistic_sites") or []
# 过滤掉已删除的站点
self._statistic_sites = [site.get("id") for site in self.sites.get_indexers() if
not site.get("public") and site.get("id") in self._statistic_sites]
self.__update_config()
if self._enabled or self._onlyonce:
# 加载模块
self._site_schema = ModuleHelper.load('app.plugins.sitestatistic.siteuserinfo',
@@ -177,8 +183,8 @@ class SiteStatistic(_PluginBase):
拼装插件配置页面需要返回两块数据1、页面配置2、数据结构
"""
# 站点的可选项
site_options = [{"title": site.get("name"), "value": site.get("id")}
for site in self.sites.get_indexers()]
site_options = [{"title": site.name, "value": site.id}
for site in Site.list_order_by_pri(self.db)]
return [
{
'component': 'VForm',
@@ -966,6 +972,12 @@ class SiteStatistic(_PluginBase):
# 发送通知,存在未读消息
self.__notify_unread_msg(site_name, site_user_info, unread_msg_notify)
# 分享率接近1时发送消息提醒
if site_user_info.ratio and float(site_user_info.ratio) < 1:
self.post_message(mtype=NotificationType.SiteMessage,
title=f"【站点分享率低预警】",
text=f"站点 {site_user_info.site_name} 分享率 {site_user_info.ratio},请注意!")
self._sites_data.update(
{
site_name: {
@@ -1041,10 +1053,6 @@ class SiteStatistic(_PluginBase):
else:
refresh_sites = [site for site in self.sites.get_indexers() if
site.get("id") in self._statistic_sites]
# 过滤掉已删除的站点
self._statistic_sites = [site.get("id") for site in refresh_sites if site]
self.__update_config()
if not refresh_sites:
return


@@ -56,5 +56,6 @@ class NexusHhanclubSiteUserInfo(NexusPhpSiteUserInfo):
def _get_user_level(self, html):
super()._get_user_level(html)
self.user_level = html.xpath('//*[@id="mainContent"]/div/div[2]/div[2]/div[4]/img/@title')[0]
user_level_path = html.xpath('//*[@id="mainContent"]/div/div[2]/div[2]/div[4]/span[2]/img/@title')
if user_level_path:
self.user_level = user_level_path[0]
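The fix above replaces an unguarded `[0]` index on an XPath result (which raises `IndexError` when the node is absent) with an existence check before assignment. The same guard as a generic helper, assuming an lxml-style node whose `.xpath()` returns a list (helper name hypothetical):

```python
def first_or_default(node, xpath, default=None):
    """Return the first XPath match, or a default instead of raising IndexError."""
    hits = node.xpath(xpath)
    return hits[0] if hits else default
```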


@@ -37,7 +37,7 @@ class SpeedLimiter(_PluginBase):
# 加载顺序
plugin_order = 11
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_scheduler = None
@@ -69,6 +69,7 @@ class SpeedLimiter(_PluginBase):
self._play_down_speed = float(config.get("play_down_speed")) if config.get("play_down_speed") else 0
self._noplay_up_speed = float(config.get("noplay_up_speed")) if config.get("noplay_up_speed") else 0
self._noplay_down_speed = float(config.get("noplay_down_speed")) if config.get("noplay_down_speed") else 0
self._current_state = f"U:{self._noplay_up_speed},D:{self._noplay_down_speed}"
try:
# 总带宽
self._bandwidth = int(float(config.get("bandwidth") or 0)) * 1000000
@@ -374,6 +375,8 @@ class SpeedLimiter(_PluginBase):
"""
if not self._qb and not self._tr:
return
if not self._enabled:
return
if event:
event_data: WebhookEventInfo = event.event_data
if event_data.event not in ["playback.start", "PlaybackStart", "media.play"]:
@@ -492,15 +495,7 @@ class SpeedLimiter(_PluginBase):
return
else:
self._current_state = state
if upload_limit:
text = f"上传:{upload_limit} KB/s"
else:
text = f"上传:未限速"
if download_limit:
text = f"{text}\n下载:{download_limit} KB/s"
else:
text = f"{text}\n下载:未限速"
try:
cnt = 0
for download in self._downloader:
@@ -517,9 +512,16 @@ class SpeedLimiter(_PluginBase):
else:
# 按比例
allocation_count = sum([int(i) for i in self._allocation_ratio.split(":")])
upload_limit = int(upload_limit * int(self._allocation_ratio[cnt]) / allocation_count)
upload_limit = int(upload_limit * int(self._allocation_ratio.split(":")[cnt]) / allocation_count)
cnt += 1
if upload_limit:
text = f"上传:{upload_limit} KB/s"
else:
text = f"上传:未限速"
if download_limit:
text = f"{text}\n下载:{download_limit} KB/s"
else:
text = f"{text}\n下载:未限速"
if str(download) == 'qbittorrent':
if self._qb:
self._qb.set_speed_limit(download_limit=download_limit, upload_limit=upload_limit)
@@ -593,7 +595,8 @@ class SpeedLimiter(_PluginBase):
for allow_ipv6 in allow_ipv6s:
if ipaddr in ipaddress.ip_network(allow_ipv6, strict=False):
return True
except Exception:
except Exception as err:
print(str(err))
return False
return False
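The allocation fix above is worth spelling out: the old code indexed the ratio *string* character by character (`"1:2"[cnt]`), so for any downloader past the first it read `:` or the wrong digit; the fix indexes the `split(":")` list instead. The intended proportional split, as a sketch (function name hypothetical):

```python
def allocate_limit(total_limit, ratio):
    """Split a total KB/s limit across downloaders by an "a:b:c" ratio string."""
    parts = [int(p) for p in ratio.split(":")]
    total_weight = sum(parts)
    return [int(total_limit * p / total_weight) for p in parts]
```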


@@ -34,7 +34,7 @@ class SyncDownloadFiles(_PluginBase):
# 加载顺序
plugin_order = 20
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_enabled = False
@@ -44,6 +44,7 @@ class SyncDownloadFiles(_PluginBase):
tr = None
_onlyonce = False
_history = False
_clear = False
_downloaders = []
_dirs = None
downloadhis = None
@@ -56,21 +57,33 @@ class SyncDownloadFiles(_PluginBase):
# 停止现有任务
self.stop_service()
self.qb = Qbittorrent()
self.tr = Transmission()
self.downloadhis = DownloadHistoryOper(self.db)
self.transferhis = TransferHistoryOper(self.db)
if config:
self._enabled = config.get('enabled')
self._time = config.get('time') or 6
self._history = config.get('history')
self._clear = config.get('clear')
self._onlyonce = config.get("onlyonce")
self._downloaders = config.get('downloaders') or []
self._dirs = config.get("dirs") or ""
if self._clear:
# 清理下载器文件记录
self.downloadhis.truncate_files()
# 清理下载器最后处理记录
for downloader in self._downloaders:
# 获取最后同步时间
self.del_data(f"last_sync_time_{downloader}")
# 关闭clear
self._clear = False
self.__update_config()
if self._onlyonce:
# 执行一次
self.qb = Qbittorrent()
self.tr = Transmission()
self.downloadhis = DownloadHistoryOper(self.db)
self.transferhis = TransferHistoryOper(self.db)
# 关闭onlyonce
self._onlyonce = False
self.__update_config()
@@ -84,7 +97,7 @@ class SyncDownloadFiles(_PluginBase):
try:
self._scheduler.add_job(func=self.sync,
trigger="interval",
hours=float(self._time.strip()),
hours=float(str(self._time).strip()),
name="自动同步下载器文件记录")
logger.info(f"自动同步下载器文件记录服务启动,时间间隔 {self._time} 小时")
except Exception as err:
@@ -224,6 +237,7 @@ class SyncDownloadFiles(_PluginBase):
"enabled": self._enabled,
"time": self._time,
"history": self._history,
"clear": self._clear,
"onlyonce": self._onlyonce,
"downloaders": self._downloaders,
"dirs": self._dirs
@@ -434,6 +448,22 @@ class SyncDownloadFiles(_PluginBase):
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clear',
'label': '清理数据',
}
}
]
},
]
},
{
@@ -526,6 +556,7 @@ class SyncDownloadFiles(_PluginBase):
"enabled": False,
"onlyonce": False,
"history": False,
"clear": False,
"time": 6,
"dirs": "",
"downloaders": []


@@ -605,7 +605,7 @@ class TorrentRemover(_PluginBase):
if torrents and message_text and self._notify:
self.post_message(
mtype=NotificationType.SiteMessage,
title=f"【自动删种任务执行完成】",
title=f"【自动删种任务完成】",
text=message_text
)
except Exception as e:
@@ -624,7 +624,7 @@ class TorrentRemover(_PluginBase):
# 平均上传速度
torrent_upload_avs = torrent.uploaded / torrent_seeding_time if torrent_seeding_time else 0
# 大小 单位GB
sizes = self._size.split(',') if self._size else []
sizes = self._size.split('-') if self._size else []
minsize = sizes[0] * 1024 * 1024 * 1024 if sizes else 0
maxsize = sizes[-1] * 1024 * 1024 * 1024 if sizes else 0
# 分享率
@@ -634,7 +634,7 @@ class TorrentRemover(_PluginBase):
if self._time and torrent_seeding_time <= float(self._time) * 3600:
return None
# 文件大小
if self._size and (torrent.size >= maxsize or torrent.size <= minsize):
if self._size and (torrent.size >= int(maxsize) or torrent.size <= int(minsize)):
return None
if self._upspeed and torrent_upload_avs >= float(self._upspeed) * 1024:
return None
@@ -668,7 +668,7 @@ class TorrentRemover(_PluginBase):
# 平均上传速茺
torrent_upload_avs = torrent_uploaded / torrent_seeding_time if torrent_seeding_time else 0
# 大小 单位GB
sizes = self._size.split(',') if self._size else []
sizes = self._size.split('-') if self._size else []
minsize = sizes[0] * 1024 * 1024 * 1024 if sizes else 0
maxsize = sizes[-1] * 1024 * 1024 * 1024 if sizes else 0
# 分享率
@@ -676,7 +676,7 @@ class TorrentRemover(_PluginBase):
return None
if self._time and torrent_seeding_time <= float(self._time) * 3600:
return None
if self._size and (torrent.total_size >= maxsize or torrent.total_size <= minsize):
if self._size and (torrent.total_size >= int(maxsize) or torrent.total_size <= int(minsize)):
return None
if self._upspeed and torrent_upload_avs >= float(self._upspeed) * 1024:
return None
@@ -736,14 +736,15 @@ class TorrentRemover(_PluginBase):
name = remove_torrent.get("name")
size = remove_torrent.get("size")
for torrent in torrents:
if torrent.name == name \
and torrent.size == size \
and torrent.hash not in [t.get("id") for t in remove_torrents]:
remove_torrents_plus.append({
"id": torrent.hash,
"name": torrent.name,
"site": StringUtils.get_url_sld(torrent.tracker),
"size": torrent.size
})
if downloader == "qbittorrent":
item_plus = self.__get_qb_torrent(torrent)
else:
item_plus = self.__get_tr_torrent(torrent)
if not item_plus:
continue
if item_plus.get("name") == name \
and item_plus.get("size") == size \
and item_plus.get("id") not in [t.get("id") for t in remove_torrents]:
remove_torrents_plus.append(item_plus)
remove_torrents.extend(remove_torrents_plus)
return remove_torrents
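Note that even after the `,` to `-` separator fix, `sizes[0] * 1024 * 1024 * 1024` still multiplies a *string* by an integer, which in Python is string repetition; the later `int(maxsize)` then parses that repeated string into an astronomically large number, so the size bound is effectively wrong. A numeric parse of the GB range would look like this (helper name hypothetical):

```python
def parse_size_range(raw):
    """Parse a "min-max" range in GB into (min_bytes, max_bytes).

    A single number yields equal bounds; empty input yields (0, 0).
    """
    if not raw:
        return 0, 0
    gib = 1024 ** 3
    parts = raw.split("-")
    return float(parts[0]) * gib, float(parts[-1]) * gib
```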


@@ -7,7 +7,7 @@ from typing import Any, List, Dict, Tuple, Optional
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from torrentool.torrent import Torrent
from bencode import bdecode, bencode
from app.core.config import settings
from app.helper.torrent import TorrentHelper
@@ -582,26 +582,50 @@ class TorrentTransfer(_PluginBase):
continue
# 如果源下载器是QB检查是否有Tracker没有的话额外获取
trackers = None
if downloader == "qbittorrent":
# 读取种子内容、解析种子文件
content = torrent_file.read_bytes()
if not content:
logger.warn(f"读取种子文件失败:{torrent_file}")
fail += 1
continue
# 读取trackers
try:
torrent_main = Torrent.from_file(torrent_file)
main_announce = torrent_main.announce_urls
torrent_main = bdecode(content)
main_announce = torrent_main.get('announce')
except Exception as err:
logger.error(f"解析种子文件 {torrent_file} 失败:{err}")
# 失败计数
logger.warn(f"解析种子文件 {torrent_file} 失败:{err}")
fail += 1
continue
if not main_announce:
logger.info(f"{torrent_item.get('hash')} 未发现tracker信息尝试补充tracker ...")
# 从源下载任务信息中获取Tracker
torrent = torrent_item.get('torrent')
# 源trackers
trackers = [tracker.get("url") for tracker in torrent.trackers
if str(tracker.get("url")).startswith('http')]
logger.info(f"获取到源tracker{trackers}")
logger.info(f"{torrent_item.get('hash')} 未发现tracker信息尝试补充tracker信息...")
# 读取fastresume文件
fastresume_file = Path(self._fromtorrentpath) / f"{torrent_item.get('hash')}.fastresume"
if not fastresume_file.exists():
logger.warn(f"fastresume文件不存在{fastresume_file}")
fail += 1
continue
# 尝试补充trackers
try:
# 解析fastresume文件
fastresume = fastresume_file.read_bytes()
torrent_fastresume = bdecode(fastresume)
# 读取trackers
fastresume_trackers = torrent_fastresume.get('trackers')
if isinstance(fastresume_trackers, list) \
and len(fastresume_trackers) > 0 \
and fastresume_trackers[0]:
# 重新赋值
torrent_main['announce'] = fastresume_trackers[0][0]
# 替换种子文件路径
torrent_file = settings.TEMP_PATH / f"{torrent_item.get('hash')}.torrent"
# 编码并保存到临时文件
torrent_file.write_bytes(bencode(torrent_main))
except Exception as err:
logger.error(f"解析fastresume文件 {fastresume_file} 出错:{err}")
fail += 1
continue
# 发送到另一个下载器中下载:默认暂停、传输下载路径、关闭自动管理模式
logger.info(f"添加转移做种任务到下载器 {todownloader}{torrent_file}")
@@ -617,11 +641,6 @@ class TorrentTransfer(_PluginBase):
# 下载成功
logger.info(f"成功添加转移做种任务,种子文件:{torrent_file}")
# 补充Tracker
if trackers:
logger.info(f"开始补充 {download_id} 的tracker{trackers}")
todownloader_obj.add_trackers(ids=[download_id], trackers=trackers)
# TR会自动校验QB需要手动校验
if todownloader == "qbittorrent":
logger.info(f"qbittorrent 开始校验 {download_id} ...")
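The tracker-repair logic above, boiled down: decode the `.torrent`, and if `announce` is missing, copy the first tracker tier from the matching qBittorrent `.fastresume` file before re-encoding. A sketch of just the patch step, operating on already-`bdecode`d dicts so it needs no bencode library (helper name hypothetical):

```python
def patch_announce(torrent_main, fastresume):
    """Fill a missing 'announce' from the fastresume 'trackers' list
    (a list of tiers, each a list of URLs), leaving other keys untouched."""
    patched = dict(torrent_main)
    trackers = fastresume.get("trackers")
    if not patched.get("announce") and isinstance(trackers, list) \
            and trackers and trackers[0]:
        patched["announce"] = trackers[0][0]
    return patched
```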


@@ -26,7 +26,7 @@ class WebHook(_PluginBase):
# 加载顺序
plugin_order = 14
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_webhook_url = None


@@ -8,11 +8,10 @@ from apscheduler.schedulers.background import BackgroundScheduler
from app.chain import ChainBase
from app.chain.cookiecloud import CookieCloudChain
from app.chain.mediaserver import MediaServerChain
from app.chain.rss import RssChain
from app.chain.subscribe import SubscribeChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.db import ScopedSession
from app.db import SessionFactory
from app.log import logger
from app.utils.singleton import Singleton
from app.utils.timer import TimerUtils
@@ -40,7 +39,7 @@ class Scheduler(metaclass=Singleton):
def __init__(self):
# 数据库连接
self._db = ScopedSession()
self._db = SessionFactory()
# 调试模式不启动定时服务
if settings.DEV:
return
@@ -71,15 +70,20 @@ class Scheduler(metaclass=Singleton):
self._scheduler.add_job(SubscribeChain(self._db).search, "interval",
hours=24, kwargs={'state': 'R'}, name="订阅搜索")
# 站点首页种子定时刷新缓存并匹配订阅
triggers = TimerUtils.random_scheduler(num_executions=30)
for trigger in triggers:
self._scheduler.add_job(SubscribeChain(self._db).refresh, "cron",
hour=trigger.hour, minute=trigger.minute, name="订阅刷新")
# 自定义订阅
self._scheduler.add_job(RssChain(self._db).refresh, "interval",
minutes=30, name="自定义订阅刷新")
if settings.SUBSCRIBE_MODE == "spider":
# 站点首页种子定时刷新模式
triggers = TimerUtils.random_scheduler(num_executions=30)
for trigger in triggers:
self._scheduler.add_job(SubscribeChain(self._db).refresh, "cron",
hour=trigger.hour, minute=trigger.minute, name="订阅刷新")
else:
# RSS订阅模式
if not settings.SUBSCRIBE_RSS_INTERVAL:
settings.SUBSCRIBE_RSS_INTERVAL = 30
elif settings.SUBSCRIBE_RSS_INTERVAL < 5:
settings.SUBSCRIBE_RSS_INTERVAL = 5
self._scheduler.add_job(SubscribeChain(self._db).refresh, "interval",
minutes=settings.SUBSCRIBE_RSS_INTERVAL, name="订阅刷新")
# 下载器文件转移每5分钟
if settings.DOWNLOADER_MONITOR:
@@ -106,3 +110,5 @@ class Scheduler(metaclass=Singleton):
"""
if self._scheduler.running:
self._scheduler.shutdown()
if self._db:
self._db.close()
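The RSS branch above normalizes `SUBSCRIBE_RSS_INTERVAL`: unset falls back to 30 minutes, and anything below 5 is raised to 5. As a small helper (the name is hypothetical; defaults mirror the hunk):

```python
def clamp_interval(minutes, default=30, minimum=5):
    """Normalize a refresh interval: unset -> default, too small -> minimum."""
    if not minutes:
        return default
    return max(int(minutes), minimum)
```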


@@ -12,5 +12,4 @@ from .mediaserver import *
from .message import *
from .tmdb import *
from .transfer import *
from .rss import *
from .file import *


@@ -16,3 +16,5 @@ class FileItem(BaseModel):
extension: Optional[str] = None
# 文件大小
size: Optional[int] = None
# 修改时间
modify_time: Optional[float] = None


@@ -32,3 +32,5 @@ class Plugin(BaseModel):
installed: Optional[bool] = False
# 运行状态
state: Optional[bool] = False
# 是否有详情页面
has_page: Optional[bool] = False


@@ -1,54 +0,0 @@
from typing import Optional
from pydantic import BaseModel
class Rss(BaseModel):
id: Optional[int]
# 名称
name: Optional[str]
# RSS地址
url: Optional[str]
# 类型
type: Optional[str]
# 标题
title: Optional[str]
# 年份
year: Optional[str]
# TMDBID
tmdbid: Optional[int]
# 季号
season: Optional[int]
# 海报
poster: Optional[str]
# 背景图
backdrop: Optional[str]
# 评分
vote: Optional[float]
# 简介
description: Optional[str]
# 总集数
total_episode: Optional[int]
# 包含
include: Optional[str]
# 排除
exclude: Optional[str]
# 洗版
best_version: Optional[int]
# 是否使用代理服务器
proxy: Optional[int]
# 是否使用过滤规则
filter: Optional[int]
# 保存路径
save_path: Optional[str]
# 附加信息
note: Optional[str]
# 已处理数量
processed: Optional[int]
# 最后更新时间
last_update: Optional[str]
# 状态 0-停用1-启用
state: Optional[int]
class Config:
orm_mode = True


@@ -48,7 +48,7 @@ class SystemConfigKey(Enum):
UserInstalledPlugins = "UserInstalledPlugins"
# 搜索结果
SearchResults = "SearchResults"
# 索站点范围
# 搜索站点范围
IndexerSites = "IndexerSites"
# 订阅站点范围
RssSites = "RssSites"
@@ -60,10 +60,14 @@ class SystemConfigKey(Enum):
CustomReleaseGroups = "CustomReleaseGroups"
# 自定义识别词
CustomIdentifiers = "CustomIdentifiers"
# 过滤规则
FilterRules = "FilterRules"
# 搜索优先级规则
SearchFilterRules = "SearchFilterRules"
# 订阅优先级规则
SubscribeFilterRules = "SubscribeFilterRules"
# 洗版规则
FilterRules2 = "FilterRules2"
BestVersionFilterRules = "BestVersionFilterRules"
# 默认包含与排除规则
DefaultIncludeExcludeFilter = "DefaultIncludeExcludeFilter"
# 转移屏蔽词
TransferExcludeWords = "TransferExcludeWords"


@@ -5,8 +5,6 @@ import urllib3
from requests import Session, Response
from urllib3.exceptions import InsecureRequestWarning
from app.utils.ip import IpUtils
urllib3.disable_warnings(InsecureRequestWarning)


@@ -35,7 +35,7 @@ class ObjectUtils:
"""
检查函数是否已实现
"""
return func.__code__.co_code != b'd\x01S\x00'
return func.__code__.co_code not in [b'd\x01S\x00', b'\x97\x00d\x00S\x00']
@staticmethod
def check_signature(func: FunctionType, *args) -> bool:
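The change above adds `b'\x97\x00d\x00S\x00'` because Python 3.11 prepends a `RESUME` opcode to every function, so an empty body no longer compiles to the 3.10-era pattern alone. An alternative that avoids hard-coding opcodes is to compare against a stub compiled by the running interpreter; note this sketch does not cover docstring-only stubs, whose constant index differs, which is what the hunk's explicit byte patterns handle:

```python
def _stub():
    pass  # reference empty body, compiled by the current interpreter

def is_implemented(func):
    """True if func's bytecode differs from an empty stub's."""
    return func.__code__.co_code != _stub.__code__.co_code
```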


@@ -31,3 +31,32 @@ class SiteUtils:
return True
return False
@classmethod
def is_checkin(cls, html_text: str) -> bool:
"""
判断站点是否已经签到
:return True已签到 False未签到
"""
html = etree.HTML(html_text)
if not html:
return False
# 站点签到支持的识别XPATH
xpaths = [
'//a[@id="signed"]',
'//a[contains(@href, "attendance")]',
'//a[contains(text(), "签到")]',
'//a/b[contains(text(), "签 到")]',
'//span[@id="sign_in"]/a',
'//a[contains(@href, "addbonus")]',
'//input[@class="dt_button"][contains(@value, "打卡")]',
'//a[contains(@href, "sign_in")]',
'//a[contains(@onclick, "do_signin")]',
'//a[@id="do-attendance"]',
'//shark-icon-button[@href="attendance.php"]'
]
for xpath in xpaths:
if html.xpath(xpath):
return False
return True


@@ -641,3 +641,16 @@ class StringUtils:
formatted_string = "".join(formatted_ranges)
return formatted_string
@staticmethod
def is_number(text: str) -> bool:
"""
判断字符是否为可以转换为整数或者浮点数
"""
if not text:
return False
try:
float(text)
return True
except ValueError:
return False


@@ -5,6 +5,8 @@ import re
import shutil
from pathlib import Path
from typing import List, Union, Tuple
import docker
import psutil
from app import schemas
@@ -104,7 +106,7 @@ class SystemUtils:
if directory.is_file():
return [directory]
if not min_filesize:
min_filesize = 0
@@ -120,6 +122,36 @@ class SystemUtils:
return files
@staticmethod
def exits_files(directory: Path, extensions: list, min_filesize: int = 0) -> bool:
"""
判断目录下是否存在指定扩展名的文件
:return True存在 False不存在
"""
if not min_filesize:
min_filesize = 0
if not directory.exists():
return False
if directory.is_file():
return True
if not min_filesize:
min_filesize = 0
pattern = r".*(" + "|".join(extensions) + ")$"
# 遍历目录及子目录
for path in directory.rglob('**/*'):
if path.is_file() \
and re.match(pattern, path.name, re.IGNORECASE) \
and path.stat().st_size >= min_filesize * 1024 * 1024:
return True
return False
@staticmethod
def list_sub_files(directory: Path, extensions: list) -> List[Path]:
"""
@@ -292,3 +324,35 @@ class SystemUtils:
获取内存使用量和使用率
"""
return [psutil.virtual_memory().used, int(psutil.virtual_memory().percent)]
@staticmethod
def can_restart() -> bool:
"""
判断是否可以内部重启
"""
return Path("/var/run/docker.sock").exists()
@staticmethod
def restart() -> Tuple[bool, str]:
"""
执行Docker重启操作
"""
try:
# 创建 Docker 客户端
client = docker.DockerClient(base_url='tcp://127.0.0.1:38379')
# 获取当前容器的 ID
with open('/proc/self/mountinfo', 'r') as f:
data = f.read()
index_resolv_conf = data.find("resolv.conf")
if index_resolv_conf != -1:
index_second_slash = data.rfind("/", 0, index_resolv_conf)
index_first_slash = data.rfind("/", 0, index_second_slash) + 1
container_id = data[index_first_slash:index_second_slash]
if not container_id:
return False, "获取容器ID失败"
# 重启当前容器
client.containers.get(container_id.strip()).restart()
return True, ""
except Exception as err:
print(str(err))
return False, f"重启时发生错误:{str(err)}"
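The container-id extraction in `restart()` walks backwards from the `resolv.conf` entry in `/proc/self/mountinfo`, taking the path segment between the last two slashes before it (`.../containers/<id>/resolv.conf`). Isolated as a testable helper (name hypothetical):

```python
def container_id_from_mountinfo(data):
    """Extract <id> from mountinfo text containing .../<id>/resolv.conf."""
    index_resolv = data.find("resolv.conf")
    if index_resolv == -1:
        return ""
    second_slash = data.rfind("/", 0, index_resolv)
    first_slash = data.rfind("/", 0, second_slash) + 1
    return data[first_slash:second_slash].strip()
```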

File diff suppressed because one or more lines are too long


@@ -1,7 +1,7 @@
#!/bin/bash
# 使用 `envsubst` 将模板文件中的 ${NGINX_PORT} 替换为实际的环境变量值
envsubst '${NGINX_PORT}' < /etc/nginx/nginx.template.conf > /etc/nginx/nginx.conf
envsubst '${NGINX_PORT}${PORT}' < /etc/nginx/nginx.template.conf > /etc/nginx/nginx.conf
# 自动更新
if [ "${MOVIEPILOT_AUTO_UPDATE}" = "true" ]; then
cd /
@@ -26,6 +26,10 @@ chown moviepilot:moviepilot /etc/hosts /tmp
gosu moviepilot:moviepilot playwright install chromium
# 启动前端nginx服务
nginx
# 启动haproxy
if [ -S "/var/run/docker.sock" ]; then
haproxy -f /app/haproxy.cfg
fi
# 设置后端服务权限掩码
umask ${UMASK}
# 启动后端服务

haproxy.cfg (new file, 60 lines)

@@ -0,0 +1,60 @@
global
log stdout format raw daemon info
user root
group root
daemon
pidfile /run/haproxy.pid
maxconn 4000
# Turn on stats unix socket
server-state-file /var/lib/haproxy/server-state
setenv POST 1
setenv ALLOW_RESTARTS 1
setenv CONTAINERS 1
setenv VERSION 1
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 10m
timeout server 10m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
# Allow seamless reloads
load-server-state-from-file global
# Use provided example error pages
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
backend dockerbackend
server dockersocket /var/run/docker.sock
frontend dockerfrontend
bind :38379
http-request deny unless METH_GET || { env(POST) -m bool }
http-request allow if { path,url_dec -m reg -i ^(/v[\d\.]+)?/containers/[a-zA-Z0-9_.-]+/((stop)|(restart)|(kill)) } { env(ALLOW_RESTARTS) -m bool }
http-request allow if { path,url_dec -m reg -i ^(/v[\d\.]+)?/containers } { env(CONTAINERS) -m bool }
http-request allow if { path,url_dec -m reg -i ^(/v[\d\.]+)?/version } { env(VERSION) -m bool }
http-request deny
default_backend dockerbackend


@@ -100,7 +100,7 @@ http {
upstream backend_api {
# 后端API的地址和端口
server 127.0.0.1:3001;
server 127.0.0.1:${PORT};
# 可以添加更多后端服务器作为负载均衡
}

Some files were not shown because too many files have changed in this diff.