Compare commits

...

304 Commits

Author SHA1 Message Date
jxxghp
c86a21d11d Merge pull request #604 from WithdewHua/subscribe 2023-09-16 20:31:42 +08:00
WithdewHua
3fb02f6490 feat: add API to update subscription TMDB info 2023-09-16 19:36:49 +08:00
WithdewHua
ca2c0392bb fix: adjust API order to avoid mismatching 2023-09-16 18:43:33 +08:00
WithdewHua
b8663ee735 fix: also update movie subscription info; fix typo 2023-09-16 16:16:39 +08:00
WithdewHua
4ab60423c1 feat: query the media server (plex) by original title 2023-09-16 15:48:22 +08:00
jxxghp
1ea80e6870 Update README.md 2023-09-16 10:58:33 +08:00
jxxghp
6f1d4754be Merge pull request #600 from DDS-Derek/main 2023-09-16 08:28:56 +08:00
DDSRem
52288d98c0 bump: action jobs version
docker/metadata-action@v5
docker/setup-qemu-action@v3
docker/setup-buildx-action@v3
docker/login-action@v3
docker/build-push-action@v5

Co-Authored-By: DDSDerek <108336573+DDSDerek@users.noreply.github.com>
Co-Authored-By: DDSTomo <142158217+ddstomo@users.noreply.github.com>
2023-09-15 20:18:28 +08:00
jxxghp
d1368c4f84 fix bug 2023-09-15 17:28:35 +08:00
jxxghp
4367c53bb0 fix bug 2023-09-15 17:24:22 +08:00
jxxghp
d87f69da35 fix azusa 2023-09-15 16:07:01 +08:00
jxxghp
5ece44090e fix 2023-09-15 15:38:30 +08:00
jxxghp
01be4f9549 need test 2023-09-15 15:37:05 +08:00
jxxghp
94077917f3 Merge remote-tracking branch 'origin/main' 2023-09-15 15:22:19 +08:00
jxxghp
8af981738c fix README.md 2023-09-15 15:22:11 +08:00
jxxghp
4d7982803e Merge pull request #596 from thsrite/main
fix: add excluded paths to the cross-seed plugin
2023-09-15 15:15:55 +08:00
thsrite
a1bba6da4a fix: add excluded paths to the cross-seed plugin 2023-09-15 15:08:15 +08:00
jxxghp
4eb3e16b37 v1.2.1
- Fixed the menu bar requiring two taps on iOS
- Fixed repeated downloads when upgrading movie versions
- Added site support for ptlsp and azusa
- Added ptlsp to the supported authentication sites
- Emulated sign-in now checks the sign-in status
2023-09-15 15:04:18 +08:00
jxxghp
1f0b40fe05 support ptlsp 2023-09-15 14:29:15 +08:00
jxxghp
29e92a17e7 support azusa 2023-09-15 14:01:12 +08:00
jxxghp
8cc4469282 fix #591 2023-09-15 10:59:46 +08:00
jxxghp
a5e66071ba support PTLSP 2023-09-15 10:46:54 +08:00
jxxghp
fb4e817993 fix #594 2023-09-15 10:38:15 +08:00
jxxghp
8f26110e65 Merge pull request #590 from thsrite/main 2023-09-14 16:19:46 +08:00
thsrite
9f65a088c0 fix: add a channel field to plugin interactive commands 2023-09-14 16:09:56 +08:00
jxxghp
15c15388b6 Merge pull request #589 from thsrite/main 2023-09-14 15:34:49 +08:00
thsrite
950a43e001 fix: daily sign-in record storage bug 2023-09-14 15:28:06 +08:00
jxxghp
9a28f8c365 Merge pull request #588 from thsrite/main 2023-09-14 15:18:43 +08:00
thsrite
32cb96fc44 fix: emulated sign-in checks whether already signed in 2023-09-14 15:17:30 +08:00
jxxghp
f7982e3e43 fix build 2023-09-14 11:36:37 +08:00
jxxghp
d13602827c fix build 2023-09-14 11:30:22 +08:00
jxxghp
182adc77b6 v1.2.0
- Fixed trackers being lost when moving torrents from QB 4.5+ to TR 3.0
- Added site support for byr, hdcity and okpt
- In RSS subscription mode, expired feeds are detected automatically and the link address is updated
- The custom subscription plugin supports magnet-link downloads
- Added a config format for custom recognition words: replaced word => replacement && leading anchor <> trailing anchor >> episode offset
- The media library sync-delete plugin supports handling multi-version files
2023-09-14 11:17:10 +08:00
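The custom recognition word format introduced in this release (`replaced word => replacement && leading anchor <> trailing anchor >> episode offset`) can be sketched as a small parser. This is an illustrative guess at the rule's structure, not MoviePilot's actual implementation; the function name and returned field names are hypothetical.

```python
import re

def parse_custom_word(rule: str) -> dict:
    """Parse a rule of the form:
    'replaced => replacement && leading <> trailing >> episode offset'
    The '&& ... <> ... >> ...' part is optional.
    Hypothetical helper; the real project may differ.
    """
    result = {"replaced": None, "replacement": None,
              "front": None, "back": None, "offset": None}
    if "&&" in rule:
        head, tail = [p.strip() for p in rule.split("&&", 1)]
    else:
        head, tail = rule.strip(), None
    if "=>" in head:
        replaced, replacement = [p.strip() for p in head.split("=>", 1)]
        result["replaced"], result["replacement"] = replaced, replacement
    if tail:
        # leading anchor <> trailing anchor >> episode offset
        m = re.match(r"(.*?)<>(.*?)>>(.*)", tail)
        if m:
            result["front"] = m.group(1).strip()
            result["back"] = m.group(2).strip()
            result["offset"] = m.group(3).strip()
    return result
```

For example, `parse_custom_word("Old.Name => New.Name && S01 <> 1080p >> EP+1")` would split the rule into the replacement pair plus the anchored episode-offset part.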
jxxghp
ef4cdb41c8 fix release 2023-09-14 10:07:20 +08:00
jxxghp
9a60121914 fix #579 change the module used for torrent transfers 2023-09-14 09:46:51 +08:00
jxxghp
6fb0c92183 fix message content 2023-09-14 09:18:11 +08:00
jxxghp
96c4e0ba2f Merge remote-tracking branch 'origin/main' 2023-09-14 09:09:43 +08:00
jxxghp
7afe82480c fix brush 2023-09-14 09:08:57 +08:00
jxxghp
c37c8e7318 Merge pull request #583 from thsrite/main 2023-09-13 21:41:48 +08:00
thsrite
3d10ca4c8b fix: sign-in count 2023-09-13 20:19:06 +08:00
jxxghp
4e515ec442 fix #516 support magnet-link downloads 2023-09-13 17:56:57 +08:00
jxxghp
5eb37b5d28 fix sites 2023-09-13 16:58:05 +08:00
jxxghp
7f95bab0d5 fix #578 2023-09-13 16:12:57 +08:00
jxxghp
3fc267bcfa Merge pull request #578 from thsrite/main
fix: subscription refresh only processes the sites selected in the subscription (if none are selected, refresh all configured subscription sites)
2023-09-13 15:58:04 +08:00
jxxghp
648f0b6ec1 add byr, hdcity, okpt 2023-09-13 15:53:58 +08:00
thsrite
be3c3ef37f fix: subscription refresh sites 2023-09-13 15:52:32 +08:00
jxxghp
a47f382c21 fix download message 2023-09-13 15:18:23 +08:00
jxxghp
61c59b4405 fix #572 2023-09-13 14:58:33 +08:00
jxxghp
8ee391688d Merge pull request #575 from thsrite/main 2023-09-13 13:26:09 +08:00
thsrite
68c7bf0a96 Revert "fix"
This reverts commit 7c3c6ee999.
2023-09-13 13:10:11 +08:00
thsrite
6dd517a490 fix: spaces in custom recognition words 2023-09-13 12:45:44 +08:00
jxxghp
9baa5e1d35 Merge pull request #574 from thsrite/main 2023-09-13 12:36:13 +08:00
thsrite
e675e4358a fix: sync-delete plugin 2023-09-13 12:35:09 +08:00
jxxghp
c9a6081a57 fix log 2023-09-13 12:30:45 +08:00
jxxghp
2de20f601b fix: switch position 2023-09-13 12:23:32 +08:00
jxxghp
79c708c30e Merge pull request #564 from thsrite/main 2023-09-13 12:04:09 +08:00
thsrite
f38defb515 Revert "fix: delete plugin config when uninstalling a plugin"
This reverts commit dd7803c90a.
2023-09-13 11:58:37 +08:00
thsrite
ac11d4eb30 Revert "fix dd7803c9"
This reverts commit 08560fc7c3.
2023-09-13 11:58:30 +08:00
thsrite
221c31f481 fix: add custom recognition word rule: replaced word => replacement && offset-before <> offset-after >> episode offset 2023-09-13 10:37:13 +08:00
thsrite
7c3c6ee999 fix 2023-09-13 09:52:05 +08:00
thsrite
08560fc7c3 fix dd7803c9 2023-09-13 09:29:52 +08:00
thsrite
4659e7367f fix d8afa339 function parameter name 2023-09-13 09:24:16 +08:00
thsrite
2fa11a4796 Merge remote-tracking branch 'origin/main' 2023-09-13 09:22:29 +08:00
thsrite
01a153902e Revert "fix: add a recognition button to the custom subscription plugin"
This reverts commit 1b2f09b95f.
2023-09-13 09:21:56 +08:00
jxxghp
5eb65046f0 Merge pull request #571 from WithdewHua/media_exists 2023-09-13 06:36:22 +08:00
WithdewHua
bb64e57f7c fix: verify the TMDB ID when checking whether media files exist 2023-09-12 23:03:15 +08:00
thsrite
0cb75d689c fix: query transfer records by type and tmdbid 2023-09-12 15:28:21 +08:00
thsrite
d7310ade86 fix: sync-delete plugin handles multiple resolutions 2023-09-12 15:21:34 +08:00
thsrite
dd7803c90a fix: delete plugin config when uninstalling a plugin 2023-09-12 14:57:31 +08:00
thsrite
d8afa339de fix: library scraping plugin ignores the SCRAP_METADATA variable when force-scraping is enabled 2023-09-12 13:29:12 +08:00
thsrite
1b2f09b95f fix: add a recognition button to the custom subscription plugin 2023-09-12 12:45:39 +08:00
jxxghp
0414854832 Merge pull request #562 from thsrite/main 2023-09-12 11:47:57 +08:00
thsrite
9e6a7be5b1 fix #537 天空 cross-seeding failure 2023-09-12 11:45:57 +08:00
thsrite
e3c1407b62 fix: 憨憨 user levels 2023-09-12 11:26:45 +08:00
thsrite
7a9ee954c5 fix: sub regex 2023-09-12 11:11:36 +08:00
thsrite
99a06dcba0 fix: when the rss feed expires, try to keep the original config and generate a new rss address 2023-09-12 10:09:17 +08:00
jxxghp
bb8fc14bc6 v1.1.9
- Fixed media misidentification in some cases
- Added site support for dajiao and ptcafe
- Supports RSS subscription mode: RSS mode fetches the RSS link automatically (it can also be maintained manually), puts less pressure on sites, allows a configurable refresh interval, runs around the clock, and can be toggled with a switch.
- Removed the built-in custom subscription feature; use RSS subscription mode or the custom subscription plugin instead.
- Manual organizing supports searching for a TMDBID by name.
2023-09-12 08:07:03 +08:00
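The RSS subscription mode described in these release notes boils down to periodically pulling a site's RSS feed and extracting torrent entries to match against subscriptions. A minimal sketch of that extraction step using only the standard library is shown below; the function name is hypothetical and this is not MoviePilot's actual code, which has to handle many site-specific feed variants.

```python
import xml.etree.ElementTree as ET

def parse_rss_items(rss_xml: str) -> list:
    """Extract (title, torrent-url) pairs from an RSS 2.0 feed.
    Prefers the <enclosure> URL (the .torrent link on most PT sites)
    and falls back to <link>. Illustrative sketch only.
    """
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        enclosure = item.find("enclosure")
        torrent = enclosure.get("url") if enclosure is not None else link
        items.append((title, torrent))
    return items
```

The refresh loop would then compare each extracted title against active subscriptions and hand matches to the downloader.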
jxxghp
50d9dcf17b fix #556 2023-09-12 07:36:29 +08:00
jxxghp
141b99d134 fix #556 2023-09-12 07:22:06 +08:00
jxxghp
18457a4de7 fix #555 2023-09-11 21:43:30 +08:00
jxxghp
a343d736ae fix #550 2023-09-11 21:25:56 +08:00
jxxghp
df5c364185 fix #550 2023-09-11 21:14:49 +08:00
jxxghp
edcec114ae fix bug 2023-09-11 19:54:38 +08:00
jxxghp
605a7486b3 fix log 2023-09-11 19:10:51 +08:00
jxxghp
efe89f59b9 feat: support dajiao and ptcafe 2023-09-11 18:10:50 +08:00
jxxghp
fdd4aef3d3 feat: integrate RSS subscription mode 2023-09-11 17:47:51 +08:00
jxxghp
08aef1f47f fix rsslink helper 2023-09-11 17:13:26 +08:00
jxxghp
c45f5e6ac4 Merge pull request #549 from thsrite/main
feat: auto-generate default site rss addresses
2023-09-11 16:35:36 +08:00
thsrite
f239cede07 fix: speedlimit returns when not enabled 2023-09-11 16:14:15 +08:00
thsrite
b2eb952cd0 fix: use the proxy when auto-fetching rss 2023-09-11 13:15:15 +08:00
thsrite
3a2fba0422 fix: auto-fetched rss data 2023-09-11 12:43:27 +08:00
thsrite
1034caa9fd fix: auto-fetch rss for ttg, zhuque, etc. 2023-09-11 12:26:23 +08:00
thsrite
8b243e23ab feat: auto-generate default rss addresses 2023-09-11 11:39:39 +08:00
jxxghp
1f76dc1e2a Merge pull request #540 from thsrite/main 2023-09-10 20:22:25 +08:00
thsrite
ea5c2fb4cf fix: speed-limit plugin sent a limit-lifted message after every restart 2023-09-10 20:08:09 +08:00
thsrite
e50b56d542 fix: paged downloads via interactive commands 2023-09-10 19:51:20 +08:00
jxxghp
2206fafda9 Merge pull request #539 from thsrite/main 2023-09-10 18:45:33 +08:00
thsrite
345b74d881 fix #538 2023-09-10 18:41:04 +08:00
jxxghp
d231d75446 v1.1.8
- Fixed Jellyfin/Plex webhook notification messages
- Fixed blocked words not taking effect during manual organizing
- Improved year matching for TV episodes
- Improved rate control of site torrent indexing
- Added a reminder when a site's share ratio is low
- Added a remote interactive command to restart the system
2023-09-10 17:43:04 +08:00
jxxghp
afb5874350 fix #536 2023-09-10 17:35:58 +08:00
jxxghp
1bd7b5c77e fix jellyfin webhook 2023-09-10 17:07:24 +08:00
jxxghp
ba41de61cb fix plex webhook 2023-09-10 12:57:51 +08:00
jxxghp
ae40d32115 fix bug 2023-09-10 09:15:12 +08:00
jxxghp
3fe4c9467e fix 2023-09-10 09:06:00 +08:00
jxxghp
b89512cc33 fix #526 2023-09-10 09:02:46 +08:00
jxxghp
f3b12bed20 feat: low share-ratio warning notification 2023-09-10 08:54:33 +08:00
jxxghp
08c7fff5ab fix README.md 2023-09-10 08:32:52 +08:00
jxxghp
9c20d1a270 Merge pull request #530 from thsrite/main
feat: fill in the years for all seasons of a series
2023-09-09 22:06:58 +08:00
thsrite
b7b1aee878 fix 2023-09-09 22:03:51 +08:00
jxxghp
f998b39152 fix: deleted torrent count could not be calculated 2023-09-09 21:58:49 +08:00
jxxghp
ca01db31a9 fix LIBRARY_PATH 2023-09-09 21:41:55 +08:00
thsrite
a0b8cc6719 feat: fill in the years for all seasons of a series 2023-09-09 21:24:07 +08:00
jxxghp
66b91abe90 fix sites.cpython-311-darwin 2023-09-09 20:58:52 +08:00
jxxghp
9b17d55ac0 fix db session 2023-09-09 20:56:37 +08:00
jxxghp
a7a0889867 Merge pull request #528 from thsrite/main 2023-09-09 20:17:09 +08:00
thsrite
af6cf306c8 fix: restart via interactive command 2023-09-09 20:01:43 +08:00
jxxghp
20f35854f9 fix update 2023-09-09 19:43:02 +08:00
jxxghp
e5165c8fea fix plugin db session 2023-09-09 19:41:06 +08:00
jxxghp
0e36d003c0 fix db session 2023-09-09 19:26:56 +08:00
jxxghp
ccc249f29d Merge pull request #527 from developer-wlj/wlj0909 2023-09-09 18:31:27 +08:00
mayun110
f4edb32886 fix: directory retrieval issue with directory monitoring on Windows 2023-09-09 18:11:50 +08:00
jxxghp
475a84bfa6 Merge pull request #525 from thsrite/main 2023-09-09 17:53:29 +08:00
mayun110
3914ff4dd6 fix: directory retrieval issue on Windows 2023-09-09 17:49:40 +08:00
jxxghp
5bcbacf3a5 feat: globally shared torrents cache 2023-09-09 17:42:31 +08:00
jxxghp
27238ac467 fix brushflow plugin 2023-09-09 16:49:15 +08:00
thsrite
019d40c17a fix: cross-seed plugin excludes deleted sites 2023-09-09 16:40:09 +08:00
jxxghp
fa5b92214f fix ssd 2023-09-09 16:24:53 +08:00
jxxghp
32a5f67e72 Merge pull request #524 from thsrite/main 2023-09-09 15:56:02 +08:00
thsrite
d6e9c14183 fix: qb torrent deletion 2023-09-09 15:47:51 +08:00
jxxghp
87325d5bbd Merge pull request #523 from thsrite/main 2023-09-09 15:07:53 +08:00
thsrite
67ead871c1 fix: remove the clear-cache button 2023-09-09 15:06:44 +08:00
jxxghp
691beb1186 Merge pull request #522 from DDS-Derek/main 2023-09-09 14:57:02 +08:00
jxxghp
b30d3c7dac Merge pull request #521 from WithdewHua/rsssubscribe 2023-09-09 14:55:41 +08:00
DDSRem
5e048f0150 feat: improve container id retrieval 2023-09-09 14:18:10 +08:00
WithdewHua
cb2cfe9d85 fix: turn off the clear-cache switch 2023-09-09 14:13:17 +08:00
jxxghp
482fca9b8c Merge pull request #520 from DDS-Derek/main
fix: failed to obtain container id
2023-09-09 12:08:50 +08:00
DDSRem
42511b95d8 fix: failed to obtain container id 2023-09-09 12:03:48 +08:00
jxxghp
b18e901fbd fix plugin ui 2023-09-09 11:37:34 +08:00
jxxghp
a30e3f49a3 v1.1.7
- Fixed file transfers being unable to overwrite
- Fixed filter rules only being deletable from the end
- Improved built-in restart to support non-root environments (requires re-pulling the image)
- File management supports sorting
- Improved plugin page interaction to show plugin data first
2023-09-09 11:08:45 +08:00
jxxghp
65d202e636 fix README.md 2023-09-09 10:51:59 +08:00
jxxghp
4373c0596b Merge pull request #518 from DDS-Derek/main
fix: port conflict
2023-09-09 10:45:54 +08:00
DDSRem
0136d9fe06 fix: port conflict 2023-09-09 10:44:05 +08:00
jxxghp
933c6d838c fix #497 2023-09-09 08:27:40 +08:00
jxxghp
7ce656148f fix #508 2023-09-09 08:19:17 +08:00
jxxghp
c05ffed6df fix #514 file management supports sorting 2023-09-09 08:00:17 +08:00
jxxghp
6770ba3a35 feat: file management API sorting 2023-09-08 22:48:53 +08:00
jxxghp
3b73dfcdc6 fix: unable to overwrite during file transfer 2023-09-08 22:36:31 +08:00
jxxghp
100ff97017 Merge pull request #515 from thsrite/main 2023-09-08 21:49:49 +08:00
thsrite
4fe96178ee fix 2023-09-08 21:44:32 +08:00
thsrite
86d484fac0 fix 2023-09-08 21:41:30 +08:00
thsrite
db23b62fd1 fix 2023-09-08 21:31:36 +08:00
jxxghp
b84c8fd7f1 Merge pull request #512 from thsrite/main 2023-09-08 21:29:46 +08:00
jxxghp
c9f6c75069 Merge pull request #510 from DDS-Derek/main 2023-09-08 21:26:00 +08:00
thsrite
846459c244 fix wechat token 2023-09-08 21:21:43 +08:00
DDSRem
c4898d04aa docs: update 2023-09-08 20:38:07 +08:00
DDSRem
c8bc6a4618 fix: restart update 2023-09-08 20:33:23 +08:00
DDSRem
55dce26cb8 test: restart 2023-09-08 19:55:03 +08:00
DDSRem
ae3b73a73f feat: improve restart 2023-09-08 19:49:10 +08:00
jxxghp
091df01b7c fix plugin 2023-09-08 16:48:13 +08:00
jxxghp
20c4c7d6e6 add release time 2023-09-08 16:36:02 +08:00
jxxghp
eb1e045d8f Merge remote-tracking branch 'origin/main' 2023-09-08 15:38:38 +08:00
jxxghp
678638e9f1 feat: plugin API 2023-09-08 15:38:32 +08:00
jxxghp
d8b78d3051 Merge pull request #505 from thsrite/main
fix: clear-cache button in the message-forwarding plugin
2023-09-08 13:15:45 +08:00
thsrite
eaf0d17118 fix: clear-cache button in the message-forwarding plugin 2023-09-08 13:13:11 +08:00
jxxghp
81bcfef6ec Merge pull request #504 from thsrite/main 2023-09-08 13:11:13 +08:00
thsrite
0997691b23 fix time format 2023-09-08 13:06:40 +08:00
jxxghp
d1f9647a63 Merge pull request #503 from thsrite/main
feat: sign-in plugin supports configuring sign-in and login sites separately
2023-09-08 12:26:15 +08:00
thsrite
64a04ba8ed fix 2023-09-08 12:24:27 +08:00
jxxghp
726c130f1f Merge remote-tracking branch 'origin/main' 2023-09-08 12:23:29 +08:00
jxxghp
215b56b9f2 feat: print jellyfin/plex webhook payloads 2023-09-08 12:23:19 +08:00
thsrite
516bd8bc30 Merge remote-tracking branch 'origin/main' 2023-09-08 12:21:38 +08:00
thsrite
8bc6e04665 feat: sign-in plugin supports configuring sign-in and login sites separately 2023-09-08 12:21:29 +08:00
jxxghp
94057cd5f1 Merge pull request #499 from thsrite/main 2023-09-08 11:24:49 +08:00
thsrite
2e80586436 Merge branch 'jxxghp:main' into main 2023-09-08 11:22:42 +08:00
thsrite
faa6d7dadd fix bug 2023-09-08 11:20:25 +08:00
jxxghp
071c81d52c v1.1.6
- Fixed a subscription log error when no media server is configured
- The library scraping plugin supports overwriting existing metadata and images
- Added a switch (default on) to keep existing scraped names, avoiding inconsistent names after organizing when TMDB info changes
- Season posters now prefer TMDB images when scraping
- Added a prompt when the built-in restart fails
2023-09-08 11:01:37 +08:00
jxxghp
52d4feb583 Update README.md 2023-09-08 10:45:55 +08:00
jxxghp
584e05e63e fix ui 2023-09-08 10:34:05 +08:00
jxxghp
061ff322ab fix bug 2023-09-08 10:26:00 +08:00
jxxghp
a2bcf8df9a Merge remote-tracking branch 'origin/main' 2023-09-08 10:03:23 +08:00
jxxghp
6c85040eb6 fix plugin 2023-09-08 10:03:14 +08:00
jxxghp
2e5d892120 fix plugin 2023-09-08 09:44:51 +08:00
jxxghp
43d108aea9 Merge pull request #498 from thsrite/main 2023-09-08 09:44:07 +08:00
thsrite
c46b1dd116 fix: message-forwarding plugin bug 2023-09-08 09:22:59 +08:00
jxxghp
d3fac56e9a fix 2023-09-08 08:05:54 +08:00
jxxghp
b3f5b87b02 fix 2023-09-08 08:04:09 +08:00
jxxghp
03abdf9cb4 fix 2023-09-08 07:52:05 +08:00
jxxghp
42bc354e06 fix 2023-09-08 07:39:08 +08:00
jxxghp
02e81a79b2 fix 2023-09-07 23:11:08 +08:00
jxxghp
9fa4b8dfbe fix 2023-09-07 23:04:35 +08:00
jxxghp
366f59623a fix 2023-09-07 23:00:51 +08:00
jxxghp
d4c28500b7 fix plugin 2023-09-07 22:04:07 +08:00
jxxghp
5780344c43 fix 2023-09-07 20:19:03 +08:00
jxxghp
18970efc1a add index 2023-09-07 18:23:43 +08:00
jxxghp
5725584176 add index 2023-09-07 18:23:30 +08:00
jxxghp
4e26168ab5 fix plugin 2023-09-07 17:40:09 +08:00
jxxghp
f694dee71d fix 2023-09-07 16:16:04 +08:00
jxxghp
a9db0f6bbf fix 2023-09-07 16:05:48 +08:00
jxxghp
7efcde89b9 fix 2023-09-07 14:59:32 +08:00
jxxghp
1c07b306c3 Merge pull request #489 from thsrite/main
fix: directory-monitoring already-processed logic && feat: new switch for whether media already in the library follows TMDB info changes; when off, keep the existing library name
2023-09-07 14:29:08 +08:00
jxxghp
6c59a5ebb0 Merge branch 'main' into main 2023-09-07 14:29:01 +08:00
thsrite
4c7321a738 fix 2023-09-07 13:57:27 +08:00
jxxghp
f42fd023bb fix #490 2023-09-07 13:39:08 +08:00
jxxghp
9b8a4ebdd4 fix 2023-09-07 12:56:39 +08:00
jxxghp
443e2d8104 fix: reduce recognition calls during scraping 2023-09-07 12:51:49 +08:00
jxxghp
2c61d439ca feat: library scraping supports overwriting
fix: type declarations
2023-09-07 12:35:35 +08:00
thsrite
e01268222c fix 2023-09-07 12:27:04 +08:00
thsrite
27ff77b504 fix type 2023-09-07 12:25:01 +08:00
thsrite
bf8893d71b fix: re-scraping bug for the folder containing a file 2023-09-07 11:16:11 +08:00
thsrite
54b09a17c2 fix 2023-09-07 11:12:16 +08:00
thsrite
b01621049b feat: new switch for whether media already in the library follows TMDB info changes; when off, keep the existing library name 2023-09-07 10:55:01 +08:00
thsrite
e5dc40e3c1 fix: re-fetch the token and resend the request after it expires 2023-09-07 10:24:21 +08:00
thsrite
44d4bcdd19 fix: directory-monitoring already-processed logic 2023-09-07 10:16:44 +08:00
jxxghp
b899b23d04 fix 2023-09-07 08:37:57 +08:00
jxxghp
fa23012adb fix #486 prefer TMDB images for seasons 2023-09-07 08:03:05 +08:00
jxxghp
d836b385ae fix 2023-09-07 07:20:10 +08:00
jxxghp
15a0bc6c12 fix: restart failure prompt 2023-09-06 21:48:09 +08:00
jxxghp
22791e361d Update README.md 2023-09-06 21:41:23 +08:00
jxxghp
47b7dade5d Merge pull request #484 from thsrite/main 2023-09-06 21:23:32 +08:00
thsrite
c57d13afcc fix: improve sync-delete plugin msg 2023-09-06 21:21:00 +08:00
jxxghp
8db1c2952c Merge remote-tracking branch 'origin/main' 2023-09-06 21:15:21 +08:00
jxxghp
28c19bc4e3 fix: improve file-organizing progress messages 2023-09-06 21:15:10 +08:00
jxxghp
fbef1735b0 Merge pull request #482 from thsrite/main
fix bug
2023-09-06 20:19:44 +08:00
thsrite
9869af992b fix bug 2023-09-06 20:18:43 +08:00
jxxghp
b6cb241b8a Merge pull request #480 from WPF0414/main
fix: speed display issue in rate-limit notifications
2023-09-06 20:09:02 +08:00
jxxghp
7edf8e7c30 Merge pull request #481 from thsrite/main
fix: cross-seed deletion bug
2023-09-06 20:07:42 +08:00
thsrite
452161f1b8 fix: cross-seed deletion bug 2023-09-06 20:05:29 +08:00
jxxghp
f75abb27b6 v1.1.5
- Fixed only the first file being scraped during batch organizing
- Fixed files being processed repeatedly when multiple download tasks share one download directory
- Supports restarting from the WEB page (requires mapping the `/var/run/docker.sock` file into the container)
2023-09-06 19:54:23 +08:00
wangpengfei
30311e8e56 fix: rate-limit notification
Fix the wrong notification sent when rate limiting
2023-09-06 19:50:18 +08:00
jxxghp
adff3b22e9 Merge pull request #476 from thsrite/main
fix: improvements to the media library sync-delete plugin
2023-09-06 16:55:10 +08:00
thsrite
013c0dea3b fix: NAStool sync plugin does not process download_hash 2023-09-06 16:22:32 +08:00
jxxghp
c593c3ba16 fix #461 do not reprocess files already transferred successfully 2023-09-06 16:12:40 +08:00
jxxghp
61b74735de fix #464 2023-09-06 16:00:42 +08:00
thsrite
952cae50e2 fix: torrent-deletion logic in the sync-delete plugin 2023-09-06 15:57:46 +08:00
thsrite
7a9f89e86c fix: remove the sync-delete plugin's interactive command 2023-09-06 15:34:35 +08:00
jxxghp
f14d8bec1b fix api 2023-09-06 15:29:52 +08:00
thsrite
697d5a815b fix: prevent accidental deletion when titles do not match 2023-09-06 15:07:06 +08:00
thsrite
cfeaa2674d fix: improvements to the media library sync-delete plugin 2023-09-06 14:26:24 +08:00
jxxghp
08f046f059 fix #465 only one file scraped during batch transfer 2023-09-06 13:04:18 +08:00
jxxghp
a66912f41a fix #465 only one file scraped during batch transfer 2023-09-06 13:01:13 +08:00
jxxghp
f244728a96 Merge remote-tracking branch 'origin/main' 2023-09-06 12:56:04 +08:00
jxxghp
576ac08a05 feat: built-in restart 2023-09-06 12:55:48 +08:00
jxxghp
e874b3f294 Merge pull request #474 from thsrite/main
fix: add progress to NAStool record sync…
2023-09-06 11:34:13 +08:00
thsrite
90ff0fc793 fix: add progress to NAStool record sync… 2023-09-06 11:32:34 +08:00
jxxghp
259e8fc2e1 fix #463 2023-09-06 11:29:47 +08:00
jxxghp
5c0be93913 Merge pull request #471 from thsrite/main 2023-09-06 10:47:21 +08:00
thsrite
e84a5c74f6 fix: sync-delete plugin prevents duplicate consumption 2023-09-06 09:37:02 +08:00
jxxghp
5145527d0e fix #456 2023-09-06 08:34:04 +08:00
jxxghp
e3f7f873c0 Merge pull request #462 from WPF0414/main 2023-09-05 22:38:21 +08:00
wangpengfei
84a2db2247 Update __init__.py
Fix a bug in ratio-based limiting
2023-09-05 22:35:39 +08:00
jxxghp
4902d5ebed feat: local filesystem duplicate detection 2023-09-05 20:32:38 +08:00
jxxghp
243391ee30 fix release 2023-09-05 19:57:24 +08:00
jxxghp
c424de65b3 - Fixed site statistics not sending messages in some cases
- Fixed playback speed limiting not taking effect on TR
- Improved the downloader file-sync plugin
- Improved database exception handling
- The media library sync-delete plugin supports the Emby Webhook method.
- WeChat now automatically adds an interactive menu
- Added a new UI theme color scheme
2023-09-05 19:52:46 +08:00
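The "improved database exception handling" above (together with the later commit that unifies db transaction rollback) amounts to a pattern worth spelling out: commit a session on success, roll it back on any exception, and always close it. A minimal sketch of that pattern follows; MoviePilot itself uses SQLAlchemy sessions, while this sketch uses the stdlib `sqlite3` for brevity, and `db_session` is a hypothetical name.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def db_session(path: str):
    """Yield a connection; commit on success, roll back on any
    exception, and close the connection either way. Sketch of
    unified transaction-rollback handling, not the project's code.
    """
    conn = sqlite3.connect(path)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()
```

Centralizing commit/rollback like this keeps individual query sites from forgetting a `Commit` (the subject of a later fix) or leaving the database locked after an error.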
jxxghp
2077eede8c Merge pull request #459 from thsrite/main 2023-09-05 19:48:27 +08:00
thsrite
876d1e01b4 fix: strip in the sign-in plugin 2023-09-05 19:33:27 +08:00
thsrite
dec022fd89 fix: sync-delete plugin 2023-09-05 19:30:46 +08:00
jxxghp
83829cbe27 Merge pull request #458 from thsrite/main 2023-09-05 19:08:41 +08:00
thsrite
8249f9356f fix: sync-delete plugin adapted to the emby webhook method! 2023-09-05 19:07:07 +08:00
jxxghp
b5fc6cdd1e fix: unified handling of db transaction rollbacks 2023-09-05 18:19:02 +08:00
jxxghp
51b959cff8 Merge remote-tracking branch 'origin/main' 2023-09-05 17:11:09 +08:00
jxxghp
36880a8b7d fix: download file records only register the selected files 2023-09-05 17:11:02 +08:00
jxxghp
380cc7552f Merge pull request #453 from thsrite/main
fix: path replacement in the sync plugin
2023-09-05 16:57:04 +08:00
thsrite
0f1c8cb226 Merge branch 'jxxghp:main' into main 2023-09-05 16:55:08 +08:00
thsrite
7435fb0c10 fix: tr file sync filters out files not yet downloaded 2023-09-05 16:52:02 +08:00
jxxghp
1a03981463 fix #193 2023-09-05 16:43:34 +08:00
jxxghp
4cb7a488a9 fix #193 2023-09-05 16:43:02 +08:00
jxxghp
c69762d4c9 fix #448 TR speed limit not taking effect 2023-09-05 16:35:15 +08:00
thsrite
03d9bf6d05 fix: path replacement 2023-09-05 16:20:42 +08:00
jxxghp
6a08b4ba7f fix: increase the DB connection wait time to avoid "database locked" errors. 2023-09-05 16:18:04 +08:00
jxxghp
99218515ea fix: some database operations were missing a Commit 2023-09-05 16:12:43 +08:00
jxxghp
c3a0a839c3 Merge pull request #450 from thsrite/main
fix: interactive command replies return via the original channel
2023-09-05 13:39:14 +08:00
thsrite
351513bcbc fix: interactive command replies return via the original channel 2023-09-05 13:19:25 +08:00
jxxghp
ed5dec1b0f feat: torrent refresh rate control 2023-09-05 12:39:01 +08:00
jxxghp
c62b29edc4 fix: WeChat menu 2023-09-05 11:54:16 +08:00
jxxghp
c224a7c07b fix bug 2023-09-05 11:52:46 +08:00
jxxghp
a7b244a4b4 fix README.md 2023-09-05 11:48:36 +08:00
jxxghp
b564f70c63 feat: auto-register the WeChat menu 2023-09-05 11:33:42 +08:00
jxxghp
551f32491d fix: WeChat menu length 2023-09-05 11:23:21 +08:00
jxxghp
2826b9411d fix bug 2023-09-05 11:20:06 +08:00
jxxghp
4bf9045784 fix bug 2023-09-05 11:01:12 +08:00
jxxghp
114788e3ed feat: auto-register the WeChat menu 2023-09-05 10:58:19 +08:00
jxxghp
bb729bf976 fix #442 2023-09-05 08:39:23 +08:00
jxxghp
bedc885232 Merge pull request #440 from amtoaer/memory_percent 2023-09-04 23:18:46 +08:00
amtoaer
21e39611bc feat: memory usage chart uses percentages 2023-09-04 23:07:39 +08:00
jxxghp
73e7e547ea Merge pull request #437 from thsrite/main 2023-09-04 22:22:40 +08:00
thsrite
bc25d71b88 fix #407 2023-09-04 22:21:03 +08:00
jxxghp
ff8a9dc8c7 v1.1.3
- Fixed missing records when re-organizing from history
- Improved database session handling
- Improved menu permissions for regular users
- Improved file-management UI details
- Adjusted the dashboard contents
- Added a filter-rule test feature to shortcuts
- Supports retrying when scraped image downloads fail
- The playback speed-limit plugin supports manually configuring unlimited address ranges
2023-09-04 21:21:04 +08:00
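The "retry when scraped image downloads fail" item above is a standard retry-with-backoff pattern. A sketch is given below; the decorator name, parameters and defaults are assumptions for illustration, not MoviePilot's actual retry settings.

```python
import time
from functools import wraps

def retry(times: int = 3, delay: float = 1.0, backoff: float = 2.0):
    """Retry a flaky call (e.g. an image download) up to `times`
    attempts, sleeping `delay` seconds and multiplying by `backoff`
    between attempts. Re-raises the last error if all attempts fail.
    Illustrative sketch only.
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, times + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == times:
                        raise
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator
```

Wrapping the image-download helper in such a decorator lets transient network failures recover without touching the scraping logic itself.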
jxxghp
4ee7daa673 Merge remote-tracking branch 'origin/main' 2023-09-04 20:40:28 +08:00
jxxghp
aca1673ee3 fix db session 2023-09-04 20:40:17 +08:00
jxxghp
87ece98471 Merge pull request #435 from thsrite/main 2023-09-04 20:24:39 +08:00
thsrite
4c16cd7bfb fix b7d2168f 2023-09-04 20:20:42 +08:00
jxxghp
712af24a72 fix 2023-09-04 20:13:16 +08:00
jxxghp
b7d2168f8e fix #434 2023-09-04 19:30:06 +08:00
jxxghp
65ad7123f9 fix #419 2023-09-04 18:08:11 +08:00
jxxghp
ce42e48b37 fix login api 2023-09-04 17:48:44 +08:00
jxxghp
45b53da056 Merge pull request #428 from thsrite/main 2023-09-04 11:47:52 +08:00
thsrite
70f93e02e4 fix #365 speed-limit plugin adds unlimited address ranges; if unset, intranet IPs are unlimited by default 2023-09-04 11:40:19 +08:00
jxxghp
e4b63eacae add system apis 2023-09-04 11:07:30 +08:00
jxxghp
96f17e2bc2 fix #426 retry scraped image downloads 2023-09-04 10:14:05 +08:00
jxxghp
7eb77875f1 fix: reconnection mechanism 2023-09-03 21:59:18 +08:00
jxxghp
bbc27bbe19 Update README.md 2023-09-03 21:39:47 +08:00
jxxghp
3691b2a10b add filter-rule test API 2023-09-03 18:36:06 +08:00
jxxghp
08a3d02daf fix: adjust the deletion order when re-organizing 2023-09-03 17:37:06 +08:00
jxxghp
57abc7816b Merge pull request #420 from thsrite/main 2023-09-03 16:30:01 +08:00
thsrite
69c277777e fix: sign-in schedule restart bug 2023-09-03 16:23:38 +08:00
114 changed files with 5678 additions and 2157 deletions

View File

@@ -14,13 +14,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
uses: actions/checkout@v4
-
name: Release version
@@ -29,34 +23,42 @@ jobs:
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
tags: |
type=raw,value=${{ env.app_version }}
type=raw,value=latest
-
name: Set Up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Login DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
-
name: Build Image
uses: docker/build-push-action@v4
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
linux/arm64
linux/arm64/v8
push: true
build-args: |
MOVIEPILOT_VERSION=${{ env.app_version }}
tags: |
${{ secrets.DOCKER_USERNAME }}/moviepilot:latest
${{ secrets.DOCKER_USERNAME }}/moviepilot:${{ env.app_version }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@@ -14,7 +14,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Release Version
@@ -28,7 +28,7 @@ jobs:
uses: actions/create-release@latest
with:
tag_name: v${{ env.app_version }}
name: v${{ env.app_version }}
release_name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
draft: false
prerelease: false

View File

@@ -9,6 +9,7 @@ ENV LANG="C.UTF-8" \
UMASK=000 \
MOVIEPILOT_AUTO_UPDATE=true \
MOVIEPILOT_AUTO_UPDATE_DEV=false \
PORT=3001 \
NGINX_PORT=3000 \
CONFIG_DIR="/config" \
API_TOKEN="moviepilot" \
@@ -48,6 +49,7 @@ RUN apt-get update \
busybox \
dumb-init \
jq \
haproxy \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
@@ -58,11 +60,12 @@ RUN apt-get update \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} \
&& mkdir -p ${HOME} /var/lib/haproxy/server-state \
&& groupadd -r moviepilot -g 911 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 911 \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& python_ver=$(python3 -V | awk '{print $2}') \
@@ -82,5 +85,5 @@ RUN apt-get update \
/var/lib/apt/lists/* \
/var/tmp/*
EXPOSE 3000
VOLUME ["/config"]
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]

View File

@@ -51,20 +51,22 @@ docker pull jxxghp/moviepilot:latest
- **PGID**: `gid` of the user running the program, default `0`
- **UMASK**: umask, default `000`; consider setting `022`
- **MOVIEPILOT_AUTO_UPDATE**: update on restart, `true`/`false`, default `true`. **Note: if you hit network problems, configure `PROXY_HOST`; see the `PROXY_HOST` explanation below**
- **NGINX_PORT**: WEB service port, default `3000`; may be changed, but must not be `3001`
- **NGINX_PORT**: WEB service port, default `3000`; may be changed, but must not conflict with the API service port
- **PORT**: API service port, default `3001`; may be changed, but must not conflict with the WEB service port
- **SUPERUSER**: superuser name, default `admin`; log in to the admin UI with this user after installation
- **SUPERUSER_PASSWORD**: initial superuser password, default `password`; change it to a complex password
- **API_TOKEN**: API key, default `moviepilot`; append `?token=` with this value to media server Webhook, WeChat callback and similar URLs; change it to a complex string
- **PROXY_HOST**: network proxy (optional); needed for accessing themoviedb or for restart updates, format `http(s)://ip:port`
- **TMDB_API_DOMAIN**: TMDB API address, default `api.themoviedb.org`; can also be `api.tmdb.org` or another relay address, as long as it is reachable
- **DOWNLOAD_PATH**: download directory. **Note: the mapped paths of `moviepilot` and the `downloader` must be kept identical**, otherwise downloaded files cannot be transferred
- **DOWNLOAD_MOVIE_PATH**: movie download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH**: TV download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH**: anime download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_MOVIE_PATH**: movie download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH**: TV download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH**: anime download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY**: secondary download category switch, `true`/`false`, default `false`; when on, secondary category directories are created under the download directory according to `category.yaml`
- **DOWNLOAD_SUBTITLE**: download site subtitles, `true`/`false`, default `true`
- **REFRESH_MEDIASERVER**: refresh the media library on import, `true`/`false`, default `true`
- **SCRAP_METADATA**: scrape imported media files, `true`/`false`, default `true`
- **SCRAP_FOLLOW_TMDB**: whether media already in the library follows TMDB info changes, `true`/`false`, default `true`
- **TORRENT_TAG**: torrent tag, default `MOVIEPILOT`; when set, only downloads added by MoviePilot are processed; leave empty to process all tasks in the downloaders
- **LIBRARY_PATH**: media library directories; separate multiple directories with `,`
- **LIBRARY_MOVIE_NAME**: movie library directory name, default `电影`
@@ -79,6 +81,8 @@ docker pull jxxghp/moviepilot:latest
- **OCR_HOST**: OCR server address, format `http(s)://ip:port`, used to recognize site CAPTCHA images for auto-login, Cookie retrieval and so on; defaults to the built-in server `https://movie-pilot.org`; you can self-host with [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr)
- **USER_AGENT**: browser UA matching CookieCloud (optional); setting it improves the success rate of connecting to sites; it can be edited per site in the admin UI after syncing
- **AUTO_DOWNLOAD_USER**: user IDs allowed to auto-download from interactive search, separated by `,`
- **SUBSCRIBE_MODE**: subscription mode, `rss`/`spider`, default `spider`. `rss` mode matches subscriptions by periodically refreshing RSS feeds; the RSS address is fetched automatically (or maintained manually), puts less pressure on sites, allows a configurable refresh interval and runs around the clock, but subscription and download notifications cannot filter or show freeleech status. `rss` mode is recommended.
- **SUBSCRIBE_RSS_INTERVAL**: RSS subscription refresh interval in minutes, default `30`; must be at least 5 minutes.
- **SUBSCRIBE_SEARCH**: subscription search, `true`/`false`, default `false`. When on, a full search of all subscriptions runs every 24 hours to fill in missing episodes. Normal subscriptions usually suffice; subscription search is only a fallback and increases site load, so enabling it is not recommended.
- **MESSAGER**: notification channels, supports `telegram`/`wechat`/`slack`; separate multiple channels with `,`. The environment variables of each enabled channel must also be configured; variables of unused channels can be removed. `telegram` is recommended.
@@ -113,7 +117,7 @@ docker pull jxxghp/moviepilot:latest
- **QB_HOST**: qbittorrent address, format `ip:port`; prefix with `https://` for https
- **QB_USER**: qbittorrent username
- **QB_PASSWORD**: qbittorrent password
- **QB_CATEGORY**: automatic qbittorrent category management, `true`/`false`, default `flase`; when on, the secondary download category is passed to the downloader, which then manages download directories; requires `DOWNLOAD_CATEGORY` to be enabled as well
- **QB_CATEGORY**: automatic qbittorrent category management, `true`/`false`, default `false`; when on, the secondary download category is passed to the downloader, which then manages download directories; requires `DOWNLOAD_CATEGORY` to be enabled as well
- `transmission` settings:
@@ -145,23 +149,24 @@ docker pull jxxghp/moviepilot:latest
### 2. **User authentication**
- **AUTH_SITE**: authentication site, supports `hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`iyuu`
- **AUTH_SITE**: authentication site, supports `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`ptlsp`
`MoviePilot` requires authentication before it can be used; after configuring `AUTH_SITE`, set the corresponding site's authentication parameters per the table below.
| Site | Parameters |
|:--:|:-----------------------------------------------------:|
| iyuu | `IYUU_SIGN`: IYUU login token |
| hhclub | `HHCLUB_USERNAME`: username<br/>`HHCLUB_PASSKEY`: passkey |
| audiences | `AUDIENCES_UID`: user ID<br/>`AUDIENCES_PASSKEY`: passkey |
| hddolby | `HDDOLBY_ID`: user ID<br/>`HDDOLBY_PASSKEY`: passkey |
| zmpt | `ZMPT_UID`: user ID<br/>`ZMPT_PASSKEY`: passkey |
| freefarm | `FREEFARM_UID`: user ID<br/>`FREEFARM_PASSKEY`: passkey |
| hdfans | `HDFANS_UID`: user ID<br/>`HDFANS_PASSKEY`: passkey |
| Site | Parameters |
|:------------:|:-----------------------------------------------------:|
| iyuu | `IYUU_SIGN`: IYUU login token |
| hhclub | `HHCLUB_USERNAME`: username<br/>`HHCLUB_PASSKEY`: passkey |
| audiences | `AUDIENCES_UID`: user ID<br/>`AUDIENCES_PASSKEY`: passkey |
| hddolby | `HDDOLBY_ID`: user ID<br/>`HDDOLBY_PASSKEY`: passkey |
| zmpt | `ZMPT_UID`: user ID<br/>`ZMPT_PASSKEY`: passkey |
| freefarm | `FREEFARM_UID`: user ID<br/>`FREEFARM_PASSKEY`: passkey |
| hdfans | `HDFANS_UID`: user ID<br/>`HDFANS_PASSKEY`: passkey |
| wintersakura | `WINTERSAKURA_UID`: user ID<br/>`WINTERSAKURA_PASSKEY`: passkey |
| leaves | `LEAVES_UID`: user ID<br/>`LEAVES_PASSKEY`: passkey |
| 1ptba | `1PTBA_UID`: user ID<br/>`1PTBA_PASSKEY`: passkey |
| icc2022 | `ICC2022_UID`: user ID<br/>`ICC2022_PASSKEY`: passkey |
| leaves       | `LEAVES_UID`: user ID<br/>`LEAVES_PASSKEY`: passkey |
| 1ptba        | `1PTBA_UID`: user ID<br/>`1PTBA_PASSKEY`: passkey |
| icc2022      | `ICC2022_UID`: user ID<br/>`ICC2022_PASSKEY`: passkey |
| ptlsp        | `PTLSP_UID`: user ID<br/>`PTLSP_PASSKEY`: passkey |
### 2. **Advanced configuration**
@@ -220,12 +225,13 @@ docker pull jxxghp/moviepilot:latest
## Usage
- Quickly sync sites via CookieCloud; sites you do not use can be disabled in the WEB admin UI.
- Automatic organizing, importing and scraping via downloader monitoring
- Remote management via WeChat/Telegram/Slack; Telegram automatically adds an action menu. The WeChat callback relative path is `/api/v1/message/`
- Manage via the WEB UI; add the WEB page to your phone's home screen for an app-like experience; the admin UI port is `3000`
- Configure media server Webhooks so MoviePilot can send playback notifications and more. The Webhook callback relative path is `/api/v1/webhook?token=moviepilot`, where `moviepilot` is the configured `API_TOKEN`
- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server to browse and subscribe through Overseerr/Jellyseerr.
- Quickly sync sites via CookieCloud; sites you do not use can be disabled in the WEB admin UI, and sites that cannot be synced can be added manually
- Manage via the WEB UI; add the WEB page to your phone's home screen for an app-like experience; the admin UI port is `3000`, the backend API port is `3001`
- Automatic organizing, importing and scraping via downloader monitoring or the directory-monitoring plugin (choose one)
- Remote management via WeChat/Telegram/Slack; WeChat/Telegram automatically add action menus (WeChat limits the number of menu entries, so some menus may not be shown; WeChat requires setting the callback address on its official page, relative path `/api/v1/message/`)
- Configure media server Webhooks so MoviePilot can send playback notifications and more. The Webhook callback relative path is `/api/v1/webhook?token=moviepilot` (port `3001`), where `moviepilot` is the configured `API_TOKEN`
- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server (port `3001`) to browse and subscribe through Overseerr/Jellyseerr.
- Map the host's docker.sock file to `/var/run/docker.sock` in the container to enable the built-in restart. Example: `-v /var/run/docker.sock:/var/run/docker.sock:ro`
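Pulling the environment-variable and usage notes above together, a deployment can be sketched as a single `docker run` invocation. The host paths, port mappings and credential values here are illustrative placeholders, not recommended settings; adjust them to your own setup.

```shell
# Illustrative MoviePilot deployment (placeholder paths and values).
docker run -d \
  --name moviepilot \
  -p 3000:3000 \
  -p 3001:3001 \
  -v /path/to/config:/config \
  -v /path/to/downloads:/downloads \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e SUPERUSER=admin \
  -e SUPERUSER_PASSWORD='change-me' \
  -e API_TOKEN='a-long-random-string' \
  jxxghp/moviepilot:latest
```

The read-only docker.sock mount is what enables the built-in restart described above; omit it if you do not want the container to control Docker.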
**Note**
@@ -240,11 +246,24 @@ location / {
proxy_set_header X-Forwarded-Proto $scheme;
}
```
3) A newly created WeCom (企业微信) application needs a proxy with a fixed public IP to receive messages; add the following to the proxy configuration:
```nginx
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/message/send {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/menu/create {
proxy_pass https://qyapi.weixin.qq.com;
}
```
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/b8f0238d-847f-4f9d-b210-e905837362b9)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f2654b09-26f3-464f-a0af-1de3f97832ee)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/28219233-ec7d-479b-b184-9a901c947dd1)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/fcb87529-56dd-43df-8337-6e34b8582819)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f7df0806-668d-4c8b-ad41-133bf8f0bf73)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/bfa77c71-510a-46a6-9c1e-cf98cb101e3a)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/51cafd09-e38c-47f9-ae62-1e83ab8bf89b)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f7ea77cd-0362-4c35-967c-7f1b22dbef05)

View File

@@ -6,8 +6,6 @@ Create Date: 2023-08-28 13:21:45.152012
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '52ab4930be04'

View File

@@ -0,0 +1,27 @@
"""1.0.5
Revision ID: e734c7fe6056
Revises: 1e169250e949
Create Date: 2023-09-07 18:19:41.250957
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = 'e734c7fe6056'
down_revision = '1e169250e949'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
try:
op.create_index('ix_transferhistory_tmdbid', 'transferhistory', ['tmdbid'], unique=False)
except Exception as err:
pass
# ### end Alembic commands ###
def downgrade() -> None:
pass

@@ -1,7 +1,7 @@
from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, rss, filebrowser, transfer
media, douban, search, plugin, tmdb, history, system, download, dashboard, filebrowser, transfer
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -19,6 +19,5 @@ api_router.include_router(system.router, prefix="/system", tags=["system"])
api_router.include_router(plugin.router, prefix="/plugin", tags=["plugin"])
api_router.include_router(download.router, prefix="/download", tags=["download"])
api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboard"])
api_router.include_router(rss.router, prefix="/rss", tags=["rss"])
api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])

@@ -41,12 +41,7 @@ def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询存储空间信息
"""
if settings.LIBRARY_PATH:
total_storage, free_storage = SystemUtils.space_usage(
[Path(path) for path in settings.LIBRARY_PATH.split(",")]
)
else:
total_storage, free_storage = 0, 0
total_storage, free_storage = SystemUtils.space_usage(settings.LIBRARY_PATHS)
return schemas.Storage(
total_storage=total_storage,
used_storage=total_storage - free_storage
@@ -124,3 +119,19 @@ def transfer(days: int = 7, db: Session = Depends(get_db),
"""
transfer_stat = TransferHistory.statistic(db, days)
return [stat[1] for stat in transfer_stat]
@router.get("/cpu", summary="获取当前CPU使用率", response_model=int)
def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前CPU使用率
"""
return SystemUtils.cpu_usage()
@router.get("/memory", summary="获取当前内存使用量和使用率", response_model=List[int])
def memory(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取当前内存使用率
"""
return SystemUtils.memory_usage()

@@ -17,7 +17,7 @@ IMAGE_TYPES = [".jpg", ".png", ".gif", ".bmp", ".jpeg", ".webp"]
@router.get("/list", summary="所有插件", response_model=List[schemas.FileItem])
def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def list_path(path: str, sort: str = 'time', _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询当前目录下所有目录和文件
"""
@@ -55,6 +55,7 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
basename=path_obj.stem,
extension=path_obj.suffix[1:],
size=path_obj.stat().st_size,
modify_time=path_obj.stat().st_mtime,
))
return ret_items
@@ -65,6 +66,7 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
path=str(item).replace("\\", "/") + "/",
name=item.name,
basename=item.stem,
modify_time=item.stat().st_mtime,
))
# 遍历所有文件,不含子目录
@@ -80,8 +82,13 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
basename=item.stem,
extension=item.suffix[1:],
size=item.stat().st_size,
modify_time=item.stat().st_mtime,
))
# 排序
if sort == 'time':
ret_items.sort(key=lambda x: x.modify_time, reverse=True)
else:
ret_items.sort(key=lambda x: x.name, reverse=False)
return ret_items
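
The sorting added to `list_path` above can be sketched in isolation as follows — this `FileItem` is a stand-in for the project's schema, shown only to illustrate the `sort` parameter's behavior:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FileItem:
    name: str
    modify_time: float  # mtime, as from Path.stat().st_mtime

def sort_items(items: List[FileItem], sort: str = "time") -> List[FileItem]:
    # sort='time': newest first, matching the endpoint's default;
    # any other value falls back to name, ascending.
    if sort == "time":
        return sorted(items, key=lambda x: x.modify_time, reverse=True)
    return sorted(items, key=lambda x: x.name)

items = [FileItem("a.mkv", 100.0), FileItem("b.mkv", 200.0)]
print([i.name for i in sort_items(items)])          # → ['b.mkv', 'a.mkv']
print([i.name for i in sort_items(items, "name")])  # → ['a.mkv', 'b.mkv']
```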

@@ -56,6 +56,9 @@ async def login_access_token(
user.id, expires_delta=access_token_expires
),
token_type="bearer",
super_user=user.is_superuser,
user_name=user.name,
avatar=user.avatar
)

@@ -64,14 +64,13 @@ def wechat_verify(echostr: str, msg_signature: str,
@router.get("/switchs", summary="查询通知消息渠道开关", response_model=List[NotificationSwitch])
def read_switchs(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询通知消息渠道开关
"""
return_list = []
# 读取数据库
switchs = SystemConfigOper(db).get(SystemConfigKey.NotificationChannels)
switchs = SystemConfigOper().get(SystemConfigKey.NotificationChannels)
if not switchs:
for noti in NotificationType:
return_list.append(NotificationSwitch(mtype=noti.value, wechat=True, telegram=True, slack=True))
@@ -83,7 +82,6 @@ def read_switchs(db: Session = Depends(get_db),
@router.post("/switchs", summary="设置通知消息渠道开关", response_model=schemas.Response)
def set_switchs(switchs: List[NotificationSwitch],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询通知消息渠道开关
@@ -92,6 +90,6 @@ def set_switchs(switchs: List[NotificationSwitch],
for switch in switchs:
switch_list.append(switch.dict())
# 存入数据库
SystemConfigOper(db).set(SystemConfigKey.NotificationChannels, switch_list)
SystemConfigOper().set(SystemConfigKey.NotificationChannels, switch_list)
return schemas.Response(success=True)

@@ -22,28 +22,26 @@ def all_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed_plugins(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def installed_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询用户已安装插件清单
"""
return SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
return SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install_plugin(plugin_id: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
安装插件
"""
# 已安装插件
install_plugins = SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# 安装插件
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
# 保存设置
SystemConfigOper(db).set(SystemConfigKey.UserInstalledPlugins, install_plugins)
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
return schemas.Response(success=True)
@@ -93,19 +91,18 @@ def set_plugin_config(plugin_id: str, conf: dict,
@router.delete("/{plugin_id}", summary="卸载插件", response_model=schemas.Response)
def uninstall_plugin(plugin_id: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
卸载插件
"""
# 删除已安装信息
install_plugins = SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
for plugin in install_plugins:
if plugin == plugin_id:
install_plugins.remove(plugin)
break
# 保存
SystemConfigOper(db).set(SystemConfigKey.UserInstalledPlugins, install_plugins)
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
return schemas.Response(success=True)

@@ -1,135 +0,0 @@
from typing import List, Any
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from starlette.background import BackgroundTasks
from app import schemas
from app.chain.rss import RssChain
from app.core.security import verify_token
from app.db import get_db
from app.db.models.rss import Rss
from app.helper.rss import RssHelper
from app.schemas import MediaType
router = APIRouter()
def start_rss_refresh(db: Session, rssid: int = None):
"""
启动自定义订阅刷新
"""
RssChain(db).refresh(rssid=rssid, manual=True)
@router.get("/", summary="所有自定义订阅", response_model=List[schemas.Rss])
def read_rsses(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询所有自定义订阅
"""
return Rss.list(db)
@router.post("/", summary="新增自定义订阅", response_model=schemas.Response)
def create_rss(
*,
rss_in: schemas.Rss,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
新增自定义订阅
"""
if rss_in.type:
mtype = MediaType(rss_in.type)
else:
mtype = None
rssid, errormsg = RssChain(db).add(
mtype=mtype,
**rss_in.dict()
)
if not rssid:
return schemas.Response(success=False, message=errormsg)
return schemas.Response(success=True, data={
"id": rssid
})
@router.put("/", summary="更新自定义订阅", response_model=schemas.Response)
def update_rss(
*,
rss_in: schemas.Rss,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
更新自定义订阅信息
"""
rss = Rss.get(db, rss_in.id)
if not rss:
return schemas.Response(success=False, message="自定义订阅不存在")
rss.update(db, rss_in.dict())
return schemas.Response(success=True)
@router.get("/preview/{rssid}", summary="预览自定义订阅", response_model=List[schemas.TorrentInfo])
def preview_rss(
rssid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据ID查询自定义订阅RSS报文
"""
rssinfo: Rss = Rss.get(db, rssid)
if not rssinfo:
return []
torrents = RssHelper.parse(rssinfo.url, proxy=True if rssinfo.proxy else False) or []
return [schemas.TorrentInfo(
title=t.get("title"),
description=t.get("description"),
enclosure=t.get("enclosure"),
size=t.get("size"),
page_url=t.get("link"),
pubdate=t["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if t.get("pubdate") else None,
) for t in torrents]
@router.get("/refresh/{rssid}", summary="刷新自定义订阅", response_model=schemas.Response)
def refresh_rss(
rssid: int,
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据ID刷新自定义订阅
"""
background_tasks.add_task(start_rss_refresh,
db=db,
rssid=rssid)
return schemas.Response(success=True)
@router.get("/{rssid}", summary="查询自定义订阅详情", response_model=schemas.Rss)
def read_rss(
rssid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据ID查询自定义订阅详情
"""
return Rss.get(db, rssid)
@router.delete("/{rssid}", summary="删除自定义订阅", response_model=schemas.Response)
def read_rss(
rssid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据ID删除自定义订阅
"""
Rss.delete(db, rssid)
return schemas.Response(success=True)

@@ -1,6 +1,6 @@
from typing import List, Any
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas

@@ -6,8 +6,8 @@ from starlette.background import BackgroundTasks
from app import schemas
from app.chain.cookiecloud import CookieCloudChain
from app.chain.search import SearchChain
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.security import verify_token
from app.db import get_db
@@ -117,9 +117,9 @@ def cookie_cloud_sync(db: Session = Depends(get_db),
清空所有站点数据并重新同步CookieCloud站点信息
"""
Site.reset(db)
SystemConfigOper(db).set(SystemConfigKey.IndexerSites, [])
SystemConfigOper(db).set(SystemConfigKey.RssSites, [])
CookieCloudChain(db).process(manual=True)
SystemConfigOper().set(SystemConfigKey.IndexerSites, [])
SystemConfigOper().set(SystemConfigKey.RssSites, [])
CookieCloudChain().process(manual=True)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
@@ -191,7 +191,7 @@ def site_icon(site_id: int,
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int, keyword: str = None,
def site_resource(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -203,7 +203,7 @@ def site_resource(site_id: int, keyword: str = None,
status_code=404,
detail=f"站点 {site_id} 不存在",
)
torrents = SearchChain(db).browse(site.domain, keyword)
torrents = TorrentsChain().browse(domain=site.domain)
if not torrents:
return []
return [torrent.to_dict() for torrent in torrents]
@@ -234,7 +234,7 @@ def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
获取站点列表
"""
# 选中的rss站点
rss_sites = SystemConfigOper(db).get(SystemConfigKey.RssSites)
rss_sites = SystemConfigOper().get(SystemConfigKey.RssSites)
# 所有站点
all_site = Site.list_order_by_pri(db)
if not rss_sites or not all_site:

@@ -138,6 +138,53 @@ def subscribe_mediaid(
return result if result else Subscribe()
@router.get("/refresh", summary="刷新订阅", response_model=schemas.Response)
def refresh_subscribes(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
刷新所有订阅
"""
SubscribeChain(db).refresh()
return schemas.Response(success=True)
@router.get("/check", summary="刷新订阅 TMDB 信息", response_model=schemas.Response)
def check_subscribes(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
刷新所有订阅
"""
SubscribeChain(db).check()
return schemas.Response(success=True)
@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
def search_subscribes(
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
搜索所有订阅
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=None, state='R')
return schemas.Response(success=True)
@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
def search_subscribe(
subscribe_id: int,
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据订阅编号搜索订阅
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=subscribe_id, state=None)
return schemas.Response(success=True)
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
subscribe_id: int,
@@ -243,39 +290,3 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
username=user_name)
return schemas.Response(success=True)
@router.get("/refresh", summary="刷新订阅", response_model=schemas.Response)
def refresh_subscribes(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
刷新所有订阅
"""
SubscribeChain(db).refresh()
return schemas.Response(success=True)
@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
def search_subscribe(
subscribe_id: int,
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
搜索所有订阅
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=subscribe_id, state=None)
return schemas.Response(success=True)
@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
def search_subscribes(
background_tasks: BackgroundTasks,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
搜索所有订阅
"""
background_tasks.add_task(start_subscribe_search, db=db, sid=None, state='R')
return schemas.Response(success=True)
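
The reordering in this file (per commit `ca2c0392bb`, "adjust API order to avoid wrong matching") matters because FastAPI matches routes in declaration order: declared after `GET /{subscribe_id}`, a request to `/refresh` would be captured with `subscribe_id="refresh"`. A minimal stand-in matcher (not FastAPI itself) illustrating the effect:

```python
import re

def first_match(routes, path):
    # Return the name of the first route whose pattern matches the path,
    # mimicking declaration-order route matching.
    for pattern, name in routes:
        regex = "^" + re.sub(r"\{[^/]+\}", "[^/]+", pattern) + "$"
        if re.match(regex, path):
            return name
    return None

param_first = [("/subscribe/{subscribe_id}", "detail"),
               ("/subscribe/refresh", "refresh")]
fixed_first = [("/subscribe/refresh", "refresh"),
               ("/subscribe/{subscribe_id}", "detail")]

print(first_match(param_first, "/subscribe/refresh"))  # → detail (wrong handler)
print(first_match(fixed_first, "/subscribe/refresh"))  # → refresh
```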

@@ -9,13 +9,16 @@ from fastapi.responses import StreamingResponse
from sqlalchemy.orm import Session
from app import schemas
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.security import verify_token
from app.db import get_db
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.system import SystemUtils
from version import APP_VERSION
router = APIRouter()
@@ -60,24 +63,22 @@ def get_progress(process_type: str, token: str):
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
查询系统设置
"""
return schemas.Response(success=True, data={
"value": SystemConfigOper(db).get(key)
"value": SystemConfigOper().get(key)
})
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, str, int] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
更新系统设置
"""
SystemConfigOper(db).set(key, value)
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
@@ -166,3 +167,45 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
if ver_json:
return schemas.Response(success=True, data=ver_json)
return schemas.Response(success=False)
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(title: str,
subtitle: str = None,
ruletype: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
过滤规则测试,规则类型 1-订阅2-洗版
"""
torrent = schemas.TorrentInfo(
title=title,
description=subtitle,
)
if ruletype == "2":
rule_string = SystemConfigOper().get(SystemConfigKey.FilterRules2)
else:
rule_string = SystemConfigOper().get(SystemConfigKey.FilterRules)
if not rule_string:
return schemas.Response(success=False, message="过滤规则未设置!")
# 过滤
result = SearchChain(db).filter_torrents(rule_string=rule_string,
torrent_list=[torrent])
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=True, data={
"priority": 100 - result[0].pri_order + 1
})
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
"""
重启系统
"""
if not SystemUtils.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# 执行重启
ret, msg = SystemUtils.restart()
return schemas.Response(success=ret, message=msg)

@@ -132,13 +132,10 @@ def arr_rootfolder(apikey: str) -> Any:
status_code=403,
detail="认证失败!",
)
library_path = "/"
if settings.LIBRARY_PATH:
library_path = settings.LIBRARY_PATH.split(",")[0]
return [
{
"id": 1,
"path": library_path,
"path": "/" if not settings.LIBRARY_PATHS else str(settings.LIBRARY_PATHS[0]),
"accessible": True,
"freeSpace": 0,
"unmappedFolders": []

@@ -197,7 +197,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_medias", meta=meta)
def search_torrents(self, site: CommentedMap,
mediainfo: Optional[MediaInfo] = None,
mediainfo: MediaInfo,
keyword: str = None,
page: int = 0,
area: str = "title") -> List[TorrentInfo]:
@@ -234,33 +234,31 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("filter_torrents", rule_string=rule_string,
torrent_list=torrent_list, season_episodes=season_episodes)
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None
) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param content: 种子文件地址或者磁力链接
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 种子分类
:return: 种子Hash错误信息
"""
return self.run_module("download", torrent_path=torrent_path, download_dir=download_dir,
return self.run_module("download", content=content, download_dir=download_dir,
cookie=cookie, episodes=episodes, category=category)
def download_added(self, context: Context, torrent_path: Path, download_dir: Path) -> None:
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
"""
添加下载任务成功后,从站点下载字幕,保存到下载目录
:param context: 上下文,包括识别信息、媒体信息、种子信息
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param torrent_path: 种子文件地址
:return: None该方法可被多个模块同时处理
"""
if settings.DOWNLOAD_SUBTITLE:
return self.run_module("download_added", context=context, torrent_path=torrent_path,
download_dir=download_dir)
return None
return self.run_module("download_added", context=context, torrent_path=torrent_path,
download_dir=download_dir)
def list_torrents(self, status: TorrentStatus = None,
hashs: Union[list, str] = None) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
@@ -342,9 +340,7 @@ class ChainBase(metaclass=ABCMeta):
:param file_path: 文件路径
:return: 成功或失败
"""
if settings.REFRESH_MEDIASERVER:
return self.run_module("refresh_mediaserver", mediainfo=mediainfo, file_path=file_path)
return None
return self.run_module("refresh_mediaserver", mediainfo=mediainfo, file_path=file_path)
def post_message(self, message: Notification) -> None:
"""
@@ -392,11 +388,9 @@ class ChainBase(metaclass=ABCMeta):
:param mediainfo: 识别的媒体信息
:return: 成功或失败
"""
if settings.SCRAP_METADATA:
return self.run_module("scrape_metadata", path=path, mediainfo=mediainfo)
return None
return self.run_module("scrape_metadata", path=path, mediainfo=mediainfo)
def register_commands(self, commands: dict) -> None:
def register_commands(self, commands: Dict[str, dict]) -> None:
"""
注册菜单命令
"""

@@ -13,6 +13,7 @@ from app.db.siteicon_oper import SiteIconOper
from app.helper.cloudflare import under_challenge
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotificationType, MessageChannel
@@ -30,6 +31,7 @@ class CookieCloudChain(ChainBase):
self.siteoper = SiteOper(self._db)
self.siteiconoper = SiteIconOper(self._db)
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.sitechain = SiteChain(self._db)
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper(
@@ -78,8 +80,23 @@ class CookieCloudChain(ChainBase):
# 更新站点Cookie
if status:
logger.info(f"站点【{site_info.name}】连通性正常不同步CookieCloud数据")
# 更新站点rss地址
if not site_info.public and not site_info.rss:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=True if site_info.proxy else False
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# 更新站点Cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
@@ -104,12 +121,25 @@ class CookieCloudChain(ChainBase):
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
# 获取rss地址
rss_url = None
if not indexer.get("public") and indexer.get("domain"):
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if errmsg:
logger.warn(errmsg)
# 插入数据库
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=indexer.get("domain"),
domain=domain,
cookie=cookie,
rss=rss_url,
public=1 if indexer.get("public") else 0)
_add_count += 1
# 保存站点图标
if indexer:
site_icon = self.siteiconoper.get_by_domain(domain)

@@ -8,6 +8,7 @@ from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo, TorrentInfo, Context
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.mediaserver_oper import MediaServerOper
from app.helper.torrent import TorrentHelper
@@ -47,9 +48,12 @@ class DownloadChain(ChainBase):
msg_text = f"{msg_text}\n大小:{size}"
if torrent.title:
msg_text = f"{msg_text}\n种子:{torrent.title}"
if torrent.pubdate:
msg_text = f"{msg_text}\n发布时间:{torrent.pubdate}"
if torrent.seeders:
msg_text = f"{msg_text}\n做种数:{torrent.seeders}"
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.uploadvolumefactor and torrent.downloadvolumefactor:
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.hit_and_run:
msg_text = f"{msg_text}\nHit&Run"
if torrent.description:
@@ -69,25 +73,33 @@ class DownloadChain(ChainBase):
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
userid: Union[str, int] = None) -> Tuple[Optional[Path], str, list]:
userid: Union[str, int] = None
) -> Tuple[Optional[Union[Path, str]], str, list]:
"""
下载种子文件
下载种子文件,如果是磁力链,会返回磁力链接本身
:return: 种子路径,种子目录名,种子文件清单
"""
torrent_file, _, download_folder, files, error_msg = self.torrent.download_torrent(
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
url=torrent.enclosure,
cookie=torrent.site_cookie,
ua=torrent.site_ua,
proxy=torrent.site_proxy)
if isinstance(content, str):
# 磁力链
return content, "", []
if not torrent_file:
logger.error(f"下载种子文件失败:{torrent.title} - {torrent.enclosure}")
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Manual,
title=f"{torrent.title} 种子下载失败!",
text=f"错误信息:{error_msg}\n种子链接{torrent.enclosure}",
text=f"错误信息:{error_msg}\n站点{torrent.site_name}",
userid=userid))
return None, "", []
# 返回 种子文件路径,种子目录名,种子文件清单
return torrent_file, download_folder, files
def download_single(self, context: Context, torrent_file: Path = None,
@@ -97,19 +109,27 @@ class DownloadChain(ChainBase):
userid: Union[str, int] = None) -> Optional[str]:
"""
下载及发送通知
:param context: 资源上下文
:param torrent_file: 种子文件路径
:param episodes: 需要下载的集数
:param channel: 通知渠道
:param save_path: 保存路径
:param userid: 用户ID
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_folder_name = ""
if not torrent_file:
# 下载种子文件
torrent_file, _folder_name, _file_list = self.download_torrent(_torrent, userid=userid)
if not torrent_file:
# 下载种子文件,得到的可能是文件也可能是磁力链
content, _folder_name, _file_list = self.download_torrent(_torrent, userid=userid)
if not content:
return
else:
content = torrent_file
# 获取种子文件的文件夹名和文件清单
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
# 下载目录
if not save_path:
if settings.DOWNLOAD_CATEGORY and _media and _media.category:
@@ -148,7 +168,7 @@ class DownloadChain(ChainBase):
download_dir = Path(save_path)
# 添加下载
result: Optional[tuple] = self.download(torrent_path=torrent_file,
result: Optional[tuple] = self.download(content=content,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir,
@@ -159,9 +179,15 @@ class DownloadChain(ChainBase):
_hash, error_msg = None, "未知错误"
if _hash:
# 下载文件路径
if _folder_name:
download_path = download_dir / _folder_name
else:
download_path = download_dir / _file_list[0] if _file_list else download_dir
# 登记下载记录
self.downloadhis.add(
path=_folder_name or _torrent.title,
path=str(download_path),
type=_media.type.value,
title=_media.title,
year=_media.year,
@@ -177,25 +203,34 @@ class DownloadChain(ChainBase):
torrent_description=_torrent.description,
torrent_site=_torrent.site_name
)
# 登记下载文件
self.downloadhis.add_files([
{
files_to_add = []
for file in _file_list:
if episodes:
# 识别文件集
file_meta = MetaInfo(Path(file).stem)
if not file_meta.begin_episode \
or file_meta.begin_episode not in episodes:
continue
files_to_add.append({
"download_hash": _hash,
"downloader": settings.DOWNLOADER,
"fullpath": str(download_dir / _folder_name / file),
"savepath": str(download_dir / _folder_name),
"filepath": file,
"torrentname": _meta.org_string,
} for file in _file_list if file
])
})
if files_to_add:
self.downloadhis.add_files(files_to_add)
# 发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel)
# 下载成功后处理
self.download_added(context=context, torrent_path=torrent_file, download_dir=download_dir)
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# 广播事件
self.eventmanager.send_event(EventType.DownloadAdded, {
"hash": _hash,
"torrent_file": torrent_file,
"context": context
})
else:
@@ -209,7 +244,6 @@ class DownloadChain(ChainBase):
% (_media.title_year, _meta.season_episode),
text=f"站点:{_torrent.site_name}\n"
f"种子名称:{_meta.org_string}\n"
f"种子链接:{_torrent.enclosure}\n"
f"错误信息:{error_msg}",
image=_media.get_message_image(),
userid=userid))
@@ -330,31 +364,34 @@ class DownloadChain(ChainBase):
if set(torrent_season).issubset(set(need_season)):
if len(torrent_season) == 1:
# 只有一季的可能是命名错误,需要打开种子鉴别,只有实际集数大于等于总集数才下载
torrent_path, _, torrent_files = self.download_torrent(torrent)
if not torrent_path:
content, _, torrent_files = self.download_torrent(torrent)
if not content:
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法确定种子文件集数")
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
if torrent_episodes:
# 总集数
need_total = __get_season_episodes(need_tmdbid, torrent_season[0])
if len(torrent_episodes) < need_total:
# 更新集数范围
begin_ep = min(torrent_episodes)
end_ep = max(torrent_episodes)
meta.set_episodes(begin=begin_ep, end=end_ep)
logger.info(
f"{meta.org_string} 解析文件集数为 [{begin_ep}-{end_ep}],不是完整合集")
continue
else:
# 下载
download_id = self.download_single(context=context,
torrent_file=torrent_path,
save_path=save_path,
userid=userid)
else:
logger.info(
f"{meta.org_string} 解析文件集数为 {len(torrent_episodes)},不是完整合集")
logger.info(f"{meta.org_string} 解析文件集数为 {torrent_episodes}")
if not torrent_episodes:
continue
# 总集数
need_total = __get_season_episodes(need_tmdbid, torrent_season[0])
if len(torrent_episodes) < need_total:
# 更新集数范围
begin_ep = min(torrent_episodes)
end_ep = max(torrent_episodes)
meta.set_episodes(begin=begin_ep, end=end_ep)
logger.info(
f"{meta.org_string} 解析文件集数发现不是完整合集")
continue
else:
# 下载
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
save_path=save_path,
userid=userid
)
else:
# 下载
download_id = self.download_single(context, save_path=save_path, userid=userid)
@@ -471,22 +508,29 @@ class DownloadChain(ChainBase):
and len(meta.season_list) == 1 \
and meta.season_list[0] == need_season:
# 检查种子看是否有需要的集
torrent_path, _, torrent_files = self.download_torrent(torrent, userid=userid)
if not torrent_path:
content, _, torrent_files = self.download_torrent(torrent, userid=userid)
if not content:
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法解析种子文件集数")
continue
# 种子全部集
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{torrent.site_name} - {meta.org_string} 解析文件集数:{torrent_episodes}")
# 选中的集
selected_episodes = set(torrent_episodes).intersection(set(need_episodes))
if not selected_episodes:
logger.info(f"{torrent.site_name} - {torrent.title} 没有需要的集,跳过...")
continue
logger.info(f"{torrent.site_name} - {torrent.title} 选中集数:{selected_episodes}")
# 添加下载
download_id = self.download_single(context=context,
torrent_file=torrent_path,
episodes=selected_episodes,
save_path=save_path,
userid=userid)
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
episodes=selected_episodes,
save_path=save_path,
userid=userid
)
if not download_id:
continue
# 把识别的集更新到上下文
@@ -638,7 +682,8 @@ class DownloadChain(ChainBase):
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Download,
title="没有正在下载的任务!"))
title="没有正在下载的任务!",
userid=userid))
return
# 发送消息
title = f"{len(torrents)} 个任务正在下载:"

@@ -214,6 +214,11 @@ class MessageChain(ChainBase):
start = _current_page * self._page_size
end = start + self._page_size
if cache_type == "Torrent":
# 更新缓存
user_cache[userid] = {
"type": "Torrent",
"items": cache_list[start:end]
}
# 发送种子数据
self.__post_torrents_message(channel=channel,
title=_current_media.title,
@@ -251,6 +256,11 @@ class MessageChain(ChainBase):
# 加一页
_current_page += 1
if cache_type == "Torrent":
# 更新缓存
user_cache[userid] = {
"type": "Torrent",
"items": cache_list
}
# 发送种子数据
self.__post_torrents_message(channel=channel,
title=_current_media.title,
@@ -270,7 +280,7 @@ class MessageChain(ChainBase):
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
or StringUtils.count_words(text) > 15 \
or StringUtils.count_words(text) > 10 \
or text.find("继续") != -1:
# 聊天
content = text

@@ -1,320 +0,0 @@
import json
import re
from datetime import datetime
from typing import Tuple, Optional
from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.core.config import settings
from app.core.context import Context, TorrentInfo, MediaInfo
from app.core.metainfo import MetaInfo
from app.db.rss_oper import RssOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotExistMediaInfo
from app.schemas.types import SystemConfigKey, MediaType, NotificationType
from app.utils.string import StringUtils
class RssChain(ChainBase):
"""
RSS处理链
"""
def __init__(self, db: Session = None):
super().__init__(db)
self.rssoper = RssOper(self._db)
self.sites = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
self.downloadchain = DownloadChain(self._db)
self.message = MessageHelper()
def add(self, title: str, year: str,
mtype: MediaType = None,
season: int = None,
**kwargs) -> Tuple[Optional[int], str]:
"""
识别媒体信息并添加订阅
"""
logger.info(f'开始添加自定义订阅,标题:{title} ...')
# 识别元数据
metainfo = MetaInfo(title)
if year:
metainfo.year = year
if mtype:
metainfo.type = mtype
if season:
metainfo.type = MediaType.TV
metainfo.begin_season = season
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
if not mediainfo:
logger.warn(f'{title} 未识别到媒体信息')
return None, "未识别到媒体信息"
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 总集数
if mediainfo.type == MediaType.TV:
if not season:
season = 1
# 总集数
if not kwargs.get('total_episode'):
if not mediainfo.seasons:
# 补充媒体信息
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return None, "媒体信息识别失败"
if not mediainfo.seasons:
logger.error(f"{title} 媒体信息中没有季集信息")
return None, "媒体信息中没有季集信息"
total_episode = len(mediainfo.seasons.get(season) or [])
if not total_episode:
logger.error(f'{title} 未获取到总集数')
return None, "未获取到总集数"
kwargs.update({
'total_episode': total_episode
})
# 检查是否存在
if self.rssoper.exists(tmdbid=mediainfo.tmdb_id, season=season):
logger.warn(f'{mediainfo.title} 已存在')
return None, f'{mediainfo.title} 自定义订阅已存在'
if not kwargs.get("name"):
kwargs.update({
"name": mediainfo.title
})
kwargs.update({
"tmdbid": mediainfo.tmdb_id,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"vote": mediainfo.vote_average,
"description": mediainfo.overview,
})
# 添加订阅
sid = self.rssoper.add(title=title, year=year, season=season, **kwargs)
if not sid:
logger.error(f'{mediainfo.title_year} 添加自定义订阅失败')
return None, "添加自定义订阅失败"
else:
logger.info(f'{mediainfo.title_year} {metainfo.season} 添加订阅成功')
# 返回结果
return sid, ""
def refresh(self, rssid: int = None, manual: bool = False):
"""
刷新RSS订阅数据
"""
# 所有RSS订阅
logger.info("开始刷新RSS订阅数据 ...")
rss_tasks = self.rssoper.list(rssid) or []
for rss_task in rss_tasks:
if not rss_task:
continue
if not rss_task.url:
continue
# 下载Rss报文
items = RssHelper.parse(rss_task.url, True if rss_task.proxy else False)
if not items:
logger.error(f"RSS未下载到数据:{rss_task.url}")
continue
logger.info(f"{rss_task.name} RSS下载到数据:{len(items)}")
# 检查站点
domain = StringUtils.get_url_domain(rss_task.url)
site_info = self.sites.get_indexer(domain) or {}
# 过滤规则
if rss_task.best_version:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
else:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
# 处理RSS条目
matched_contexts = []
# 处理过的title
processed_data = json.loads(rss_task.note) if rss_task.note else {
"titles": [],
"season_episodes": []
}
for item in items:
if not item.get("title"):
continue
# 标题是否已处理过
if item.get("title") in processed_data.get('titles'):
logger.info(f"{item.get('title')} 已处理过")
continue
# 基本要素匹配
if rss_task.include \
and not re.search(r"%s" % rss_task.include, item.get("title")):
logger.info(f"{item.get('title')} 未包含 {rss_task.include}")
continue
if rss_task.exclude \
and re.search(r"%s" % rss_task.exclude, item.get("title")):
logger.info(f"{item.get('title')} 包含 {rss_task.exclude}")
continue
# 识别媒体信息
meta = MetaInfo(title=item.get("title"), subtitle=item.get("description"))
if not meta.name:
logger.error(f"{item.get('title')} 未识别到有效信息")
continue
mediainfo = self.recognize_media(meta=meta)
if not mediainfo:
logger.error(f"{item.get('title')} 未识别到TMDB媒体信息")
continue
if mediainfo.tmdb_id != rss_task.tmdbid:
logger.error(f"{item.get('title')} 不匹配")
continue
# 季集是否已处理过
if meta.season_episode in processed_data.get('season_episodes'):
logger.info(f"{meta.org_string} {meta.season_episode} 已处理过")
continue
# 种子
torrentinfo = TorrentInfo(
site=site_info.get("id"),
site_name=site_info.get("name"),
site_cookie=site_info.get("cookie"),
site_ua=site_info.get("ua") or settings.USER_AGENT,
site_proxy=site_info.get("proxy") or rss_task.proxy,
site_order=site_info.get("pri"),
title=item.get("title"),
description=item.get("description"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
# 过滤种子
if rss_task.filter:
result = self.filter_torrents(
rule_string=filter_rule,
torrent_list=[torrentinfo]
)
if not result:
logger.info(f"{rss_task.name} 不匹配过滤规则")
continue
# 清除多余数据
mediainfo.clear()
# 匹配到的数据
matched_contexts.append(Context(
meta_info=meta,
media_info=mediainfo,
torrent_info=torrentinfo
))
# 匹配结果
if not matched_contexts:
logger.info(f"{rss_task.name} 未匹配到数据")
continue
logger.info(f"{rss_task.name} 匹配到 {len(matched_contexts)} 条数据")
# 查询本地存在情况
if not rss_task.best_version:
# 查询缺失的媒体信息
rss_meta = MetaInfo(title=rss_task.title)
rss_meta.year = rss_task.year
rss_meta.begin_season = rss_task.season
rss_meta.type = MediaType(rss_task.type)
# 每季总集数
totals = {}
if rss_task.season and rss_task.total_episode:
totals = {
rss_task.season: rss_task.total_episode
}
# 检查缺失
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=rss_meta,
mediainfo=MediaInfo(
title=rss_task.title,
year=rss_task.year,
tmdb_id=rss_task.tmdbid,
season=rss_task.season
),
totals=totals
)
if exist_flag:
logger.info(f'{rss_task.name} 媒体库中已存在,完成订阅')
self.rssoper.delete(rss_task.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'自定义订阅 {rss_task.name} 已完成',
image=rss_task.backdrop))
continue
elif rss_meta.type == MediaType.TV.value:
# 打印缺失集信息
if no_exists and no_exists.get(rss_task.tmdbid):
no_exists_info = no_exists.get(rss_task.tmdbid).get(rss_task.season)
if no_exists_info:
logger.info(f'订阅 {rss_task.name} 缺失集:{no_exists_info.episodes}')
else:
if rss_task.type == MediaType.TV.value:
no_exists = {
rss_task.season: NotExistMediaInfo(
season=rss_task.season,
episodes=[],
total_episode=rss_task.total_episode,
start_episode=1)
}
else:
no_exists = {}
# 开始下载
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
no_exists=no_exists,
save_path=rss_task.save_path)
if downloads and not lefts:
if not rss_task.best_version:
# 非洗版结束订阅
self.rssoper.delete(rss_task.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'自定义订阅 {rss_task.name} 已完成',
image=rss_task.backdrop))
else:
# 未完成下载
logger.info(f'{rss_task.name} 未下载完整,继续订阅 ...')
if downloads:
for download in downloads:
meta = download.meta_info
# 更新已处理数据
processed_data['titles'].append(meta.org_string)
processed_data['season_episodes'].append(meta.season_episode)
# 更新已处理过的数据
self.rssoper.update(rssid=rss_task.id, note=json.dumps(processed_data))
# 更新最后更新时间和已处理数量
self.rssoper.update(rssid=rss_task.id,
processed=(rss_task.processed or 0) + len(downloads),
last_update=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
logger.info("刷新RSS订阅数据完成")
if manual:
if len(rss_tasks) == 1:
self.message.put(f"{rss_tasks[0].name} 自定义订阅刷新完成")
else:
self.message.put(f"自定义订阅刷新完成")


@@ -29,7 +29,7 @@ class SearchChain(ChainBase):
super().__init__(db)
self.siteshelper = SitesHelper()
self.progress = ProgressHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
def search_by_tmdbid(self, tmdbid: int, mtype: MediaType = None, area: str = "title") -> List[Context]:
@@ -76,22 +76,6 @@ class SearchChain(ChainBase):
print(str(e))
return []
def browse(self, domain: str, keyword: str = None) -> List[TorrentInfo]:
"""
浏览站点首页内容
:param domain: 站点域名
:param keyword: 关键词,有值时为搜索
"""
if not keyword:
logger.info(f'开始浏览站点首页内容,站点:{domain} ...')
else:
logger.info(f'开始搜索资源,关键词:{keyword},站点:{domain} ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.search_torrents(site=site, keyword=keyword)
def process(self, mediainfo: MediaInfo,
keyword: str = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
@@ -201,19 +185,30 @@ class SearchChain(ChainBase):
str(int(mediainfo.year) + 1)]:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
continue
# 比对标题
# 比对标题和原语种标题
meta_name = StringUtils.clear_upper(torrent_meta.name)
if meta_name in [
StringUtils.clear_upper(mediainfo.title),
StringUtils.clear_upper(mediainfo.original_title)
]:
logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
continue
# 在副标题中判断是否存在标题与原语种标题
if torrent.description:
subtitle = torrent.description.split()
if (StringUtils.is_chinese(mediainfo.title)
and str(mediainfo.title) in subtitle) \
or (StringUtils.is_chinese(mediainfo.original_title)
and str(mediainfo.original_title) in subtitle):
logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
f'副标题:{torrent.description}')
_match_torrents.append(torrent)
continue
# 比对别名和译名
for name in mediainfo.names:
if StringUtils.clear_upper(name) == meta_name:
logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
logger.info(f'{mediainfo.title} 通过别名或译名匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
break
else:


@@ -91,7 +91,8 @@ class SiteChain(ChainBase):
if not site_list:
self.post_message(Notification(
channel=channel,
title="没有维护任何站点信息!"))
title="没有维护任何站点信息!",
userid=userid))
title = f"共有 {len(site_list)} 个站点,回复对应指令操作:" \
f"\n- 禁用站点:/site_disable [id]" \
f"\n- 启用站点:/site_enable [id]" \
@@ -221,8 +222,8 @@ class SiteChain(ChainBase):
title=f"站点编号 {site_id} 不存在!", userid=userid))
return
self.post_message(Notification(
channel=channel,
title=f"开始更新【{site_info.name}】Cookie&UA ...", userid=userid))
channel=channel,
title=f"开始更新【{site_info.name}】Cookie&UA ...", userid=userid))
# 用户名
username = args[1]
# 密码


@@ -33,7 +33,7 @@ class SubscribeChain(ChainBase):
self.subscribeoper = SubscribeOper(self._db)
self.torrentschain = TorrentsChain()
self.message = MessageHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
def add(self, title: str, year: str,
mtype: MediaType = None,
@@ -277,7 +277,7 @@ class SubscribeChain(ChainBase):
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
if meta.type == MediaType.TV:
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
continue
# 过滤
matched_contexts = []
@@ -308,12 +308,17 @@ class SubscribeChain(ChainBase):
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 优先级小于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order < subscribe.current_priority:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于已下载优先级')
continue
matched_contexts.append(context)
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
# 非洗版未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
if meta.type == MediaType.TV and not subscribe.best_version:
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
continue
# 自动下载
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
@@ -330,18 +335,18 @@ class SubscribeChain(ChainBase):
mediainfo=mediainfo, downloads=downloads)
else:
# 未完成下载
logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
if meta.type == MediaType.TV and not subscribe.best_version:
# 更新订阅剩余集数和时间
update_date = True if downloads else False
self.__upate_lack_episodes(lefts=lefts, subscribe=subscribe,
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=update_date)
# 手动触发时发送系统消息
if manual:
if sid:
self.message.put(f'订阅 {subscribes[0].name} 搜索完成!')
else:
self.message.put(f'所有订阅搜索完成!')
self.message.put('所有订阅搜索完成!')
def finish_subscribe_or_not(self, subscribe: Subscribe, meta: MetaInfo,
mediainfo: MediaInfo, downloads: List[Context]):
@@ -375,17 +380,40 @@ class SubscribeChain(ChainBase):
def refresh(self):
"""
刷新订阅
订阅刷新
"""
# 触发刷新站点资源,从缓存中匹配订阅
sites = self.get_subscribed_sites()
if sites is None:
return
self.match(
self.torrentschain.refresh(sites=sites)
)
def get_subscribed_sites(self) -> Optional[List[int]]:
"""
获取订阅中涉及的所有站点清单(节约资源)
:return: 返回[]代表所有站点命中返回None代表没有订阅
"""
# 查询所有订阅
subscribes = self.subscribeoper.list('R')
if not subscribes:
# 没有订阅不运行
return
# 刷新站点资源,从缓存中匹配订阅
self.match(
self.torrentschain.refresh()
)
return None
ret_sites = []
# 刷新订阅选中的Rss站点
for subscribe in subscribes:
# 如果有一个订阅没有选择站点,则刷新所有订阅站点
if not subscribe.sites:
return []
# 刷新选中的站点
sub_sites = json.loads(subscribe.sites)
if sub_sites:
ret_sites.extend(sub_sites)
# 去重
if ret_sites:
ret_sites = list(set(ret_sites))
return ret_sites
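The new `get_subscribed_sites` above uses three distinct return shapes. A minimal sketch of that selection logic, with subscriptions modelled as plain dicts (the real code iterates ORM rows and also triggers a refresh in the no-subscription case):

```python
import json

def subscribed_sites(subscribes):
    """Sketch of get_subscribed_sites: None means no active subscriptions,
    [] means every site must be refreshed, otherwise only the listed
    site IDs need refreshing."""
    if not subscribes:
        return None
    site_ids = []
    for subscribe in subscribes:
        if not subscribe.get("sites"):
            # One subscription with no site selection forces all sites
            return []
        site_ids.extend(json.loads(subscribe["sites"]))
    # Deduplicate (sorted here only for a stable result)
    return sorted(set(site_ids))

result = subscribed_sites([{"sites": "[1, 2]"}, {"sites": "[2, 3]"}])
```

This is what lets `refresh` skip sites no subscription cares about.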
def match(self, torrents: Dict[str, List[Context]]):
"""
@@ -481,11 +509,13 @@ class SubscribeChain(ChainBase):
torrent_list=[torrent_info])
if result is not None and not result:
# 不符合过滤规则
logger.info(f"{torrent_info.title} 不匹配当前过滤规则")
continue
# 不在订阅站点范围的不处理
if subscribe.sites:
sub_sites = json.loads(subscribe.sites)
if sub_sites and torrent_info.site not in sub_sites:
logger.info(f"{torrent_info.title} 不符合 {torrent_mediainfo.title_year} 订阅站点要求")
continue
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
@@ -557,12 +587,12 @@ class SubscribeChain(ChainBase):
if meta.type == MediaType.TV and not subscribe.best_version:
update_date = True if downloads else False
# 未完成下载,计算剩余集数
self.__upate_lack_episodes(lefts=lefts, subscribe=subscribe,
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=update_date)
else:
if meta.type == MediaType.TV:
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
def check(self):
"""
@@ -586,18 +616,15 @@ class SubscribeChain(ChainBase):
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{subscribe.name}tmdbid{subscribe.tmdbid}')
continue
if not mediainfo.seasons:
continue
# 获取当前季的总集数
# 对于电视剧,获取当前季的总集数
episodes = mediainfo.seasons.get(subscribe.season) or []
if len(episodes) > subscribe.total_episode or 0:
if len(episodes) > (subscribe.total_episode or 0):
total_episode = len(episodes)
lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
logger.info(f'订阅 {subscribe.name} 总集数变化,更新总集数为{total_episode},缺失集数为{lack_episode} ...')
else:
total_episode = subscribe.total_episode
lack_episode = subscribe.lack_episode
logger.info(f'订阅 {subscribe.name} 总集数未变化')
# 更新TMDB信息
self.subscribeoper.update(subscribe.id, {
"name": mediainfo.title,
@@ -654,7 +681,7 @@ class SubscribeChain(ChainBase):
return True
return False
def __upate_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
def __update_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
subscribe: Subscribe,
mediainfo: MediaInfo,
update_date: bool = False):
@@ -742,7 +769,7 @@ class SubscribeChain(ChainBase):
total_episode: int,
start_episode: int):
"""
根据订阅开始集数和总结合TMDB信息计算当前订阅的缺失集数
根据订阅开始集数和总结合TMDB信息计算当前订阅的缺失集数
:param no_exists: 缺失季集列表
:param tmdb_id: TMDB ID
:param begin_season: 开始季
@@ -782,8 +809,9 @@ class SubscribeChain(ChainBase):
# 没有自定义总集数
total_episode = total
# 新的集列表
episodes = list(range(max(start_episode, start), total_episode + 1))
new_episodes = list(range(max(start_episode, start), total_episode + 1))
# 与原集列表取交集
episodes = list(set(episode_list).intersection(set(new_episodes)))
# 更新集合
no_exists[tmdb_id][begin_season] = NotExistMediaInfo(
season=begin_season,

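The replacement above fixes the lack-episode computation by intersecting the generated range with the episodes TMDB actually lists, so a user-configured total larger than the real season no longer yields phantom episodes. A worked sketch with illustrative numbers:

```python
# Sketch of the corrected computation: intersect the candidate range
# with the episode list TMDB knows about. All values are illustrative.
episode_list = [1, 2, 3, 4, 5, 6]   # episodes TMDB lists for the season
start_episode, start = 4, 1         # subscription start vs. season start
total_episode = 8                   # user-configured total (too large here)

new_episodes = list(range(max(start_episode, start), total_episode + 1))
episodes = sorted(set(episode_list).intersection(set(new_episodes)))
```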

@@ -1,30 +1,39 @@
import re
from typing import Dict, List, Union
from requests import Session
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.metainfo import MetaInfo
from app.db import SessionFactory
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class TorrentsChain(ChainBase):
class TorrentsChain(ChainBase, metaclass=Singleton):
"""
种子刷新处理链
站点首页或RSS种子处理链服务于订阅、刷流等
"""
_cache_file = "__torrents_cache__"
_spider_file = "__torrents_cache__"
_rss_file = "__rss_cache__"
def __init__(self, db: Session = None):
super().__init__(db)
def __init__(self):
self._db = SessionFactory()
super().__init__(self._db)
self.siteshelper = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
self.siteoper = SiteOper(self._db)
self.rsshelper = RssHelper()
self.systemconfig = SystemConfigOper()
def remote_refresh(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -36,32 +45,109 @@ class TorrentsChain(ChainBase):
self.post_message(Notification(channel=channel,
title=f"种子刷新完成!", userid=userid))
def get_torrents(self) -> Dict[str, List[Context]]:
def get_torrents(self, stype: str = None) -> Dict[str, List[Context]]:
"""
获取当前缓存的种子
:param stype: 强制指定缓存类型spider:爬虫缓存rss:rss缓存
"""
# 读取缓存
return self.load_cache(self._cache_file) or {}
def refresh(self) -> Dict[str, List[Context]]:
if not stype:
stype = settings.SUBSCRIBE_MODE
# 读取缓存
if stype == 'spider':
return self.load_cache(self._spider_file) or {}
else:
return self.load_cache(self._rss_file) or {}
@cached(cache=TTLCache(maxsize=128, ttl=600))
def browse(self, domain: str) -> List[TorrentInfo]:
"""
刷新站点最新资源
浏览站点首页内容返回种子清单TTL缓存10分钟
:param domain: 站点域名
"""
logger.info(f'开始获取站点 {domain} 最新种子 ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.refresh_torrents(site=site)
@cached(cache=TTLCache(maxsize=128, ttl=300))
def rss(self, domain: str) -> List[TorrentInfo]:
"""
获取站点RSS内容返回种子清单TTL缓存5分钟
:param domain: 站点域名
"""
logger.info(f'开始获取站点 {domain} RSS ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
if not site.get("rss"):
logger.error(f'站点 {domain} 未配置RSS地址')
return []
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False)
if rss_items is None:
# rss过期尝试保留原配置生成新的rss
self.__renew_rss_url(domain=domain, site=site)
return []
if not rss_items:
logger.error(f'站点 {domain} 未获取到RSS数据')
return []
# 组装种子
ret_torrents: List[TorrentInfo] = []
for item in rss_items:
if not item.get("title"):
continue
torrentinfo = TorrentInfo(
site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
ret_torrents.append(torrentinfo)
return ret_torrents
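Both new helpers above are memoized with `cachetools` TTL caches (10 minutes for `browse`, 5 for `rss`), so repeated refreshes within the window reuse the cached result. The same idea can be sketched with only the standard library; the decorator below is a simplified stand-in for `cachetools.TTLCache`, not the project's code:

```python
import time
from functools import wraps

def ttl_cache(ttl: float):
    """Memoize positional-argument calls for `ttl` seconds — a
    stdlib-only stand-in for the cachetools TTLCache used above."""
    def decorator(func):
        store = {}  # args -> (expiry timestamp, cached value)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]
            value = func(*args)
            store[args] = (now + ttl, value)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@ttl_cache(ttl=600)
def browse(domain: str) -> list:
    calls["n"] += 1                      # counts real (uncached) fetches
    return [f"torrent-from-{domain}"]

first = browse("example.org")
second = browse("example.org")           # within the TTL: served from cache
```

The per-domain key means each site's listing is fetched at most once per window, which is why the chain can safely be called from both the subscribe and brush-flow paths.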
def refresh(self, stype: str = None, sites: List[int] = None) -> Dict[str, List[Context]]:
"""
刷新站点最新资源,识别并缓存起来
:param stype: 强制指定缓存类型spider:爬虫缓存rss:rss缓存
:param sites: 强制指定站点ID列表为空则读取设置的订阅站点
"""
# 刷新类型
if not stype:
stype = settings.SUBSCRIBE_MODE
# 刷新站点
if not sites:
sites = [str(sid) for sid in (self.systemconfig.get(SystemConfigKey.RssSites) or [])]
# 读取缓存
torrents_cache = self.get_torrents()
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 配置的Rss站点
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.RssSites) or []]
# 遍历站点缓存资源
for indexer in indexers:
# 未开启的站点不搜索
if config_indexers and str(indexer.get("id")) not in config_indexers:
# 未开启的站点不刷新
if sites and str(indexer.get("id")) not in sites:
continue
logger.info(f'开始刷新 {indexer.get("name")} 最新种子 ...')
domain = StringUtils.get_url_domain(indexer.get("domain"))
torrents: List[TorrentInfo] = self.refresh_torrents(site=indexer)
if stype == "spider":
# 刷新首页种子
torrents: List[TorrentInfo] = self.browse(domain=domain)
else:
# 刷新RSS种子
torrents: List[TorrentInfo] = self.rss(domain=domain)
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条
@@ -103,7 +189,46 @@ class TorrentsChain(ChainBase):
del torrents
else:
logger.info(f'{indexer.get("name")} 没有获取到种子')
# 保存缓存到本地
self.save_cache(torrents_cache, self._cache_file)
if stype == "spider":
self.save_cache(torrents_cache, self._spider_file)
else:
self.save_cache(torrents_cache, self._rss_file)
# 返回
return torrents_cache
def __renew_rss_url(self, domain: str, site: dict):
"""
保留原配置生成新的rss地址
"""
try:
# RSS链接过期
logger.error(f"站点 {domain} RSS链接已过期正在尝试自动获取")
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site.get("url"),
cookie=site.get("cookie"),
ua=site.get("ua") or settings.USER_AGENT,
proxy=True if site.get("proxy") else False
)
if rss_url:
# 获取新的日期的passkey
match = re.search(r'passkey=([a-zA-Z0-9]+)', rss_url)
if match:
new_passkey = match.group(1)
# 获取过期rss除去passkey部分
new_rss = re.sub(r'&passkey=([a-zA-Z0-9]+)', f'&passkey={new_passkey}', site.get("rss"))
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=new_rss)
else:
# 发送消息
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
else:
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
except Exception as e:
print(str(e))
self.post_message(Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
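The `__renew_rss_url` helper above extracts the freshly issued passkey and splices it into the previously configured RSS URL, preserving the other query parameters. A simplified sketch (the real regex anchors on `&passkey=`; here the `&` is dropped so the passkey may also be the first parameter):

```python
import re

def renew_rss_url(old_rss: str, new_rss_link: str):
    """Carry a freshly issued passkey over to the old RSS URL,
    keeping its other query parameters. Returns None when no
    passkey can be found in the new link."""
    match = re.search(r'passkey=([a-zA-Z0-9]+)', new_rss_link)
    if not match:
        return None
    new_passkey = match.group(1)
    return re.sub(r'passkey=([a-zA-Z0-9]+)', f'passkey={new_passkey}',
                  old_rss)

renewed = renew_rss_url(
    "https://example.org/torrentrss.php?rows=50&passkey=oldkey123",
    "https://example.org/torrentrss.php?passkey=newkey456",
)
```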


@@ -85,7 +85,7 @@ class TransferChain(ChainBase):
mediainfo: MediaInfo = None, download_hash: str = None,
target: Path = None, transfer_type: str = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0) -> Tuple[bool, str]:
min_filesize: int = 0, force: bool = False) -> Tuple[bool, str]:
"""
执行一个复杂目录的转移操作
:param path: 待转移目录或文件
@@ -97,6 +97,7 @@ class TransferChain(ChainBase):
:param season: 季
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param force: 是否强制转移
返回:成功标识,错误信息
"""
if not transfer_type:
@@ -174,13 +175,30 @@ class TransferChain(ChainBase):
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.findall(keyword, file_path_str):
if keyword and re.search(r"%s" % keyword, file_path_str, re.IGNORECASE):
logger.info(f"{file_path} 命中整理屏蔽词 {keyword},不处理")
continue
is_blocked = True
break
if is_blocked:
err_msgs.append(f"{file_path.name} 命中整理屏蔽词")
continue
# 转移成功的不再处理
if not force:
transferd = self.transferhis.get_by_src(file_path_str)
if transferd and transferd.status:
logger.info(f"{file_path} 已成功转移过,如需重新处理,请删除历史记录。")
continue
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"正在转移 {processed_num + 1}/{total_num}{file_path.name} ...",
key=ProgressKey.FileTransfer)
if not meta:
# 上级目录元数据
@@ -193,7 +211,7 @@ class TransferChain(ChainBase):
file_meta = meta
# 合并季
if season:
if season is not None:
file_meta.begin_season = season
if not file_meta:
@@ -233,6 +251,13 @@ class TransferChain(ChainBase):
))
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=file_mediainfo.tmdb_id,
mtype=file_mediainfo.type.value)
if transfer_history:
file_mediainfo.title = transfer_history.title
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 电视剧没有集无法转移
@@ -323,13 +348,9 @@ class TransferChain(ChainBase):
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
'mediainfo': file_mediainfo,
'transferinfo': transferinfo
})
# 刮削单个文件
if settings.SCRAP_METADATA:
self.scrape_metadata(path=transferinfo.target_path, mediainfo=file_mediainfo)
# 更新进度
processed_num += 1
self.progress.update(value=processed_num / total_num * 100,
@@ -337,21 +358,34 @@ class TransferChain(ChainBase):
key=ProgressKey.FileTransfer)
# 目录或文件转移完成
self.progress.update(value=100,
text=f"所有文件转移完成,正在执行后续处理 ...",
key=ProgressKey.FileTransfer)
# 执行后续处理
for mkey, media in medias.items():
meta = metas[mkey]
transferinfo = transfers[mkey]
# 刷新媒体库
self.refresh_mediaserver(mediainfo=media, file_path=transferinfo.target_path)
# 刮削
self.scrape_metadata(path=transferinfo.target_path, mediainfo=media)
transfer_meta = metas[mkey]
transfer_info = transfers[mkey]
# 媒体目录
if transfer_info.target_path.is_file():
transfer_info.target_path = transfer_info.target_path.parent
# 刷新媒体库,根目录或季目录
if settings.REFRESH_MEDIASERVER:
self.refresh_mediaserver(mediainfo=media, file_path=transfer_info.target_path)
# 发送通知
se_str = None
if media.type == MediaType.TV:
se_str = f"{meta.season} {StringUtils.format_ep(season_episodes[mkey])}"
self.send_transfer_message(meta=meta,
se_str = f"{transfer_meta.season} {StringUtils.format_ep(season_episodes[mkey])}"
self.send_transfer_message(meta=transfer_meta,
mediainfo=media,
transferinfo=transferinfo,
transferinfo=transfer_info,
season_episode=se_str)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': transfer_meta,
'mediainfo': media,
'transferinfo': transfer_info
})
# 结束进度
logger.info(f"{path} 转移完成,共 {total_num} 个文件,"
f"成功 {total_num - len(err_msgs)} 个,失败 {len(err_msgs)}")
@@ -464,19 +498,17 @@ class TransferChain(ChainBase):
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 转移
state, errmsg = self.do_transfer(path=src_path,
mediainfo=mediainfo,
download_hash=history.download_hash)
if not state:
return False, errmsg
# 删除旧的已整理文件
if history.dest:
self.delete_files(Path(history.dest))
# 删除旧历史记录
self.transferhis.delete(logid)
# 强制转移
state, errmsg = self.do_transfer(path=src_path,
mediainfo=mediainfo,
download_hash=history.download_hash,
force=True)
if not state:
return False, errmsg
return True, ""
@@ -567,22 +599,27 @@ class TransferChain(ChainBase):
"""
logger.info(f"开始删除文件以及空目录:{path} ...")
if not path.exists():
logger.error(f"{path} 不存在")
return
elif path.is_file():
if path.is_file():
# 删除文件
path.unlink()
logger.warn(f"文件 {path} 已删除")
# 判断目录是否为空, 为空则删除
if str(path.parent.parent) != str(path.root):
# 父目录非根目录,删除父目录
files = SystemUtils.list_files(path.parent, settings.RMT_MEDIAEXT)
if not files:
shutil.rmtree(path.parent)
logger.warn(f"目录 {path.parent} 已删除")
# 需要删除父目录
elif str(path.parent) == str(path.root):
# 根目录,删除
logger.warn(f"根目录 {path} 不能删除!")
return
else:
if str(path.parent) != str(path.root):
# 父目录非根目录,才删除目录
shutil.rmtree(path)
# 删除目录
logger.warn(f"目录 {path} 已删除")
# 非根目录,才删除目录
shutil.rmtree(path)
# 删除目录
logger.warn(f"目录 {path} 已删除")
# 需要删除父目录
# 判断父目录是否为空, 为空则删除
for parent_path in path.parents:
if str(parent_path.parent) != str(path.root):
# 父目录非根目录,才删除父目录
files = SystemUtils.list_files(parent_path, settings.RMT_MEDIAEXT)
if not files:
shutil.rmtree(parent_path)
logger.warn(f"目录 {parent_path} 已删除")
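The rewritten `delete_files` above walks `path.parents` and removes ancestor directories that no longer hold media files. A simplified, self-contained sketch of that loop (the real code stops at the filesystem root and checks for remaining media files via `SystemUtils.list_files`; here "empty" simply means no entries at all, and `stop` marks where to halt):

```python
import shutil
import tempfile
from pathlib import Path

def cleanup_empty_parents(path: Path, stop: Path) -> list:
    """Remove empty ancestor directories of `path`, halting at `stop`."""
    removed = []
    for parent in path.parents:
        if parent == stop or stop not in parent.parents:
            break                        # reached or left the managed tree
        if not any(parent.iterdir()):
            parent.rmdir()               # directory is empty: delete it
            removed.append(parent)
    return removed

base = Path(tempfile.mkdtemp())
leaf = base / "show" / "season1" / "episode.mkv"
leaf.parent.mkdir(parents=True)
leaf.touch()
leaf.unlink()                            # media file deleted first
removed = cleanup_empty_parents(leaf, base)
removed_names = [p.name for p in removed]
shutil.rmtree(base)                      # tidy up the temp directory
```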


@@ -13,11 +13,12 @@ from app.chain.transfer import TransferChain
from app.core.event import Event as ManagerEvent
from app.core.event import eventmanager, EventManager
from app.core.plugin import PluginManager
from app.db import ScopedSession
from app.db import SessionFactory
from app.log import logger
from app.schemas.types import EventType, MessageChannel
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
class CommandChian(ChainBase):
@@ -41,7 +42,7 @@ class Command(metaclass=Singleton):
def __init__(self):
# 数据库连接
self._db = ScopedSession()
self._db = SessionFactory()
# 事件管理器
self.eventmanager = EventManager()
# 插件管理器
@@ -53,11 +54,13 @@ class Command(metaclass=Singleton):
"/cookiecloud": {
"func": CookieCloudChain(self._db).remote_sync,
"description": "同步站点",
"category": "站点",
"data": {}
},
"/sites": {
"func": SiteChain(self._db).remote_list,
"description": "查询站点",
"category": "站点",
"data": {}
},
"/site_cookie": {
@@ -78,21 +81,25 @@ class Command(metaclass=Singleton):
"/mediaserver_sync": {
"func": MediaServerChain(self._db).remote_sync,
"description": "同步媒体服务器",
"category": "管理",
"data": {}
},
"/subscribes": {
"func": SubscribeChain(self._db).remote_list,
"description": "查询订阅",
"category": "订阅",
"data": {}
},
"/subscribe_refresh": {
"func": SubscribeChain(self._db).remote_refresh,
"description": "刷新订阅",
"category": "订阅",
"data": {}
},
"/subscribe_search": {
"func": SubscribeChain(self._db).remote_search,
"description": "搜索订阅",
"category": "订阅",
"data": {}
},
"/subscribe_delete": {
@@ -103,11 +110,13 @@ class Command(metaclass=Singleton):
"/downloading": {
"func": DownloadChain(self._db).remote_downloading,
"description": "正在下载",
"category": "管理",
"data": {}
},
"/transfer": {
"func": TransferChain(self._db).process,
"description": "下载文件整理",
"category": "管理",
"data": {}
},
"/redo": {
@@ -118,6 +127,13 @@ class Command(metaclass=Singleton):
"/clear_cache": {
"func": SystemChain(self._db).remote_clear_cache,
"description": "清理缓存",
"category": "管理",
"data": {}
},
"/restart": {
"func": SystemUtils.restart,
"description": "重启系统",
"category": "管理",
"data": {}
}
}
@@ -128,6 +144,7 @@ class Command(metaclass=Singleton):
cmd=command.get('cmd'),
func=Command.send_plugin_event,
desc=command.get('desc'),
category=command.get('category'),
data={
'etype': command.get('event'),
'data': command.get('data')
@@ -164,6 +181,8 @@ class Command(metaclass=Singleton):
"""
self._event.set()
self._thread.join()
if self._db:
self._db.close()
def get_commands(self):
"""
@@ -171,13 +190,15 @@ class Command(metaclass=Singleton):
"""
return self._commands
def register(self, cmd: str, func: Any, data: dict = None, desc: str = None) -> None:
def register(self, cmd: str, func: Any, data: dict = None,
desc: str = None, category: str = None) -> None:
"""
注册命令
"""
self._commands[cmd] = {
"func": func,
"description": desc,
"category": category,
"data": data or {}
}
@@ -201,6 +222,10 @@ class Command(metaclass=Singleton):
if args_num > 0:
if cmd_data:
# 有内置参数直接使用内置参数
data = cmd_data.get("data") or {}
data['channel'] = channel
data['user'] = userid
cmd_data['data'] = data
command['func'](**cmd_data)
elif args_num == 2:
# 没有输入参数只输入渠道和用户ID
@@ -242,7 +267,3 @@ class Command(metaclass=Singleton):
args = " ".join(event_str.split()[1:])
if self.get(cmd):
self.execute(cmd, args, event_channel, event_user)
def __del__(self):
if self._db:
self._db.close()


@@ -1,5 +1,6 @@
import secrets
from pathlib import Path
from typing import List
from pydantic import BaseSettings
@@ -39,6 +40,8 @@ class Settings(BaseSettings):
SEARCH_SOURCE: str = "themoviedb"
# 刮削入库的媒体文件
SCRAP_METADATA: bool = True
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# 刮削来源
SCRAP_SOURCE: str = "themoviedb"
# TMDB图片地址
@@ -63,9 +66,13 @@ class Settings(BaseSettings):
RMT_AUDIO_TRACK_EXT: list = ['.mka']
# 索引器
INDEXER: str = "builtin"
# 订阅模式
SUBSCRIBE_MODE: str = "spider"
# RSS订阅模式刷新时间间隔(分钟)
SUBSCRIBE_RSS_INTERVAL: int = 30
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 用户认证站点 hhclub/audiences/hddolby/zmpt/freefarm/hdfans/wintersakura/leaves/1ptba/icc2022/iyuu
# 用户认证站点
AUTH_SITE: str = ""
# 交互搜索自动下载用户ID,多个使用,分割
AUTO_DOWNLOAD_USER: str = None
@@ -163,7 +170,7 @@ class Settings(BaseSettings):
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录
# 媒体库目录,多个目录使用,分隔
LIBRARY_PATH: str = None
# 电影媒体库目录名,默认"电影"
LIBRARY_MOVIE_NAME: str = None
@@ -249,6 +256,12 @@ class Settings(BaseSettings):
"server": self.PROXY_HOST
}
@property
def LIBRARY_PATHS(self) -> List[Path]:
if self.LIBRARY_PATH:
return [Path(path) for path in self.LIBRARY_PATH.split(",")]
return []
def __init__(self):
super().__init__()
with self.CONFIG_PATH as p:
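The new `LIBRARY_PATHS` property lets `LIBRARY_PATH` hold several comma-separated directories. A standalone sketch of the same behavior (the stub `Settings` class here is illustrative, not the pydantic one):

```python
from pathlib import Path

# LIBRARY_PATH may now hold several directories separated by commas;
# the property splits them into Path objects, or returns [] when unset.
class Settings:
    LIBRARY_PATH: str = None

    @property
    def LIBRARY_PATHS(self):
        if self.LIBRARY_PATH:
            return [Path(p) for p in self.LIBRARY_PATH.split(",")]
        return []

s = Settings()
s.LIBRARY_PATH = "/media/movies,/media/tv"
```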


@@ -15,9 +15,9 @@ class MetaBase(object):
"""
# 是否处理的文件
isfile: bool = False
# 原标题字符串
# 原标题字符串(未经过识别词处理)
title: str = ""
# 识别用字符串
# 识别用字符串(经过识别词处理后)
org_string: Optional[str] = None
# 副标题
subtitle: Optional[str] = None


@@ -28,7 +28,23 @@ class WordsMatcher(metaclass=Singleton):
if not word:
continue
try:
if word.count(" => "):
if word.count(" => ") and word.count(" && ") and word.count(" >> ") and word.count(" <> "):
# 替换词
thc = str(re.findall(r'(.*?)\s*=>', word)[0]).strip()
# 被替换词
bthc = str(re.findall(r'=>\s*(.*?)\s*&&', word)[0]).strip()
# 集偏移前字段
pyq = str(re.findall(r'&&\s*(.*?)\s*<>', word)[0]).strip()
# 集偏移后字段
pyh = str(re.findall(r'<>(.*?)\s*>>', word)[0]).strip()
# 集偏移
offsets = str(re.findall(r'>>\s*(.*?)$', word)[0]).strip()
# 替换词
title, message, state = self.__replace_regex(title, thc, bthc)
if state:
# 替换词成功再进行集偏移
title, message, state = self.__episode_offset(title, pyq, pyh, offsets)
elif word.count(" => "):
# 替换词
strings = word.split(" => ")
title, message, state = self.__replace_regex(title, strings[0], strings[1])
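The new compound custom-word rule combines a regex replacement with an episode offset in one line: `search => replace && pre <> post >> offset`. A self-contained sketch of how the five fields are extracted and applied, assuming the offset is a plain integer shift (the real `WordsMatcher` helpers are more general):

```python
import re

# Parse "<search> => <replace> && <pre> <> <post> >> <offset>":
# first apply the replacement, then shift the episode number that
# sits between <pre> and <post> by <offset>.
def apply_compound_rule(title: str, word: str) -> str:
    search = re.findall(r'(.*?)\s*=>', word)[0].strip()
    replace = re.findall(r'=>\s*(.*?)\s*&&', word)[0].strip()
    pre = re.findall(r'&&\s*(.*?)\s*<>', word)[0].strip()
    post = re.findall(r'<>(.*?)\s*>>', word)[0].strip()
    offset = int(re.findall(r'>>\s*(.*?)$', word)[0].strip())
    # Step 1: replacement
    title = re.sub(search, replace, title)
    # Step 2: episode offset between pre and post
    def shift(m):
        return f"{m.group(1)}{int(m.group(2)) + offset}{m.group(3)}"
    return re.sub(rf"({pre})(\d+)({post})", shift, title)

new_title = apply_compound_rule("Show 第13集", r"Show => MyShow && 第 <> 集 >> -12")
```

For a title whose absolute episode numbering starts at 13, the rule above renames the show and maps episode 13 back to episode 1.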


@@ -3,6 +3,7 @@ from typing import List, Any, Dict, Tuple
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.module import ModuleHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
@@ -23,6 +24,7 @@ class PluginManager(metaclass=Singleton):
_config_key: str = "plugin.%s"
def __init__(self):
self.siteshelper = SitesHelper()
self.init_config()
def init_config(self):
@@ -82,7 +84,8 @@ class PluginManager(metaclass=Singleton):
# 停止所有插件
for plugin in self._running_plugins.values():
# 关闭数据库
plugin.close()
if hasattr(plugin, "close"):
plugin.close()
# 关闭插件
if hasattr(plugin, "stop_service"):
plugin.stop_service()
@@ -183,6 +186,8 @@ class PluginManager(metaclass=Singleton):
# 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
for pid, plugin in self._plugins.items():
# 运行中的插件实例
plugin_obj = self._running_plugins.get(pid)
# 基本属性
conf = {}
# ID
@@ -193,11 +198,20 @@ class PluginManager(metaclass=Singleton):
else:
conf.update({"installed": False})
# 运行状态
if pid in self._running_plugins.keys() and hasattr(plugin, "get_state"):
plugin_obj = self._running_plugins.get(pid)
if plugin_obj and hasattr(plugin, "get_state"):
conf.update({"state": plugin_obj.get_state()})
else:
conf.update({"state": False})
# 是否有详情页面
if hasattr(plugin, "get_page"):
if ObjectUtils.check_method(plugin.get_page):
conf.update({"has_page": True})
else:
conf.update({"has_page": False})
# 权限
if hasattr(plugin, "auth_level"):
if self.siteshelper.auth_level < plugin.auth_level:
continue
# 名称
if hasattr(plugin, "plugin_name"):
conf.update({"plugin_name": plugin.plugin_name})


@@ -8,9 +8,11 @@ Engine = create_engine(f"sqlite:///{settings.CONFIG_PATH}/user.db",
pool_pre_ping=True,
echo=False,
poolclass=QueuePool,
pool_size=1000,
pool_recycle=60 * 10,
max_overflow=0)
pool_size=1024,
pool_recycle=600,
pool_timeout=180,
max_overflow=0,
connect_args={"timeout": 60})
# 会话工厂
SessionFactory = sessionmaker(autocommit=False, autoflush=False, bind=Engine)
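The engine change above enlarges the pool and passes `connect_args={"timeout": 60}`, which SQLAlchemy forwards to `sqlite3.connect()`; that value is SQLite's busy timeout in seconds (how long a connection waits on a locked database before raising "database is locked"). A stdlib-only illustration of the same parameter:

```python
import sqlite3

# timeout=60 is the SQLite busy timeout: wait up to 60s on a locked
# database instead of failing immediately.
conn = sqlite3.connect(":memory:", timeout=60)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES (?)", ("movie",))
conn.commit()
rows = conn.execute("SELECT name FROM t").fetchall()
```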
@@ -33,7 +35,6 @@ def get_db():
class DbOper:
_db: Session = None
def __init__(self, db: Session = None):
@@ -41,7 +42,3 @@ class DbOper:
self._db = db
else:
self._db = ScopedSession()
def __del__(self):
if self._db:
self._db.close()


@@ -39,6 +39,12 @@ class DownloadHistoryOper(DbOper):
downloadfile = DownloadFiles(**file_item)
downloadfile.create(self._db)
def truncate_files(self):
"""
清空下载历史文件记录
"""
DownloadFiles.truncate(self._db)
def get_files_by_hash(self, download_hash: str, state: int = None) -> List[DownloadFiles]:
"""
按Hash查询下载文件记录


@@ -6,7 +6,7 @@ from alembic.config import Config
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import Engine, ScopedSession
from app.db import Engine, SessionFactory
from app.db.models import Base
from app.db.models.user import User
from app.log import logger
@@ -22,7 +22,7 @@ def init_db():
# 全量建表
Base.metadata.create_all(bind=Engine)
# 初始化超级管理员
db = ScopedSession()
db = SessionFactory()
user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not user:
user = User(


@@ -1,6 +1,6 @@
from typing import Any
from sqlalchemy.orm import as_declarative, declared_attr
from sqlalchemy.orm import as_declarative, declared_attr, Session
@as_declarative()
@@ -8,33 +8,41 @@ class Base:
id: Any
__name__: str
def create(self, db):
@staticmethod
def commit(db: Session):
try:
db.commit()
except Exception as err:
db.rollback()
raise err
def create(self, db: Session):
db.add(self)
db.commit()
self.commit(db)
return self
@classmethod
def get(cls, db, rid: int):
def get(cls, db: Session, rid: int):
return db.query(cls).filter(cls.id == rid).first()
def update(self, db, payload: dict):
def update(self, db: Session, payload: dict):
payload = {k: v for k, v in payload.items() if v is not None}
for key, value in payload.items():
setattr(self, key, value)
db.commit()
Base.commit(db)
@classmethod
def delete(cls, db, rid):
def delete(cls, db: Session, rid):
db.query(cls).filter(cls.id == rid).delete()
db.commit()
Base.commit(db)
@classmethod
def truncate(cls, db):
def truncate(cls, db: Session):
db.query(cls).delete()
db.commit()
Base.commit(db)
@classmethod
def list(cls, db):
def list(cls, db: Session):
return db.query(cls).all()
def to_dict(self):
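The new `Base.commit()` wraps every commit in try/except, rolls back on failure, and re-raises so callers still see the error. The same pattern, sketched with stdlib `sqlite3` standing in for a SQLAlchemy `Session`:

```python
import sqlite3

# Commit-or-rollback helper: on failure, undo the pending transaction
# and re-raise so the caller is still notified.
def safe_commit(db):
    try:
        db.commit()
    except Exception:
        db.rollback()
        raise

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE site (id INTEGER PRIMARY KEY, domain TEXT)")
db.execute("INSERT INTO site (domain) VALUES (?)", ("example.org",))
safe_commit(db)
count = db.execute("SELECT COUNT(*) FROM site").fetchone()[0]
```

Centralizing this in one helper keeps a failed commit from poisoning the pooled session for the next caller.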


@@ -52,7 +52,7 @@ class DownloadHistory(Base):
@staticmethod
def get_last_by(db: Session, mtype: str = None, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
episode: str = None, tmdbid: int = None):
"""
根据tmdbid、season、season_episode查询下载记录
"""
@@ -130,9 +130,10 @@ class DownloadFiles(Base):
@staticmethod
def delete_by_fullpath(db: Session, fullpath: str):
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,
DownloadFiles.state == 1).update(
db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,
DownloadFiles.state == 1).update(
{
"state": 0
}
)
Base.commit(db)


@@ -47,7 +47,7 @@ class MediaServerItem(Base):
@staticmethod
def empty(db: Session, server: str):
db.query(MediaServerItem).filter(MediaServerItem.server == server).delete()
db.commit()
Base.commit(db)
@staticmethod
def exist_by_tmdbid(db: Session, tmdbid: int, mtype: str):


@@ -23,7 +23,8 @@ class PluginData(Base):
@staticmethod
def del_plugin_data_by_key(db: Session, plugin_id: str, key: str):
return db.query(PluginData).filter(PluginData.plugin_id == plugin_id, PluginData.key == key).delete()
db.query(PluginData).filter(PluginData.plugin_id == plugin_id, PluginData.key == key).delete()
Base.commit(db)
@staticmethod
def get_plugin_data_by_plugin_id(db: Session, plugin_id: str):


@@ -1,66 +0,0 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db.models import Base
class Rss(Base):
"""
RSS订阅
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 名称
name = Column(String, nullable=False)
# RSS地址
url = Column(String, nullable=False)
# 类型
type = Column(String)
# 标题
title = Column(String)
# 年份
year = Column(String)
# TMDBID
tmdbid = Column(Integer, index=True)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分
vote = Column(Integer)
# 简介
description = Column(String)
# 总集数
total_episode = Column(Integer)
# 包含
include = Column(String)
# 排除
exclude = Column(String)
# 洗版
best_version = Column(Integer)
# 是否使用代理服务器
proxy = Column(Integer)
# 是否使用过滤规则
filter = Column(Integer)
# 保存路径
save_path = Column(String)
# 已处理数量
processed = Column(Integer)
# 附加信息,已处理数据
note = Column(String)
# 最后更新时间
last_update = Column(String)
# 状态 0-停用1-启用
state = Column(Integer, default=1)
@staticmethod
def get_by_tmdbid(db: Session, tmdbid: int, season: int = None):
if season:
return db.query(Rss).filter(Rss.tmdbid == tmdbid,
Rss.season == season).all()
return db.query(Rss).filter(Rss.tmdbid == tmdbid).all()
@staticmethod
def get_by_title(db: Session, title: str):
return db.query(Rss).filter(Rss.title == title).first()


@@ -61,4 +61,4 @@ class Site(Base):
@staticmethod
def reset(db: Session):
db.query(Site).delete()
db.commit()
Base.commit(db)


@@ -25,7 +25,7 @@ class TransferHistory(Base):
title = Column(String, index=True)
# 年份
year = Column(String)
tmdbid = Column(Integer)
tmdbid = Column(Integer, index=True)
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String)
@@ -85,35 +85,69 @@ class TransferHistory(Base):
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
@staticmethod
def list_by(db: Session, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
def list_by(db: Session, mtype: str = None, title: str = None, year: str = None, season: str = None,
episode: str = None, tmdbid: int = None, dest: str = None):
"""
根据tmdbid、season、season_episode查询转移记录
tmdbid + mtype 或 title + year 必输
"""
if tmdbid and not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid).all()
if tmdbid and season and not episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.seasons == season).all()
if tmdbid and season and episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# 电视剧所有季集|电影
if not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
# 电视剧某季
if season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
# 电视剧某季某集
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# TMDBID + 类型
if tmdbid and mtype:
# 电视剧某季某集
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
# 电视剧某季
elif season:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season).all()
else:
if dest:
# 电影
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.dest == dest).all()
else:
# 电视剧所有季集
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).all()
# 标题 + 年份
elif title and year:
# 电视剧某季某集
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode,
TransferHistory.dest == dest).all()
# 电视剧某季
elif season:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
else:
if dest:
# 电影
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.dest == dest).all()
else:
# 电视剧所有季集
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
return []
@staticmethod
def get_by_type_tmdbid(db: Session, mtype: str = None, tmdbid: int = None):
"""
根据tmdbid、type查询转移记录
"""
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).first()
@staticmethod
def update_download_hash(db: Session, historyid: int = None, download_hash: str = None):
@@ -122,3 +156,4 @@ class TransferHistory(Base):
"download_hash": download_hash
}
)
Base.commit(db)
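The reworked `list_by()` dispatches on two alternative keys (`tmdbid + mtype`, else `title + year`) and then narrows by season/episode/dest. A simplified stand-in using plain dicts in place of ORM rows (the real method only applies `dest` in some branches; here it is applied whenever given):

```python
# Either tmdbid + mtype or title + year selects the records,
# with season / episode / dest narrowing further.
def list_by(records, mtype=None, title=None, year=None,
            season=None, episode=None, tmdbid=None, dest=None):
    if tmdbid and mtype:
        hits = [r for r in records
                if r["tmdbid"] == tmdbid and r["type"] == mtype]
    elif title and year:
        hits = [r for r in records
                if r["title"] == title and r["year"] == year]
    else:
        return []
    if season:
        hits = [r for r in hits if r.get("seasons") == season]
    if episode:
        hits = [r for r in hits if r.get("episodes") == episode]
    if dest:
        hits = [r for r in hits if r.get("dest") == dest]
    return hits

records = [
    {"tmdbid": 1, "type": "tv", "title": "A", "year": "2023",
     "seasons": "S01", "episodes": "E01", "dest": "/tv"},
    {"tmdbid": 1, "type": "tv", "title": "A", "year": "2023",
     "seasons": "S01", "episodes": "E02", "dest": "/tv"},
]
```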


@@ -1,57 +0,0 @@
from typing import List
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db.models.rss import Rss
class RssOper(DbOper):
"""
RSS订阅数据管理
"""
def __init__(self, db: Session = None):
super().__init__(db)
def add(self, **kwargs) -> bool:
"""
新增RSS订阅
"""
item = Rss(**kwargs)
item.create(self._db)
return True
def exists(self, tmdbid: int, season: int = None):
"""
判断是否存在
"""
return Rss.get_by_tmdbid(self._db, tmdbid, season)
def list(self, rssid: int = None) -> List[Rss]:
"""
查询所有RSS订阅
"""
if rssid:
return [Rss.get(self._db, rssid)]
return Rss.list(self._db)
def delete(self, rssid: int) -> bool:
"""
删除RSS订阅
"""
item = Rss.get(self._db, rssid)
if item:
item.delete(self._db)
return True
return False
def update(self, rssid: int, **kwargs) -> bool:
"""
更新RSS订阅
"""
item = Rss.get(self._db, rssid)
if item:
item.update(self._db, kwargs)
return True
return False


@@ -74,3 +74,15 @@ class SiteOper(DbOper):
"cookie": cookies
})
return True, "更新站点Cookie成功"
def update_rss(self, domain: str, rss: str) -> Tuple[bool, str]:
"""
更新站点rss
"""
site = Site.get_by_domain(self._db, domain)
if not site:
return False, "站点不存在"
site.update(self._db, {
"rss": rss
})
return True, "更新站点RSS地址成功"


@@ -1,9 +1,7 @@
import json
from typing import Any, Union
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db import DbOper, SessionFactory
from app.db.models.systemconfig import SystemConfig
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
@@ -14,11 +12,12 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
# 配置对象
__SYSTEMCONF: dict = {}
def __init__(self, db: Session = None):
def __init__(self):
"""
加载配置到内存
"""
super().__init__(db)
self._db = SessionFactory()
super().__init__(self._db)
for item in SystemConfig.list(self._db):
if ObjectUtils.is_obj(item.value):
self.__SYSTEMCONF[item.key] = json.loads(item.value)
@@ -57,3 +56,7 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
if not key:
return self.__SYSTEMCONF
return self.__SYSTEMCONF.get(key)
def __del__(self):
if self._db:
self._db.close()
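`SystemConfigOper` now owns a dedicated session and loads all config rows into an in-memory dict at startup, JSON-decoding values that look like objects. A sketch of that load step, approximating the `ObjectUtils.is_obj` check with a leading-character test:

```python
import json

# Values that look like JSON objects/arrays are decoded once at
# startup; everything else is kept as a plain string.
def load_conf(rows):
    conf = {}
    for key, value in rows:
        if value and value[0] in "[{":
            conf[key] = json.loads(value)
        else:
            conf[key] = value
    return conf

conf = load_conf([("Sites", '["a.org", "b.org"]'), ("Theme", "dark")])
```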


@@ -51,18 +51,28 @@ class TransferHistoryOper(DbOper):
"""
return TransferHistory.statistic(self._db, days)
def get_by(self, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid: str = None) -> List[TransferHistory]:
def get_by(self, title: str = None, year: str = None, mtype: str = None,
season: str = None, episode: str = None, tmdbid: int = None, dest: str = None) -> List[TransferHistory]:
"""
按类型、标题、年份、季集查询转移记录
"""
return TransferHistory.list_by(db=self._db,
mtype=mtype,
title=title,
dest=dest,
year=year,
season=season,
episode=episode,
tmdbid=tmdbid)
def get_by_type_tmdbid(self, mtype: str = None, tmdbid: int = None) -> TransferHistory:
"""
按类型、tmdb查询转移记录
"""
return TransferHistory.get_by_type_tmdbid(db=self._db,
mtype=mtype,
tmdbid=tmdbid)
def delete(self, historyid):
"""
删除转移记录


@@ -1,16 +1,227 @@
import xml.dom.minidom
from typing import List
from typing import List, Tuple, Union
from urllib.parse import urljoin
from lxml import etree
from app.core.config import settings
from app.helper.browser import PlaywrightHelper
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
class RssHelper:
"""
RSS帮助类:解析RSS报文、获取RSS地址等
"""
# 各站点RSS链接获取配置
rss_link_conf = {
"default": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"hares.top": {
"xpath": "//*[@id='layui-layer100001']/div[2]/div/p[4]/a/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"et8.org": {
"xpath": "//*[@id='outer']/table/tbody/tr/td/table/tbody/tr/td/a[2]/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"pttime.org": {
"xpath": "//*[@id='outer']/table/tbody/tr/td/table/tbody/tr/td/text()[5]",
"url": "getrss.php",
"params": {
"showrows": 10,
"inclbookmarked": 0,
"itemsmalldescr": 1
}
},
"ourbits.club": {
"xpath": "//a[@class='gen_rsslink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"totheglory.im": {
"xpath": "//textarea/text()",
"url": "rsstools.php?c51=51&c52=52&c53=53&c54=54&c108=108&c109=109&c62=62&c63=63&c67=67&c69=69&c70=70&c73=73&c76=76&c75=75&c74=74&c87=87&c88=88&c99=99&c90=90&c58=58&c103=103&c101=101&c60=60",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"monikadesign.uk": {
"xpath": "//a/@href",
"url": "rss",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"zhuque.in": {
"xpath": "//a/@href",
"url": "user/rss",
"render": True,
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
}
},
"hdchina.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"rsscart": 0
}
},
"audiences.me": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"torrent_type": 1,
"exp": 180
}
},
"shadowflow.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"paid": 0,
"search_mode": 0,
"showrows": 30
}
},
"hddolby.com": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"exp": 180
}
},
"hdhome.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"exp": 180
}
},
"pthome.net": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"exp": 180
}
},
"ptsbao.club": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"size": 0
}
},
"leaves.red": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 0,
"paid": 2
}
},
"hdtime.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 0,
}
},
"m-team.io": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"showrows": 50,
"inclbookmarked": 0,
"itemsmalldescr": 1,
"https": 1
}
},
"u2.dmhy.org": {
"xpath": "//a[@class='faqlink']/@href",
"url": "getrss.php",
"params": {
"inclbookmarked": 0,
"itemsmalldescr": 1,
"showrows": 50,
"search_mode": 1,
"inclautochecked": 1,
"trackerssl": 1
}
},
}
@staticmethod
def parse(url, proxy: bool = False) -> List[dict]:
def parse(url, proxy: bool = False) -> Union[List[dict], None]:
"""
解析RSS订阅URL获取RSS中的种子信息
:param url: RSS地址
@@ -77,4 +288,61 @@ class RssHelper:
continue
except Exception as e2:
print(str(e2))
# 站点RSS过期提示文案(观众:RSS 链接已过期,您需要获得一个新的!;pthome:RSS Link has expired, You need to get a new one!)
_rss_expired_msg = [
"RSS 链接已过期, 您需要获得一个新的!",
"RSS Link has expired, You need to get a new one!",
"RSS Link has expired, You need to get new!"
]
if ret_xml in _rss_expired_msg:
return None
return ret_array
def get_rss_link(self, url: str, cookie: str, ua: str, proxy: bool = False) -> Tuple[str, str]:
"""
获取站点rss地址
:param url: 站点地址
:param cookie: 站点cookie
:param ua: 站点ua
:param proxy: 是否使用代理
:return: rss地址、错误信息
"""
try:
# 获取站点域名
domain = StringUtils.get_url_domain(url)
# 获取配置
site_conf = self.rss_link_conf.get(domain) or self.rss_link_conf.get("default")
# RSS地址
rss_url = urljoin(url, site_conf.get("url"))
# RSS请求参数
rss_params = site_conf.get("params")
# 请求RSS页面
if site_conf.get("render"):
html_text = PlaywrightHelper().get_page_source(
url=rss_url,
cookies=cookie,
ua=ua,
proxies=settings.PROXY if proxy else None
)
else:
res = RequestUtils(
cookies=cookie,
timeout=60,
ua=ua,
proxies=settings.PROXY if proxy else None
).post_res(url=rss_url, data=rss_params)
if res:
html_text = res.text
elif res is not None:
return "", f"获取 {url} RSS链接失败错误码{res.status_code},错误原因:{res.reason}"
else:
return "", f"获取RSS链接失败无法连接 {url} "
# 解析HTML
html = etree.HTML(html_text)
if html:
rss_link = html.xpath(site_conf.get("xpath"))
if rss_link:
return str(rss_link[-1]), ""
return "", f"获取RSS链接失败{url}"
except Exception as e:
return "", f"获取 {url} RSS链接失败{str(e)}"
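`get_rss_link()` picks a per-site config by domain (falling back to `"default"`) and joins the site URL with the configured getrss path before POSTing. The lookup-and-join step can be sketched without any network calls (config values here are illustrative, not the full table above):

```python
from urllib.parse import urljoin

# Per-site RSS-link config, keyed by domain with a "default" fallback.
rss_link_conf = {
    "default": {"url": "getrss.php",
                "params": {"inclbookmarked": 0, "showrows": 50}},
    "monikadesign.uk": {"url": "rss", "params": {}},
}

def build_rss_request(site_url: str, domain: str):
    conf = rss_link_conf.get(domain) or rss_link_conf["default"]
    return urljoin(site_url, conf["url"]), conf["params"]

url, params = build_rss_request("https://example-pt.org/", "example-pt.org")
```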

Binary file not shown.


@@ -130,21 +130,34 @@ class TorrentHelper:
"""
获取种子文件的文件夹名和文件清单
:param torrent_path: 种子文件路径
:return: 文件夹名、文件清单
:return: 文件夹名、文件清单,单文件种子返回空文件夹名
"""
if not torrent_path or not torrent_path.exists():
return "", []
try:
torrentinfo = Torrent.from_file(torrent_path)
# 获取目录名
folder_name = torrentinfo.name
# 获取文件清单
if not torrentinfo.files:
if (not torrentinfo.files
or (len(torrentinfo.files) == 1
and torrentinfo.files[0].name == torrentinfo.name)):
# 单文件种子目录名返回空
folder_name = ""
# 单文件种子
file_list = [torrentinfo.name]
else:
file_list = [fileinfo.name for fileinfo in torrentinfo.files]
logger.debug(f"{torrent_path.stem} -> 目录:{folder_name},文件清单:{file_list}")
# 目录名
folder_name = torrentinfo.name
# 文件清单,如果一级目录与种子名相同则去掉
file_list = []
for fileinfo in torrentinfo.files:
file_path = Path(fileinfo.name)
# 根路径
root_path = file_path.parts[0]
if root_path == folder_name:
file_list.append(str(file_path.relative_to(root_path)))
else:
file_list.append(fileinfo.name)
logger.info(f"解析种子:{torrent_path.name} => 目录:{folder_name},文件清单:{file_list}")
return folder_name, file_list
except Exception as err:
logger.error(f"种子文件解析失败:{err}")
@@ -254,9 +267,11 @@ class TorrentHelper:
for file in files:
if not file:
continue
if Path(file).suffix not in settings.RMT_MEDIAEXT:
file_path = Path(file)
if file_path.suffix not in settings.RMT_MEDIAEXT:
continue
meta = MetaInfo(file)
# 只使用文件名识别
meta = MetaInfo(file_path.stem)
if not meta.begin_episode:
continue
episodes = list(set(episodes).union(set(meta.episode_list)))
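The file-list change above strips the torrent's root folder from each path and returns an empty folder name for single-file torrents. A standalone sketch of that normalization (function name is illustrative):

```python
from pathlib import Path

# When a file's first path component equals the torrent's folder name,
# strip it so the list is relative to the folder; a single-file torrent
# yields an empty folder name.
def normalize_file_list(folder_name, files):
    if len(files) == 1 and files[0] == folder_name:
        return "", [folder_name]  # single-file torrent
    out = []
    for f in files:
        p = Path(f)
        if p.parts and p.parts[0] == folder_name:
            out.append(str(p.relative_to(p.parts[0])))
        else:
            out.append(f)
    return folder_name, out

folder, flist = normalize_file_list("Show.S01", ["Show.S01/E01.mkv", "Show.S01/E02.mkv"])
```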


@@ -28,7 +28,7 @@ class EmbyModule(_ModuleBase):
定时任务每10分钟调用一次
"""
# 定时重连
if not self.emby.user:
if not self.emby.is_inactive():
self.emby = Emby()
def user_authenticate(self, name: str, password: str) -> Optional[str]:
@@ -68,7 +68,7 @@ class EmbyModule(_ModuleBase):
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.emby.get_movies(title=mediainfo.title, year=mediainfo.year)
movies = self.emby.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None


@@ -27,6 +27,14 @@ class Emby(metaclass=Singleton):
self.user = self.get_user()
self.folders = self.get_emby_folders()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._apikey:
return False
return True if not self.user else False
def get_emby_folders(self) -> List[dict]:
"""
获取Emby媒体库路径列表
@@ -264,11 +272,15 @@ class Emby(metaclass=Singleton):
return None
return ""
def get_movies(self, title: str, year: str = None) -> Optional[List[dict]]:
def get_movies(self,
title: str,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
"""
根据标题和年份检查电影是否在Emby中存在,存在则返回列表
:param title: 标题
:param year: 年份,可以为空,为空时不按年份过滤
:param tmdb_id: TMDB ID
:return: 含title、year属性的字典列表
"""
if not self._host or not self._apikey:
@@ -283,11 +295,19 @@ class Emby(metaclass=Singleton):
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
continue
if res_item.get('Name') == title and (
not year or str(res_item.get('ProductionYear')) == str(year)):
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
return ret_movies
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
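The `get_movies()` change prefers an exact TMDB-id match when both sides have one, falling back to title/year comparison otherwise. A network-free sketch of that matching step, with item dicts loosely mirroring Emby's `/Items` response:

```python
# When the caller passes tmdb_id and the library item exposes
# ProviderIds.Tmdb, the id comparison wins; otherwise fall back to
# title/year matching.
def match_movies(items, title, year=None, tmdb_id=None):
    hits = []
    for item in items:
        item_tmdbid = item.get("ProviderIds", {}).get("Tmdb")
        if tmdb_id and item_tmdbid:
            if str(item_tmdbid) != str(tmdb_id):
                continue
            hits.append({"title": item.get("Name"),
                         "year": str(item.get("ProductionYear"))})
            continue
        if item.get("Name") == title and (
                not year or str(item.get("ProductionYear")) == str(year)):
            hits.append({"title": item.get("Name"),
                         "year": str(item.get("ProductionYear"))})
    return hits

items = [
    {"Name": "Dune", "ProductionYear": 2021, "ProviderIds": {"Tmdb": "438631"}},
    {"Name": "Dune", "ProductionYear": 1984, "ProviderIds": {"Tmdb": "841"}},
]
```

Matching on the id avoids the classic remake collision where two films share a title.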
@@ -453,31 +473,15 @@ class Emby(metaclass=Singleton):
# 查找需要刷新的媒体库ID
item_path = Path(item.target_path)
for folder in self.folders:
# 找同级路径最多的媒体库(要求容器内映射路径与实际一致)
max_comm_path = ""
match_num = 0
match_id = None
# 匹配子目录
for subfolder in folder.get("SubFolders"):
try:
# 查询最大公共路径
# 匹配子目录
subfolder_path = Path(subfolder.get("Path"))
item_path_parents = list(item_path.parents)
subfolder_path_parents = list(subfolder_path.parents)
common_path = next(p1 for p1, p2 in zip(reversed(item_path_parents),
reversed(subfolder_path_parents)
) if p1 == p2)
if len(common_path) > len(max_comm_path):
max_comm_path = common_path
match_id = subfolder.get("Id")
match_num += 1
except StopIteration:
continue
if item_path.is_relative_to(subfolder_path):
return subfolder.get("Id")
except Exception as err:
print(str(err))
# 检查匹配情况
if match_id:
return match_id if match_num == 1 else folder.get("Id")
# 如果找不到,只要路径中有分类目录名就命中
for subfolder in folder.get("SubFolders"):
if subfolder.get("Path") and re.search(r"[/\\]%s" % item.category,
@@ -778,6 +782,7 @@ class Emby(metaclass=Singleton):
}
"""
message = json.loads(message_str)
logger.info(f"接收到emby webhook{message}")
eventItem = WebhookEventInfo(event=message.get('Event', ''), channel="emby")
if message.get('Item'):
if message.get('Item', {}).get('Type') == 'Episode':
@@ -806,9 +811,9 @@ class Emby(metaclass=Singleton):
eventItem.item_type = "MOV"
eventItem.item_name = "%s %s" % (
message.get('Item', {}).get('Name'), "(" + str(message.get('Item', {}).get('ProductionYear')) + ")")
eventItem.item_path = message.get('Item', {}).get('Path')
eventItem.item_id = message.get('Item', {}).get('Id')
eventItem.item_path = message.get('Item', {}).get('Path')
eventItem.tmdb_id = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if message.get('Item', {}).get('Overview') and len(message.get('Item', {}).get('Overview')) > 100:
eventItem.overview = str(message.get('Item', {}).get('Overview'))[:100] + "..."


@@ -11,6 +11,299 @@ from app.schemas.types import MediaType
class FanartModule(_ModuleBase):
"""
{
"name": "The Wheel of Time",
"thetvdb_id": "355730",
"tvposter": [
{
"id": "174068",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64b009de9548d.jpg",
"lang": "en",
"likes": "3"
},
{
"id": "176424",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64de44fe42073.jpg",
"lang": "00",
"likes": "3"
},
{
"id": "176407",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64dde63c7c941.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "177321",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64eda10599c3d.jpg",
"lang": "cz",
"likes": "0"
},
{
"id": "155050",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-6313adbd1fd58.jpg",
"lang": "pl",
"likes": "0"
},
{
"id": "140198",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-61a0d7b11952e.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "140034",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-619e65b73871d.jpg",
"lang": "en",
"likes": "0"
}
],
"hdtvlogo": [
{
"id": "139835",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6197d9392faba.png",
"lang": "en",
"likes": "3"
},
{
"id": "140039",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-619e87941a128.png",
"lang": "pt",
"likes": "3"
},
{
"id": "140092",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-619fa2347bada.png",
"lang": "en",
"likes": "3"
},
{
"id": "164312",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-63c8185cb8824.png",
"lang": "hu",
"likes": "1"
},
{
"id": "139827",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6197539658a9e.png",
"lang": "en",
"likes": "1"
},
{
"id": "177214",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-64ebae44c23a6.png",
"lang": "cz",
"likes": "0"
},
{
"id": "177215",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-64ebae472deef.png",
"lang": "cz",
"likes": "0"
},
{
"id": "156163",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-63316bef1ff9d.png",
"lang": "cz",
"likes": "0"
},
{
"id": "155051",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6313add04ca92.png",
"lang": "pl",
"likes": "0"
},
{
"id": "152668",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-62ced3775a40a.png",
"lang": "pl",
"likes": "0"
},
{
"id": "142266",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-61ccd93eeac2b.png",
"lang": "de",
"likes": "0"
}
],
"hdclearart": [
{
"id": "164313",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-63c81871c982c.png",
"lang": "en",
"likes": "3"
},
{
"id": "140284",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61a2128ed1df2.png",
"lang": "pt",
"likes": "3"
},
{
"id": "139828",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61975401e894c.png",
"lang": "en",
"likes": "1"
},
{
"id": "164314",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-63c8188488a5f.png",
"lang": "hu",
"likes": "1"
},
{
"id": "177322",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-64eda135933b6.png",
"lang": "cz",
"likes": "0"
},
{
"id": "142267",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61ccda9918c5c.png",
"lang": "de",
"likes": "0"
}
],
"seasonposter": [
{
"id": "140199",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-61a0d7c2976de.jpg",
"lang": "en",
"likes": "1",
"season": "1"
},
{
"id": "176395",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-64dd80b3d79a9.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "140035",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-619e65c4d5357.jpg",
"lang": "en",
"likes": "0",
"season": "1"
}
],
"tvthumb": [
{
"id": "140242",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-61a1813035506.jpg",
"lang": "en",
"likes": "1"
},
{
"id": "177323",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-64eda15b6dce6.jpg",
"lang": "cz",
"likes": "0"
},
{
"id": "176399",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-64dd85c9b618c.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "152669",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-62ced53d16574.jpg",
"lang": "pl",
"likes": "0"
},
{
"id": "141983",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-61c6d04a6d701.jpg",
"lang": "en",
"likes": "0"
}
],
"showbackground": [
{
"id": "177324",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-64eda1833ccb1.jpg",
"lang": "",
"likes": "0",
"season": "all"
},
{
"id": "141986",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-61c6d08f7c7e2.jpg",
"lang": "",
"likes": "0",
"season": "all"
},
{
"id": "139868",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-6198ce358b98a.jpg",
"lang": "",
"likes": "0",
"season": "all"
}
],
"seasonthumb": [
{
"id": "176396",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonthumb/the-wheel-of-time-64dd80c8593f9.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "176400",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonthumb/the-wheel-of-time-64dd85da7c5e9.jpg",
"lang": "en",
"likes": "0",
"season": "0"
}
],
"tvbanner": [
{
"id": "176397",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-64dd80da9a255.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "176401",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-64dd85e8904ea.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "141988",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-61c6d34bceb5f.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "141984",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-61c6d06c1c21c.jpg",
"lang": "en",
"likes": "0"
}
],
"seasonbanner": [
{
"id": "176398",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonbanner/the-wheel-of-time-64dd80e7dbd9f.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "176402",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonbanner/the-wheel-of-time-64dd85fb4f1b1.jpg",
"lang": "en",
"likes": "0",
"season": "0"
}
]
}
"""
# 代理
_proxies: dict = settings.PROXY
@@ -40,6 +333,7 @@ class FanartModule(_ModuleBase):
if not result or result.get('status') == 'error':
logger.warn(f"没有获取到 {mediainfo.title_year} 的Fanart图片数据")
return
# Collect all images
for name, images in result.items():
if not images:
continue
@@ -47,10 +341,17 @@ class FanartModule(_ModuleBase):
continue
# Sort by likes, descending
images.sort(key=lambda x: int(x.get('likes', 0)), reverse=True)
# Take the first (most-liked) image
image_obj = images[0]
# Image attribute name: xx_path
image_name = self.__name(name)
image_season = image_obj.get('season')
# Set the image
if image_name.startswith("season") and image_season:
# Season image name format: seasonxx-poster
image_name = f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"
if not mediainfo.get_image(image_name):
mediainfo.set_image(image_name, images[0].get('url'))
mediainfo.set_image(image_name, image_obj.get('url'))
return mediainfo
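The Fanart hunk above sorts each image list by likes and keeps the top entry per art type. A minimal standalone sketch of that selection step (the function name and sample data are illustrative, not the module's real API; note that `likes` arrives as a string in the fanart.tv payload):

```python
def pick_best_images(result: dict) -> dict:
    """Return {art_type: url}, choosing the most-liked image per type."""
    best = {}
    for name, images in result.items():
        if not isinstance(images, list) or not images:
            continue
        # Sort by likes, descending, mirroring the module's sort-and-take-first
        ranked = sorted(images, key=lambda x: int(x.get("likes", 0)), reverse=True)
        best[name] = ranked[0].get("url")
    return best


sample = {
    "tvbanner": [
        {"url": "http://assets.example/a.jpg", "likes": "0"},
        {"url": "http://assets.example/b.jpg", "likes": "3"},
    ]
}
print(pick_best_images(sample))
```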

View File

@@ -1,7 +1,7 @@
import re
from pathlib import Path
from threading import Lock
from typing import Optional, List, Tuple, Union
from typing import Optional, List, Tuple, Union, Dict
from jinja2 import Template
@@ -11,7 +11,7 @@ from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.schemas import TransferInfo
from app.schemas import TransferInfo, ExistMediaInfo
from app.schemas.types import MediaType
from app.utils.system import SystemUtils
@@ -283,7 +283,7 @@ class FileTransferModule(_ModuleBase):
return retcode
def __transfer_file(self, file_item: Path, new_file: Path, transfer_type: str,
over_flag: bool = False, old_file: Path = None) -> int:
over_flag: bool = False) -> int:
"""
转移一个文件,同时处理其他相关文件
:param file_item: 原文件路径
@@ -291,12 +291,13 @@ class FileTransferModule(_ModuleBase):
:param transfer_type: RmtMode转移方式
:param over_flag: 是否覆盖为True时会先删除再转移
"""
if not over_flag and new_file.exists():
logger.warn(f"文件已存在:{new_file}")
return 0
if over_flag and old_file and old_file.exists():
logger.info(f"正在删除已存在的文件:{old_file}")
old_file.unlink()
if new_file.exists():
if not over_flag:
logger.warn(f"文件已存在:{new_file}")
return 0
else:
logger.info(f"正在删除已存在的文件:{new_file}")
new_file.unlink()
logger.info(f"正在转移文件:{file_item}{new_file}")
# Create the parent directory
new_file.parent.mkdir(parents=True, exist_ok=True)
@@ -314,6 +315,34 @@ class FileTransferModule(_ModuleBase):
transfer_type=transfer_type,
over_flag=over_flag)
@staticmethod
def __get_library_dir(mediainfo: MediaInfo, target_dir: Path) -> Path:
"""
根据设置并装媒体库目录
"""
if mediainfo.type == MediaType.MOVIE:
# Movie
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# Append type and secondary category to the destination
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
# TV series
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# Anime
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# TV series
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# Append type and secondary category to the destination
target_dir = target_dir / mediainfo.type.value / mediainfo.category
return target_dir
def transfer_media(self,
in_path: Path,
in_meta: MetaBase,
@@ -337,27 +366,8 @@ class FileTransferModule(_ModuleBase):
if not target_dir.exists():
return TransferInfo(message=f"{target_dir} 目标路径不存在")
if mediainfo.type == MediaType.MOVIE:
# 电影
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
# 电视剧
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# 电视剧
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
# Media library directory
target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
# Rename format
rename_format = settings.TV_RENAME_FORMAT \
@@ -510,13 +520,13 @@ class FileTransferModule(_ModuleBase):
Compute the best destination directory: with in_path, prefer the destination sharing the longest common parent path; without in_path, take the first destination with enough free space; with neither in_path nor a size, return the first one
:param in_path: source directory
"""
if not settings.LIBRARY_PATH:
if not settings.LIBRARY_PATHS:
return None
# Destination paths; multiple paths separated by commas
dest_paths = str(settings.LIBRARY_PATH).split(",")
dest_paths = settings.LIBRARY_PATHS
# Only one path: return it directly
if len(dest_paths) == 1:
return Path(dest_paths[0])
return dest_paths[0]
# Match the directory with the longest common parent path
max_length = 0
target_path = None
@@ -531,12 +541,77 @@ class FileTransferModule(_ModuleBase):
logger.debug(f"计算目标路径时出错:{e}")
continue
if target_path:
return Path(target_path)
return target_path
# Take the first directory, in order, with enough free space
if in_path.exists():
file_size = in_path.stat().st_size
for path in dest_paths:
if SystemUtils.free_space(Path(path)) > file_size:
return Path(path)
if SystemUtils.free_space(path) > file_size:
return path
# Fall back to the first one
return Path(dest_paths[0])
return dest_paths[0]
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在于本地文件系统
:param mediainfo: 识别的媒体信息
:param itemid: 媒体服务器ItemID
:return: 如不存在返回None存在时返回信息包括每季已存在所有集{type: movie/tv, seasons: {season: [episodes]}}
"""
if not settings.LIBRARY_PATHS:
return None
# Destination paths
dest_paths = settings.LIBRARY_PATHS
# Check every media library directory
for dest_path in dest_paths:
# Media library path
target_dir = self.get_target_path(dest_path)
if not target_dir:
continue
# Media category path
target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
# Rename format
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# Relative path
rel_path = self.get_rename_path(
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
mediainfo=mediainfo)
)
# Take the first level of the relative path
if rel_path.parts:
media_path = target_dir / rel_path.parts[0]
else:
continue
# Check whether the media folder exists
if not media_path.exists():
continue
# Scan for media files
media_files = SystemUtils.list_files(directory=media_path, extensions=settings.RMT_MEDIAEXT)
if not media_files:
continue
if mediainfo.type == MediaType.MOVIE:
# For movies, any existing file counts as present
logger.info(f"文件系统已存在:{mediainfo.title_year}")
return ExistMediaInfo(type=MediaType.MOVIE)
else:
# For TV shows, collect episode numbers
seasons: Dict[int, list] = {}
for media_file in media_files:
file_meta = MetaInfo(media_file.stem)
season_index = file_meta.begin_season or 1
episode_index = file_meta.begin_episode
if not episode_index:
continue
if season_index not in seasons:
seasons[season_index] = []
seasons[season_index].append(episode_index)
# Return the per-season episode map
logger.info(f"{mediainfo.title_year} 文件系统已存在:{seasons}")
return ExistMediaInfo(type=MediaType.TV, seasons=seasons)
# Not found
return None
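The `__get_library_dir` helper factored out above routes a title into a movie, TV, or anime library subtree by type and genre ids. A hedged sketch of the same routing with simplified stand-in settings (the constant names and values below are assumptions for illustration, not the project's real configuration):

```python
from pathlib import Path

# Stand-ins for settings.LIBRARY_MOVIE_NAME / LIBRARY_TV_NAME /
# LIBRARY_ANIME_NAME / ANIME_GENREIDS -- illustrative values only.
MOVIE_LIB = "Movies"
TV_LIB = "TV"
ANIME_LIB = "Anime"
ANIME_GENRE_IDS = {16}  # TMDB genre id 16 is Animation


def library_dir(base: Path, media_type: str, category: str,
                genre_ids=None) -> Path:
    """Route a title to <base>/<library name>/<secondary category>."""
    if media_type == "movie":
        return base / MOVIE_LIB / category
    # TV: titles with an anime genre id go to the anime library
    if genre_ids and set(genre_ids) & ANIME_GENRE_IDS:
        return base / ANIME_LIB / category
    return base / TV_LIB / category
```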

View File

@@ -67,7 +67,7 @@ class FilterModule(_ModuleBase):
},
# 杜比
"DOLBY": {
"include": [r"DOLBY|DOVI|[\s.]+DV[\s.]+|杜比"],
"include": [r"Dolby[\s.]+Vision|DOVI|[\s.]+DV[\s.]+|杜比视界"],
"exclude": []
},
# HDR

View File

@@ -25,7 +25,7 @@ class JellyfinModule(_ModuleBase):
Scheduled task, called every 10 minutes
"""
# Scheduled reconnect
if not self.jellyfin.user:
if self.jellyfin.is_inactive():
self.jellyfin = Jellyfin()
def stop(self):
@@ -64,7 +64,7 @@ class JellyfinModule(_ModuleBase):
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.jellyfin.get_movies(title=mediainfo.title, year=mediainfo.year)
movies = self.jellyfin.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None

View File

@@ -25,6 +25,14 @@ class Jellyfin(metaclass=Singleton):
self.user = self.get_user()
self.serverid = self.get_server_id()
def is_inactive(self) -> bool:
"""
Whether the connection is lost and a reconnect is needed
"""
if not self._host or not self._apikey:
return False
return not self.user
def __get_jellyfin_librarys(self) -> List[dict]:
"""
获取Jellyfin媒体库的信息
@@ -240,11 +248,15 @@ class Jellyfin(metaclass=Singleton):
return None
return ""
def get_movies(self, title: str, year: str = None) -> Optional[List[dict]]:
def get_movies(self,
title: str,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
"""
根据标题和年份检查电影是否在Jellyfin中存在存在则返回列表
:param title: 标题
:param year: 年份,为空则不过滤
:param tmdb_id: TMDB ID
:return: 含title、year属性的字典列表
"""
if not self._host or not self._apikey or not self.user:
@@ -258,11 +270,19 @@ class Jellyfin(metaclass=Singleton):
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
continue
if res_item.get('Name') == title and (
not year or str(res_item.get('ProductionYear')) == str(year)):
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
return ret_movies
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
@@ -370,24 +390,100 @@ class Jellyfin(metaclass=Singleton):
def get_webhook_message(self, message: dict) -> WebhookEventInfo:
"""
Parse a Jellyfin webhook payload
{
"ServerId": "d79d3a6261614419a114595a585xxxxx",
"ServerName": "nyanmisaka-jellyfin1",
"ServerVersion": "10.8.10",
"ServerUrl": "http://xxxxxxxx:8098",
"NotificationType": "PlaybackStart",
"Timestamp": "2023-09-10T08:35:25.3996506+00:00",
"UtcTimestamp": "2023-09-10T08:35:25.3996527Z",
"Name": "慕灼华逃婚离开",
"Overview": "慕灼华假装在读书,她害怕大娘子说她不务正业。",
"Tagline": "",
"ItemId": "4b92551344f53b560fb55cd6700xxxxx",
"ItemType": "Episode",
"RunTimeTicks": 27074985984,
"RunTime": "00:45:07",
"Year": 2023,
"SeriesName": "灼灼风流",
"SeasonNumber": 1,
"SeasonNumber00": "01",
"SeasonNumber000": "001",
"EpisodeNumber": 1,
"EpisodeNumber00": "01",
"EpisodeNumber000": "001",
"Provider_tmdb": "229210",
"Video_0_Title": "4K HEVC SDR",
"Video_0_Type": "Video",
"Video_0_Codec": "hevc",
"Video_0_Profile": "Main",
"Video_0_Level": 150,
"Video_0_Height": 2160,
"Video_0_Width": 3840,
"Video_0_AspectRatio": "16:9",
"Video_0_Interlaced": false,
"Video_0_FrameRate": 25,
"Video_0_VideoRange": "SDR",
"Video_0_ColorSpace": "bt709",
"Video_0_ColorTransfer": "bt709",
"Video_0_ColorPrimaries": "bt709",
"Video_0_PixelFormat": "yuv420p",
"Video_0_RefFrames": 1,
"Audio_0_Title": "AAC - Stereo - Default",
"Audio_0_Type": "Audio",
"Audio_0_Language": "und",
"Audio_0_Codec": "aac",
"Audio_0_Channels": 2,
"Audio_0_Bitrate": 125360,
"Audio_0_SampleRate": 48000,
"Audio_0_Default": true,
"PlaybackPositionTicks": 1000000,
"PlaybackPosition": "00:00:00",
"MediaSourceId": "4b92551344f53b560fb55cd6700ebc86",
"IsPaused": false,
"IsAutomated": false,
"DeviceId": "TW96aWxsxxxxxjA",
"DeviceName": "Edge Chromium",
"ClientName": "Jellyfin Web",
"NotificationUsername": "Jeaven",
"UserId": "9783d2432b0d40a8a716b6aa46xxxxx"
}
"""
logger.info(f"接收到jellyfin webhook{message}")
eventItem = WebhookEventInfo(
event=message.get('NotificationType', ''),
item_id=message.get('ItemId'),
item_name=message.get('Name'),
item_type=message.get('ItemType'),
item_favorite=message.get('Favorite'),
save_reason=message.get('SaveReason'),
tmdb_id=message.get('Provider_tmdb'),
user_name=message.get('NotificationUsername'),
channel="jellyfin"
)
eventItem.item_id = message.get('ItemId')
eventItem.tmdb_id = message.get('Provider_tmdb')
eventItem.overview = message.get('Overview')
eventItem.device_name = message.get('DeviceName')
eventItem.user_name = message.get('NotificationUsername')
eventItem.client = message.get('ClientName')
if message.get("ItemType") == "Episode":
# Episode
eventItem.item_type = "TV"
eventItem.season_id = message.get('SeasonNumber')
eventItem.episode_id = message.get('EpisodeNumber')
eventItem.item_name = "%s %s%s %s" % (
message.get('SeriesName'),
"S" + str(eventItem.season_id),
"E" + str(eventItem.episode_id),
message.get('Name'))
else:
# Movie
eventItem.item_type = "MOV"
eventItem.item_name = "%s %s" % (
message.get('Name'), "(" + str(message.get('Year')) + ")")
# Fetch an image for the message
if eventItem.item_id:
# Fetch it from the media server using the returned item_id
eventItem.image_url = self.get_remote_image_by_id(item_id=eventItem.item_id,
image_type="Backdrop")
eventItem.image_url = self.get_remote_image_by_id(
item_id=eventItem.item_id,
image_type="Backdrop"
)
return eventItem
@@ -452,8 +548,8 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host)\
.replace("{APIKEY}", self._apikey)\
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
try:
return RequestUtils().get_res(url=url)
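The `is_inactive()` methods added in this change all follow the same pattern: return False when the client is not configured at all, True when it is configured but the session was lost, so the ten-minute scheduler can rebuild the client. A minimal sketch under that assumption (the class name is illustrative; a reconnect is wanted when `is_inactive()` returns True):

```python
class MediaServerClient:
    """Toy stand-in for the Jellyfin/Plex/Qbittorrent client classes."""

    def __init__(self, host: str = "http://example:8096", apikey: str = "key"):
        self._host = host
        self._apikey = apikey
        self.user = None  # set by a successful login in the real clients

    def is_inactive(self) -> bool:
        # Not configured: nothing to reconnect
        if not self._host or not self._apikey:
            return False
        # Configured but no live session: reconnect needed
        return not self.user


client = MediaServerClient()
if client.is_inactive():  # scheduler: rebuild the client on a lost session
    client = MediaServerClient()
```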

View File

@@ -28,7 +28,7 @@ class PlexModule(_ModuleBase):
Scheduled task, called every 10 minutes
"""
# Scheduled reconnect
if not self.plex.get_plex():
if self.plex.is_inactive():
self.plex = Plex()
def webhook_parser(self, body: Any, form: Any, args: Any) -> WebhookEventInfo:
@@ -54,7 +54,10 @@ class PlexModule(_ModuleBase):
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.plex.get_movies(title=mediainfo.title, year=mediainfo.year)
movies = self.plex.get_movies(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
@@ -63,7 +66,9 @@ class PlexModule(_ModuleBase):
return ExistMediaInfo(type=MediaType.MOVIE)
else:
tvs = self.plex.get_tv_episodes(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
if not tvs:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")

View File

@@ -30,6 +30,14 @@ class Plex(metaclass=Singleton):
self._plex = None
logger.error(f"Plex服务器连接失败{str(e)}")
def is_inactive(self) -> bool:
"""
Whether the connection is lost and a reconnect is needed
"""
if not self._host or not self._token:
return False
return not self._plex
def get_librarys(self):
"""
获取媒体服务器所有媒体库列表
@@ -120,11 +128,17 @@ class Plex(metaclass=Singleton):
"EpisodeCount": EpisodeCount
}
def get_movies(self, title: str, year: str = None) -> Optional[List[dict]]:
def get_movies(self,
title: str,
original_title: str = None,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
"""
根据标题和年份检查电影是否在Plex中存在存在则返回列表
:param title: 标题
:param original_title: 原产地标题
:param year: 年份,为空则不过滤
:param tmdb_id: TMDB ID
:return: 含title、year属性的字典列表
"""
if not self._plex:
@@ -132,22 +146,35 @@ class Plex(metaclass=Singleton):
ret_movies = []
if year:
movies = self._plex.library.search(title=title, year=year, libtype="movie")
# Search again using the original title
if original_title and str(original_title) != str(title):
movies.extend(self._plex.library.search(title=original_title, year=year, libtype="movie"))
else:
movies = self._plex.library.search(title=title, libtype="movie")
for movie in movies:
if original_title and str(original_title) != str(title):
movies.extend(self._plex.library.search(title=original_title, year=year, libtype="movie"))
for movie in set(movies):
movie_tmdbid = self.__get_ids(movie.guids).get("tmdb_id")
if tmdb_id and movie_tmdbid:
if str(movie_tmdbid) != str(tmdb_id):
continue
ret_movies.append({'title': movie.title, 'year': movie.year})
return ret_movies
def get_tv_episodes(self,
item_id: str = None,
title: str = None,
original_title: str = None,
year: str = None,
tmdb_id: int = None,
season: int = None) -> Optional[Dict[int, list]]:
"""
根据标题、年份、季查询电视剧所有集信息
:param item_id: 媒体ID
:param title: 标题
:param original_title: 原产地标题
:param year: 年份,可以为空,为空时不按年份过滤
:param tmdb_id: TMDB ID
:param season: 季号,数字
:return: 所有集的列表
"""
@@ -156,13 +183,19 @@ class Plex(metaclass=Singleton):
if item_id:
videos = self._plex.fetchItem(item_id)
else:
# Fuzzy search by title and year; the results may be inexact
videos = self._plex.library.search(title=title, year=year, libtype="show")
if not videos and original_title and str(original_title) != str(title):
videos = self._plex.library.search(title=original_title, year=year, libtype="show")
if not videos:
return {}
if isinstance(videos, list):
episodes = videos[0].episodes()
else:
episodes = videos.episodes()
videos = videos[0]
video_tmdbid = self.__get_ids(videos.guids).get('tmdb_id')
if tmdb_id and video_tmdbid:
if str(video_tmdbid) != str(tmdb_id):
return {}
episodes = videos.episodes()
season_episodes = {}
for episode in episodes:
if season and episode.seasonNumber != int(season):
@@ -329,9 +362,104 @@ class Plex(metaclass=Singleton):
item_name TV:琅琊榜 S1E6 剖心明志 虎口脱险
MOV:猪猪侠大冒险(2001)
overview 剧情描述
{
"event": "media.scrobble",
"user": false,
"owner": true,
"Account": {
"id": 31646104,
"thumb": "https://plex.tv/users/xx",
"title": "播放"
},
"Server": {
"title": "Media-Server",
"uuid": "xxxx"
},
"Player": {
"local": false,
"publicAddress": "xx.xx.xx.xx",
"title": "MagicBook",
"uuid": "wu0uoa1ujfq90t0c5p9f7fw0"
},
"Metadata": {
"librarySectionType": "show",
"ratingKey": "40294",
"key": "/library/metadata/40294",
"parentRatingKey": "40291",
"grandparentRatingKey": "40275",
"guid": "plex://episode/615580a9fa828e7f1a0caabd",
"parentGuid": "plex://season/615580a9fa828e7f1a0caab8",
"grandparentGuid": "plex://show/60e81fd8d8000e002d7d2976",
"type": "episode",
"title": "The World's Strongest Senior",
"titleSort": "World's Strongest Senior",
"grandparentKey": "/library/metadata/40275",
"parentKey": "/library/metadata/40291",
"librarySectionTitle": "动漫剧集",
"librarySectionID": 7,
"librarySectionKey": "/library/sections/7",
"grandparentTitle": "范马刃牙",
"parentTitle": "Combat Shadow Fighting Saga / Great Prison Battle Saga",
"originalTitle": "Baki Hanma",
"contentRating": "TV-MA",
"summary": "The world is shaken by news of a man taking down a monstrous elephant with his bare hands. Back in Japan, Baki is confronted by a knife-wielding child.",
"index": 1,
"parentIndex": 1,
"audienceRating": 8.5,
"viewCount": 1,
"lastViewedAt": 1694320444,
"year": 2021,
"thumb": "/library/metadata/40294/thumb/1693544504",
"art": "/library/metadata/40275/art/1693952979",
"parentThumb": "/library/metadata/40291/thumb/1691115271",
"grandparentThumb": "/library/metadata/40275/thumb/1693952979",
"grandparentArt": "/library/metadata/40275/art/1693952979",
"duration": 1500000,
"originallyAvailableAt": "2021-09-30",
"addedAt": 1691115281,
"updatedAt": 1693544504,
"audienceRatingImage": "themoviedb://image.rating",
"Guid": [
{
"id": "imdb://tt14765720"
},
{
"id": "tmdb://3087250"
},
{
"id": "tvdb://8530933"
}
],
"Rating": [
{
"image": "themoviedb://image.rating",
"value": 8.5,
"type": "audience"
}
],
"Director": [
{
"id": 115144,
"filter": "director=115144",
"tag": "Keiya Saito",
"tagKey": "5f401c8d04a86500409ea6c1"
}
],
"Writer": [
{
"id": 115135,
"filter": "writer=115135",
"tag": "Tatsuhiko Urahata",
"tagKey": "5d7768e07a53e9001e6db1ce",
"thumb": "https://metadata-static.plex.tv/f/people/f6f90dc89fa87d459f85d40a09720c05.jpg"
}
]
}
}
"""
message = json.loads(message_str)
eventItem = WebhookEventInfo(event=message.get('Event', ''), channel="plex")
logger.info(f"接收到plex webhook{message}")
eventItem = WebhookEventInfo(event=message.get('event', ''), channel="plex")
if message.get('Metadata'):
if message.get('Metadata', {}).get('type') == 'episode':
eventItem.item_type = "TV"

View File

@@ -33,22 +33,25 @@ class QbittorrentModule(_ModuleBase):
Scheduled task, called every 10 minutes
"""
# Scheduled reconnect
if not self.qbittorrent.qbc:
if self.qbittorrent.is_inactive():
self.qbittorrent = Qbittorrent()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
Select and add a download task from the torrent
:param torrent_path: torrent file path
:param content: torrent file path or magnet link
:param download_dir: download directory
:param cookie: cookie
:param episodes: episode numbers to download
:param category: category
:return: torrent hash, error message
"""
if not torrent_path or not torrent_path.exists():
return None, f"种子文件不存在:{torrent_path}"
if not content:
return
if isinstance(content, Path) and not content.exists():
return None, f"种子文件不存在:{content}"
# Generate a random tag
tag = StringUtils.generate_random_str(10)
if settings.TORRENT_TAG:
@@ -58,19 +61,21 @@ class QbittorrentModule(_ModuleBase):
# Pause first if specific files must be selected
is_paused = True if episodes else False
# Add the task
state = self.qbittorrent.add_torrent(content=torrent_path.read_bytes(),
download_dir=str(download_dir),
is_paused=is_paused,
tag=tags,
cookie=cookie,
category=category)
state = self.qbittorrent.add_torrent(
content=content.read_bytes() if isinstance(content, Path) else content,
download_dir=str(download_dir),
is_paused=is_paused,
tag=tags,
cookie=cookie,
category=category
)
if not state:
return None, f"添加种子任务失败:{torrent_path}"
return None, f"添加种子任务失败:{content}"
else:
# Get the torrent hash
torrent_hash = self.qbittorrent.get_torrent_id_by_tag(tags=tag)
if not torrent_hash:
return None, f"获取种子Hash失败{torrent_path}"
return None, f"获取种子Hash失败{content}"
else:
if is_paused:
# 种子文件

View File

@@ -27,6 +27,14 @@ class Qbittorrent(metaclass=Singleton):
if self._host and self._port:
self.qbc = self.__login_qbittorrent()
def is_inactive(self) -> bool:
"""
Whether the connection is lost and a reconnect is needed
"""
if not self._host or not self._port:
return False
return not self.qbc
def __login_qbittorrent(self) -> Optional[Client]:
"""
连接qbittorrent
@@ -178,7 +186,8 @@ class Qbittorrent(metaclass=Singleton):
download_dir: str = None,
tag: Union[str, list] = None,
category: str = None,
cookie=None
cookie=None,
**kwargs
) -> bool:
"""
Add a torrent
@@ -230,7 +239,8 @@ class Qbittorrent(metaclass=Singleton):
use_auto_torrent_management=is_auto,
is_sequential_download=True,
cookie=cookie,
category=category)
category=category,
**kwargs)
return True if qbc_ret and str(qbc_ret).find("Ok") != -1 else False
except Exception as err:
logger.error(f"添加种子出错:{err}")
@@ -331,6 +341,7 @@ class Qbittorrent(metaclass=Singleton):
try:
self.qbc.transfer.upload_limit = int(upload_limit)
self.qbc.transfer.download_limit = int(download_limit)
return True
except Exception as err:
logger.error(f"设置速度限制出错:{err}")
return False

View File

@@ -272,6 +272,7 @@ class Slack:
link = torrent.page_url
title = f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group}"
title = re.sub(r"\s+", " ", title).strip()
free = torrent.volume_factor

View File

@@ -34,15 +34,22 @@ class SubtitleModule(_ModuleBase):
def stop(self) -> None:
pass
def download_added(self, context: Context, torrent_path: Path, download_dir: Path) -> None:
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
"""
添加下载任务成功后,从站点下载字幕,保存到下载目录
:param context: 上下文,包括识别信息、媒体信息、种子信息
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param torrent_path: 种子文件地址
:return: None该方法可被多个模块同时处理
"""
# 种子信息
if not settings.DOWNLOAD_SUBTITLE:
return None
# 没有种子文件不处理
if not torrent_path:
return
# 没有详情页不处理
torrent = context.torrent_info
if not torrent.page_url:
return
@@ -50,21 +57,16 @@ class SubtitleModule(_ModuleBase):
logger.info("开始从站点下载字幕:%s" % torrent.page_url)
# Get torrent info
folder_name, _ = TorrentHelper.get_torrent_info(torrent_path)
# Download directory; may also be a file name
download_dir = download_dir / (folder_name or "")
# Wait for the file or directory to exist
# Save directory; for a single-file torrent folder_name is empty, so the save directory is the download directory itself
download_dir = download_dir / folder_name
# Wait for the directory to exist
for _ in range(30):
if download_dir.exists():
break
time.sleep(1)
# The directory still does not exist and is a directory: create it
if not download_dir.exists() \
and download_dir.suffix not in settings.RMT_MEDIAEXT:
# The directory still does not exist and a folder name is present: create it
if not download_dir.exists() and folder_name:
download_dir.mkdir(parents=True, exist_ok=True)
# Not a directory means a single-file torrent: use the download directory directly
if download_dir.is_file() \
or download_dir.suffix in settings.RMT_MEDIAEXT:
download_dir = download_dir.parent
# 读取网站代码
request = RequestUtils(cookies=torrent.site_cookie, ua=torrent.site_ua)
res = request.get_res(torrent.page_url)

View File

@@ -1,5 +1,5 @@
import json
from typing import Optional, Union, List, Tuple, Any
from typing import Optional, Union, List, Tuple, Any, Dict
from app.core.context import MediaInfo, Context
from app.core.config import settings
@@ -120,7 +120,7 @@ class TelegramModule(_ModuleBase):
"""
return self.telegram.send_torrents_msg(title=message.title, torrents=torrents, userid=message.userid)
def register_commands(self, commands: dict):
def register_commands(self, commands: Dict[str, dict]):
"""
注册命令,实现这个函数接收系统可用的命令菜单
:param commands: 命令字典

View File

@@ -2,7 +2,7 @@ import re
import threading
from pathlib import Path
from threading import Event
from typing import Optional, List
from typing import Optional, List, Dict
import telebot
from telebot import apihelper
@@ -153,6 +153,7 @@ class Telegram(metaclass=Singleton):
link = torrent.page_url
title = f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group}"
title = re.sub(r"\s+", " ", title).strip()
free = torrent.volume_factor
@@ -198,7 +199,7 @@ class Telegram(metaclass=Singleton):
return True if ret else False
def register_commands(self, commands: dict):
def register_commands(self, commands: Dict[str, dict]):
"""
Register menu commands
"""

View File

@@ -74,8 +74,8 @@ class TheMovieDbModule(_ModuleBase):
mtype=meta.type,
season_year=meta.year,
season_number=meta.begin_season)
if meta.year:
# In non-strict mode, retry without the year
if not info:
# Retry without the year
info = self.tmdb.match(name=meta.name,
mtype=meta.type)
else:
@@ -132,6 +132,12 @@ class TheMovieDbModule(_ModuleBase):
else:
logger.info(f"{tmdbid} 识别结果:{mediainfo.type.value} "
f"{mediainfo.title_year}")
# Fill in per-season years for TV shows
if mediainfo.type == MediaType.TV:
episode_years = self.tmdb.get_tv_episode_years(info.get("id"))
if episode_years:
mediainfo.season_years = episode_years
return mediainfo
else:
logger.info(f"{meta.name if meta else tmdbid} 未匹配到媒体信息")
@@ -194,12 +200,17 @@ class TheMovieDbModule(_ModuleBase):
scrape_path = path / path.name
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=scrape_path)
elif path.is_file():
# Single file
logger.info(f"开始刮削媒体库文件:{path} ...")
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=path)
else:
# Every media file under the directory
logger.info(f"开始刮削目录:{path} ...")
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=file)
logger.info(f"{path} 刮削完成")

View File

@@ -2,17 +2,19 @@ import time
from pathlib import Path
from xml.dom import minidom
from requests import RequestException
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.log import logger
from app.schemas.types import MediaType
from app.utils.common import retry
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
class TmdbScraper:
tmdb = None
def __init__(self, tmdb):
@@ -20,7 +22,7 @@ class TmdbScraper:
def gen_scraper_files(self, mediainfo: MediaInfo, file_path: Path):
"""
Generate scraper files
Generate scraper files (NFO and images); the path passed in is a file path
:param mediainfo: media info
:param file_path: file path or directory path
"""
@@ -35,7 +37,7 @@ class TmdbScraper:
return {}
try:
# Movie
# Movie: the path is the file name Name/Name.xxx, or a BD original-disc directory Name/Name
if mediainfo.type == MediaType.MOVIE:
# Only process when it does not already exist
if not file_path.with_name("movie.nfo").exists() \
@@ -53,11 +55,11 @@ class TmdbScraper:
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
self.__save_image(url=attr_value,
file_path=file_path.with_name(image_name))
# TV series
# TV series: the path is each season's file name Name/Season xx/Name SxxExx.xxx
else:
# Identify
meta = MetaInfo(file_path.stem)
# Only process when absent
# Only process when the root-level NFO is absent
if not file_path.parent.with_name("tvshow.nfo").exists():
# Root-level description file
self.__gen_tv_nfo_file(mediainfo=mediainfo,
@@ -81,19 +83,25 @@ class TmdbScraper:
self.__gen_tv_season_nfo_file(seasoninfo=seasoninfo,
season=meta.begin_season,
season_path=file_path.parent)
# Season images
# TMDB season poster image
sea_seq = str(meta.begin_season).rjust(2, '0')
if seasoninfo.get("poster_path"):
# Suffix
ext = Path(seasoninfo.get('poster_path')).suffix
# URL
url = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{seasoninfo.get('poster_path')}"
self.__save_image(url, file_path.parent.with_name(f"season{sea_seq}-poster{ext}"))
# Other season images
for attr_name, attr_value in vars(mediainfo).items():
if attr_value \
and attr_name.startswith("season") \
and not attr_name.endswith("poster_path") \
and attr_value \
and isinstance(attr_value, str) \
and attr_value.startswith("http"):
image_name = attr_name.replace("_path",
"").replace("season",
f"{str(meta.begin_season).rjust(2, '0')}-") \
+ Path(attr_value).suffix
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
self.__save_image(url=attr_value,
file_path=file_path.parent.with_name(f"season{image_name}"))
file_path=file_path.parent.with_name(image_name))
# Look up episode details
episodeinfo = __get_episode_detail(seasoninfo, meta.begin_episode)
if episodeinfo:
@@ -105,10 +113,11 @@ class TmdbScraper:
episode=meta.begin_episode,
file_path=file_path)
# Episode image
if episodeinfo.get('still_path'):
episode_image = episodeinfo.get("still_path")
if episode_image:
self.__save_image(
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{episodeinfo.get('still_path')}",
file_path.with_suffix(Path(episodeinfo.get('still_path')).suffix))
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{episode_image}",
file_path.with_suffix(Path(episode_image).suffix))
except Exception as e:
logger.error(f"{file_path} 刮削失败:{e}")
@@ -312,6 +321,7 @@ class TmdbScraper:
self.__save_nfo(doc, file_path.with_suffix(".nfo"))
@staticmethod
@retry(RequestException, logger=logger)
def __save_image(url: str, file_path: Path):
"""
Download an image and save it
@@ -320,7 +330,7 @@ class TmdbScraper:
return
try:
logger.info(f"正在下载{file_path.stem}图片:{url} ...")
r = RequestUtils().get_res(url=url)
r = RequestUtils().get_res(url=url, raise_exception=True)
if r:
file_path.write_bytes(r.content)
logger.info(f"图片已保存:{file_path}")

View File

@@ -447,7 +447,8 @@ class TmdbHelper:
ret_info = multi
break
# Normalize the media_type value
if ret_info:
if (ret_info
and not isinstance(ret_info.get("media_type"), MediaType)):
ret_info['media_type'] = MediaType.MOVIE if ret_info.get("media_type") == "movie" else MediaType.TV
return ret_info
@@ -1167,3 +1168,35 @@ class TmdbHelper:
Clear the cache
"""
self.tmdb.cache_clear()
def get_tv_episode_years(self, tv_id: int):
"""
查询剧集组年份
"""
try:
episode_groups = self.tv.episode_groups(tv_id)
if not episode_groups:
return {}
episode_years = {}
for episode_group in episode_groups:
logger.info(f"正在获取剧集组年份:{episode_group.get('id')}...")
if episode_group.get('type') != 6:
# 只处理剧集部分
continue
group_episodes = self.tv.group_episodes(episode_group.get('id'))
if not group_episodes:
continue
for group_episode in group_episodes:
order = group_episode.get('order')
episodes = group_episode.get('episodes')
if not episodes or not order:
continue
# 当前季第一集播出时间
first_date = episodes[0].get("air_date")
if not first_date or len(str(first_date).split("-")) != 3:
continue
episode_years[order] = str(first_date).split("-")[0]
return episode_years
except Exception as e:
logger.error(f"获取剧集组年份失败:{e}")
return {}
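`get_tv_episode_years` keys each season order to the year of that group's first episode `air_date`. A standalone sketch of that year extraction, with the `YYYY-MM-DD` format guard written out explicitly (helper name is mine):

```python
from typing import Optional

def first_air_year(episodes: list) -> Optional[str]:
    """Return the YYYY part of the first episode's air_date, or None."""
    if not episodes:
        return None
    first_date = episodes[0].get("air_date")
    # air_date must look like YYYY-MM-DD
    if not first_date or len(str(first_date).split("-")) != 3:
        return None
    return str(first_date).split("-")[0]
```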


@@ -33,6 +33,7 @@ class TV(TMDb):
"on_the_air": "/tv/on_the_air",
"popular": "/tv/popular",
"top_rated": "/tv/top_rated",
"group_episodes": "/tv/episode_group/%s",
}
def details(self, tv_id, append_to_response="videos,trailers,images,credits,translations"):
@@ -130,6 +131,17 @@ class TV(TMDb):
key="results"
)
def group_episodes(self, group_id):
"""
查询剧集组所有剧集
:param group_id: int
:return:
"""
return self._request_obj(
self._urls["group_episodes"] % group_id,
key="groups"
)
def external_ids(self, tv_id):
"""
Get the external ids for a TV show.


@@ -33,20 +33,25 @@ class TransmissionModule(_ModuleBase):
定时任务每10分钟调用一次
"""
# 定时重连
if not self.transmission.trc:
if not self.transmission.is_inactive():
self.transmission = Transmission()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param content: 种子文件地址或者磁力链接
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类,TR中未使用
:return: 种子Hash
"""
if not content:
return
if isinstance(content, Path) and not content.exists():
return None, f"种子文件不存在:{content}"
# 如果要选择文件则先暂停
is_paused = True if episodes else False
# 标签
@@ -55,13 +60,15 @@ class TransmissionModule(_ModuleBase):
else:
labels = None
# 添加任务
torrent = self.transmission.add_torrent(content=torrent_path.read_bytes(),
download_dir=str(download_dir),
is_paused=is_paused,
labels=labels,
cookie=cookie)
torrent = self.transmission.add_torrent(
content=content.read_bytes() if isinstance(content, Path) else content,
download_dir=str(download_dir),
is_paused=is_paused,
labels=labels,
cookie=cookie
)
if not torrent:
return None, f"添加种子任务失败:{torrent_path}"
return None, f"添加种子任务失败:{content}"
else:
torrent_hash = torrent.hashString
if is_paused:
@@ -105,7 +112,7 @@ class TransmissionModule(_ModuleBase):
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
tags=torrent.labels
tags=",".join(torrent.labels or [])
))
elif status == TorrentStatus.TRANSFER:
# 获取已完成且未整理的
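`download` now accepts `Union[Path, str]`: a torrent file path (read as bytes) or a magnet/URL string passed through unchanged. The dispatch can be sketched as:

```python
from pathlib import Path
from typing import Union

def to_torrent_content(content: Union[Path, str]) -> Union[bytes, str]:
    """Return raw bytes for a torrent file, or the string unchanged for a magnet link."""
    return content.read_bytes() if isinstance(content, Path) else content
```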


@@ -28,7 +28,7 @@ class Transmission(metaclass=Singleton):
self._host, self._port = StringUtils.get_domain_address(address=settings.TR_HOST, prefix=False)
self._username = settings.TR_USER
self._password = settings.TR_PASSWORD
if self._host and self._port and self._username and self._password:
if self._host and self._port:
self.trc = self.__login_transmission()
def __login_transmission(self) -> Optional[Client]:
@@ -48,6 +48,14 @@ class Transmission(metaclass=Singleton):
logger.error(f"transmission 连接出错:{err}")
return None
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._port:
return False
return not self.trc
def get_torrents(self, ids: Union[str, list] = None, status: Union[str, list] = None,
tags: Union[str, list] = None) -> Tuple[List[Torrent], bool]:
"""
@@ -235,14 +243,14 @@ class Transmission(metaclass=Singleton):
logger.error(f"获取传输信息出错:{err}")
return None
def set_speed_limit(self, download_limit: float = None, upload_limit: float = None):
def set_speed_limit(self, download_limit: float = None, upload_limit: float = None) -> bool:
"""
设置速度限制
:param download_limit: 下载速度限制,单位KB/s
:param upload_limit: 上传速度限制,单位KB/s
"""
if not self.trc:
return
return False
try:
download_limit_enabled = True if download_limit else False
upload_limit_enabled = True if upload_limit else False
@@ -252,6 +260,7 @@ class Transmission(metaclass=Singleton):
speed_limit_down_enabled=download_limit_enabled,
speed_limit_up_enabled=upload_limit_enabled
)
return True
except Exception as err:
logger.error(f"设置速度限制出错:{err}")
return False
@@ -279,3 +288,58 @@ class Transmission(metaclass=Singleton):
except Exception as err:
logger.error(f"添加Tracker出错:{err}")
return False
def change_torrent(self,
hash_string: str,
upload_limit=None,
download_limit=None,
ratio_limit=None,
seeding_time_limit=None):
"""
设置种子
:param hash_string: ID
:param upload_limit: 上传限速 Kb/s
:param download_limit: 下载限速 Kb/s
:param ratio_limit: 分享率限制
:param seeding_time_limit: 做种时间限制
:return: bool
"""
if not hash_string:
return False
if upload_limit:
uploadLimited = True
uploadLimit = int(upload_limit)
else:
uploadLimited = False
uploadLimit = 0
if download_limit:
downloadLimited = True
downloadLimit = int(download_limit)
else:
downloadLimited = False
downloadLimit = 0
if ratio_limit:
seedRatioMode = 1
seedRatioLimit = round(float(ratio_limit), 2)
else:
seedRatioMode = 2
seedRatioLimit = 0
if seeding_time_limit:
seedIdleMode = 1
seedIdleLimit = int(seeding_time_limit)
else:
seedIdleMode = 2
seedIdleLimit = 0
try:
self.trc.change_torrent(ids=hash_string,
uploadLimited=uploadLimited,
uploadLimit=uploadLimit,
downloadLimited=downloadLimited,
downloadLimit=downloadLimit,
seedRatioMode=seedRatioMode,
seedRatioLimit=seedRatioLimit,
seedIdleMode=seedIdleMode,
seedIdleLimit=seedIdleLimit)
except Exception as err:
logger.error(f"设置种子出错:{err}")
return False
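The new `change_torrent` maps optional limits onto Transmission RPC field pairs: mode 1 uses the per-torrent limit, mode 2 seeds regardless of limit. A sketch of just that mapping, factored into a helper (name is mine; field names follow the transmission-rpc call above):

```python
def seed_limit_fields(ratio_limit=None, seeding_time_limit=None) -> dict:
    """Map optional share-ratio / idle-time limits onto Transmission RPC fields."""
    fields = {}
    if ratio_limit:
        fields["seedRatioMode"] = 1                    # honor the per-torrent ratio
        fields["seedRatioLimit"] = round(float(ratio_limit), 2)
    else:
        fields["seedRatioMode"] = 2                    # seed regardless of ratio
        fields["seedRatioLimit"] = 0
    if seeding_time_limit:
        fields["seedIdleMode"] = 1
        fields["seedIdleLimit"] = int(seeding_time_limit)
    else:
        fields["seedIdleMode"] = 2
        fields["seedIdleLimit"] = 0
    return fields
```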


@@ -1,5 +1,5 @@
import xml.dom.minidom
from typing import Optional, Union, List, Tuple, Any
from typing import Optional, Union, List, Tuple, Any, Dict
from app.core.config import settings
from app.core.context import Context, MediaInfo
@@ -96,16 +96,23 @@ class WechatModule(_ModuleBase):
# 解析消息内容
if msg_type == "event" and event == "click":
# 校验用户有权限执行交互命令
wechat_admins = settings.WECHAT_ADMINS.split(',')
if wechat_admins and not any(
user_id == admin_user for admin_user in wechat_admins):
self.wechat.send_msg(title="用户无权限执行菜单命令", userid=user_id)
return None
if settings.WECHAT_ADMINS:
wechat_admins = settings.WECHAT_ADMINS.split(',')
if wechat_admins and not any(
user_id == admin_user for admin_user in wechat_admins):
self.wechat.send_msg(title="用户无权限执行菜单命令", userid=user_id)
return None
# 根据EventKey执行命令
content = DomUtils.tag_value(root_node, "EventKey")
logger.info(f"收到微信事件:userid={user_id}, event={content}")
elif msg_type == "text":
# 文本消息
content = DomUtils.tag_value(root_node, "Content", default="")
if content:
logger.info(f"收到微信消息:userid={user_id}, text={content}")
logger.info(f"收到微信消息:userid={user_id}, text={content}")
else:
return None
if content:
# 处理消息内容
return CommingMessage(channel=MessageChannel.Wechat,
userid=user_id, username=user_id, text=content)
@@ -145,3 +152,10 @@ class WechatModule(_ModuleBase):
:return: 成功或失败
"""
return self.wechat.send_torrents_msg(title=message.title, torrents=torrents, userid=message.userid)
def register_commands(self, commands: Dict[str, dict]):
"""
注册命令,实现这个函数接收系统可用的命令菜单
:param commands: 命令字典
"""
self.wechat.create_menus(commands)


@@ -2,7 +2,7 @@ import json
import re
import threading
from datetime import datetime
from typing import Optional, List
from typing import Optional, List, Dict
from app.core.config import settings
from app.core.context import MediaInfo, Context
@@ -33,6 +33,8 @@ class WeChat(metaclass=Singleton):
_send_msg_url = f"{settings.WECHAT_PROXY}/cgi-bin/message/send?access_token=%s"
# 企业微信获取TokenURL
_token_url = f"{settings.WECHAT_PROXY}/cgi-bin/gettoken?corpid=%s&corpsecret=%s"
# 企业微信创建菜单URL
_create_menu_url = f"{settings.WECHAT_PROXY}/cgi-bin/menu/create?access_token=%s&agentid=%s"
def __init__(self):
"""
@@ -69,6 +71,10 @@ class WeChat(metaclass=Singleton):
self._access_token = ret_json.get('access_token')
self._expires_in = ret_json.get('expires_in')
self._access_token_time = datetime.now()
elif res is not None:
logger.error(f"获取微信access_token失败,错误码:{res.status_code},错误原因:{res.reason}")
else:
logger.error(f"获取微信access_token失败,未获取到返回信息")
except Exception as e:
logger.error(f"获取微信access_token失败,错误信息:{e}")
return None
@@ -216,6 +222,7 @@ class WeChat(metaclass=Singleton):
torrent_title = f"{index}.【{torrent.site_name}" \
f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group} " \
f"{StringUtils.str_filesize(torrent.size)} " \
f"{torrent.volume_factor} " \
@@ -255,14 +262,87 @@ class WeChat(metaclass=Singleton):
else:
if ret_json.get('errcode') == 42001:
self.__get_access_token(force=True)
logger.error(f"发送消息失败,错误信息:{ret_json.get('errmsg')}")
logger.error(f"发送请求失败,错误信息:{ret_json.get('errmsg')}")
return False
elif res is not None:
logger.error(f"发送消息失败,错误码:{res.status_code},错误原因:{res.reason}")
logger.error(f"发送请求失败,错误码:{res.status_code},错误原因:{res.reason}")
return False
else:
logger.error(f"发送消息失败,未获取到返回信息")
logger.error(f"发送请求失败,未获取到返回信息")
return False
except Exception as err:
logger.error(f"发送消息失败,错误信息:{err}")
logger.error(f"发送请求失败,错误信息:{err}")
return False
def create_menus(self, commands: Dict[str, dict]):
"""
自动注册微信菜单
:param commands: 命令字典
命令字典:
{
"/cookiecloud": {
"func": CookieCloudChain(self._db).remote_sync,
"description": "同步站点",
"category": "站点",
"data": {}
}
}
注册报文格式(一级菜单最多只有3条,子菜单最多只有5条):
{
"button":[
{
"type":"click",
"name":"今日歌曲",
"key":"V1001_TODAY_MUSIC"
},
{
"name":"菜单",
"sub_button":[
{
"type":"view",
"name":"搜索",
"url":"http://www.soso.com/"
},
{
"type":"click",
"name":"赞一下我们",
"key":"V1001_GOOD"
}
]
}
]
}
"""
# 请求URL
req_url = self._create_menu_url % (self.__get_access_token(), self._appid)
# 对commands按category分组
category_dict = {}
for key, value in commands.items():
category: Dict[str, dict] = value.get("category")
if category:
if not category_dict.get(category):
category_dict[category] = {}
category_dict[category][key] = value
# 一级菜单
buttons = []
for category, menu in category_dict.items():
# 二级菜单
sub_buttons = []
for key, value in menu.items():
sub_buttons.append({
"type": "click",
"name": value.get("description"),
"key": key
})
buttons.append({
"name": category,
"sub_button": sub_buttons[:5]
})
if buttons:
# 发送请求
self.__post_request(req_url, {
"button": buttons[:3]
})
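`create_menus` groups registered commands by `category` into top-level buttons (capped at 3) with click-type sub-buttons (capped at 5), then posts the payload. The transformation can be sketched standalone (function name is mine):

```python
def build_wechat_menu(commands: dict) -> dict:
    """Group command dicts by category into WeChat's menu payload (3 menus x 5 sub-buttons)."""
    category_dict = {}
    for key, value in commands.items():
        category = value.get("category")
        if category:
            category_dict.setdefault(category, {})[key] = value
    buttons = []
    for category, menu in category_dict.items():
        sub_buttons = [
            {"type": "click", "name": v.get("description"), "key": k}
            for k, v in menu.items()
        ]
        # WeChat limits: 5 sub-buttons per menu, 3 top-level menus
        buttons.append({"name": category, "sub_button": sub_buttons[:5]})
    return {"button": buttons[:3]}
```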


@@ -5,7 +5,7 @@ from typing import Any, List, Dict, Tuple
from app.chain import ChainBase
from app.core.config import settings
from app.core.event import EventManager
from app.db import ScopedSession
from app.db import SessionFactory
from app.db.models import Base
from app.db.plugindata_oper import PluginDataOper
from app.db.systemconfig_oper import SystemConfigOper
@@ -37,7 +37,7 @@ class _PluginBase(metaclass=ABCMeta):
def __init__(self):
# 数据库连接
self.db = ScopedSession()
self.db = SessionFactory()
# 插件数据
self.plugindata = PluginDataOper(self.db)
# 处理链
@@ -65,7 +65,8 @@ class _PluginBase(metaclass=ABCMeta):
[{
"cmd": "/xx",
"event": EventType.xx,
"desc": "xxxx",
"desc": "名称",
"category": "分类,需要注册到Wechat时必须有分类",
"data": {}
}]
"""
@@ -190,6 +191,3 @@ class _PluginBase(metaclass=ABCMeta):
"""
if self.db:
self.db.close()
def __del__(self):
self.close()


@@ -31,13 +31,13 @@ class AutoSignIn(_PluginBase):
# 插件名称
plugin_name = "站点自动签到"
# 插件描述
plugin_desc = "自动模拟登录站点签到。"
plugin_desc = "自动模拟登录站点签到。"
# 插件图标
plugin_icon = "signin.png"
# 主题色
plugin_color = "#4179F4"
# 插件版本
plugin_version = "1.0"
plugin_version = "1.1"
# 插件作者
plugin_author = "thsrite"
# 作者主页
@@ -61,16 +61,15 @@ class AutoSignIn(_PluginBase):
# 配置属性
_enabled: bool = False
_cron: str = ""
_sign_type: str = ""
_onlyonce: bool = False
_notify: bool = False
_queue_cnt: int = 5
_sign_sites: list = []
_login_sites: list = []
_retry_keyword = None
_clean: bool = False
_start_time: int = None
_end_time: int = None
_action: str = ""
def init_plugin(self, config: dict = None):
self.sites = SitesHelper()
@@ -87,10 +86,9 @@ class AutoSignIn(_PluginBase):
self._notify = config.get("notify")
self._queue_cnt = config.get("queue_cnt") or 5
self._sign_sites = config.get("sign_sites")
self._login_sites = config.get("login_sites")
self._retry_keyword = config.get("retry_keyword")
self._clean = config.get("clean")
self._sign_type = config.get("sign_type") or "sign"
self._action = "签到" if str(self._sign_type) == "sign" else "模拟登陆"
# 加载模块
if self._enabled or self._onlyonce:
@@ -103,10 +101,10 @@ class AutoSignIn(_PluginBase):
# 立即运行一次
if self._onlyonce:
logger.info(f"站点自动{self._action}服务启动,立即运行一次")
logger.info("站点自动签到服务启动,立即运行一次")
self._scheduler.add_job(func=self.sign_in, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name=f"站点自动{self._action}")
name="站点自动签到")
# 关闭一次性开关
self._onlyonce = False
@@ -117,17 +115,17 @@ class AutoSignIn(_PluginBase):
if self._enabled:
if self._cron:
try:
if self._cron.strip().count(" ") == 4:
if str(self._cron).strip().count(" ") == 4:
self._scheduler.add_job(func=self.sign_in,
trigger=CronTrigger.from_crontab(self._cron),
name=f"站点自动{self._action}")
logger.info(f"站点自动{self._action}服务启动,执行周期 {self._cron}")
name="站点自动签到")
logger.info(f"站点自动签到服务启动,执行周期 {self._cron}")
else:
# 2.3/9-23
crons = self._cron.strip().split("/")
crons = str(self._cron).strip().split("/")
if len(crons) == 2:
# 2.3
self._cron = crons[0]
cron = crons[0]
# 9-23
times = crons[1].split("-")
if len(times) == 2:
@@ -138,12 +136,12 @@ class AutoSignIn(_PluginBase):
if self._start_time and self._end_time:
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(self._cron.strip()),
name=f"站点自动{self._action}")
hours=float(str(cron).strip()),
name="站点自动签到")
logger.info(
f"站点自动{self._action}服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
f"站点自动签到服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{cron}小时执行一次")
else:
logger.error(f"站点自动{self._action}服务启动失败,周期格式错误")
logger.error("站点自动签到服务启动失败,周期格式错误")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误")
self._cron = ""
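Besides a 5-field crontab, the plugin accepts a compact `间隔小时/起始-结束` form such as `2.3/9-23` (every 2.3 hours between 9:00 and 23:00). A sketch of that parsing, returning `None` on a malformed string (helper name is mine):

```python
from typing import Optional, Tuple

def parse_interval_cron(cron: str) -> Optional[Tuple[float, int, int]]:
    """Parse '2.3/9-23' into (hours, start_hour, end_hour); None if malformed."""
    crons = str(cron).strip().split("/")
    if len(crons) != 2:
        return None
    times = crons[1].split("-")
    if len(times) != 2:
        return None
    try:
        return float(crons[0]), int(times[0]), int(times[1])
    except ValueError:
        return None
```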
@@ -155,10 +153,10 @@ class AutoSignIn(_PluginBase):
self._end_time = 24
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(self._cron.strip()),
name=f"站点自动{self._action}")
hours=float(str(self._cron).strip()),
name="站点自动签到")
logger.info(
f"站点自动{self._action}服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
f"站点自动签到服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 推送实时消息
@@ -176,7 +174,7 @@ class AutoSignIn(_PluginBase):
for trigger in triggers:
self._scheduler.add_job(self.sign_in, "cron",
hour=trigger.hour, minute=trigger.minute,
name=f"站点自动{self._action}")
name="站点自动签到")
# 启动任务
if self._scheduler.get_jobs():
@@ -196,9 +194,9 @@ class AutoSignIn(_PluginBase):
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": self._sign_sites,
"login_sites": self._login_sites,
"retry_keyword": self._retry_keyword,
"clean": self._clean,
"sign_type": self._sign_type,
}
)
@@ -212,6 +210,7 @@ class AutoSignIn(_PluginBase):
"cmd": "/site_signin",
"event": EventType.SiteSignin,
"desc": "站点签到",
"category": "站点",
"data": {}
}]
@@ -306,7 +305,7 @@ class AutoSignIn(_PluginBase):
'component': 'VSwitch',
'props': {
'model': 'clean',
'label': '清理本日已签到',
'label': '清理本日缓存',
}
}
]
@@ -320,27 +319,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sign_type',
'label': '签到方式',
'items': [
{'title': '签到', 'value': 'sign'},
{'title': '登录', 'value': 'login'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
'md': 4
},
'content': [
{
@@ -357,7 +336,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
'md': 4
},
'content': [
{
@@ -373,7 +352,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
'md': 4
},
'content': [
{
@@ -408,6 +387,26 @@ class AutoSignIn(_PluginBase):
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'content': [
{
'component': 'VSelect',
'props': {
'chips': True,
'multiple': True,
'model': 'login_sites',
'label': '登录站点',
'items': site_options
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
@@ -436,12 +435,12 @@ class AutoSignIn(_PluginBase):
], {
"enabled": False,
"notify": True,
"sign_type": "sign",
"cron": "",
"onlyonce": False,
"clean": False,
"queue_cnt": 5,
"sign_sites": [],
"login_sites": [],
"retry_keyword": "错误|失败"
}
@@ -470,7 +469,7 @@ class AutoSignIn(_PluginBase):
{
'component': 'td',
'props': {
'class': 'whitespace-nowrap break-keep'
'class': 'whitespace-nowrap break-keep text-high-emphasis'
},
'text': current_day
},
@@ -555,77 +554,103 @@ class AutoSignIn(_PluginBase):
if self._start_time and self._end_time:
if int(datetime.today().hour) < self._start_time or int(datetime.today().hour) > self._end_time:
logger.error(
f"当前时间 {int(datetime.today().hour)} 不在 {self._start_time}-{self._end_time} 范围内,暂不{self._action}")
f"当前时间 {int(datetime.today().hour)} 不在 {self._start_time}-{self._end_time} 范围内,暂不执行任务")
return
if event:
logger.info(f"收到命令,开始站点{self._action} ...")
logger.info("收到命令,开始站点签到 ...")
self.post_message(channel=event.event_data.get("channel"),
title=f"开始站点{self._action} ...",
title="开始站点签到 ...",
userid=event.event_data.get("user"))
if self._sign_sites:
self.__do(today=today, type="签到", do_sites=self._sign_sites, event=event)
if self._login_sites:
self.__do(today=today, type="登录", do_sites=self._login_sites, event=event)
def __do(self, today: datetime, type: str, do_sites: list, event: Event = None):
"""
签到逻辑
"""
yesterday = today - timedelta(days=1)
yesterday_str = yesterday.strftime('%Y-%m-%d')
# 删除昨天历史
self.del_data(key=yesterday_str)
self.del_data(key=type + "-" + yesterday_str)
self.del_data(key=f"{yesterday.month}{yesterday.day}")
# 查看今天有没有签到历史
# 查看今天有没有签到|登录历史
today = today.strftime('%Y-%m-%d')
today_history = self.get_data(key=today)
today_history = self.get_data(key=type + "-" + today)
# 查询签到站点
sign_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
# 查询所有站点
all_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
# 过滤掉没有选中的站点
if self._sign_sites:
sign_sites = [site for site in sign_sites if site.get("id") in self._sign_sites]
if do_sites:
do_sites = [site for site in all_sites if site.get("id") in do_sites]
else:
do_sites = all_sites
# 今日没数据
if not today_history or self._clean:
logger.info(f"今日 {today} 未{self._action},开始{self._action}已选站点")
logger.info(f"今日 {today} 未{type},开始{type}已选站点")
# 过滤删除的站点
self._sign_sites = [site.get("id") for site in sign_sites if site]
if type == "签到":
self._sign_sites = [site.get("id") for site in do_sites if site]
if type == "登录":
self._login_sites = [site.get("id") for site in do_sites if site]
if self._clean:
# 关闭开关
self._clean = False
else:
# 今天已签到,需要重试站点
retry_sites = today_history.get("retry")
# 今天已签到站点
already_sign_sites = today_history.get("sign")
# 需要重试站点
retry_sites = today_history.get("retry") or []
# 今天已签到|登录站点
already_sites = today_history.get("do") or []
# 今日未签站点
no_sign_sites = [site for site in sign_sites if
site.get("id") not in already_sign_sites or site.get("id") in retry_sites]
# 今日未签|登录站点
no_sites = [site for site in do_sites if
site.get("id") not in already_sites or site.get("id") in retry_sites]
if not no_sign_sites:
logger.info(f"今日 {today} 已{self._action},无重新{self._action}站点,本次任务结束")
if not no_sites:
logger.info(f"今日 {today} 已{type},无重新{type}站点,本次任务结束")
return
# 签到站点 = 需要重试+今日未签到
sign_sites = no_sign_sites
logger.info(f"今日 {today} 已{self._action},开始重试命中关键词站点")
# 任务站点 = 需要重试+今日未做
do_sites = no_sites
logger.info(f"今日 {today} 已{type},开始重试命中关键词站点")
if not sign_sites:
logger.info(f"没有需要{self._action}的站点")
if not do_sites:
logger.info(f"没有需要{type}的站点")
return
# 执行签到
logger.info(f"开始执行{self._action}任务 ...")
if str(self._sign_type) == "sign":
with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
status = p.map(self.signin_site, sign_sites)
logger.info(f"开始执行{type}任务 ...")
if type == "签到":
with ThreadPool(min(len(do_sites), int(self._queue_cnt))) as p:
status = p.map(self.signin_site, do_sites)
else:
with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
status = p.map(self.login_site, sign_sites)
with ThreadPool(min(len(do_sites), int(self._queue_cnt))) as p:
status = p.map(self.login_site, do_sites)
if status:
logger.info(f"站点{self._action}任务完成!")
logger.info(f"站点{type}任务完成!")
# 获取今天的日期
key = f"{datetime.now().month}{datetime.now().day}"
today_data = self.get_data(key)
if today_data:
if not isinstance(today_data, list):
today_data = [today_data]
for s in status:
today_data.append({
"site": s[0],
"status": s[1]
})
else:
today_data = [{
"site": s[0],
"status": s[1]
} for s in status]
# 保存数据
self.save_data(key, [{
"site": s[0],
"status": s[1]
} for s in status])
self.save_data(key, today_data)
# 命中重试词的站点id
retry_sites = []
@@ -673,13 +698,13 @@ class AutoSignIn(_PluginBase):
if not self._retry_keyword:
# 没设置重试关键词则重试已选站点
retry_sites = self._sign_sites
logger.debug(f"下次{self._action}重试站点 {retry_sites}")
retry_sites = self._sign_sites if type == "签到" else self._login_sites
logger.debug(f"下次{type}重试站点 {retry_sites}")
# 存入历史
self.save_data(key=today,
self.save_data(key=type + "-" + today,
value={
"sign": self._sign_sites,
"do": self._sign_sites if type == "签到" else self._login_sites,
"retry": retry_sites
})
@@ -691,21 +716,21 @@ class AutoSignIn(_PluginBase):
signin_message += retry_msg
signin_message = "\n".join([f'{s[0]}{s[1]}' for s in signin_message if s])
self.post_message(title=f"站点自动{self._action}",
self.post_message(title=f"站点自动{type}",
mtype=NotificationType.SiteMessage,
text=f"全部{self._action}数量: {len(list(self._sign_sites))} \n"
f"本次{self._action}数量: {len(sign_sites)} \n"
f"下次{self._action}数量: {len(retry_sites) if self._retry_keyword else 0} \n"
text=f"全部{type}数量: {len(self._sign_sites if type == '签到' else self._login_sites)} \n"
f"本次{type}数量: {len(do_sites)} \n"
f"下次{type}数量: {len(retry_sites) if self._retry_keyword else 0} \n"
f"{signin_message}"
)
if event:
self.post_message(channel=event.event_data.get("channel"),
title=f"站点{self._action}完成!", userid=event.event_data.get("user"))
title=f"站点{type}完成!", userid=event.event_data.get("user"))
else:
logger.error(f"站点{self._action}任务失败!")
logger.error(f"站点{type}任务失败!")
if event:
self.post_message(channel=event.event_data.get("channel"),
title=f"站点{self._action}任务失败!", userid=event.event_data.get("user"))
title=f"站点{type}任务失败!", userid=event.event_data.get("user"))
# 保存配置
self.__update_config()
@@ -788,6 +813,10 @@ class AutoSignIn(_PluginBase):
return f"无法通过Cloudflare"
return f"仿真登录失败Cookie已失效"
else:
# 判断是否已签到
if re.search(r'已签|签到已得', page_source, re.IGNORECASE) \
or SiteUtils.is_checkin(page_source):
return f"签到成功"
return "仿真签到成功"
else:
res = RequestUtils(cookies=site_cookie,

File diff suppressed because it is too large


@@ -69,9 +69,6 @@ class DirMonitor(_PluginBase):
# 可使用的用户级别
auth_level = 1
# 已处理的文件清单
_synced_files = []
# 私有属性
_scheduler = None
transferhis = None
@@ -125,7 +122,14 @@ class DirMonitor(_PluginBase):
continue
# 存储目的目录
paths = mon_path.split(":")
if SystemUtils.is_windows():
if mon_path.count(":") > 1:
paths = [mon_path.split(":")[0] + ":" + mon_path.split(":")[1],
mon_path.split(":")[2] + ":" + mon_path.split(":")[3]]
else:
paths = [mon_path]
else:
paths = mon_path.split(":")
target_path = None
if len(paths) > 1:
mon_path = paths[0]
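On Windows a monitor line like `D:\download:E:\media` contains drive-letter colons, so a plain `split(":")` would shred the paths; the hunk above counts colons and re-joins drive letters. The same parsing as a standalone helper (name is mine):

```python
from typing import List

def parse_mon_path(mon_path: str, is_windows: bool) -> List[str]:
    """Split 'source:target' monitor config, keeping Windows drive letters intact."""
    if is_windows:
        if mon_path.count(":") > 1:
            parts = mon_path.split(":")
            # re-join 'D' + ':' + '\download' and 'E' + ':' + '\media'
            return [parts[0] + ":" + parts[1], parts[2] + ":" + parts[3]]
        return [mon_path]
    return mon_path.split(":")
```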
@@ -193,9 +197,8 @@ class DirMonitor(_PluginBase):
# 全程加锁
with lock:
if event_path not in self._synced_files:
self._synced_files.append(event_path)
else:
transfer_history = self.transferhis.get_by_src(event_path)
if transfer_history:
logger.debug("文件已处理过:%s" % event_path)
return
@@ -220,7 +223,7 @@ class DirMonitor(_PluginBase):
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.findall(keyword, event_path):
if keyword and re.search(r"%s" % keyword, event_path, re.IGNORECASE):
logger.info(f"{event_path} 命中整理屏蔽词 {keyword},不处理")
return
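The hunk upgrades exclude-keyword matching from `re.findall` to a case-insensitive `re.search`. The check can be sketched as a helper that returns the first matching keyword (function name is mine):

```python
import re
from typing import List, Optional

def hits_exclude_keyword(event_path: str, keywords: List[str]) -> Optional[str]:
    """Return the first exclude keyword matching event_path (case-insensitive), else None."""
    for keyword in keywords:
        # empty keywords are skipped, mirroring the loop above
        if keyword and re.search(keyword, event_path, re.IGNORECASE):
            return keyword
    return None
```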
@@ -264,6 +267,13 @@ class DirMonitor(_PluginBase):
meta=file_meta
)
return
# 如果未开启新增已入库媒体是否跟随TMDB信息变化,则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_history:
mediainfo.title = transfer_history.title
logger.info(f"{file_path.name} 识别为:{mediainfo.type.value} {mediainfo.title_year}")
# 更新媒体图片
@@ -312,8 +322,10 @@ class DirMonitor(_PluginBase):
transferinfo=transferinfo
)
# 刮削元数据
self.chain.scrape_metadata(path=transferinfo.target_path, mediainfo=mediainfo)
# 刮削单个文件
if settings.SCRAP_METADATA:
self.chain.scrape_metadata(path=transferinfo.target_path,
mediainfo=mediainfo)
"""
{
@@ -375,7 +387,8 @@ class DirMonitor(_PluginBase):
self._medias[mediainfo.title_year + " " + meta.season] = media_list
# 汇总刷新媒体库
self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
if settings.REFRESH_MEDIASERVER:
self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
@@ -481,149 +494,149 @@ class DirMonitor(_PluginBase):
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '监控模式',
'items': [
{'title': '兼容模式', 'value': 'compatibility'},
{'title': '性能模式', 'value': 'fast'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'transfer_type',
'label': '转移方式',
'items': [
{'title': '移动', 'value': 'move'},
{'title': '复制', 'value': 'copy'},
{'title': '硬链接', 'value': 'link'},
{'title': '软链接', 'value': 'softlink'}
]
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'monitor_dirs',
'label': '监控目录',
'rows': 5,
'placeholder': '每一行一个目录,支持两种配置方式:\n'
'监控目录\n'
'监控目录:转移目的目录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'exclude_keywords',
'label': '排除关键词',
'rows': 2,
'placeholder': '每一行一个关键词'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": False,
"mode": "fast",
"transfer_type": settings.TRANSFER_TYPE,
"monitor_dirs": "",
"exclude_keywords": ""
}
def get_page(self) -> List[dict]:
pass


@@ -133,6 +133,7 @@ class DoubanSync(_PluginBase):
"cmd": "/douban_sync",
"event": EventType.DoubanSync,
"desc": "同步豆瓣想看",
"category": "订阅",
"data": {}
}]


@@ -1,3 +1,4 @@
import os
import re
from datetime import datetime, timedelta
from threading import Event
@@ -60,6 +61,7 @@ class IYUUAutoSeed(_PluginBase):
_sites = []
_notify = False
_nolabels = None
_nopaths = None
_clearcache = False
# 退出事件
_event = Event()
@@ -101,6 +103,7 @@ class IYUUAutoSeed(_PluginBase):
self._sites = config.get("sites")
self._notify = config.get("notify")
self._nolabels = config.get("nolabels")
self._nopaths = config.get("nopaths")
self._clearcache = config.get("clearcache")
self._permanent_error_caches = config.get("permanent_error_caches") or []
self._error_caches = [] if self._clearcache else config.get("error_caches") or []
@@ -242,22 +245,6 @@ class IYUUAutoSeed(_PluginBase):
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'nolabels',
'label': '不辅种标签',
'placeholder': '使用,分隔多个标签'
}
}
]
}
]
},
{
@@ -309,6 +296,44 @@ class IYUUAutoSeed(_PluginBase):
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'nolabels',
'label': '不辅种标签',
'placeholder': '使用,分隔多个标签'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'nopaths',
'label': '不辅种数据文件目录',
'rows': 3,
'placeholder': '每一行一个目录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
@@ -357,6 +382,7 @@ class IYUUAutoSeed(_PluginBase):
"token": "",
"downloaders": [],
"sites": [],
"nopaths": "",
"nolabels": ""
}
@@ -374,6 +400,7 @@ class IYUUAutoSeed(_PluginBase):
"sites": self._sites,
"notify": self._notify,
"nolabels": self._nolabels,
"nopaths": self._nopaths,
"success_caches": self._success_caches,
"error_caches": self._error_caches,
"permanent_error_caches": self._permanent_error_caches
@@ -397,6 +424,11 @@ class IYUUAutoSeed(_PluginBase):
if not self.iyuuhelper:
return
logger.info("开始辅种任务 ...")
# 排除已删除站点
self._sites = [site.get("id") for site in self.sites.get_indexers() if
site.get("id") in self._sites]
# 计数器初始化
self.total = 0
self.realtotal = 0
@@ -426,13 +458,25 @@ class IYUUAutoSeed(_PluginBase):
logger.info(f"种子 {hash_str} 辅种失败且已缓存,跳过 ...")
continue
save_path = self.__get_save_path(torrent, downloader)
if self._nopaths and save_path:
# 过滤不需要转移的路径
nopath_skip = False
for nopath in self._nopaths.split('\n'):
if os.path.normpath(save_path).startswith(os.path.normpath(nopath)):
logger.info(f"种子 {hash_str} 保存路径 {save_path} 不需要辅种,跳过 ...")
nopath_skip = True
break
if nopath_skip:
continue
# 获取种子标签
torrent_labels = self.__get_label(torrent, downloader)
if torrent_labels and self._nolabels:
is_skip = False
for label in self._nolabels.split(','):
if label in torrent_labels:
logger.info(f"种子 {hash_str} 含有不转移标签 {label},跳过 ...")
logger.info(f"种子 {hash_str} 含有不辅种标签 {label},跳过 ...")
is_skip = True
break
if is_skip:
@@ -671,6 +715,15 @@ class IYUUAutoSeed(_PluginBase):
"info_hash": "a444850638e7a6f6220e2efdde94099c53358159"
}
"""
def __is_special_site(url):
"""
判断是否为特殊站点,是否需要添加https
"""
if "hdsky.me" in url:
return False
return True
self.total += 1
# 获取种子站点及下载地址模板
site_url, download_page = self.iyuuhelper.get_torrent_url(seed.get("sid"))
@@ -715,10 +768,11 @@ class IYUUAutoSeed(_PluginBase):
self.cached += 1
return False
# 强制使用Https
if "?" in torrent_url:
torrent_url += "&https=1"
else:
torrent_url += "?https=1"
if __is_special_site(torrent_url):
if "?" in torrent_url:
torrent_url += "&https=1"
else:
torrent_url += "?https=1"
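The `?`/`&` branch above hand-builds the query string. For reference, the same append can be done with the standard library so a pre-existing `https` parameter is overwritten rather than duplicated — a sketch only, with an illustrative function name:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode


def add_query_param(url: str, key: str, value: str) -> str:
    """Append (or overwrite) one query parameter without guessing
    whether the URL already contains a '?'."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = dict(parse_qsl(query))
    params[key] = value  # overwrites a duplicate instead of appending twice
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))


assert add_query_param("https://x.test/dl.php", "https", "1") == \
    "https://x.test/dl.php?https=1"
assert add_query_param("https://x.test/dl.php?id=5", "https", "1") == \
    "https://x.test/dl.php?id=5&https=1"
```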
# 下载种子文件
_, content, _, _, error_msg = self.torrent.download_torrent(
url=torrent_url,
@@ -870,6 +924,9 @@ class IYUUAutoSeed(_PluginBase):
"""
从详情页面获取下载链接
"""
if not site.get('url'):
logger.warn(f"站点 {site.get('name')} 未获取站点地址,无法获取种子下载链接")
return None
try:
page_url = f"{site.get('url')}details.php?id={seed.get('torrent_id')}&hit=1"
logger.info(f"正在获取种子下载链接:{page_url} ...")


@@ -8,8 +8,9 @@ from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.nfo import NfoReader
from app.log import logger
from app.plugins import _PluginBase
@@ -41,12 +42,14 @@ class LibraryScraper(_PluginBase):
user_level = 1
# 私有属性
transferhis = None
_scheduler = None
_scraper = None
# 限速开关
_enabled = False
_onlyonce = False
_cron = None
_mode = ""
_scraper_paths = ""
_exclude_paths = ""
# 退出事件
@@ -58,6 +61,7 @@ class LibraryScraper(_PluginBase):
self._enabled = config.get("enabled")
self._onlyonce = config.get("onlyonce")
self._cron = config.get("cron")
self._mode = config.get("mode") or ""
self._scraper_paths = config.get("scraper_paths") or ""
self._exclude_paths = config.get("exclude_paths") or ""
@@ -66,6 +70,7 @@ class LibraryScraper(_PluginBase):
# 启动定时任务 & 立即运行一次
if self._enabled or self._onlyonce:
self.transferhis = TransferHistoryOper(self.db)
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
logger.info(f"媒体库刮削服务启动,周期:{self._cron}")
@@ -92,6 +97,7 @@ class LibraryScraper(_PluginBase):
"onlyonce": False,
"enabled": self._enabled,
"cron": self._cron,
"mode": self._mode,
"scraper_paths": self._scraper_paths,
"exclude_paths": self._exclude_paths
})
@@ -155,6 +161,28 @@ class LibraryScraper(_PluginBase):
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '刮削模式',
'items': [
{'title': '仅刮削缺失元数据和图片', 'value': ''},
{'title': '覆盖所有元数据和图片', 'value': 'force_all'},
{'title': '覆盖所有元数据', 'value': 'force_nfo'},
{'title': '覆盖所有图片', 'value': 'force_image'},
]
}
}
]
},
{
'component': 'VCol',
'props': {
@@ -189,7 +217,7 @@ class LibraryScraper(_PluginBase):
'model': 'scraper_paths',
'label': '刮削路径',
'rows': 5,
'placeholder': '每一行一个目录'
'placeholder': '每一行一个目录,需配置到媒体文件的上级目录,即开了二级分类时需要配置到二级分类目录'
}
}
]
@@ -223,6 +251,7 @@ class LibraryScraper(_PluginBase):
], {
"enabled": False,
"cron": "0 0 */7 * *",
"mode": "",
"scraper_paths": "",
"err_hosts": ""
}
@@ -236,76 +265,132 @@ class LibraryScraper(_PluginBase):
"""
if not self._scraper_paths:
return
# 排除目录
exclude_paths = self._exclude_paths.split("\n")
# 已选择的目录
paths = self._scraper_paths.split("\n")
for path in paths:
if not path:
continue
if not Path(path).exists():
scraper_path = Path(path)
if not scraper_path.exists():
logger.warning(f"媒体库刮削路径不存在:{path}")
continue
logger.info(f"开始刮削媒体库:{path} ...")
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 刮削目录
self.__scrape_dir(Path(path))
logger.info(f"媒体库刮削完成")
# 遍历一层文件夹
for sub_path in scraper_path.iterdir():
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 排除目录
exclude_flag = False
for exclude_path in exclude_paths:
try:
if sub_path.is_relative_to(Path(exclude_path)):
exclude_flag = True
break
except Exception as err:
print(str(err))
if exclude_flag:
logger.debug(f"{sub_path} 在排除目录中,跳过 ...")
continue
# 开始刮削目录
if sub_path.is_dir():
# 判断目录是不是媒体目录
dir_meta = MetaInfo(sub_path.name)
if not dir_meta.name or not dir_meta.year:
logger.warn(f"{sub_path} 可能不是媒体目录,请检查刮削目录配置,跳过 ...")
continue
logger.info(f"开始刮削目录:{sub_path} ...")
self.__scrape_dir(path=sub_path, dir_meta=dir_meta)
logger.info(f"目录 {sub_path} 刮削完成")
logger.info(f"媒体库 {path} 刮削完成")
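The exclusion test in the folder loop above uses `Path.is_relative_to`, which only exists from Python 3.9 (on older interpreters the equivalent is `relative_to()` in a try/except, which is presumably why each check is wrapped in an exception handler). A self-contained sketch of that loop's behaviour — the helper name is illustrative:

```python
from pathlib import PurePosixPath


def excluded(sub_path: PurePosixPath, exclude_paths) -> bool:
    """True when sub_path sits under any configured exclude directory.

    PurePath.is_relative_to() exists from Python 3.9; on older
    interpreters the same test is relative_to() in a try/except,
    hence the broad exception guard mirroring the loop above.
    """
    for exclude in exclude_paths:
        if not exclude:
            continue  # blank lines left over from split("\n")
        try:
            if sub_path.is_relative_to(PurePosixPath(exclude)):
                return True
        except (AttributeError, ValueError):
            pass
    return False


assert excluded(PurePosixPath("/media/tmp/file"), ["/media/tmp"])
assert not excluded(PurePosixPath("/media/tv"), ["/media/tmp", ""])
```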
def __scrape_dir(self, path: Path):
def __scrape_dir(self, path: Path, dir_meta: MetaBase):
"""
刮削一个目录
刮削一个目录,该目录必须是媒体文件目录
"""
exclude_paths = self._exclude_paths.split("\n")
# 媒体信息
mediainfo = None
# 查找目录下所有的文件
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
for file in files:
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 排除目录
exclude_flag = False
for exclude_path in exclude_paths:
if file.is_relative_to(Path(exclude_path)):
exclude_flag = True
break
if exclude_flag:
logger.debug(f"{file} 在排除目录中,跳过 ...")
continue
# 识别媒体文件
meta_info = MetaInfo(file.name)
if meta_info.type == MediaType.TV:
dir_info = MetaInfo(file.parent.parent.name)
else:
dir_info = MetaInfo(file.parent.name)
meta_info.merge(dir_info)
# 优先读取本地nfo文件
tmdbid = None
if meta_info.type == MediaType.MOVIE:
# 电影
movie_nfo = file.parent / "movie.nfo"
if movie_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(movie_nfo)
file_nfo = file.with_suffix(".nfo")
if not tmdbid and file_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(file_nfo)
else:
# 电视剧
tv_nfo = file.parent.parent / "tvshow.nfo"
if tv_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(tv_nfo)
if tmdbid:
logger.info(f"读取到本地nfo文件的tmdbid:{tmdbid}")
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(tmdbid=tmdbid, mtype=meta_info.type)
else:
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(meta=meta_info)
if not mediainfo:
logger.warn(f"未识别到媒体信息:{file}")
continue
# 开始刮削
self.chain.scrape_metadata(path=file, mediainfo=mediainfo)
# 识别元数据
meta_info = MetaInfo(file.stem)
# 合并
meta_info.merge(dir_meta)
# 是否刮削
scrap_metadata = settings.SCRAP_METADATA
# 没有媒体信息或者名字出现变化时,需要重新识别
if not mediainfo \
or meta_info.name != dir_meta.name:
# 优先读取本地nfo文件
tmdbid = None
if meta_info.type == MediaType.MOVIE:
# 电影
movie_nfo = file.parent / "movie.nfo"
if movie_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(movie_nfo)
file_nfo = file.with_suffix(".nfo")
if not tmdbid and file_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(file_nfo)
else:
# 电视剧
tv_nfo = file.parent.parent / "tvshow.nfo"
if tv_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(tv_nfo)
if tmdbid:
# 按TMDBID识别
logger.info(f"读取到本地nfo文件的tmdbid:{tmdbid}")
mediainfo = self.chain.recognize_media(tmdbid=tmdbid, mtype=meta_info.type)
else:
# 按名称识别
mediainfo = self.chain.recognize_media(meta=meta_info)
if not mediainfo:
logger.warn(f"未识别到媒体信息:{file}")
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化,则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_history:
mediainfo.title = transfer_history.title
# 覆盖模式时提前删除nfo
if self._mode in ["force_all", "force_nfo"]:
scrap_metadata = True
nfo_files = SystemUtils.list_files(path, [".nfo"])
for nfo_file in nfo_files:
try:
logger.warn(f"删除nfo文件:{nfo_file}")
nfo_file.unlink()
except Exception as err:
print(str(err))
# 覆盖模式时,提前删除图片文件
if self._mode in ["force_all", "force_image"]:
scrap_metadata = True
image_files = SystemUtils.list_files(path, [".jpg", ".png"])
for image_file in image_files:
if ".actors" in str(image_file):
continue
try:
logger.warn(f"删除图片文件:{image_file}")
image_file.unlink()
except Exception as err:
print(str(err))
# 刮削单个文件
if scrap_metadata:
self.chain.scrape_metadata(path=file, mediainfo=mediainfo)
@staticmethod
def __get_tmdbid_from_nfo(file_path: Path):


@@ -22,7 +22,7 @@ from app.modules.qbittorrent import Qbittorrent
from app.modules.themoviedb.tmdbv3api import Episode
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.schemas.types import NotificationType, EventType
from app.schemas.types import NotificationType, EventType, MediaType
from app.utils.path_utils import PathUtils
@@ -36,7 +36,7 @@ class MediaSyncDel(_PluginBase):
# 主题色
plugin_color = "#ff1a1a"
# 插件版本
plugin_version = "1.0"
plugin_version = "1.1"
# 插件作者
plugin_author = "thsrite"
# 作者主页
@@ -57,6 +57,7 @@ class MediaSyncDel(_PluginBase):
_notify = False
_del_source = False
_exclude_path = None
_library_path = None
_transferhis = None
_downloadhis = None
qb = None
@@ -80,6 +81,7 @@ class MediaSyncDel(_PluginBase):
self._notify = config.get("notify")
self._del_source = config.get("del_source")
self._exclude_path = config.get("exclude_path")
self._library_path = config.get("library_path")
if self._enabled and str(self._sync_type) == "log":
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
@@ -106,12 +108,7 @@ class MediaSyncDel(_PluginBase):
定义远程控制命令
:return: 命令关键字、事件、描述、附带数据
"""
return [{
"cmd": "/sync_del",
"event": EventType.HistoryDeleted,
"desc": "媒体库同步删除",
"data": {}
}]
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
@@ -121,150 +118,177 @@ class MediaSyncDel(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'del_source',
'label': '删除源文件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'items': [
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '5位cron表达式,留空自动'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'exclude_path',
'label': '排除路径'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '同步方式分为日志同步和Scripter X。日志同步需要配置执行周期,默认30分钟执行一次。'
'Scripter X方式需要emby安装并配置Scripter X插件,无需配置执行周期。'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"del_source": False,
"sync_type": "log",
"cron": "*/30 * * * *",
"exclude_path": "",
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'del_source',
'label': '删除源文件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'items': [
{'title': 'webhook', 'value': 'webhook'},
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '5位cron表达式,留空自动'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'exclude_path',
'label': '排除路径'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'library_path',
'rows': '2',
'label': '媒体库路径',
'placeholder': '媒体服务器路径:MoviePilot路径,一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '同步方式分为webhook、日志同步和Scripter X。'
'webhook需要Emby4.8.0.45及以上,开启媒体删除的webhook'
'(建议使用媒体库刮削插件覆盖元数据重新刮削剧集路径)。'
'日志同步需要配置执行周期,默认30分钟执行一次。'
'Scripter X方式需要emby安装并配置Scripter X插件,无需配置执行周期。'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"del_source": False,
"library_path": "",
"sync_type": "webhook",
"cron": "*/30 * * * *",
"exclude_path": "",
}
def get_page(self) -> List[dict]:
"""
@@ -419,39 +443,21 @@ class MediaSyncDel(_PluginBase):
]
@eventmanager.register(EventType.WebhookMessage)
def sync_del_by_plugin(self, event):
def sync_del_by_webhook(self, event: Event):
"""
emby删除媒体库同步删除历史记录
Scripter X插件
webhook
"""
if not self._enabled:
if not self._enabled or str(self._sync_type) != "webhook":
return
event_data = event.event_data
event_type = event_data.event
if not event_type or str(event_type) != 'media_del':
return
# 是否虚拟标识
item_isvirtual = event_data.item_isvirtual
if not item_isvirtual:
logger.error("item_isvirtual参数未配置,为防止误删除,暂停插件运行")
self.update_config({
"enabled": False,
"del_source": self._del_source,
"exclude_path": self._exclude_path,
"notify": self._notify,
"cron": self._cron,
"sync_type": self._sync_type,
})
# Emby Webhook event_type = library.deleted
if not event_type or str(event_type) != 'library.deleted':
return
# 如果是虚拟item则直接return不进行删除
if item_isvirtual == 'True':
return
# 读取历史记录
history = self.get_data('history') or []
# 媒体类型
media_type = event_data.item_type
# 媒体名称
@@ -462,13 +468,76 @@ class MediaSyncDel(_PluginBase):
tmdb_id = event_data.tmdb_id
# 季数
season_num = event_data.season_id
if season_num and str(season_num).isdigit() and int(season_num) < 10:
season_num = f'0{season_num}'
# 集数
episode_num = event_data.episode_id
if episode_num and str(episode_num).isdigit() and int(episode_num) < 10:
episode_num = f'0{episode_num}'
self.__sync_del(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
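The webhook handler above pads single-digit season/episode numbers with the `f'0{n}'` idiom, while `__get_transfer_his` later uses `rjust(2, '0')`. `str.zfill` expresses both in one call; a small sketch with an illustrative helper name, keeping the same None/non-numeric tolerance:

```python
def pad_num(num):
    """Zero-pad a season/episode number to two digits, tolerating
    None and non-numeric input the way the handlers do."""
    if num is not None and str(num).isdigit():
        return str(num).zfill(2)
    return num  # leave non-numeric values untouched


assert pad_num(2) == "02"
assert pad_num("12") == "12"
assert pad_num(None) is None
```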
@eventmanager.register(EventType.WebhookMessage)
def sync_del_by_plugin(self, event):
"""
emby删除媒体库同步删除历史记录
Scripter X插件
"""
if not self._enabled or str(self._sync_type) != "plugin":
return
event_data = event.event_data
event_type = event_data.event
# Scripter X插件 event_type = media_del
if not event_type or str(event_type) != 'media_del':
return
# Scripter X插件 需要是否虚拟标识
item_isvirtual = event_data.item_isvirtual
if not item_isvirtual:
logger.error("Scripter X插件方式,item_isvirtual参数未配置,为防止误删除,暂停插件运行")
self.update_config({
"enabled": False,
"del_source": self._del_source,
"exclude_path": self._exclude_path,
"library_path": self._library_path,
"notify": self._notify,
"cron": self._cron,
"sync_type": self._sync_type,
})
return
# 如果是虚拟item则直接return不进行删除
if item_isvirtual == 'True':
return
# 媒体类型
media_type = event_data.item_type
# 媒体名称
media_name = event_data.item_name
# 媒体路径
media_path = event_data.item_path
# tmdb_id
tmdb_id = event_data.tmdb_id
# 季数
season_num = event_data.season_id
# 集数
episode_num = event_data.episode_id
self.__sync_del(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
def __sync_del(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
"""
执行删除逻辑
"""
if not media_type:
logger.error(f"{media_name} 同步删除失败,未获取到媒体类型")
return
@@ -482,38 +551,18 @@ class MediaSyncDel(_PluginBase):
logger.info(f"媒体路径 {media_path} 已被排除,暂不处理")
return
# 删除电影
if media_type == "Movie":
msg = f'电影 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
# 删除电视剧
elif media_type == "Series":
msg = f'剧集 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
# 删除季 S02
elif media_type == "Season":
if not season_num or not str(season_num).isdigit():
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}')
# 删除剧集S02E02
elif media_type == "Episode":
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}',
episode=f'E{episode_num}')
else:
return
# 查询转移记录
msg, transfer_history = self.__get_transfer_his(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
logger.info(f"正在同步删除{msg}")
if not transfer_history:
logger.warn(f"{media_type} {media_name} 未获取到可删除数据")
logger.warn(f"{media_type} {media_name} 未获取到可删除数据,可使用媒体库刮削插件覆盖所有元数据")
return
# 开始删除
@@ -523,6 +572,11 @@ class MediaSyncDel(_PluginBase):
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
title = transferhis.title
if title not in media_name:
logger.warn(
f"当前转移记录 {transferhis.id} {title} {transferhis.tmdbid} 与删除媒体{media_name}不符,防误删,暂不自动删除")
continue
image = transferhis.image
year = transferhis.year
@@ -540,15 +594,15 @@ class MediaSyncDel(_PluginBase):
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, success_flag = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
delete_flag, success_flag, handle_cnt = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += 1
del_cnt += handle_cnt
else:
stop_cnt += 1
stop_cnt += handle_cnt
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
@@ -563,20 +617,30 @@ class MediaSyncDel(_PluginBase):
episode_num=episode_num)
if images:
image = self.get_tmdbimage_url(images[-1].get("file_path"), prefix="original")
torrent_cnt_msg = ""
if del_cnt:
torrent_cnt_msg += f"删除种子{del_cnt}\n"
if stop_cnt:
torrent_cnt_msg += f"暂停种子{stop_cnt}\n"
if error_cnt:
torrent_cnt_msg += f"删种失败{error_cnt}\n"
# 发送通知
self.post_message(
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
image=image,
text=f"{msg}\n"
f"删除{del_cnt}\n"
f"暂停{stop_cnt}\n"
f"错误{error_cnt}\n"
f"删除记录{len(transfer_history)}\n"
f"{torrent_cnt_msg}"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}"
)
# 读取历史记录
history = self.get_data('history') or []
history.append({
"type": "电影" if media_type == "Movie" else "电视剧",
"type": "电影" if media_type == "Movie" or media_type == "MOV" else "电视剧",
"title": media_name,
"year": year,
"path": media_path,
@@ -589,6 +653,64 @@ class MediaSyncDel(_PluginBase):
# 保存历史
self.save_data("history", history)
def __get_transfer_his(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
"""
查询转移记录
"""
# 季数
if season_num:
season_num = str(season_num).rjust(2, '0')
# 集数
if episode_num:
episode_num = str(episode_num).rjust(2, '0')
# 类型
mtype = MediaType.MOVIE if media_type in ["Movie", "MOV"] else MediaType.TV
# 处理路径映射 (处理同一媒体多分辨率的情况)
if self._library_path:
paths = self._library_path.split("\n")
for path in paths:
sub_paths = path.split(":")
media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# 删除电影
if mtype == MediaType.MOVIE:
msg = f'电影 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
dest=media_path)
# 删除电视剧
elif mtype == MediaType.TV and not season_num and not episode_num:
msg = f'剧集 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value)
# 删除季 S02
elif mtype == MediaType.TV and season_num and not episode_num:
if not season_num or not str(season_num).isdigit():
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
season=f'S{season_num}')
# 删除剧集S02E02
elif mtype == MediaType.TV and season_num and episode_num:
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
season=f'S{season_num}',
episode=f'E{episode_num}',
dest=media_path)
else:
return "", []
return msg, transfer_history
def sync_del_by_log(self):
"""
emby删除媒体库同步删除历史记录
@@ -636,13 +758,21 @@ class MediaSyncDel(_PluginBase):
logger.info(f"媒体路径 {media_path} 已被排除,暂不处理")
return
# 处理路径映射 (处理同一媒体多分辨率的情况)
if self._library_path:
paths = self._library_path.split("\n")
for path in paths:
sub_paths = path.split(":")
media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
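The mapping loop above splits each `服务器路径:MoviePilot路径` line on every `:`, so a colon inside the MoviePilot-side path would yield more than two pieces and the replacement would silently use only the first two. A hedged sketch of the same mapping with a single split — the function name is illustrative, not plugin API:

```python
def map_path(media_path: str, mappings: str) -> str:
    """Apply 'server_path:local_path' mappings, one per line.

    split(":", 1) treats only the first colon as the delimiter, so a
    colon inside the replacement path survives; Windows-style source
    paths with drive letters would still need a different delimiter.
    """
    for line in mappings.split("\n"):
        if ":" not in line:
            continue  # skip blank or malformed lines
        src, dst = line.split(":", 1)
        media_path = media_path.replace(src, dst).replace("\\", "/")
    return media_path


assert map_path("/emby/films/A", "/emby/films:/mnt/media/films") == \
    "/mnt/media/films/A"
```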
# 获取删除的记录
# 删除电影
if media_type == "Movie":
msg = f'电影 {media_name}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(
title=media_name,
year=media_year)
year=media_year,
dest=media_path)
# 删除电视剧
elif media_type == "Series":
msg = f'剧集 {media_name}'
@@ -663,7 +793,8 @@ class MediaSyncDel(_PluginBase):
title=media_name,
year=media_year,
season=media_season,
episode=media_episode)
episode=media_episode,
dest=media_path)
else:
continue
@@ -681,6 +812,11 @@ class MediaSyncDel(_PluginBase):
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
title = transferhis.title
if title not in media_name:
logger.warn(
f"当前转移记录 {transferhis.id} {title} {transferhis.tmdbid} 与删除媒体{media_name}不符,防误删,暂不自动删除")
continue
image = transferhis.image
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
@@ -696,15 +832,15 @@ class MediaSyncDel(_PluginBase):
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, success_flag = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
delete_flag, success_flag, handle_cnt = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += 1
del_cnt += handle_cnt
else:
stop_cnt += 1
stop_cnt += handle_cnt
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
@@ -712,13 +848,19 @@ class MediaSyncDel(_PluginBase):
# 发送消息
if self._notify:
torrent_cnt_msg = ""
if del_cnt:
torrent_cnt_msg += f"删除种子{del_cnt}\n"
if stop_cnt:
torrent_cnt_msg += f"暂停种子{stop_cnt}\n"
if error_cnt:
torrent_cnt_msg += f"删种失败{error_cnt}\n"
self.post_message(
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
text=f"{msg}\n"
f"删除{del_cnt}\n"
f"暂停{stop_cnt}\n"
f"错误{error_cnt}\n"
f"删除记录{len(transfer_history)}\n"
f"{torrent_cnt_msg}"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}",
image=image)
@@ -752,15 +894,27 @@ class MediaSyncDel(_PluginBase):
plugin_id=plugin_id)
logger.info(f"查询到 {history_key} 转种历史 {transfer_history}")
handle_cnt = 0
try:
# 删除本次种子记录
self._downloadhis.delete_file_by_fullpath(fullpath=src)
# 根据种子hash查询剩余未删除的记录
downloadHisNoDel = self._downloadhis.get_files_by_hash(download_hash=torrent_hash, state=1)
if downloadHisNoDel and len(downloadHisNoDel) > 0:
# 根据种子hash查询所有下载器文件记录
download_files = self._downloadhis.get_files_by_hash(download_hash=torrent_hash)
if not download_files:
logger.error(
f"未查询到种子任务 {torrent_hash} 存在文件记录,未执行下载器文件同步或该种子已被删除")
return False, False, 0
# 查询未删除数
no_del_cnt = 0
for download_file in download_files:
if download_file and download_file.state and int(download_file.state) == 1:
no_del_cnt += 1
if no_del_cnt > 0:
logger.info(
f"查询种子任务 {torrent_hash} 存在 {len(downloadHisNoDel)} 个未删除文件,执行暂停种子操作")
f"查询种子任务 {torrent_hash} 存在 {no_del_cnt} 个未删除文件,执行暂停种子操作")
delete_flag = False
else:
logger.info(
@@ -785,6 +939,7 @@ class MediaSyncDel(_PluginBase):
# 删除源种子
logger.info(f"删除源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.remove_torrents(torrent_hash)
handle_cnt += 1
# 删除转种后任务
logger.info(f"删除转种后下载任务:{download} - {download_id}")
@@ -795,6 +950,7 @@ class MediaSyncDel(_PluginBase):
else:
self.qb.delete_torrents(delete_file=True,
ids=download_id)
handle_cnt += 1
else:
# 暂停种子
# 转种后未删除源种时,同步暂停源种
@@ -804,6 +960,7 @@ class MediaSyncDel(_PluginBase):
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.stop_torrents(torrent_hash)
handle_cnt += 1
else:
# 未转种的情况
@@ -815,16 +972,20 @@ class MediaSyncDel(_PluginBase):
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{download} - {download_id}")
self.chain.stop_torrents(download_id)
handle_cnt += 1
# 处理辅种
self.__del_seed(download=download, download_id=download_id, action_flag="del" if delete_flag else 'stop')
handle_cnt = self.__del_seed(download=download,
download_id=download_id,
action_flag="del" if delete_flag else 'stop',
handle_cnt=handle_cnt)
return delete_flag, True
return delete_flag, True, handle_cnt
except Exception as e:
logger.error(f"删种失败: {e}")
return False, False
return False, False, 0
def __del_seed(self, download, download_id, action_flag):
def __del_seed(self, download, download_id, action_flag, handle_cnt):
"""
删除辅种
"""
@@ -846,9 +1007,10 @@ class MediaSyncDel(_PluginBase):
torrents = [torrents]
# 删除辅种历史中与本下载器相同的辅种记录
if int(downloader) == download:
if str(downloader) == str(download):
for torrent in torrents:
if download == "qbittorrent":
handle_cnt += 1
if str(download) == "qbittorrent":
# 删除辅种
if action_flag == "del":
logger.info(f"删除辅种:{downloader} - {torrent}")
@@ -878,6 +1040,8 @@ class MediaSyncDel(_PluginBase):
value=seed_history,
plugin_id=plugin_id)
return handle_cnt
@staticmethod
def parse_emby_log(last_time):
log_url = "{HOST}System/Logs/embyserver.txt?api_key={APIKEY}"
@@ -951,7 +1115,7 @@ class MediaSyncDel(_PluginBase):
return del_medias
@staticmethod
def parse_jellyfin_log(last_time):
def parse_jellyfin_log(last_time: datetime):
# 根据加入日期 降序排序
log_url = "{HOST}System/Logs/Log?name=log_%s.log&api_key={APIKEY}" % datetime.date.today().strftime("%Y%m%d")
log_res = Jellyfin().get_data(log_url)
@@ -1024,7 +1188,7 @@ class MediaSyncDel(_PluginBase):
return del_medias
@staticmethod
def delete_media_file(filedir, filename):
def delete_media_file(filedir: str, filename: str):
"""
删除媒体文件,空目录也会被删除
"""
@@ -1092,7 +1256,7 @@ class MediaSyncDel(_PluginBase):
title="媒体库同步删除完成!", userid=event.event_data.get("user"))
@staticmethod
def get_tmdbimage_url(path, prefix="w500"):
def get_tmdbimage_url(path: str, prefix="w500"):
if not path:
return ""
tmdb_image_url = f"https://{settings.TMDB_IMAGE_DOMAIN}"


@@ -31,7 +31,7 @@ class MessageForward(_PluginBase):
# 加载顺序
plugin_order = 16
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_enabled = False
@@ -69,81 +69,81 @@ class MessageForward(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '开启转发'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'wechat',
'rows': '3',
'label': '应用配置',
'placeholder': 'appid:corpid:appsecret一行一个配置'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'pattern',
'rows': '3',
'label': '正则配置',
'placeholder': '对应上方应用配置,一行一个,一一对应'
}
}
]
}
]
},
]
}
], {
"enabled": False,
"wechat": "",
"pattern": ""
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '开启转发'
}
}
]
},
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'wechat',
'rows': '3',
'label': '应用配置',
'placeholder': 'appid:corpid:appsecret一行一个配置'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'pattern',
'rows': '3',
'label': '正则配置',
'placeholder': '对应上方应用配置,一行一个,一一对应'
}
}
]
}
]
},
]
}
], {
"enabled": False,
"wechat": "",
"pattern": ""
}
def get_page(self) -> List[dict]:
pass
@@ -169,29 +169,27 @@ class MessageForward(_PluginBase):
# 正则匹配
patterns = self._pattern.split("\n")
for i, pattern in enumerate(patterns):
for index, pattern in enumerate(patterns):
msg_match = re.search(pattern, title)
if msg_match:
access_token, appid = self.__flush_access_token(i)
access_token, appid = self.__flush_access_token(index)
if not access_token:
logger.error("未获取到有效token,请检查配置")
continue
# 发送消息
if image:
self.__send_image_message(title, text, image, userid, access_token, appid, i)
self.__send_image_message(title, text, image, userid, access_token, appid, index)
else:
self.__send_message(title, text, userid, access_token, appid, i)
self.__send_message(title, text, userid, access_token, appid, index)
def __save_wechat_token(self):
"""
获取并存储wechat token
"""
# 查询历史
wechat_token_history = self.get_data("wechat_token") or {}
# 解析配置
wechats = self._wechat.split("\n")
for i, wechat in enumerate(wechats):
for index, wechat in enumerate(wechats):
wechat_config = wechat.split(":")
if len(wechat_config) != 3:
logger.error(f"{wechat} 应用配置不正确")
@@ -200,53 +198,30 @@ class MessageForward(_PluginBase):
corpid = wechat_config[1]
appsecret = wechat_config[2]
# 查询历史是否存储token
wechat_config = wechat_token_history.get("appid")
access_token = None
expires_in = None
access_token_time = None
if wechat_config:
access_token_time = wechat_config['access_token_time']
expires_in = wechat_config['expires_in']
# 判断token是否过期
if (datetime.now() - access_token_time).seconds < expires_in:
# 重新获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
# 已过期,重新获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
if not access_token:
# 获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
if access_token:
wechat_token_history[appid] = {
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": str(access_token_time),
"corpid": corpid,
"appsecret": appsecret
}
self._pattern_token[i] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": access_token_time,
}
else:
# 没有token获取token
logger.error(f"wechat配置 appid = {appid} 获取token失败,请检查配置")
continue
# 保存wechat token
if wechat_token_history:
self.save_data("wechat_token", wechat_token_history)
self._pattern_token[index] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": access_token_time,
}
def __flush_access_token(self, i: int):
def __flush_access_token(self, index: int, force: bool = False):
"""
获取第i个配置wechat token
"""
wechat_token = self._pattern_token[i]
wechat_token = self._pattern_token[index]
if not wechat_token:
logger.error(f"未获取到第 {i} 条正则对应的wechat应用token,请检查配置")
logger.error(f"未获取到第 {index} 条正则对应的wechat应用token,请检查配置")
return None, None
access_token = wechat_token['access_token']
expires_in = wechat_token['expires_in']
@@ -256,7 +231,7 @@ class MessageForward(_PluginBase):
appsecret = wechat_token['appsecret']
# 判断token有效期
if (datetime.now() - access_token_time).seconds < expires_in:
if force or (datetime.now() - access_token_time).seconds >= expires_in:
# 重新获取token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
@@ -264,7 +239,7 @@ class MessageForward(_PluginBase):
logger.error(f"wechat配置 appid = {appid} 获取token失败请检查配置")
return None, None
self._pattern_token[i] = {
self._pattern_token[index] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
@@ -275,8 +250,7 @@ class MessageForward(_PluginBase):
return access_token, appid
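The token-refresh hunks above compare `(datetime.now() - access_token_time).seconds` against `expires_in`; note that `timedelta.seconds` wraps at 24 hours, so `total_seconds()` is the robust comparison. A minimal sketch of the cache-then-refresh pattern (the `TokenCache` and `fetch_token` names are illustrative, not from the plugin):

```python
from datetime import datetime


class TokenCache:
    """Cache an access token and refresh it once it has expired."""

    def __init__(self, fetch_token):
        self._fetch = fetch_token      # callable returning (token, expires_in)
        self._token = None
        self._expires_in = 0
        self._fetched_at = None

    def get(self, force: bool = False) -> str:
        expired = (
            self._token is None
            or force
            # total_seconds() instead of .seconds, which wraps after one day
            or (datetime.now() - self._fetched_at).total_seconds() >= self._expires_in
        )
        if expired:
            self._token, self._expires_in = self._fetch()
            self._fetched_at = datetime.now()
        return self._token
```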
def __send_message(self, title: str, text: str = None, userid: str = None, access_token: str = None,
appid: str = None, i: int = None) -> \
Optional[bool]:
appid: str = None, index: int = None) -> Optional[bool]:
"""
发送文本消息
:param title: 消息标题
@@ -284,7 +258,6 @@ class MessageForward(_PluginBase):
:param userid: 消息发送对象的ID为空则发给所有人
:return: 发送状态,错误信息
"""
message_url = self._send_msg_url % access_token
if text:
conent = "%s\n%s" % (title, text.replace("\n\n", "\n"))
else:
@@ -303,10 +276,10 @@ class MessageForward(_PluginBase):
"enable_id_trans": 0,
"enable_duplicate_check": 0
}
return self.__post_request(message_url, req_json, i, title)
return self.__post_request(access_token=access_token, req_json=req_json, index=index, title=title)
def __send_image_message(self, title: str, text: str, image_url: str, userid: str = None, access_token: str = None,
appid: str = None, i: int = None) -> Optional[bool]:
def __send_image_message(self, title: str, text: str, image_url: str, userid: str = None,
access_token: str = None, appid: str = None, index: int = None) -> Optional[bool]:
"""
发送图文消息
:param title: 消息标题
@@ -315,7 +288,6 @@ class MessageForward(_PluginBase):
:param userid: 消息发送对象的ID为空则发给所有人
:return: 发送状态,错误信息
"""
message_url = self._send_msg_url % access_token
if text:
text = text.replace("\n\n", "\n")
if not userid:
@@ -335,9 +307,10 @@ class MessageForward(_PluginBase):
]
}
}
return self.__post_request(message_url, req_json, i, title)
return self.__post_request(access_token=access_token, req_json=req_json, index=index, title=title)
def __post_request(self, message_url: str, req_json: dict, i: int, title: str) -> bool:
def __post_request(self, access_token: str, req_json: dict, index: int, title: str, retry: int = 0) -> bool:
"""
向微信发送请求
"""
message_url = self._send_msg_url % access_token
@@ -352,10 +325,21 @@ class MessageForward(_PluginBase):
logger.info(f"转发消息 {title} 成功")
return True
else:
if ret_json.get('errcode') == 42001:
# 重新获取token
self.__flush_access_token(i)
logger.error(f"转发消息 {title} 失败,错误信息:{ret_json}")
if ret_json.get('errcode') == 42001 or ret_json.get('errcode') == 40014:
logger.info("token已过期正在重新刷新token重试")
# 重新获取token
access_token, appid = self.__flush_access_token(index=index,
force=True)
if access_token:
retry += 1
# 重发请求
if retry <= 3:
return self.__post_request(access_token=access_token,
req_json=req_json,
index=index,
title=title,
retry=retry)
return False
elif res is not None:
logger.error(f"转发消息 {title} 失败,错误码:{res.status_code},错误原因:{res.reason}")
@@ -364,10 +348,10 @@ class MessageForward(_PluginBase):
logger.error(f"转发消息 {title} 失败,未获取到返回信息")
return False
except Exception as err:
logger.error(f"转发消息 {title} 失败,错误信息:{err}")
logger.error(f"转发消息 {title} 异常,错误信息:{err}")
return False
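The retry logic added above refreshes the token on WeChat errcodes 42001/40014 and resends at most three times. A hedged standalone sketch of that bounded retry (the `send` and `refresh_token` callables are placeholders, not the plugin's API):

```python
RETRYABLE_ERRCODES = {42001, 40014}  # expired / invalid access_token
MAX_RETRIES = 3


def post_with_token_retry(send, refresh_token, token, payload, retry=0):
    """send(token, payload) -> errcode; refresh the token on token errors
    and resend, at most MAX_RETRIES times."""
    errcode = send(token, payload)
    if errcode == 0:
        return True
    if errcode in RETRYABLE_ERRCODES and retry < MAX_RETRIES:
        new_token = refresh_token()
        if new_token:
            return post_with_token_retry(send, refresh_token, new_token,
                                         payload, retry + 1)
    return False
```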
def __get_access_token(self, corpid, appsecret):
def __get_access_token(self, corpid: str, appsecret: str):
"""
获取微信Token
:return 微信Token
@@ -4,11 +4,8 @@ import sqlite3
from datetime import datetime
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.plugin import PluginData
from app.db.plugindata_oper import PluginDataOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.modules.qbittorrent import Qbittorrent
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple
from app.log import logger
@@ -34,7 +31,7 @@ class NAStoolSync(_PluginBase):
# 加载顺序
plugin_order = 15
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_transferhistory = None
@@ -45,10 +42,7 @@ class NAStoolSync(_PluginBase):
_path = None
_site = None
_downloader = None
_supp = False
_transfer = False
qb = None
tr = None
def init_plugin(self, config: dict = None):
self._transferhistory = TransferHistoryOper(self.db)
@@ -61,13 +55,9 @@ class NAStoolSync(_PluginBase):
self._path = config.get("path")
self._site = config.get("site")
self._downloader = config.get("downloader")
self._supp = config.get("supp")
self._transfer = config.get("transfer")
if self._nt_db_path and self._transfer:
self.qb = Qbittorrent()
self.tr = Transmission()
# 读取sqlite数据
try:
gradedb = sqlite3.connect(self._nt_db_path)
@@ -80,7 +70,6 @@ class NAStoolSync(_PluginBase):
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp,
}
)
logger.error(f"无法打开数据库文件 {self._nt_db_path},请检查路径是否正确:{e}")
@@ -116,7 +105,6 @@ class NAStoolSync(_PluginBase):
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp,
}
)
@@ -144,6 +132,7 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot插件记录已清空")
self._plugindata.truncate()
cnt = 0
for history in plugin_history:
plugin_id = history[1]
plugin_key = history[2]
@@ -153,7 +142,12 @@ class NAStoolSync(_PluginBase):
if self._downloader:
downloaders = self._downloader.split("\n")
for downloader in downloaders:
if not downloader:
continue
sub_downloaders = downloader.split(":")
if not str(sub_downloaders[0]).isdigit():
logger.error("下载器映射配置错误,NAStool下载器id应为数字!")
continue
# 替换转种记录
if str(plugin_id) == "TorrentTransfer":
keys = str(plugin_key).split("-")
@@ -172,7 +166,11 @@ class NAStoolSync(_PluginBase):
if str(plugin_id) == "IYUUAutoSeed":
if isinstance(plugin_value, str):
plugin_value = json.loads(plugin_value)
if not isinstance(plugin_value, list):
plugin_value = [plugin_value]
for value in plugin_value:
if not str(value.get("downloader")).isdigit():
continue
if str(value.get("downloader")).isdigit() and int(value.get("downloader")) == int(
sub_downloaders[0]):
value["downloader"] = sub_downloaders[1]
@@ -180,6 +178,9 @@ class NAStoolSync(_PluginBase):
self._plugindata.save(plugin_id=plugin_id,
key=plugin_key,
value=plugin_value)
cnt += 1
if cnt % 100 == 0:
logger.info(f"插件记录同步进度 {cnt} / {len(plugin_history)}")
# 计算耗时
end_time = datetime.now()
@@ -198,6 +199,7 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot下载记录已清空")
self._downloadhistory.truncate()
cnt = 0
for history in download_history:
mpath = history[0]
mtype = history[1]
@@ -234,6 +236,9 @@ class NAStoolSync(_PluginBase):
torrent_description=mdesc,
torrent_site=msite
)
cnt += 1
if cnt % 100 == 0:
logger.info(f"下载记录同步进度 {cnt} / {len(download_history)}")
# 计算耗时
end_time = datetime.now()
@@ -253,36 +258,8 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot转移记录已清空")
self._transferhistory.truncate()
# 转种后种子hash
transfer_hash = []
qb_torrents = []
tr_torrents = []
tr_torrents_all = []
if self._supp:
# 获取所有的转种数据
transfer_datas = self._plugindata.get_data_all("TorrentTransfer")
if transfer_datas:
if not isinstance(transfer_datas, list):
transfer_datas = [transfer_datas]
for transfer_data in transfer_datas:
if not transfer_data or not isinstance(transfer_data, PluginData):
continue
# 转移后种子hash
transfer_value = transfer_data.value
transfer_value = json.loads(transfer_value)
if not isinstance(transfer_value, dict):
transfer_value = json.loads(transfer_value)
to_hash = transfer_value.get("to_download_id")
# 转移前种子hash
transfer_hash.append(to_hash)
# 获取tr、qb所有种子
qb_torrents, _ = self.qb.get_torrents()
tr_torrents, _ = self.tr.get_torrents(ids=transfer_hash)
tr_torrents_all, _ = self.tr.get_torrents()
# 处理数据存入mp数据库
cnt = 0
for history in transfer_history:
msrc_path = history[0]
msrc_filename = history[1]
@@ -297,8 +274,7 @@ class NAStoolSync(_PluginBase):
mseasons = history[10]
mepisodes = history[11]
mimage = history[12]
mdownload_hash = history[13]
mdate = history[14]
mdate = history[13]
if not msrc_path or not mdest_path:
continue
@@ -306,78 +282,6 @@ class NAStoolSync(_PluginBase):
msrc = msrc_path + "/" + msrc_filename
mdest = mdest_path + "/" + mdest_filename
# 尝试补充download_id
if self._supp and not mdownload_hash:
logger.debug(f"转移记录 {mtitle} 缺失download_hash尝试补充……")
# 种子名称
torrent_name = str(msrc_path).split("/")[-1]
torrent_name2 = str(msrc_path).split("/")[-2]
# 处理下载器
for torrent in qb_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mdownload_hash = torrent.get("hash")
torrent_name = str(torrent.get("name"))
break
# 处理辅种器
if not mdownload_hash:
for torrent in tr_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mdownload_hash = torrent.get("hashString")
torrent_name = str(torrent.get("name"))
break
# 继续补充:遍历所有种子,按添加时间升序排序,第一个种子是初始种子
if not mdownload_hash:
mate_torrents = []
for torrent in tr_torrents_all:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mate_torrents.append(torrent)
# 匹配上则按照时间升序
if mate_torrents:
if len(mate_torrents) > 1:
mate_torrents = sorted(mate_torrents, key=lambda x: x.added_date)
# 最早添加的hash是下载的hash
mdownload_hash = mate_torrents[0].get("hashString")
torrent_name = str(mate_torrents[0].get("name"))
# 补充转种记录
self._plugindata.save(plugin_id="TorrentTransfer",
key=f"qbittorrent-{mdownload_hash}",
value={
"to_download": "transmission",
"to_download_id": mdownload_hash,
"delete_source": True}
)
# 补充辅种记录
if len(mate_torrents) > 1:
self._plugindata.save(plugin_id="IYUUAutoSeed",
key=mdownload_hash,
value=[{"downloader": "transmission",
"torrents": [torrent.get("hashString") for torrent in
mate_torrents[1:]]}]
)
# 补充下载历史
self._downloadhistory.add(
path=msrc_filename,
type=mtype,
title=mtitle,
year=myear,
tmdbid=mtmdbid,
seasons=mseasons,
episodes=mepisodes,
image=mimage,
download_hash=mdownload_hash,
torrent_name=torrent_name,
torrent_description="",
torrent_site=""
)
# 处理路径映射
if self._path:
paths = self._path.split("\n")
@@ -387,7 +291,7 @@ class NAStoolSync(_PluginBase):
mdest = mdest.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# 存库
self._transferhistory.add_force(
self._transferhistory.add(
src=msrc,
dest=mdest,
mode=mmode,
@@ -399,11 +303,14 @@ class NAStoolSync(_PluginBase):
seasons=mseasons,
episodes=mepisodes,
image=mimage,
download_hash=mdownload_hash,
date=mdate
)
logger.debug(f"{mtitle} {myear} {mtmdbid} {mseasons} {mepisodes} 已同步")
cnt += 1
if cnt % 100 == 0:
logger.info(f"转移记录同步进度 {cnt} / {len(transfer_history)}")
# 计算耗时
end_time = datetime.now()
@@ -507,7 +414,6 @@ class NAStoolSync(_PluginBase):
NULL ELSE substr( t.SEASON_EPISODE, instr ( t.SEASON_EPISODE, ' ' ) + 1 )
END AS episodes,
d.POSTER AS image,
d.DOWNLOAD_ID AS download_hash,
t.DATE AS date
FROM
TRANSFER_HISTORY t
@@ -549,7 +455,7 @@ class NAStoolSync(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 6
},
'content': [
{
@@ -565,7 +471,7 @@ class NAStoolSync(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 6
},
'content': [
{
@@ -576,22 +482,6 @@ class NAStoolSync(_PluginBase):
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'supp',
'label': '补充数据'
}
}
]
}
]
},
@@ -695,7 +585,6 @@ class NAStoolSync(_PluginBase):
'text': '开启清空记录时会在导入历史数据之前删除MoviePilot之前的记录。'
'如果转移记录很多同步时间可能会长3-10分钟'
'所以点击确定后页面没反应是正常现象,后台正在处理。'
'如果开启补充数据会获取tr、qb种子补充转移记录中download_hash缺失的情况同步删除需要'
}
}
]
@@ -24,7 +24,7 @@ lock = Lock()
class RssSubscribe(_PluginBase):
# 插件名称
plugin_name = "RSS订阅"
plugin_name = "自定义订阅"
# 插件描述
plugin_desc = "定时刷新RSS报文识别内容后添加订阅或直接下载。"
# 插件图标
@@ -119,7 +119,7 @@ class RssSubscribe(_PluginBase):
# 记录清理缓存设置
self._clearflag = self._clear
# 关闭清理缓存开关
self._clearflag = False
self._clear = False
# 保存设置
self.__update_config()
@@ -150,6 +150,7 @@ class SiteStatistic(_PluginBase):
"cmd": "/site_statistic",
"event": EventType.SiteStatistic,
"desc": "站点数据统计",
"category": "站点",
"data": {}
}]
@@ -965,6 +966,12 @@ class SiteStatistic(_PluginBase):
# 发送通知,存在未读消息
self.__notify_unread_msg(site_name, site_user_info, unread_msg_notify)
# 分享率接近1时发送消息提醒
if site_user_info.ratio and float(site_user_info.ratio) < 1:
self.post_message(mtype=NotificationType.SiteMessage,
title=f"【站点分享率低预警】",
text=f"站点 {site_user_info.site_name} 分享率 {site_user_info.ratio},请注意!")
self._sites_data.update(
{
site_name: {
@@ -1051,11 +1058,6 @@ class SiteStatistic(_PluginBase):
with ThreadPool(min(len(refresh_sites), int(self._queue_cnt or 5))) as p:
p.map(self.__refresh_site_data, refresh_sites)
# 获取今天的日期
key = datetime.now().strftime('%Y-%m-%d')
# 保存数据
self.save_data(key, self._sites_data)
# 通知刷新完成
if self._notify:
yesterday_sites_data = {}
@@ -1101,9 +1103,14 @@ class SiteStatistic(_PluginBase):
self.post_message(mtype=NotificationType.SiteMessage,
title="站点数据统计", text="\n".join(messages))
# 更新时间
self.save_data("last_update_time", key)
logger.info("站点数据刷新完成")
# 获取今天的日期
key = datetime.now().strftime('%Y-%m-%d')
# 保存数据
self.save_data(key, self._sites_data)
# 更新时间
self.save_data("last_update_time", key)
logger.info("站点数据刷新完成")
def __update_config(self):
self.update_config({
@@ -56,5 +56,6 @@ class NexusHhanclubSiteUserInfo(NexusPhpSiteUserInfo):
def _get_user_level(self, html):
super()._get_user_level(html)
self.user_level = html.xpath('//*[@id="mainContent"]/div/div[2]/div[2]/div[4]/img/@title')[0]
user_level_path = html.xpath('//*[@id="mainContent"]/div/div[2]/div[2]/div[4]/span[2]/img/@title')
if user_level_path:
self.user_level = user_level_path[0]
@@ -1,3 +1,4 @@
import ipaddress
from typing import List, Tuple, Dict, Any
from apscheduler.schedulers.background import BackgroundScheduler
@@ -36,7 +37,7 @@ class SpeedLimiter(_PluginBase):
# 加载顺序
plugin_order = 11
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_scheduler = None
@@ -54,6 +55,8 @@ class SpeedLimiter(_PluginBase):
_allocation_ratio: str = ""
_auto_limit: bool = False
_limit_enabled: bool = False
# 不限速地址
_unlimited_ips = {}
# 当前限速状态
_current_state = ""
@@ -66,6 +69,7 @@ class SpeedLimiter(_PluginBase):
self._play_down_speed = float(config.get("play_down_speed")) if config.get("play_down_speed") else 0
self._noplay_up_speed = float(config.get("noplay_up_speed")) if config.get("noplay_up_speed") else 0
self._noplay_down_speed = float(config.get("noplay_down_speed")) if config.get("noplay_down_speed") else 0
self._current_state = f"U:{self._noplay_up_speed},D:{self._noplay_down_speed}"
try:
# 总带宽
self._bandwidth = int(float(config.get("bandwidth") or 0)) * 1000000
@@ -79,6 +83,10 @@ class SpeedLimiter(_PluginBase):
# 限速服务开关
self._limit_enabled = True if self._play_up_speed or self._play_down_speed or self._auto_limit else False
self._allocation_ratio = config.get("allocation_ratio") or ""
# 不限速地址
self._unlimited_ips["ipv4"] = config.get("ipv4") or ""
self._unlimited_ips["ipv6"] = config.get("ipv6") or ""
self._downloader = config.get("downloader") or []
if self._downloader:
if 'qbittorrent' in self._downloader:
@@ -302,6 +310,45 @@ class SpeedLimiter(_PluginBase):
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'ipv4',
'label': '不限速地址范围ipv4',
'placeholder': '留空默认不限速内网ipv4'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'ipv6',
'label': '不限速地址范围ipv6',
'placeholder': '留空默认不限速内网ipv6'
}
}
]
}
]
}
]
}
], {
@@ -314,6 +361,8 @@ class SpeedLimiter(_PluginBase):
"noplay_down_speed": None,
"bandwidth": None,
"allocation_ratio": "",
"ipv4": "",
"ipv6": ""
}
def get_page(self) -> List[dict]:
@@ -326,6 +375,8 @@ class SpeedLimiter(_PluginBase):
"""
if not self._qb and not self._tr:
return
if not self._enabled:
return
if event:
event_data: WebhookEventInfo = event.event_data
if event_data.event not in ["playback.start", "PlaybackStart", "media.play"]:
@@ -347,7 +398,13 @@ class SpeedLimiter(_PluginBase):
logger.error(f"获取Emby播放会话失败{str(e)}")
# 计算有效比特率
for session in playing_sessions:
if not IpUtils.is_private_ip(session.get("RemoteEndPoint")) \
# 设置了不限速范围则判断session ip是否在不限速范围内
if self._unlimited_ips["ipv4"] or self._unlimited_ips["ipv6"]:
if not self.__allow_access(self._unlimited_ips, session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
total_bit_rate += int(session.get("NowPlayingItem", {}).get("Bitrate") or 0)
# 未设置不限速范围则默认不限速内网ip
elif not IpUtils.is_private_ip(session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
total_bit_rate += int(session.get("NowPlayingItem", {}).get("Bitrate") or 0)
elif settings.MEDIASERVER == "jellyfin":
@@ -363,7 +420,15 @@ class SpeedLimiter(_PluginBase):
logger.error(f"获取Jellyfin播放会话失败{str(e)}")
# 计算有效比特率
for session in playing_sessions:
if not IpUtils.is_private_ip(session.get("RemoteEndPoint")) \
# 设置了不限速范围则判断session ip是否在不限速范围内
if self._unlimited_ips["ipv4"] or self._unlimited_ips["ipv6"]:
if not self.__allow_access(self._unlimited_ips, session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
media_streams = session.get("NowPlayingItem", {}).get("MediaStreams") or []
for media_stream in media_streams:
total_bit_rate += int(media_stream.get("BitRate") or 0)
# 未设置不限速范围则默认不限速内网ip
elif not IpUtils.is_private_ip(session.get("RemoteEndPoint")) \
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
media_streams = session.get("NowPlayingItem", {}).get("MediaStreams") or []
for media_stream in media_streams:
@@ -381,7 +446,13 @@ class SpeedLimiter(_PluginBase):
})
# 计算有效比特率
for session in playing_sessions:
if IpUtils.is_private_ip(session.get("address")) \
# 设置了不限速范围则判断session ip是否在不限速范围内
if self._unlimited_ips["ipv4"] or self._unlimited_ips["ipv6"]:
if not self.__allow_access(self._unlimited_ips, session.get("address")) \
and session.get("type") == "Video":
total_bit_rate += int(session.get("bitrate") or 0)
# 未设置不限速范围则默认不限速内网ip
elif not IpUtils.is_private_ip(session.get("address")) \
and session.get("type") == "Video":
total_bit_rate += int(session.get("bitrate") or 0)
@@ -424,15 +495,7 @@ class SpeedLimiter(_PluginBase):
return
else:
self._current_state = state
if upload_limit:
text = f"上传:{upload_limit} KB/s"
else:
text = f"上传:未限速"
if download_limit:
text = f"{text}\n下载:{download_limit} KB/s"
else:
text = f"{text}\n下载:未限速"
try:
cnt = 0
for download in self._downloader:
@@ -449,9 +512,16 @@ class SpeedLimiter(_PluginBase):
else:
# 按比例
allocation_count = sum([int(i) for i in self._allocation_ratio.split(":")])
upload_limit = int(upload_limit * int(self._allocation_ratio[cnt]) / allocation_count)
upload_limit = int(upload_limit * int(self._allocation_ratio.split(":")[cnt]) / allocation_count)
cnt += 1
if upload_limit:
text = f"上传:{upload_limit} KB/s"
else:
text = f"上传:未限速"
if download_limit:
text = f"{text}\n下载:{download_limit} KB/s"
else:
text = f"{text}\n下载:未限速"
if str(download) == 'qbittorrent':
if self._qb:
self._qb.set_speed_limit(download_limit=download_limit, upload_limit=upload_limit)
@@ -471,28 +541,65 @@ class SpeedLimiter(_PluginBase):
title=title,
text=f"Qbittorrent 已取消限速"
)
else:
if self._tr:
self._tr.set_speed_limit(download_limit=download_limit, upload_limit=upload_limit)
# 发送通知
if self._notify:
title = "【播放限速】"
if upload_limit or download_limit:
subtitle = f"Transmission 开始{limit_type}限速"
self.post_message(
mtype=NotificationType.MediaServer,
title=title,
text=f"{subtitle}\n{text}"
)
else:
self.post_message(
mtype=NotificationType.MediaServer,
title=title,
text=f"Transmission 已取消限速"
)
else:
if self._tr:
self._tr.set_speed_limit(download_limit=download_limit, upload_limit=upload_limit)
# 发送通知
if self._notify:
title = "【播放限速】"
if upload_limit or download_limit:
subtitle = f"Transmission 开始{limit_type}限速"
self.post_message(
mtype=NotificationType.MediaServer,
title=title,
text=f"{subtitle}\n{text}"
)
else:
self.post_message(
mtype=NotificationType.MediaServer,
title=title,
text=f"Transmission 已取消限速"
)
except Exception as e:
logger.error(f"设置限速失败:{str(e)}")
@staticmethod
def __allow_access(allow_ips, ip):
"""
判断IP是否合法
:param allow_ips: 允许的IP范围 {"ipv4":, "ipv6":}
:param ip: 需要检查的ip
"""
if not allow_ips:
return True
try:
ipaddr = ipaddress.ip_address(ip)
if ipaddr.version == 4:
if not allow_ips.get('ipv4'):
return True
allow_ipv4s = allow_ips.get('ipv4').split(",")
for allow_ipv4 in allow_ipv4s:
if ipaddr in ipaddress.ip_network(allow_ipv4, strict=False):
return True
elif ipaddr.ipv4_mapped:
if not allow_ips.get('ipv4'):
return True
allow_ipv4s = allow_ips.get('ipv4').split(",")
for allow_ipv4 in allow_ipv4s:
if ipaddr.ipv4_mapped in ipaddress.ip_network(allow_ipv4, strict=False):
return True
else:
if not allow_ips.get('ipv6'):
return True
allow_ipv6s = allow_ips.get('ipv6').split(",")
for allow_ipv6 in allow_ipv6s:
if ipaddr in ipaddress.ip_network(allow_ipv6, strict=False):
return True
except Exception as err:
print(str(err))
return False
return False
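The new `__allow_access` helper above walks the configured CIDR lists by hand. A compact equivalent built on the same `ipaddress` calls (a simplified sketch, not the plugin's exact code):

```python
import ipaddress


def allow_access(allow_ips: dict, ip: str) -> bool:
    """Return True when ip falls inside any configured CIDR range.
    allow_ips looks like {"ipv4": "10.0.0.0/8,192.168.0.0/16", "ipv6": "fd00::/8"};
    an empty entry for an address family means "allow everything" in that family."""
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        return False
    # IPv4-mapped IPv6 addresses (e.g. ::ffff:192.168.1.2) are checked as IPv4
    if addr.version == 6 and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    ranges = allow_ips.get("ipv4" if addr.version == 4 else "ipv6") or ""
    if not ranges:
        return True
    return any(
        addr in ipaddress.ip_network(r.strip(), strict=False)
        for r in ranges.split(",") if r.strip()
    )
```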
def stop_service(self):
"""
退出插件
@@ -1,19 +1,17 @@
import os
import time
from datetime import datetime
from pathlib import Path
from typing import Any, List, Dict, Tuple, Optional
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.log import logger
from app.modules.qbittorrent import Qbittorrent
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple, Optional
from app.log import logger
class SyncDownloadFiles(_PluginBase):
@@ -36,7 +34,7 @@ class SyncDownloadFiles(_PluginBase):
# 加载顺序
plugin_order = 20
# 可使用的用户级别
auth_level = 2
auth_level = 1
# 私有属性
_enabled = False
@@ -46,6 +44,7 @@ class SyncDownloadFiles(_PluginBase):
tr = None
_onlyonce = False
_history = False
_clear = False
_downloaders = []
_dirs = None
downloadhis = None
@@ -58,21 +57,33 @@ class SyncDownloadFiles(_PluginBase):
# 停止现有任务
self.stop_service()
self.qb = Qbittorrent()
self.tr = Transmission()
self.downloadhis = DownloadHistoryOper(self.db)
self.transferhis = TransferHistoryOper(self.db)
if config:
self._enabled = config.get('enabled')
self._time = config.get('time') or 6
self._history = config.get('history')
self._clear = config.get('clear')
self._onlyonce = config.get("onlyonce")
self._downloaders = config.get('downloaders') or []
self._dirs = config.get("dirs") or ""
if self._clear:
# 清理下载器文件记录
self.downloadhis.truncate_files()
# 清理下载器最后处理记录
for downloader in self._downloaders:
# 获取最后同步时间
self.del_data(f"last_sync_time_{downloader}")
# 关闭clear
self._clear = False
self.__update_config()
if self._onlyonce:
# 执行一次
self.qb = Qbittorrent()
self.tr = Transmission()
self.downloadhis = DownloadHistoryOper(self.db)
self.transferhis = TransferHistoryOper(self.db)
# 关闭onlyonce
self._onlyonce = False
self.__update_config()
@@ -86,7 +97,7 @@ class SyncDownloadFiles(_PluginBase):
try:
self._scheduler.add_job(func=self.sync,
trigger="interval",
hours=float(self._time.strip()),
hours=float(str(self._time).strip()),
name="自动同步下载器文件记录")
logger.info(f"自动同步下载器文件记录服务启动,时间间隔 {self._time} 小时")
except Exception as err:
@@ -151,11 +162,6 @@ class SyncDownloadFiles(_PluginBase):
# 获取种子download_dir
download_dir = self.__get_download_dir(torrent, downloader)
# 获取种子name
torrent_name = self.__get_torrent_name(torrent, downloader)
# 获取种子文件
torrent_files = self.__get_torrent_files(torrent, downloader, downloader_obj)
logger.info(f"开始同步种子 {hash_str}, 文件数 {len(torrent_files)}")
# 处理路径映射
if self._dirs:
@@ -164,14 +170,39 @@ class SyncDownloadFiles(_PluginBase):
sub_paths = path.split(":")
download_dir = download_dir.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# 获取种子name
torrent_name = self.__get_torrent_name(torrent, downloader)
# 种子保存目录
save_path = Path(download_dir).joinpath(torrent_name)
# 获取种子文件
torrent_files = self.__get_torrent_files(torrent, downloader, downloader_obj)
logger.info(f"开始同步种子 {hash_str}, 文件数 {len(torrent_files)}")
download_files = []
for file in torrent_files:
file_name = self.__get_file_name(file, downloader)
full_path = Path(download_dir).joinpath(torrent_name, file_name)
# 过滤掉没下载的文件
if not self.__is_download(file, downloader):
continue
# 种子文件路径
file_path_str = self.__get_file_path(file, downloader)
file_path = Path(file_path_str)
# 只处理视频格式
if not file_path.suffix \
or file_path.suffix not in settings.RMT_MEDIAEXT:
continue
# 种子文件根路径
root_path = file_path.parts[0]
# 不含种子名称的种子文件相对路径
if root_path == torrent_name:
rel_path = str(file_path.relative_to(root_path))
else:
rel_path = str(file_path)
# 完整路径
full_path = save_path.joinpath(rel_path)
if self._history:
transferhis = self.transferhis.get_by_src(str(full_path))
if transferhis and not transferhis.download_hash:
logger.info(f"开始补充转移记录 {transferhis.id} download_hash {hash_str}")
logger.info(f"开始补充转移记录{transferhis.id} download_hash {hash_str}")
self.transferhis.update_download_hash(historyid=transferhis.id,
download_hash=hash_str)
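The `rel_path` handling above strips a leading torrent-name component from the in-torrent file path before joining it under the save path. A self-contained sketch of that path arithmetic (using `PurePosixPath` for determinism; the names are illustrative):

```python
from pathlib import PurePosixPath


def build_paths(download_dir: str, torrent_name: str, file_path_str: str):
    """Return (full_path, rel_path): drop the torrent-name prefix from the
    file's in-torrent path, then join it under download_dir/torrent_name."""
    file_path = PurePosixPath(file_path_str)
    save_path = PurePosixPath(download_dir) / torrent_name
    if file_path.parts and file_path.parts[0] == torrent_name:
        # path already starts with the torrent name -> keep only the remainder
        rel_path = file_path.relative_to(file_path.parts[0])
    else:
        rel_path = file_path
    return str(save_path / rel_path), str(rel_path)
```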
@@ -181,8 +212,8 @@ class SyncDownloadFiles(_PluginBase):
"download_hash": hash_str,
"downloader": downloader,
"fullpath": str(full_path),
"savepath": str(Path(download_dir).joinpath(torrent_name)),
"filepath": file_name,
"savepath": str(save_path),
"filepath": rel_path,
"torrentname": torrent_name,
}
)
@@ -206,6 +237,7 @@ class SyncDownloadFiles(_PluginBase):
"enabled": self._enabled,
"time": self._time,
"history": self._history,
"clear": self._clear,
"onlyonce": self._onlyonce,
"downloaders": self._downloaders,
"dirs": self._dirs
@@ -268,12 +300,26 @@ class SyncDownloadFiles(_PluginBase):
return True
@staticmethod
def __get_file_name(file: Any, dl_type: str):
def __is_download(file: Any, dl_type: str):
"""
获取文件名
判断文件是否被下载
"""
try:
return os.path.basename(file.get("name")) if dl_type == "qbittorrent" else os.path.basename(file.name)
if dl_type == "qbittorrent":
return True
else:
return file.completed and file.completed > 0
except Exception as e:
print(str(e))
return True
@staticmethod
def __get_file_path(file: Any, dl_type: str):
"""
获取文件路径
"""
try:
return file.get("name") if dl_type == "qbittorrent" else file.name
except Exception as e:
print(str(e))
return ""
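The two helpers above hide a client difference: qBittorrent file entries are dicts, Transmission's are objects with attributes. A standalone sketch of that duck-typed access (names mirror the diff; behavior simplified):

```python
def file_path(file, dl_type: str) -> str:
    """qBittorrent file entries are dicts; Transmission's expose attributes."""
    try:
        return file.get("name") if dl_type == "qbittorrent" else file.name
    except Exception:
        return ""


def is_downloaded(file, dl_type: str) -> bool:
    """qBittorrent only lists files it is downloading, so treat them as done;
    Transmission reports completed bytes per file."""
    if dl_type == "qbittorrent":
        return True
    return bool(getattr(file, "completed", 0))
```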
@@ -402,6 +448,22 @@ class SyncDownloadFiles(_PluginBase):
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clear',
'label': '清理数据',
}
}
]
},
]
},
{
@@ -494,6 +556,7 @@ class SyncDownloadFiles(_PluginBase):
"enabled": False,
"onlyonce": False,
"history": False,
"clear": False,
"time": 6,
"dirs": "",
"downloaders": []
@@ -605,7 +605,7 @@ class TorrentRemover(_PluginBase):
if torrents and message_text and self._notify:
self.post_message(
mtype=NotificationType.SiteMessage,
title=f"【自动删种任务执行完成】",
title=f"【自动删种任务完成】",
text=message_text
)
except Exception as e:
@@ -624,7 +624,7 @@ class TorrentRemover(_PluginBase):
# 平均上传速度
torrent_upload_avs = torrent.uploaded / torrent_seeding_time if torrent_seeding_time else 0
# 大小 单位GB
sizes = self._size.split(',') if self._size else []
sizes = self._size.split('-') if self._size else []
minsize = sizes[0] * 1024 * 1024 * 1024 if sizes else 0
maxsize = sizes[-1] * 1024 * 1024 * 1024 if sizes else 0
# 分享率
@@ -634,7 +634,7 @@ class TorrentRemover(_PluginBase):
if self._time and torrent_seeding_time <= float(self._time) * 3600:
return None
# 文件大小
if self._size and (torrent.size >= maxsize or torrent.size <= minsize):
if self._size and (torrent.size >= int(maxsize) or torrent.size <= int(minsize)):
return None
if self._upspeed and torrent_upload_avs >= float(self._upspeed) * 1024:
return None
@@ -668,7 +668,7 @@ class TorrentRemover(_PluginBase):
# 平均上传速度
torrent_upload_avs = torrent_uploaded / torrent_seeding_time if torrent_seeding_time else 0
# 大小 单位GB
sizes = self._size.split(',') if self._size else []
sizes = self._size.split('-') if self._size else []
minsize = sizes[0] * 1024 * 1024 * 1024 if sizes else 0
maxsize = sizes[-1] * 1024 * 1024 * 1024 if sizes else 0
# 分享率
@@ -676,7 +676,7 @@ class TorrentRemover(_PluginBase):
return None
if self._time and torrent_seeding_time <= float(self._time) * 3600:
return None
if self._size and (torrent.total_size >= maxsize or torrent.total_size <= minsize):
if self._size and (torrent.total_size >= int(maxsize) or torrent.total_size <= int(minsize)):
return None
if self._upspeed and torrent_upload_avs >= float(self._upspeed) * 1024:
return None
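The size filter above now splits `self._size` on `-` and casts the bounds before comparing. A sketch of that parsing, assuming the bounds are given in GB (note the cast matters: multiplying the raw string by `1024 ** 3` would repeat the string, not scale it):

```python
def parse_size_range(size_cfg: str):
    """Parse a 'min-max' range in GB into byte bounds ('10' alone means 10-10)."""
    if not size_cfg:
        return None
    parts = size_cfg.split("-")
    # convert to float before scaling -- "10" * 1024**3 would be string repetition
    minsize = float(parts[0]) * 1024 ** 3
    maxsize = float(parts[-1]) * 1024 ** 3
    return minsize, maxsize


def size_in_range(total_size: int, size_cfg: str) -> bool:
    """Mirror the filter above: reject torrents at or outside the bounds."""
    bounds = parse_size_range(size_cfg)
    if not bounds:
        return True
    minsize, maxsize = bounds
    return minsize < total_size < maxsize
```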
@@ -736,14 +736,15 @@ class TorrentRemover(_PluginBase):
name = remove_torrent.get("name")
size = remove_torrent.get("size")
for torrent in torrents:
if torrent.name == name \
and torrent.size == size \
and torrent.hash not in [t.get("id") for t in remove_torrents]:
remove_torrents_plus.append({
"id": torrent.hash,
"name": torrent.name,
"site": StringUtils.get_url_sld(torrent.tracker),
"size": torrent.size
})
if downloader == "qbittorrent":
item_plus = self.__get_qb_torrent(torrent)
else:
item_plus = self.__get_tr_torrent(torrent)
if not item_plus:
continue
if item_plus.get("name") == name \
and item_plus.get("size") == size \
and item_plus.get("id") not in [t.get("id") for t in remove_torrents]:
remove_torrents_plus.append(item_plus)
remove_torrents.extend(remove_torrents_plus)
return remove_torrents
@@ -7,7 +7,7 @@ from typing import Any, List, Dict, Tuple, Optional
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from torrentool.torrent import Torrent
from bencode import bdecode, bencode
from app.core.config import settings
from app.helper.torrent import TorrentHelper
@@ -582,26 +582,50 @@ class TorrentTransfer(_PluginBase):
continue
# 如果源下载器是QB检查是否有Tracker没有的话额外获取
trackers = None
if downloader == "qbittorrent":
# 读取种子内容、解析种子文件
content = torrent_file.read_bytes()
if not content:
logger.warn(f"读取种子文件失败:{torrent_file}")
fail += 1
continue
# 读取trackers
try:
torrent_main = Torrent.from_file(torrent_file)
main_announce = torrent_main.announce_urls
torrent_main = bdecode(content)
main_announce = torrent_main.get('announce')
except Exception as err:
logger.error(f"解析种子文件 {torrent_file} 失败:{err}")
# 失败计数
logger.warn(f"解析种子文件 {torrent_file} 失败:{err}")
fail += 1
continue
if not main_announce:
logger.info(f"{torrent_item.get('hash')} 未发现tracker信息尝试补充tracker ...")
# 从源下载任务信息中获取Tracker
torrent = torrent_item.get('torrent')
# 源trackers
trackers = [tracker.get("url") for tracker in torrent.trackers
if str(tracker.get("url")).startswith('http')]
logger.info(f"获取到源tracker{trackers}")
logger.info(f"{torrent_item.get('hash')} 未发现tracker信息尝试补充tracker信息...")
# 读取fastresume文件
fastresume_file = Path(self._fromtorrentpath) / f"{torrent_item.get('hash')}.fastresume"
if not fastresume_file.exists():
logger.warn(f"fastresume文件不存在{fastresume_file}")
fail += 1
continue
# 尝试补充trackers
try:
# 解析fastresume文件
fastresume = fastresume_file.read_bytes()
torrent_fastresume = bdecode(fastresume)
# 读取trackers
fastresume_trackers = torrent_fastresume.get('trackers')
if isinstance(fastresume_trackers, list) \
and len(fastresume_trackers) > 0 \
and fastresume_trackers[0]:
# 重新赋值
torrent_main['announce'] = fastresume_trackers[0][0]
# 替换种子文件路径
torrent_file = settings.TEMP_PATH / f"{torrent_item.get('hash')}.torrent"
# 编码并保存到临时文件
torrent_file.write_bytes(bencode(torrent_main))
except Exception as err:
logger.error(f"解析fastresume文件 {fastresume_file} 出错:{err}")
fail += 1
continue
# 发送到另一个下载器中下载:默认暂停、传输下载路径、关闭自动管理模式
logger.info(f"添加转移做种任务到下载器 {todownloader}{torrent_file}")
@@ -617,11 +641,6 @@ class TorrentTransfer(_PluginBase):
# 下载成功
logger.info(f"成功添加转移做种任务,种子文件:{torrent_file}")
# 补充Tracker
if trackers:
logger.info(f"开始补充 {download_id} 的tracker{trackers}")
todownloader_obj.add_trackers(ids=[download_id], trackers=trackers)
# TR会自动校验QB需要手动校验
if todownloader == "qbittorrent":
logger.info(f"qbittorrent 开始校验 {download_id} ...")
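The switch from torrentool to bencode above decodes the .torrent, pulls `trackers` out of the matching `.fastresume`, rewrites `announce`, and re-encodes. The idea can be sketched with a minimal bencode-encoder subset (illustrative only; the diff uses the `bencode` package instead):

```python
def bencode_obj(obj) -> bytes:
    """Minimal bencode encoder for ints, bytes/str, lists and dicts."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, str):
        return bencode_obj(obj.encode())
    if isinstance(obj, list):
        return b"l" + b"".join(bencode_obj(i) for i in obj) + b"e"
    if isinstance(obj, dict):
        # bencode requires keys in sorted order
        return b"d" + b"".join(bencode_obj(k) + bencode_obj(v)
                               for k, v in sorted(obj.items())) + b"e"
    raise TypeError(f"cannot bencode {type(obj)!r}")


def patch_announce(torrent: dict, fastresume: dict) -> dict:
    """Copy the first tracker tier's first URL from the fastresume data
    into the torrent's 'announce' field, as the hunk above does."""
    trackers = fastresume.get("trackers")
    if isinstance(trackers, list) and trackers and trackers[0]:
        torrent["announce"] = trackers[0][0]
    return torrent
```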
@@ -26,7 +26,7 @@ class WebHook(_PluginBase):
     # Load order
     plugin_order = 14
     # Required user auth level
-    auth_level = 2
+    auth_level = 1
     # Private attributes
     _webhook_url = None


@@ -8,11 +8,10 @@ from apscheduler.schedulers.background import BackgroundScheduler
 from app.chain import ChainBase
 from app.chain.cookiecloud import CookieCloudChain
 from app.chain.mediaserver import MediaServerChain
-from app.chain.rss import RssChain
 from app.chain.subscribe import SubscribeChain
 from app.chain.transfer import TransferChain
 from app.core.config import settings
-from app.db import ScopedSession
+from app.db import SessionFactory
 from app.log import logger
 from app.utils.singleton import Singleton
 from app.utils.timer import TimerUtils
@@ -40,7 +39,7 @@ class Scheduler(metaclass=Singleton):
     def __init__(self):
         # Database connection
-        self._db = ScopedSession()
+        self._db = SessionFactory()
         # Do not start scheduled services in debug mode
         if settings.DEV:
             return
@@ -71,15 +70,20 @@ class Scheduler(metaclass=Singleton):
         self._scheduler.add_job(SubscribeChain(self._db).search, "interval",
                                 hours=24, kwargs={'state': 'R'}, name="Subscription search")
         # Periodically refresh site home-page torrent cache and match subscriptions
-        triggers = TimerUtils.random_scheduler(num_executions=30)
-        for trigger in triggers:
-            self._scheduler.add_job(SubscribeChain(self._db).refresh, "cron",
-                                    hour=trigger.hour, minute=trigger.minute, name="Subscription refresh")
-        # Custom subscriptions
-        self._scheduler.add_job(RssChain(self._db).refresh, "interval",
-                                minutes=30, name="Custom subscription refresh")
+        if settings.SUBSCRIBE_MODE == "spider":
+            # Site home-page torrent spider refresh mode
+            triggers = TimerUtils.random_scheduler(num_executions=30)
+            for trigger in triggers:
+                self._scheduler.add_job(SubscribeChain(self._db).refresh, "cron",
+                                        hour=trigger.hour, minute=trigger.minute, name="Subscription refresh")
+        else:
+            # RSS subscription mode
+            if not settings.SUBSCRIBE_RSS_INTERVAL:
+                settings.SUBSCRIBE_RSS_INTERVAL = 30
+            elif settings.SUBSCRIBE_RSS_INTERVAL < 5:
+                settings.SUBSCRIBE_RSS_INTERVAL = 5
+            self._scheduler.add_job(SubscribeChain(self._db).refresh, "interval",
+                                    minutes=settings.SUBSCRIBE_RSS_INTERVAL, name="Subscription refresh")
         # Downloader file transfer, every 5 minutes
         if settings.DOWNLOADER_MONITOR:
@@ -106,7 +110,5 @@ class Scheduler(metaclass=Singleton):
         """
         if self._scheduler.running:
             self._scheduler.shutdown()
-
-    def __del__(self):
-        if self._db:
-            self._db.close()
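The RSS branch above normalizes `settings.SUBSCRIBE_RSS_INTERVAL` before registering the interval job: an unset or zero value falls back to 30 minutes, and anything under 5 is raised to 5 so sites are not polled too aggressively. The same rule as a standalone helper (a sketch; the real code mutates the settings object in place):

```python
def normalize_rss_interval(minutes) -> int:
    """Return the effective RSS refresh interval in minutes."""
    if not minutes:      # None or 0 -> default of 30 minutes
        return 30
    if minutes < 5:      # floor at 5 minutes to protect sites
        return 5
    return int(minutes)


print(normalize_rss_interval(None), normalize_rss_interval(3), normalize_rss_interval(45))
# → 30 5 45
```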


@@ -12,5 +12,4 @@ from .mediaserver import *
 from .message import *
 from .tmdb import *
 from .transfer import *
-from .rss import *
 from .file import *


@@ -16,3 +16,5 @@ class FileItem(BaseModel):
     extension: Optional[str] = None
     # File size
     size: Optional[int] = None
+    # Modification time
+    modify_time: Optional[float] = None


@@ -32,3 +32,5 @@ class Plugin(BaseModel):
     installed: Optional[bool] = False
     # Running state
     state: Optional[bool] = False
+    # Whether there is a detail page
+    has_page: Optional[bool] = False


@@ -1,54 +0,0 @@
-from typing import Optional
-
-from pydantic import BaseModel
-
-
-class Rss(BaseModel):
-    id: Optional[int]
-    # Name
-    name: Optional[str]
-    # RSS URL
-    url: Optional[str]
-    # Type
-    type: Optional[str]
-    # Title
-    title: Optional[str]
-    # Year
-    year: Optional[str]
-    # TMDB ID
-    tmdbid: Optional[int]
-    # Season number
-    season: Optional[int]
-    # Poster
-    poster: Optional[str]
-    # Backdrop
-    backdrop: Optional[str]
-    # Rating
-    vote: Optional[float]
-    # Overview
-    description: Optional[str]
-    # Total episodes
-    total_episode: Optional[int]
-    # Include filter
-    include: Optional[str]
-    # Exclude filter
-    exclude: Optional[str]
-    # Best-version (quality upgrade) flag
-    best_version: Optional[int]
-    # Whether to use a proxy server
-    proxy: Optional[int]
-    # Whether to apply filter rules
-    filter: Optional[int]
-    # Save path
-    save_path: Optional[str]
-    # Extra note
-    note: Optional[str]
-    # Number processed
-    processed: Optional[int]
-    # Last update time
-    last_update: Optional[str]
-    # State: 0 = disabled, 1 = enabled
-    state: Optional[int]
-
-    class Config:
-        orm_mode = True
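The deleted Rss schema relied on pydantic v1's `orm_mode = True`, which lets a response model be populated from an ORM row's attributes rather than from a dict. Conceptually that is just per-field attribute lookup, sketched here with stdlib dataclasses to avoid the pydantic dependency; `from_orm_like` and `RssOut` are hypothetical stand-ins for illustration, not pydantic API:

```python
from dataclasses import dataclass, fields
from types import SimpleNamespace
from typing import Optional

# What orm_mode buys you, conceptually: fill a schema from an object's
# attributes instead of a dict. from_orm_like mimics pydantic's Model.from_orm.


@dataclass
class RssOut:
    id: Optional[int] = None
    name: Optional[str] = None
    url: Optional[str] = None


def from_orm_like(cls, obj):
    # Copy matching attributes from the ORM-style object onto the schema;
    # attributes the schema does not declare are simply ignored.
    return cls(**{f.name: getattr(obj, f.name, None) for f in fields(cls)})


row = SimpleNamespace(id=1, name="demo feed", url="http://rss.example/feed", extra="ignored")
print(from_orm_like(RssOut, row))
# → RssOut(id=1, name='demo feed', url='http://rss.example/feed')
```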


@@ -6,6 +6,9 @@ from pydantic import BaseModel
 class Token(BaseModel):
     access_token: str
     token_type: str
+    super_user: bool
+    user_name: str
+    avatar: Optional[str] = None
 
 
 class TokenPayload(BaseModel):

Some files were not shown because too many files have changed in this diff.