Compare commits


380 Commits

Author SHA1 Message Date
jxxghp
d231d75446 v1.1.8
- Fixed Jellyfin/Plex webhook notification messages
- Fixed blocked words not taking effect during manual organizing
- Improved year matching for TV series
- Improved frequency control for site torrent indexing
- Added an alert when a site's share ratio is low
- Added a remote interactive command to restart the system
2023-09-10 17:43:04 +08:00
jxxghp
afb5874350 fix #536 2023-09-10 17:35:58 +08:00
jxxghp
1bd7b5c77e fix jellyfin webhook 2023-09-10 17:07:24 +08:00
jxxghp
ba41de61cb fix plex webhook 2023-09-10 12:57:51 +08:00
jxxghp
ae40d32115 fix bug 2023-09-10 09:15:12 +08:00
jxxghp
3fe4c9467e fix 2023-09-10 09:06:00 +08:00
jxxghp
b89512cc33 fix #526 2023-09-10 09:02:46 +08:00
jxxghp
f3b12bed20 feat low share-ratio warning notification 2023-09-10 08:54:33 +08:00
jxxghp
08c7fff5ab fix README.md 2023-09-10 08:32:52 +08:00
jxxghp
9c20d1a270 Merge pull request #530 from thsrite/main
feat fill in years for all seasons of a series
2023-09-09 22:06:58 +08:00
thsrite
b7b1aee878 fix 2023-09-09 22:03:51 +08:00
jxxghp
f998b39152 fix deleted torrent count could not be calculated 2023-09-09 21:58:49 +08:00
jxxghp
ca01db31a9 fix LIBRARY_PATH 2023-09-09 21:41:55 +08:00
thsrite
a0b8cc6719 feat fill in years for all seasons of a series 2023-09-09 21:24:07 +08:00
jxxghp
66b91abe90 fix sites.cpython-311-darwin 2023-09-09 20:58:52 +08:00
jxxghp
9b17d55ac0 fix db session 2023-09-09 20:56:37 +08:00
jxxghp
a7a0889867 Merge pull request #528 from thsrite/main 2023-09-09 20:17:09 +08:00
thsrite
af6cf306c8 fix restart via interactive command 2023-09-09 20:01:43 +08:00
jxxghp
20f35854f9 fix update 2023-09-09 19:43:02 +08:00
jxxghp
e5165c8fea fix plugin db session 2023-09-09 19:41:06 +08:00
jxxghp
0e36d003c0 fix db session 2023-09-09 19:26:56 +08:00
jxxghp
ccc249f29d Merge pull request #527 from developer-wlj/wlj0909 2023-09-09 18:31:27 +08:00
mayun110
f4edb32886 fix directory retrieval under Windows directory monitoring 2023-09-09 18:11:50 +08:00
jxxghp
475a84bfa6 Merge pull request #525 from thsrite/main 2023-09-09 17:53:29 +08:00
mayun110
3914ff4dd6 fix directory retrieval on Windows 2023-09-09 17:49:40 +08:00
jxxghp
5bcbacf3a5 feat global shared torrents cache 2023-09-09 17:42:31 +08:00
jxxghp
27238ac467 fix brushflow plugin 2023-09-09 16:49:15 +08:00
thsrite
019d40c17a fix cross-seeding plugin excludes deleted sites 2023-09-09 16:40:09 +08:00
jxxghp
fa5b92214f fix ssd 2023-09-09 16:24:53 +08:00
jxxghp
32a5f67e72 Merge pull request #524 from thsrite/main 2023-09-09 15:56:02 +08:00
thsrite
d6e9c14183 fix qb torrent deletion 2023-09-09 15:47:51 +08:00
jxxghp
87325d5bbd Merge pull request #523 from thsrite/main 2023-09-09 15:07:53 +08:00
thsrite
67ead871c1 fix remove the clear-cache button 2023-09-09 15:06:44 +08:00
jxxghp
691beb1186 Merge pull request #522 from DDS-Derek/main 2023-09-09 14:57:02 +08:00
jxxghp
b30d3c7dac Merge pull request #521 from WithdewHua/rsssubscribe 2023-09-09 14:55:41 +08:00
DDSRem
5e048f0150 feat: improve container id retrieval 2023-09-09 14:18:10 +08:00
WithdewHua
cb2cfe9d85 fix: turn off the cache-cleanup switch 2023-09-09 14:13:17 +08:00
jxxghp
482fca9b8c Merge pull request #520 from DDS-Derek/main
fix: failed to obtain container id
2023-09-09 12:08:50 +08:00
DDSRem
42511b95d8 fix: failed to obtain container id 2023-09-09 12:03:48 +08:00
jxxghp
b18e901fbd fix plugin ui 2023-09-09 11:37:34 +08:00
jxxghp
a30e3f49a3 v1.1.7
- Fixed file transfers being unable to overwrite
- Fixed filter rules only being removable from the end
- Improved the built-in restart to support non-root environments (requires re-pulling the image)
- File management now supports sorting
- Improved plugin page interaction; plugin data is shown first
2023-09-09 11:08:45 +08:00
jxxghp
65d202e636 fix README.md 2023-09-09 10:51:59 +08:00
jxxghp
4373c0596b Merge pull request #518 from DDS-Derek/main
fix: port conflict
2023-09-09 10:45:54 +08:00
DDSRem
0136d9fe06 fix: port conflict 2023-09-09 10:44:05 +08:00
jxxghp
933c6d838c fix #497 2023-09-09 08:27:40 +08:00
jxxghp
7ce656148f fix #508 2023-09-09 08:19:17 +08:00
jxxghp
c05ffed6df fix #514 file management supports sorting 2023-09-09 08:00:17 +08:00
jxxghp
6770ba3a35 feat file management API sorting 2023-09-08 22:48:53 +08:00
jxxghp
3b73dfcdc6 fix unable to overwrite during file transfer 2023-09-08 22:36:31 +08:00
jxxghp
100ff97017 Merge pull request #515 from thsrite/main 2023-09-08 21:49:49 +08:00
thsrite
4fe96178ee fix 2023-09-08 21:44:32 +08:00
thsrite
86d484fac0 fix 2023-09-08 21:41:30 +08:00
thsrite
db23b62fd1 fix 2023-09-08 21:31:36 +08:00
jxxghp
b84c8fd7f1 Merge pull request #512 from thsrite/main 2023-09-08 21:29:46 +08:00
jxxghp
c9f6c75069 Merge pull request #510 from DDS-Derek/main 2023-09-08 21:26:00 +08:00
thsrite
846459c244 fix wechat token 2023-09-08 21:21:43 +08:00
DDSRem
c4898d04aa docs: update 2023-09-08 20:38:07 +08:00
DDSRem
c8bc6a4618 fix: restart update 2023-09-08 20:33:23 +08:00
DDSRem
55dce26cb8 test: restart 2023-09-08 19:55:03 +08:00
DDSRem
ae3b73a73f feat: improve restart 2023-09-08 19:49:10 +08:00
jxxghp
091df01b7c fix plugin 2023-09-08 16:48:13 +08:00
jxxghp
20c4c7d6e6 add release time 2023-09-08 16:36:02 +08:00
jxxghp
eb1e045d8f Merge remote-tracking branch 'origin/main' 2023-09-08 15:38:38 +08:00
jxxghp
678638e9f1 feat plugin API 2023-09-08 15:38:32 +08:00
jxxghp
d8b78d3051 Merge pull request #505 from thsrite/main
fix message-forwarding plugin clear-cache button
2023-09-08 13:15:45 +08:00
thsrite
eaf0d17118 fix message-forwarding plugin clear-cache button 2023-09-08 13:13:11 +08:00
jxxghp
81bcfef6ec Merge pull request #504 from thsrite/main 2023-09-08 13:11:13 +08:00
thsrite
0997691b23 fix time format 2023-09-08 13:06:40 +08:00
jxxghp
d1f9647a63 Merge pull request #503 from thsrite/main
feat sign-in plugin supports configuring sign-in and login sites separately
2023-09-08 12:26:15 +08:00
thsrite
64a04ba8ed fix 2023-09-08 12:24:27 +08:00
jxxghp
726c130f1f Merge remote-tracking branch 'origin/main' 2023-09-08 12:23:29 +08:00
jxxghp
215b56b9f2 feat log jellyfin/plex webhook payloads 2023-09-08 12:23:19 +08:00
thsrite
516bd8bc30 Merge remote-tracking branch 'origin/main' 2023-09-08 12:21:38 +08:00
thsrite
8bc6e04665 feat sign-in plugin supports configuring sign-in and login sites separately 2023-09-08 12:21:29 +08:00
jxxghp
94057cd5f1 Merge pull request #499 from thsrite/main 2023-09-08 11:24:49 +08:00
thsrite
2e80586436 Merge branch 'jxxghp:main' into main 2023-09-08 11:22:42 +08:00
thsrite
faa6d7dadd fix bug 2023-09-08 11:20:25 +08:00
jxxghp
071c81d52c v1.1.6
- Fixed a subscription-log error when no media server is configured
- The media library scraping plugin can now overwrite existing metadata and images
- Added a switch to keep existing scraped names (on by default), avoiding inconsistent names after organizing when TMDB info changes
- Season posters now prefer TMDB images when scraping
- Added a prompt when the built-in restart fails
2023-09-08 11:01:37 +08:00
jxxghp
52d4feb583 Update README.md 2023-09-08 10:45:55 +08:00
jxxghp
584e05e63e fix ui 2023-09-08 10:34:05 +08:00
jxxghp
061ff322ab fix bug 2023-09-08 10:26:00 +08:00
jxxghp
a2bcf8df9a Merge remote-tracking branch 'origin/main' 2023-09-08 10:03:23 +08:00
jxxghp
6c85040eb6 fix plugin 2023-09-08 10:03:14 +08:00
jxxghp
2e5d892120 fix plugin 2023-09-08 09:44:51 +08:00
jxxghp
43d108aea9 Merge pull request #498 from thsrite/main 2023-09-08 09:44:07 +08:00
thsrite
c46b1dd116 fix message-forwarding plugin bug 2023-09-08 09:22:59 +08:00
jxxghp
d3fac56e9a fix 2023-09-08 08:05:54 +08:00
jxxghp
b3f5b87b02 fix 2023-09-08 08:04:09 +08:00
jxxghp
03abdf9cb4 fix 2023-09-08 07:52:05 +08:00
jxxghp
42bc354e06 fix 2023-09-08 07:39:08 +08:00
jxxghp
02e81a79b2 fix 2023-09-07 23:11:08 +08:00
jxxghp
9fa4b8dfbe fix 2023-09-07 23:04:35 +08:00
jxxghp
366f59623a fix 2023-09-07 23:00:51 +08:00
jxxghp
d4c28500b7 fix plugin 2023-09-07 22:04:07 +08:00
jxxghp
5780344c43 fix 2023-09-07 20:19:03 +08:00
jxxghp
18970efc1a add index 2023-09-07 18:23:43 +08:00
jxxghp
5725584176 add index 2023-09-07 18:23:30 +08:00
jxxghp
4e26168ab5 fix plugin 2023-09-07 17:40:09 +08:00
jxxghp
f694dee71d fix 2023-09-07 16:16:04 +08:00
jxxghp
a9db0f6bbf fix 2023-09-07 16:05:48 +08:00
jxxghp
7efcde89b9 fix 2023-09-07 14:59:32 +08:00
jxxghp
1c07b306c3 Merge pull request #489 from thsrite/main
fix directory-monitoring processed logic && feat new switch for whether in-library media follows TMDB info changes; when off, the existing library name is kept
2023-09-07 14:29:08 +08:00
jxxghp
6c59a5ebb0 Merge branch 'main' into main 2023-09-07 14:29:01 +08:00
thsrite
4c7321a738 fix 2023-09-07 13:57:27 +08:00
jxxghp
f42fd023bb fix #490 2023-09-07 13:39:08 +08:00
jxxghp
9b8a4ebdd4 fix 2023-09-07 12:56:39 +08:00
jxxghp
443e2d8104 fix reduce recognition lookups during scraping 2023-09-07 12:51:49 +08:00
jxxghp
2c61d439ca feat media library scraping supports overwriting
fix type declarations
2023-09-07 12:35:35 +08:00
thsrite
e01268222c fix 2023-09-07 12:27:04 +08:00
thsrite
27ff77b504 fix type 2023-09-07 12:25:01 +08:00
thsrite
bf8893d71b fix bug when re-scraping the folder containing a file 2023-09-07 11:16:11 +08:00
thsrite
54b09a17c2 fix 2023-09-07 11:12:16 +08:00
thsrite
b01621049b feat new switch for whether in-library media follows TMDB info changes; when off, the existing library name is kept 2023-09-07 10:55:01 +08:00
thsrite
e5dc40e3c1 fix re-fetch the token and resend the request after expiry 2023-09-07 10:24:21 +08:00
thsrite
44d4bcdd19 fix directory-monitoring processed logic 2023-09-07 10:16:44 +08:00
jxxghp
b899b23d04 fix 2023-09-07 08:37:57 +08:00
jxxghp
fa23012adb fix #486 season images prefer TMDB 2023-09-07 08:03:05 +08:00
jxxghp
d836b385ae fix 2023-09-07 07:20:10 +08:00
jxxghp
15a0bc6c12 fix restart-failure prompt 2023-09-06 21:48:09 +08:00
jxxghp
22791e361d Update README.md 2023-09-06 21:41:23 +08:00
jxxghp
47b7dade5d Merge pull request #484 from thsrite/main 2023-09-06 21:23:32 +08:00
thsrite
c57d13afcc fix improve sync-delete plugin messages 2023-09-06 21:21:00 +08:00
jxxghp
8db1c2952c Merge remote-tracking branch 'origin/main' 2023-09-06 21:15:21 +08:00
jxxghp
28c19bc4e3 fix improve file-organizing progress messages 2023-09-06 21:15:10 +08:00
jxxghp
fbef1735b0 Merge pull request #482 from thsrite/main
fix bug
2023-09-06 20:19:44 +08:00
thsrite
9869af992b fix bug 2023-09-06 20:18:43 +08:00
jxxghp
b6cb241b8a Merge pull request #480 from WPF0414/main
fix: rate display issue in speed-limit notifications
2023-09-06 20:09:02 +08:00
jxxghp
7edf8e7c30 Merge pull request #481 from thsrite/main
fix bug when deleting cross-seeded torrents
2023-09-06 20:07:42 +08:00
thsrite
452161f1b8 fix bug when deleting cross-seeded torrents 2023-09-06 20:05:29 +08:00
jxxghp
f75abb27b6 v1.1.5
- Fixed only the first file being scraped during batch organizing
- Fixed files being processed repeatedly when multiple download tasks share the same download directory
- Restart can now be triggered from the web UI (requires mapping `/var/run/docker.sock` into the container)
2023-09-06 19:54:23 +08:00
wangpengfei
30311e8e56 fix: speed-limit notification
Fix incorrect notifications when limiting speed
2023-09-06 19:50:18 +08:00
jxxghp
adff3b22e9 Merge pull request #476 from thsrite/main
fix improvements to the media library sync-delete plugin
2023-09-06 16:55:10 +08:00
thsrite
013c0dea3b fix NAStool sync plugin does not handle download_hash 2023-09-06 16:22:32 +08:00
jxxghp
c593c3ba16 fix #461 do not reprocess files already transferred successfully 2023-09-06 16:12:40 +08:00
jxxghp
61b74735de fix #464 2023-09-06 16:00:42 +08:00
thsrite
952cae50e2 fix torrent-deletion logic in the sync-delete plugin 2023-09-06 15:57:46 +08:00
thsrite
7a9f89e86c fix remove the sync-delete plugin's interactive command 2023-09-06 15:34:35 +08:00
jxxghp
f14d8bec1b fix api 2023-09-06 15:29:52 +08:00
thsrite
697d5a815b fix prevent accidental deletion when titles do not match 2023-09-06 15:07:06 +08:00
thsrite
cfeaa2674d fix improvements to the media library sync-delete plugin 2023-09-06 14:26:24 +08:00
jxxghp
08f046f059 fix #465 only one file scraped during batch transfer 2023-09-06 13:04:18 +08:00
jxxghp
a66912f41a fix #465 only one file scraped during batch transfer 2023-09-06 13:01:13 +08:00
jxxghp
f244728a96 Merge remote-tracking branch 'origin/main' 2023-09-06 12:56:04 +08:00
jxxghp
576ac08a05 feat built-in restart 2023-09-06 12:55:48 +08:00
jxxghp
e874b3f294 Merge pull request #474 from thsrite/main
fix add progress to NAStool record sync…
2023-09-06 11:34:13 +08:00
thsrite
90ff0fc793 fix add progress to NAStool record sync… 2023-09-06 11:32:34 +08:00
jxxghp
259e8fc2e1 fix #463 2023-09-06 11:29:47 +08:00
jxxghp
5c0be93913 Merge pull request #471 from thsrite/main 2023-09-06 10:47:21 +08:00
thsrite
e84a5c74f6 fix prevent duplicate consumption in the sync-delete plugin 2023-09-06 09:37:02 +08:00
jxxghp
5145527d0e fix #456 2023-09-06 08:34:04 +08:00
jxxghp
e3f7f873c0 Merge pull request #462 from WPF0414/main 2023-09-05 22:38:21 +08:00
wangpengfei
84a2db2247 Update __init__.py
Fix the ratio-based bug
2023-09-05 22:35:39 +08:00
jxxghp
4902d5ebed feat duplicate detection against the local filesystem 2023-09-05 20:32:38 +08:00
jxxghp
243391ee30 fix release 2023-09-05 19:57:24 +08:00
jxxghp
c424de65b3 - Fixed site statistics sometimes not sending messages
- Fixed playback speed limiting not taking effect on TR
- Improved the downloader file sync plugin
- Improved database exception handling
- The media library sync-delete plugin now supports the Emby Webhook method
- WeChat now automatically adds an interactive command menu
- Added a new UI theme color scheme
2023-09-05 19:52:46 +08:00
jxxghp
2077eede8c Merge pull request #459 from thsrite/main 2023-09-05 19:48:27 +08:00
thsrite
876d1e01b4 fix strip in the sign-in plugin 2023-09-05 19:33:27 +08:00
thsrite
dec022fd89 fix sync-delete plugin 2023-09-05 19:30:46 +08:00
jxxghp
83829cbe27 Merge pull request #458 from thsrite/main 2023-09-05 19:08:41 +08:00
thsrite
8249f9356f fix sync-delete plugin adapted to the emby webhook method 2023-09-05 19:07:07 +08:00
jxxghp
b5fc6cdd1e fix unified handling of db transaction rollbacks 2023-09-05 18:19:02 +08:00
jxxghp
51b959cff8 Merge remote-tracking branch 'origin/main' 2023-09-05 17:11:09 +08:00
jxxghp
36880a8b7d fix download file records only register selected files 2023-09-05 17:11:02 +08:00
jxxghp
380cc7552f Merge pull request #453 from thsrite/main
fix path replacement in the sync plugin
2023-09-05 16:57:04 +08:00
thsrite
0f1c8cb226 Merge branch 'jxxghp:main' into main 2023-09-05 16:55:08 +08:00
thsrite
7435fb0c10 fix tr file sync filters out files not yet downloaded 2023-09-05 16:52:02 +08:00
jxxghp
1a03981463 fix #193 2023-09-05 16:43:34 +08:00
jxxghp
4cb7a488a9 fix #193 2023-09-05 16:43:02 +08:00
jxxghp
c69762d4c9 fix #448 TR speed limit not taking effect 2023-09-05 16:35:15 +08:00
thsrite
03d9bf6d05 fix path replacement 2023-09-05 16:20:42 +08:00
jxxghp
6a08b4ba7f fix increase the DB connection wait time to avoid "database locked" errors 2023-09-05 16:18:04 +08:00
jxxghp
99218515ea fix some database operations were missing commits 2023-09-05 16:12:43 +08:00
jxxghp
c3a0a839c3 Merge pull request #450 from thsrite/main
fix interactive command replies return via the original channel
2023-09-05 13:39:14 +08:00
thsrite
351513bcbc fix interactive command replies return via the original channel 2023-09-05 13:19:25 +08:00
jxxghp
ed5dec1b0f feat torrent refresh frequency control 2023-09-05 12:39:01 +08:00
jxxghp
c62b29edc4 fix WeChat menu 2023-09-05 11:54:16 +08:00
jxxghp
c224a7c07b fix bug 2023-09-05 11:52:46 +08:00
jxxghp
a7b244a4b4 fix README.md 2023-09-05 11:48:36 +08:00
jxxghp
b564f70c63 feat auto-register the WeChat menu 2023-09-05 11:33:42 +08:00
jxxghp
551f32491d fix WeChat menu length 2023-09-05 11:23:21 +08:00
jxxghp
2826b9411d fix bug 2023-09-05 11:20:06 +08:00
jxxghp
4bf9045784 fix bug 2023-09-05 11:01:12 +08:00
jxxghp
114788e3ed feat auto-register the WeChat menu 2023-09-05 10:58:19 +08:00
jxxghp
bb729bf976 fix #442 2023-09-05 08:39:23 +08:00
jxxghp
bedc885232 Merge pull request #440 from amtoaer/memory_percent 2023-09-04 23:18:46 +08:00
amtoaer
21e39611bc feat: memory usage chart uses percentages 2023-09-04 23:07:39 +08:00
jxxghp
73e7e547ea Merge pull request #437 from thsrite/main 2023-09-04 22:22:40 +08:00
thsrite
bc25d71b88 fix #407 2023-09-04 22:21:03 +08:00
jxxghp
ff8a9dc8c7 v1.1.3
- Fixed missing records when re-organizing from history
- Improved database session handling
- Improved menu permissions for regular users
- Polished file management UI details
- Adjusted the dashboard content
- Shortcuts gained a filter-rule test feature
- Image downloads during scraping now retry on failure
- The playback speed-limit plugin supports manually configuring unthrottled address ranges
2023-09-04 21:21:04 +08:00
jxxghp
4ee7daa673 Merge remote-tracking branch 'origin/main' 2023-09-04 20:40:28 +08:00
jxxghp
aca1673ee3 fix db session 2023-09-04 20:40:17 +08:00
jxxghp
87ece98471 Merge pull request #435 from thsrite/main 2023-09-04 20:24:39 +08:00
thsrite
4c16cd7bfb fix b7d2168f 2023-09-04 20:20:42 +08:00
jxxghp
712af24a72 fix 2023-09-04 20:13:16 +08:00
jxxghp
b7d2168f8e fix #434 2023-09-04 19:30:06 +08:00
jxxghp
65ad7123f9 fix #419 2023-09-04 18:08:11 +08:00
jxxghp
ce42e48b37 fix login api 2023-09-04 17:48:44 +08:00
jxxghp
45b53da056 Merge pull request #428 from thsrite/main 2023-09-04 11:47:52 +08:00
thsrite
70f93e02e4 fix #365 speed-limit plugin adds an unthrottled address range; if unset, intranet IPs are not limited by default 2023-09-04 11:40:19 +08:00
jxxghp
e4b63eacae add system apis 2023-09-04 11:07:30 +08:00
jxxghp
96f17e2bc2 fix #426 retry image downloads during scraping 2023-09-04 10:14:05 +08:00
jxxghp
7eb77875f1 fix reconnection mechanism 2023-09-03 21:59:18 +08:00
jxxghp
bbc27bbe19 Update README.md 2023-09-03 21:39:47 +08:00
jxxghp
3691b2a10b add filter-rule test API 2023-09-03 18:36:06 +08:00
jxxghp
08a3d02daf fix adjust deletion order when re-organizing 2023-09-03 17:37:06 +08:00
jxxghp
57abc7816b Merge pull request #420 from thsrite/main 2023-09-03 16:30:01 +08:00
thsrite
69c277777e fix sign-in schedule restart bug 2023-09-03 16:23:38 +08:00
jxxghp
5f88fe81e3 fix episode handling during manual organizing 2023-09-03 14:38:24 +08:00
jxxghp
d043dbd89e v1.1.2 2023-09-03 14:22:26 +08:00
jxxghp
53a2887717 fix Blu-ray disc scraping 2023-09-03 14:14:41 +08:00
jxxghp
28d181db44 fix #403 Blu-ray disc transfer failures 2023-09-03 13:40:39 +08:00
jxxghp
7d3f43e488 fix media library sync uses a separate database session 2023-09-03 13:11:42 +08:00
jxxghp
62df3f7c84 add file recognition API 2023-09-03 13:04:08 +08:00
jxxghp
1338a061c4 Update __init__.py 2023-09-03 11:14:36 +08:00
jxxghp
4f26f0607a Update transfer.py 2023-09-03 11:13:42 +08:00
jxxghp
b72aa314b6 Handle abnormal emby/jellyfin data 2023-09-03 09:50:05 +08:00
jxxghp
082ec8d718 fix #340 log location adjusted on the frontend
fix #239 add transfer blocked-word settings
2023-09-03 09:29:38 +08:00
jxxghp
e785f20c5a fix #352 delete previously organized files when re-organizing from history 2023-09-03 08:40:26 +08:00
jxxghp
0050a96faf fix #406 support QB category auto-management mode 2023-09-03 07:56:20 +08:00
jxxghp
31b460f89f Merge pull request #408 from amtoaer/fix_subscribe_lack 2023-09-03 07:16:58 +08:00
jxxghp
89cd2bbadc Merge pull request #405 from thsrite/main 2023-09-03 07:15:16 +08:00
amtoaer
7d19467b6c fix: subscription episode count not refreshing when a custom start episode is set 2023-09-03 01:53:47 +08:00
thsrite
97667249d5 Merge branch 'jxxghp:main' into main 2023-09-02 23:49:23 +08:00
thsrite
2e2472a387 fix slightly extend processing time for directory-monitoring summary messages 2023-09-02 23:48:48 +08:00
jxxghp
4b10028690 fix update 2023-09-02 22:27:09 +08:00
jxxghp
e0a492d8ab v1.1.1 2023-09-02 22:05:41 +08:00
jxxghp
52e89747b7 feat send a message when a TV episode cannot be recognized 2023-09-02 21:38:01 +08:00
jxxghp
59b947fa65 fix wrong transfer method recorded by directory monitoring 2023-09-02 21:22:03 +08:00
jxxghp
212e2f1287 Merge pull request #399 from thsrite/main 2023-09-02 18:31:46 +08:00
thsrite
685be88c46 fix directory monitoring adds failure history records 2023-09-02 18:28:12 +08:00
jxxghp
8297b3e199 Update scheduler.py 2023-09-02 18:08:34 +08:00
jxxghp
75c5844d64 Merge pull request #397 from DDS-Derek/main 2023-09-02 17:53:24 +08:00
DDSRem
ad5ca69bbb feat: frontend checks the version number was fetched before downloading 2023-09-02 17:49:14 +08:00
jxxghp
6befa35a26 Merge pull request #395 from WithdewHua/fix-torrentremover 2023-09-02 16:29:41 +08:00
WithdewHua
4fec6aede4 fix: remove the extra file-size unit from auto-delete plugin notifications 2023-09-02 16:24:09 +08:00
jxxghp
68a3bc8732 Merge pull request #394 from amtoaer/main 2023-09-02 16:06:36 +08:00
amtoaer
ba2745266a fix: percentage in messages was multiplied by an extra 100 2023-09-02 16:03:28 +08:00
jxxghp
2fcf5039ff Merge pull request #392 from DDS-Derek/main 2023-09-02 14:44:43 +08:00
DDSRem
b37dc4471e fix: update env 2023-09-02 14:43:33 +08:00
jxxghp
ffc5c48830 Update __init__.py 2023-09-02 13:34:17 +08:00
jxxghp
dbe3701032 Merge pull request #385 from DDS-Derek/main 2023-09-02 11:16:38 +08:00
DDSRem
751d405aac fix: update curl 2023-09-02 11:15:44 +08:00
jxxghp
9224169f31 Merge pull request #384 from DDS-Derek/main 2023-09-02 10:54:44 +08:00
DDSRem
62c1a924e8 feat: dev update 2023-09-02 10:52:19 +08:00
jxxghp
9fdd838b7a Merge pull request #368 from DDS-Derek/main 2023-09-02 08:40:57 +08:00
DDSDerek
510911b7a3 feat: add discussions 2023-09-02 08:39:52 +08:00
DDSDerek
36e68f44dc fix: delete discussion 2023-09-02 08:38:39 +08:00
jxxghp
374e633ca7 fix adjust database sessions #330 2023-09-02 08:18:01 +08:00
jxxghp
ec8c9c996a fix #356 data statistics issue for the 猫 (Cat) site 2023-09-02 07:57:44 +08:00
jxxghp
3c753686c6 fix #359 periodically refresh subscriptions' TMDB data 2023-09-02 07:33:27 +08:00
jxxghp
5f4580282e fix #362 restore secondary categories for the standalone anime directory 2023-09-02 07:11:21 +08:00
jxxghp
5d9e0b699c fix transfer history records missing timestamps 2023-09-02 07:09:38 +08:00
jxxghp
5debfca89a fix #361
fix #357
2023-09-01 22:42:58 +08:00
jxxghp
3eeb9e299a Merge pull request #360 from thsrite/main 2023-09-01 21:16:28 +08:00
thsrite
9c4aba10bf Update downloadhistory_oper.py 2023-09-01 21:12:32 +08:00
jxxghp
7b37d86527 fix #358 2023-09-01 18:28:05 +08:00
jxxghp
55c061176d fix #358 2023-09-01 18:24:43 +08:00
jxxghp
5dc11b07e3 fix #342 2023-09-01 17:30:21 +08:00
jxxghp
0bb67824bd Merge remote-tracking branch 'origin/main' 2023-09-01 15:00:37 +08:00
jxxghp
ac1dcbed3c fix try to reduce session usage 2023-09-01 15:00:27 +08:00
jxxghp
d0a586a46b Update transfer.py 2023-09-01 12:07:37 +08:00
jxxghp
fa8dcea7da Update system.py 2023-09-01 12:07:04 +08:00
jxxghp
76a94a80ef Merge pull request #354 from thsrite/main 2023-09-01 11:51:06 +08:00
thsrite
9139c1297e fix no automatic search within one minute of creating a subscription, leaving time to edit it 2023-09-01 11:48:31 +08:00
jxxghp
4dba739d54 fix bug 2023-09-01 11:35:46 +08:00
jxxghp
fe80f86518 fix 2023-09-01 11:05:17 +08:00
jxxghp
7307105dcd - Added site support for Rousi, Butterfly (蝴蝶), and OpenCD
- Movie search gained a documentary type
- Support setting a self-hosted OCR service address
- Downloader monitoring and manual organizing record history per file
- Added a downloader file sync plugin that imports download task files not added via MoviePilot into the database, so deleting a file can also remove its download task
- Organizing history supports batch operations
- The playback speed-limit plugin supports smart limiting
- Scraped posters prefer TMDB images
- Fixed hhanclub site statistics
- Fixed filter rules that could not be cleared
- Fixed processed-state calculation for custom subscriptions
- Fixed Slack messages failing to send when too long
- Fixed two-level directories appearing with a standalone anime directory
- Adjusted the dark theme's UI colors
2023-09-01 11:01:13 +08:00
jxxghp
1c7715d94c Update transfer.py 2023-09-01 07:35:27 +08:00
jxxghp
4dd2d6d307 Update __init__.py 2023-09-01 07:34:12 +08:00
jxxghp
7cfd05a7a5 fix notification title calculation 2023-09-01 07:29:49 +08:00
jxxghp
8eab38c91e fix improve notification title calculation for directory monitoring 2023-09-01 07:16:39 +08:00
jxxghp
6ad78fa875 add episode formatting method 2023-08-31 21:29:28 +08:00
jxxghp
781cffb255 fix bug 2023-08-31 20:12:38 +08:00
jxxghp
2a7fc7bbe6 Merge pull request #350 from thsrite/main
feat playback speed-limit plugin supports smart limiting and unthrottled addresses
2023-08-31 19:48:48 +08:00
thsrite
f65da9b202 fix remove the unthrottled-address config 2023-08-31 19:48:00 +08:00
thsrite
0cf11db76a fix automatic speed limiting 2023-08-31 19:33:16 +08:00
thsrite
37bada89ef Merge branch 'main' of https://github.com/thsrite/MoviePilot into main 2023-08-31 19:26:24 +08:00
thsrite
38d6467740 fix playback speed limiting 2023-08-31 19:26:18 +08:00
thsrite
3bc639bcab fix playback speed limiting 2023-08-31 19:08:50 +08:00
thsrite
7baa07474c Update __init__.py 2023-08-31 19:01:14 +08:00
jxxghp
8e304f77b4 fix ui 2023-08-31 19:01:10 +08:00
thsrite
93ec8df713 Merge branch 'jxxghp:main' into main 2023-08-31 17:07:06 +08:00
thsrite
8854acf908 Merge remote-tracking branch 'origin/main' 2023-08-31 17:05:55 +08:00
thsrite
143ffd18b7 feat speed-limit plugin supports smart limiting 2023-08-31 17:05:47 +08:00
jxxghp
212f9c250f fix #343 2023-08-31 16:38:46 +08:00
jxxghp
fa62943679 fix ui 2023-08-31 16:28:18 +08:00
jxxghp
3f95962ced Merge pull request #347 from thsrite/main
fix downloader torrents exclude cross-seeds; prevent duplicate processing of mp download tasks
2023-08-31 16:23:49 +08:00
jxxghp
e68aab423e Merge branch 'main' into main 2023-08-31 16:23:42 +08:00
jxxghp
49d51ca13e fix 2023-08-31 16:20:44 +08:00
jxxghp
f6b5994fe5 fix plugin manager 2023-08-31 16:13:19 +08:00
thsrite
8ad75e93a9 fix downloader task sync plugin supports periodic runs 2023-08-31 15:51:39 +08:00
jxxghp
796133e26f fix SyncDownloadFiles 2023-08-31 15:50:46 +08:00
thsrite
8414c5df0a fix downloader torrents exclude cross-seeds; prevent duplicate processing of mp download tasks 2023-08-31 15:31:35 +08:00
jxxghp
1fcdf633ba Merge pull request #345 from thsrite/main
feat downloader torrent sync plugin && fix sync-delete plugin
2023-08-31 15:11:58 +08:00
jxxghp
b503dee631 add opencd 2023-08-31 15:08:18 +08:00
thsrite
0837950334 fix friendly notice for the downloader file sync plugin 2023-08-31 15:07:30 +08:00
thsrite
95787f6ef6 fix set last_sync_time per downloader 2023-08-31 15:02:05 +08:00
thsrite
3943a7a793 fix NAStool data sync plugin 2023-08-31 14:47:21 +08:00
thsrite
9f0bd2b933 fix sign-in plugin 2023-08-31 14:43:41 +08:00
thsrite
053c89bf9f fix sync-delete plugin 2023-08-31 14:37:10 +08:00
thsrite
8739a67679 feat downloader torrent sync plugin 2023-08-31 14:33:39 +08:00
jxxghp
cb41086fa3 fix directory monitoring queries download_hash from the table 2023-08-31 13:56:51 +08:00
jxxghp
84cbeaada2 fix bug 2023-08-31 13:52:48 +08:00
jxxghp
344742871c fix bug 2023-08-31 12:45:33 +08:00
jxxghp
95df1c4c1c fix bug 2023-08-31 12:28:30 +08:00
jxxghp
593211c037 feat record the file list when downloading 2023-08-31 08:37:00 +08:00
jxxghp
f80e5739ca feat periodic check and reconnect for media servers/downloaders 2023-08-31 08:15:43 +08:00
jxxghp
17fcd77b8e fix 2023-08-31 07:14:57 +08:00
jxxghp
f0666986f0 fix 2023-08-30 23:59:27 +08:00
jxxghp
854fafd880 fix 2023-08-30 23:07:48 +08:00
jxxghp
bdd45304c8 fix 2023-08-30 22:50:38 +08:00
jxxghp
c372d0451e fix 2023-08-30 22:40:36 +08:00
jxxghp
38eff64c95 need fix 2023-08-30 22:01:07 +08:00
jxxghp
9326676bb6 - Added now-playing recommendations
- Added site support for Rousi and Butterfly (蝴蝶)
- Movie search gained a documentary type
- Fixed hhanclub site statistics
- Fixed filter rules that could not be cleared
- Fixed processed-state calculation for custom subscriptions
- Fixed Slack messages failing to send when too long
2023-08-30 19:39:36 +08:00
jxxghp
7df1d807bb fix README 2023-08-30 19:15:46 +08:00
jxxghp
cce543274e fix Ocr Host 2023-08-30 19:00:48 +08:00
jxxghp
3b7c1fed74 fix #283 2023-08-30 17:32:59 +08:00
jxxghp
e0dfbc213a fix #283 2023-08-30 17:09:49 +08:00
jxxghp
d76fa9bb00 fix #324 2023-08-30 16:56:49 +08:00
jxxghp
e59a498826 fix #271 2023-08-30 16:38:41 +08:00
jxxghp
e6452d68bb fix #326 2023-08-30 16:14:21 +08:00
jxxghp
0d830b237b fix #336 2023-08-30 15:51:01 +08:00
jxxghp
470ebb7b79 Merge remote-tracking branch 'origin/main' 2023-08-30 15:46:13 +08:00
jxxghp
a6819c08bf fix #286 2023-08-30 15:46:04 +08:00
jxxghp
16ba4587e1 Merge pull request #338 from thsrite/main 2023-08-30 15:37:00 +08:00
jxxghp
911651a5f7 Merge remote-tracking branch 'origin/main' 2023-08-30 15:31:21 +08:00
jxxghp
3f94f5f709 fix site statistics UI 2023-08-30 15:31:11 +08:00
jxxghp
16289d86b6 fix hhanclub statistics 2023-08-30 14:51:55 +08:00
thsrite
17450c7c70 fix get_state in the preferred plugin 2023-08-30 14:00:20 +08:00
jxxghp
eac9fc02fa Merge pull request #333 from thsrite/main 2023-08-30 12:09:16 +08:00
thsrite
1a026ffb12 fix plugins 2023-08-30 12:03:52 +08:00
thsrite
85477a4bd3 fix #184 2023-08-30 10:26:24 +08:00
jxxghp
f8221bb526 Merge remote-tracking branch 'origin/main' 2023-08-30 08:29:00 +08:00
jxxghp
85a581f0cd feat recommendations add now playing
fix Douban search API
2023-08-30 08:28:37 +08:00
jxxghp
ae7b48ad9f Merge pull request #325 from DDS-Derek/main 2023-08-29 22:40:58 +08:00
jxxghp
59907af4f4 Create LICENSE 2023-08-29 22:35:46 +08:00
DDSRem
e63f52bee5 feat: optimize image size 2023-08-29 22:20:18 +08:00
jxxghp
b9b8b86019 fix build 2023-08-29 19:47:42 +08:00
jxxghp
bfca8a52d6 fix build 2023-08-29 19:44:40 +08:00
jxxghp
99ccbfef22 Merge pull request #320 from thsrite/main 2023-08-29 19:15:31 +08:00
thsrite
5e2f4b413d fix 2b462a1b 2023-08-29 18:53:31 +08:00
jxxghp
a0ec38a6a9 Merge remote-tracking branch 'origin/main' 2023-08-29 17:14:56 +08:00
jxxghp
eae89b2d36 fix #318 2023-08-29 17:14:45 +08:00
jxxghp
e5926a489d Merge pull request #316 from thsrite/main 2023-08-29 15:49:41 +08:00
thsrite
8acfde7906 fix 签到插件 2023-08-29 15:46:20 +08:00
jxxghp
24a164f47e v1.0.9 2023-08-29 15:15:24 +08:00
jxxghp
72fbbffa02 Merge pull request #315 from thsrite/main
fix site sign-in plugin supports simulated login only
2023-08-29 14:04:57 +08:00
thsrite
95a87f3e33 feat site sign-in plugin supports simulated login only 2023-08-29 13:49:38 +08:00
jxxghp
55206ea092 fix #299 strip special characters when searching 2023-08-29 12:29:18 +08:00
jxxghp
c138cda735 fix #300 2023-08-29 12:22:14 +08:00
jxxghp
d0a92531ac fix #301
fix #303
2023-08-29 12:11:25 +08:00
jxxghp
96fc32efd0 fix #308 incorrect missing-episode calculation 2023-08-29 11:41:30 +08:00
jxxghp
a9a0acc091 fix #312 prefer matching movies when there is no year, season, or episode 2023-08-29 11:12:55 +08:00
jxxghp
fa6f2c01e0 fix #313 subscription total episode count not applied when checking local existence 2023-08-29 10:48:27 +08:00
jxxghp
05a0026ea4 fix #306 2023-08-29 08:18:34 +08:00
jxxghp
8f352c23c8 Update __init__.py 2023-08-28 22:58:09 +08:00
jxxghp
8bc883b621 fix 2023-08-28 19:04:37 +08:00
jxxghp
6a34c7196c Merge pull request #307 from thsrite/main 2023-08-28 14:53:26 +08:00
thsrite
58ded2ef5e feat per-subscription site configuration 2023-08-28 13:23:56 +08:00
jxxghp
2b462a1b9c fix #305 2023-08-28 13:04:18 +08:00
jxxghp
a6d0504900 Merge pull request #305 from thsrite/main
feat top-level anime category && fix bugs
2023-08-28 12:54:55 +08:00
thsrite
7717afab69 fix condition for the top-level anime category 2023-08-28 12:50:47 +08:00
jxxghp
683ba4cfad feat manual organizing supports auto-recognition batch processing, with a progress display 2023-08-28 12:50:21 +08:00
jxxghp
921783d6bb fix #304 add a subscription-search switch, off by default 2023-08-28 11:43:55 +08:00
thsrite
b7e9e8ee21 feat top-level anime category 2023-08-28 10:01:14 +08:00
thsrite
dadad74085 fix default to season 1 when episode filenames have no season 2023-08-28 10:00:54 +08:00
thsrite
e405c98bae fix qb downloads files in order 2023-08-28 10:00:29 +08:00
jxxghp
9d4bec7d81 fix bug 2023-08-28 08:30:39 +08:00
jxxghp
d6a73d6017 Merge pull request #298 from thsrite/main 2023-08-27 20:40:15 +08:00
thsrite
b4a780aba7 fix #292 2023-08-27 20:30:54 +08:00
thsrite
f15f98fcfc fix after the first full sign-in of the day, later sign-ins matched the wrong keywords 2023-08-27 20:20:42 +08:00
jxxghp
4bb8b01301 Merge pull request #296 from lightolly/dev/20230827 2023-08-27 18:47:25 +08:00
olly
aa8cb889f8 fix: tr download rate display issue 2023-08-27 18:22:34 +08:00
jxxghp
9e31c53fa5 Merge pull request #291 from DDS-Derek/main 2023-08-27 13:02:11 +08:00
DDSRem
4b23f3f076 fix: repeat install pysocks 2023-08-27 13:01:18 +08:00
DDSRem
52fac09021 fix: update-success prompt 2023-08-27 12:38:03 +08:00
DDSRem
bb67e902c5 feat: improve restart-update logic
Install dependencies first, then replace files, so a failed dependency install cannot prevent normal startup
2023-08-27 12:36:48 +08:00
DDSRem
6206c5f4a3 fix: code cleanup 2023-08-27 12:29:55 +08:00
DDSRem
de3d3de411 feat: add a proxy for dependency installation 2023-08-27 12:21:12 +08:00
jxxghp
91896946d8 fix file management list icons 2023-08-27 10:31:06 +08:00
130 changed files with 8100 additions and 2338 deletions

.dockerignore (new file, +3 lines)

@@ -0,0 +1,3 @@
# Ignore git
.github
.git


@@ -1,5 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: 项目讨论
url: https://github.com/jxxghp/MoviePilot/discussions/new/choose
about: discussion
- name: Telegram 频道
url: https://t.me/moviepilot_channel
about: 更新日志


@@ -1,17 +0,0 @@
name: 项目讨论
description: discussion
title: "[Discussion]: "
labels: ["discussion"]
body:
- type: markdown
attributes:
value: |
[BUG](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=bug&template=bug_report.yml&title=%5BBUG%5D%3A) 与 [Feature Request](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) 请转到对应位置提交。
- type: textarea
id: discussion
attributes:
label: 项目讨论
description: 请详细描述需要讨论的内容。
placeholder: "项目讨论"
validations:
required: true


@@ -1,4 +1,4 @@
name: MoviePilot Docker
name: MoviePilot Builder
on:
workflow_dispatch:
push:
@@ -14,13 +14,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
uses: actions/checkout@v4
-
name: Release version
@@ -29,6 +23,16 @@ jobs:
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
tags: |
type=raw,value=${{ env.app_version }}
type=raw,value=latest
-
name: Set Up QEMU
uses: docker/setup-qemu-action@v2
@@ -52,11 +56,9 @@ jobs:
file: Dockerfile
platforms: |
linux/amd64
linux/arm64
linux/arm64/v8
push: true
build-args: |
MOVIEPILOT_VERSION=${{ env.app_version }}
tags: |
${{ secrets.DOCKER_USERNAME }}/moviepilot:latest
${{ secrets.DOCKER_USERNAME }}/moviepilot:${{ env.app_version }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

.github/workflows/release.yml (vendored, new file, +37 lines)

@@ -0,0 +1,37 @@
name: MoviePilot Release
on:
workflow_dispatch:
push:
branches:
- main
paths:
- version.py
jobs:
build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Release Version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
name: Generate Release
uses: actions/create-release@latest
with:
tag_name: v${{ env.app_version }}
release_name: v${{ env.app_version }}
body: |
${{ github.event.commits[0].message }}
draft: false
prerelease: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
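The `Release Version` step above parses `version.py` with sed to populate `GITHUB_ENV`. A minimal sketch of that extraction, assuming GNU sed (for the `\s` escape) and a `version.py` containing a line like `APP_VERSION = 'v1.1.8'` (the file content here is hypothetical):

```shell
# Recreate a version.py like the one the workflow reads (hypothetical content)
printf "APP_VERSION = 'v1.1.8'\n" > /tmp/version.py

# Same sed invocation as the workflow: -n suppresses default output, and the
# s///gp substitution captures the version (without the leading 'v') and prints it
app_version=$(cat /tmp/version.py | sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "$app_version"   # 1.1.8
```

In the workflow the result is then appended to `$GITHUB_ENV` so later steps can reference `${{ env.app_version }}` for the tag and release names.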


@@ -8,6 +8,8 @@ ENV LANG="C.UTF-8" \
PGID=0 \
UMASK=000 \
MOVIEPILOT_AUTO_UPDATE=true \
MOVIEPILOT_AUTO_UPDATE_DEV=false \
PORT=3001 \
NGINX_PORT=3000 \
CONFIG_DIR="/config" \
API_TOKEN="moviepilot" \
@@ -47,6 +49,7 @@ RUN apt-get update \
busybox \
dumb-init \
jq \
haproxy \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
@@ -57,7 +60,7 @@ RUN apt-get update \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} \
&& mkdir -p ${HOME} /var/lib/haproxy/server-state \
&& groupadd -r moviepilot -g 911 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 911 \
&& apt-get install -y build-essential \
@@ -81,5 +84,5 @@ RUN apt-get update \
/var/lib/apt/lists/* \
/var/tmp/*
EXPOSE 3000
VOLUME ["/config"]
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]
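The `UMASK=000` default in the Dockerfile hunk above means files created inside the container keep every permission bit the creating call requests. A quick sketch of the effect, assuming GNU `stat` is available (the file path is just an example):

```shell
# touch requests mode 0666 for a new file; with umask 000 nothing is masked off,
# so the file ends up world-readable and world-writable (0666 & ~000 = 0666)
rm -f /tmp/umask_demo
(umask 000 && touch /tmp/umask_demo)
stat -c '%a' /tmp/umask_demo   # 666
```

This is a common default for media-server containers so that files organized by the app remain accessible to other services, at the cost of looser permissions.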

LICENSE (new file, +674 lines)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

View File

@@ -21,10 +21,7 @@ Docker: https://hub.docker.com/r/jxxghp/moviepilot
2. **Install the CookieCloud server (optional)**
MoviePilot ships with a public CookieCloud server (https://movie-pilot.org/cookiecloud); if you want to self-host, you can set one up with the [CookieCloud](https://github.com/easychen/CookieCloud) project.
```shell
docker pull easychen/cookiecloud:latest
```
MoviePilot ships with a public CookieCloud server; if you want to self-host, you can set one up with the [CookieCloud](https://github.com/easychen/CookieCloud) project. For the docker image, click [here](https://hub.docker.com/r/easychen/cookiecloud).
**Disclaimer:** This project does not collect sensitive user data. Cookie syncing is implemented on top of the CookieCloud project and is not a capability provided by this project. Technically, CookieCloud uses end-to-end encryption; as long as you do not leak your `user KEY` and `end-to-end encryption password`, no third party (including the server operator) can steal any user information. If that does not reassure you, do not use the public service or do not use this project; but any information leak that occurs after you choose to use it is unrelated to this project!
@@ -36,7 +33,7 @@ MoviePilot needs a downloader and a media server to work with.
4. **Install MoviePilot**
Currently only a docker image is provided; more installation methods may follow.
Currently only a docker image is provided; click [here](https://hub.docker.com/r/jxxghp/moviepilot) or run:
```shell
docker pull jxxghp/moviepilot:latest
@@ -54,31 +51,37 @@ docker pull jxxghp/moviepilot:latest
- **PGID**: the `gid` of the user running the program, default `0`
- **UMASK**: permission mask, default `000`; consider setting `022`
- **MOVIEPILOT_AUTO_UPDATE**: update on restart, `true`/`false`, default `true`. **Note: if you run into network problems you can configure `PROXY_HOST`; see the `PROXY_HOST` entry below**
- **NGINX_PORT**: WEB service port, default `3000`; can be changed, but must not be `3001`
- **NGINX_PORT**: WEB service port, default `3000`; can be changed, but must not conflict with the API service port
- **PORT**: API service port, default `3001`; can be changed, but must not conflict with the WEB service port
- **SUPERUSER**: superadmin username, default `admin`; log in to the admin UI with this user after installation
- **SUPERUSER_PASSWORD**: superadmin initial password, default `password`; change it to a strong password
- **API_TOKEN**: API key, default `moviepilot`; media-server Webhook, WeChat callback and similar URLs need `?token=` set to this value; change it to a complex string
- **PROXY_HOST**: network proxy (optional); needed to reach themoviedb or for update-on-restart, format `http(s)://ip:port`
- **TMDB_API_DOMAIN**: TMDB API domain, default `api.themoviedb.org`; can also be `api.tmdb.org` or another relay/proxy address, as long as it is reachable
- **DOWNLOAD_PATH**: download directory. **Note: the mapped paths of `moviepilot` and the `downloader` must be kept identical**, otherwise downloaded files cannot be transferred
- **DOWNLOAD_MOVIE_PATH**: movie download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH**: TV-series download directory, **must be a subpath of `DOWNLOAD_PATH`**; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_MOVIE_PATH**: movie download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH**: TV-series download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH**: anime download directory; if unset, downloads go to `DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY**: secondary download-category switch, `true`/`false`, default `false`; when enabled, secondary category directories are created under the download directory according to `category.yaml`
- **DOWNLOAD_SUBTITLE**: download site subtitles, `true`/`false`, default `true`
- **REFRESH_MEDIASERVER**: refresh the media library on import, `true`/`false`, default `true`
- **SCRAP_METADATA**: scrape imported media files, `true`/`false`, default `true`
- **SCRAP_FOLLOW_TMDB**: whether already-imported media follows TMDB info changes, `true`/`false`, default `true`
- **TORRENT_TAG**: torrent tag, default `MOVIEPILOT`; when set, only downloads added by MoviePilot are processed; leave empty to process every task in the downloader
- **LIBRARY_PATH**: media library directory; separate multiple directories with `,`
- **LIBRARY_MOVIE_NAME**: movie library directory name, default `电影`
- **LIBRARY_TV_NAME**: TV-series library directory name, default `电视剧`
- **LIBRARY_ANIME_NAME**: anime library directory name, default `电视剧/动漫`
- **LIBRARY_CATEGORY**: secondary library-category switch, `true`/`false`, default `false`; when enabled, secondary category directories are created under the library directory according to `category.yaml`
- **TRANSFER_TYPE**: transfer method, supports `link`/`copy`/`move`/`softlink`. **Note: with `link` and `softlink`, transferred files inherit the permission mask of the source file and are not affected by `UMASK`**
- **COOKIECLOUD_HOST**: CookieCloud server address, format `http://ip:port`; required, otherwise sites cannot be added
- **COOKIECLOUD_HOST**: CookieCloud server address, format `http(s)://ip:port`; if unset, the built-in server `https://movie-pilot.org/cookiecloud` is used
- **COOKIECLOUD_KEY**: CookieCloud user KEY
- **COOKIECLOUD_PASSWORD**: CookieCloud end-to-end encryption password
- **COOKIECLOUD_INTERVAL**: CookieCloud sync interval (minutes)
- **OCR_HOST**: OCR server address, format `http(s)://ip:port`; used to recognize site QR/verification codes for automatic login and Cookie retrieval; if unset, the built-in server `https://movie-pilot.org` is used; you can self-host with [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr)
- **USER_AGENT**: browser UA matching the CookieCloud data (optional); setting it improves the success rate of connecting to sites; it can be edited in the admin UI after sites are synced
- **AUTO_DOWNLOAD_USER**: user IDs allowed to auto-download from interactive search, separated by `,`
- **SUBSCRIBE_SEARCH**: subscription search, `true`/`false`, default `false`; when enabled, a full search of all subscriptions runs every 24 hours to fill in missing episodes; normal subscriptions are usually enough and this is only a fallback that increases site load, so enabling it is not recommended
- **MESSAGER**: notification channels, supports `telegram`/`wechat`/`slack`; separate multiple channels with `,`. The environment variables of each enabled channel must also be configured (variables of unused channels can be removed); `telegram` is recommended
- `wechat` settings:
@@ -112,6 +115,7 @@ docker pull jxxghp/moviepilot:latest
- **QB_HOST**: qbittorrent address, format `ip:port`; prefix with `https://` for https
- **QB_USER**: qbittorrent username
- **QB_PASSWORD**: qbittorrent password
- **QB_CATEGORY**: qbittorrent automatic category management, `true`/`false`, default `false`; when enabled, the secondary download category is passed to the downloader, which then manages the download directory; requires `DOWNLOAD_CATEGORY` to be enabled as well
- `transmission` settings:
@@ -218,12 +222,13 @@ docker pull jxxghp/moviepilot:latest
## Usage
- Quickly sync sites via CookieCloud; sites you do not need can be disabled in the WEB admin UI.
- Automatic organization, import and scraping via downloader monitoring
- Remote management via WeChat/Telegram/Slack; Telegram automatically gets an operations menu. The WeChat callback relative path is `/api/v1/message/`
- Manage via the WEB UI; add it to your phone's home screen for an app-like experience; admin UI port `3000`
- Set up the media-server Webhook so MoviePilot can send playback notifications etc. The Webhook callback relative path is `/api/v1/webhook?token=moviepilot`, where `moviepilot` is your configured `API_TOKEN`
- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server to browse and subscribe through Overseerr/Jellyseerr.
- Quickly sync sites via CookieCloud; sites you do not need can be disabled in the WEB admin UI, and sites that cannot be synced can be added manually
- Manage via the WEB UI; add it to your phone's home screen for an app-like experience; admin UI port `3000`, backend API port `3001`
- Automatic organization, import and scraping via downloader monitoring or the directory-monitor plugin (pick one)
- Remote management via WeChat/Telegram/Slack; WeChat/Telegram automatically get an operations menu (WeChat limits the number of menu entries, so some may not show; WeChat also requires setting the callback address on its official page, relative path `/api/v1/message/`)
- Set up the media-server Webhook so MoviePilot can send playback notifications etc. The Webhook callback relative path is `/api/v1/webhook?token=moviepilot` (port `3001`), where `moviepilot` is your configured `API_TOKEN`
- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server (port `3001`) to browse and subscribe through Overseerr/Jellyseerr.
- Map the host's docker.sock into the container as `/var/run/docker.sock` to support the built-in restart. Example: `-v /var/run/docker.sock:/var/run/docker.sock:ro`
**Note**
@@ -238,11 +243,24 @@ location / {
proxy_set_header X-Forwarded-Proto $scheme;
}
```
3) A newly created Enterprise WeChat (WeCom) app needs a proxy with a fixed public IP to receive messages; add the following to the proxy:
```nginx
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/message/send {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/menu/create {
proxy_pass https://qyapi.weixin.qq.com;
}
```
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/b8f0238d-847f-4f9d-b210-e905837362b9)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f2654b09-26f3-464f-a0af-1de3f97832ee)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/28219233-ec7d-479b-b184-9a901c947dd1)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/fcb87529-56dd-43df-8337-6e34b8582819)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f7df0806-668d-4c8b-ad41-133bf8f0bf73)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/bfa77c71-510a-46a6-9c1e-cf98cb101e3a)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/51cafd09-e38c-47f9-ae62-1e83ab8bf89b)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f7ea77cd-0362-4c35-967c-7f1b22dbef05)
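Taken together, the variables and usage notes above map onto a single container invocation. A minimal `docker run` sketch — the mount points, ports and credentials here are illustrative placeholders, not values from this diff:

```shell
docker run -d \
  --name moviepilot \
  -p 3000:3000 -p 3001:3001 \
  -v /volume/media:/media \
  -v /volume/downloads:/downloads \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e SUPERUSER=admin \
  -e SUPERUSER_PASSWORD='change-me' \
  -e API_TOKEN='a-long-random-string' \
  -e DOWNLOAD_PATH=/downloads \
  -e LIBRARY_PATH=/media \
  -e TRANSFER_TYPE=link \
  jxxghp/moviepilot:latest
```

Remember to map `/downloads` identically in the downloader container, per the `DOWNLOAD_PATH` note above.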

View File

@@ -0,0 +1,30 @@
"""1.0.4
Revision ID: 1e169250e949
Revises: 52ab4930be04
Create Date: 2023-09-01 09:56:33.907661
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '1e169250e949'
down_revision = '52ab4930be04'
branch_labels = None
depends_on = None
def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    try:
        with op.batch_alter_table("subscribe") as batch_op:
            batch_op.add_column(sa.Column('date', sa.String, nullable=True))
    except Exception:
        # the column may already exist; ignore the error
        pass
    # ### end Alembic commands ###


def downgrade() -> None:
    pass

View File

@@ -0,0 +1,28 @@
"""1_0_3
Revision ID: 52ab4930be04
Revises: ec5fb51fc300
Create Date: 2023-08-28 13:21:45.152012
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '52ab4930be04'
down_revision = 'ec5fb51fc300'
branch_labels = None
depends_on = None
def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.execute("delete from systemconfig where key = 'RssSites';")
    op.execute("insert into systemconfig(key, value) VALUES('RssSites', (select value from systemconfig where key = 'IndexerSites'));")
    op.execute("delete from systemconfig where key = 'SearchResults';")
    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###

View File

@@ -0,0 +1,27 @@
"""1.0.5
Revision ID: e734c7fe6056
Revises: 1e169250e949
Create Date: 2023-09-07 18:19:41.250957
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = 'e734c7fe6056'
down_revision = '1e169250e949'
branch_labels = None
depends_on = None
def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    try:
        op.create_index('ix_transferhistory_tmdbid', 'transferhistory', ['tmdbid'], unique=False)
    except Exception:
        # the index may already exist; ignore the error
        pass
    # ### end Alembic commands ###


def downgrade() -> None:
    pass

View File

@@ -41,12 +41,7 @@ def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query storage space information
"""
if settings.LIBRARY_PATH:
total_storage, free_storage = SystemUtils.space_usage(
[Path(path) for path in settings.LIBRARY_PATH.split(",")]
)
else:
total_storage, free_storage = 0, 0
total_storage, free_storage = SystemUtils.space_usage(settings.LIBRARY_PATHS)
return schemas.Storage(
total_storage=total_storage,
used_storage=total_storage - free_storage
@@ -124,3 +119,19 @@ def transfer(days: int = 7, db: Session = Depends(get_db),
"""
transfer_stat = TransferHistory.statistic(db, days)
return [stat[1] for stat in transfer_stat]
@router.get("/cpu", summary="获取当前CPU使用率", response_model=int)
def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Get the current CPU usage
"""
return SystemUtils.cpu_usage()
@router.get("/memory", summary="获取当前内存使用量和使用率", response_model=List[int])
def memory(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Get the current memory usage
"""
return SystemUtils.memory_usage()
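The storage endpoint above now aggregates over `settings.LIBRARY_PATHS`. `SystemUtils.space_usage` itself is not shown in this diff; a stdlib sketch of what such an aggregation can look like — the function name and the dedup-by-device behaviour are assumptions, not the project's actual code:

```python
import os
import shutil
from pathlib import Path
from typing import Iterable, Tuple


def space_usage(paths: Iterable[Path]) -> Tuple[int, int]:
    """Sum total and free bytes across library paths, counting each filesystem once."""
    total, free = 0, 0
    seen_devices = set()
    for path in map(Path, paths):
        if not path.exists():
            continue  # skip unmounted or missing library roots
        device = os.stat(path).st_dev
        if device in seen_devices:
            continue  # same filesystem already counted
        seen_devices.add(device)
        usage = shutil.disk_usage(path)
        total += usage.total
        free += usage.free
    return total, free
```

The endpoint then reports `used_storage = total_storage - free_storage`, exactly as in the diff above.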

View File

@@ -45,6 +45,21 @@ def recognize_doubanid(doubanid: str,
return schemas.Context()
@router.get("/showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def movie_showing(page: int = 1,
count: int = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Browse movies currently showing on Douban
"""
movies = DoubanChain(db).movie_showing(page=page, count=count)
if not movies:
return []
medias = [MediaInfo(douban_info=movie) for movie in movies]
return [media.to_dict() for media in medias]
@router.get("/movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",

View File

@@ -17,7 +17,7 @@ IMAGE_TYPES = [".jpg", ".png", ".gif", ".bmp", ".jpeg", ".webp"]
@router.get("/list", summary="所有插件", response_model=List[schemas.FileItem])
def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
def list_path(path: str, sort: str = 'time', _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
List all directories and files under the given path
"""
@@ -53,8 +53,9 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
path=str(path_obj).replace("\\", "/"),
name=path_obj.name,
basename=path_obj.stem,
extension=path_obj.suffix,
extension=path_obj.suffix[1:],
size=path_obj.stat().st_size,
modify_time=path_obj.stat().st_mtime,
))
return ret_items
@@ -65,6 +66,7 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
path=str(item).replace("\\", "/") + "/",
name=item.name,
basename=item.stem,
modify_time=item.stat().st_mtime,
))
# iterate over all files, excluding subdirectories
@@ -78,10 +80,15 @@ def list_path(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any
path=str(item).replace("\\", "/"),
name=item.name,
basename=item.stem,
extension=item.suffix,
extension=item.suffix[1:],
size=item.stat().st_size,
modify_time=item.stat().st_mtime,
))
# sort the results
if sort == 'time':
ret_items.sort(key=lambda x: x.modify_time, reverse=True)
else:
ret_items.sort(key=lambda x: x.name, reverse=False)
return ret_items
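The new `sort` parameter orders the combined result list after directories and files have been collected. The behaviour in isolation, with a hypothetical stand-in for `schemas.FileItem`:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FileItem:  # hypothetical stand-in for schemas.FileItem
    name: str
    modify_time: float


def sort_items(items: List[FileItem], sort: str = "time") -> List[FileItem]:
    """Newest-first when sorting by time, ascending when sorting by name."""
    if sort == "time":
        return sorted(items, key=lambda x: x.modify_time, reverse=True)
    return sorted(items, key=lambda x: x.name)


items = [FileItem("a.mkv", 100.0), FileItem("b.mkv", 200.0)]
by_time = sort_items(items)          # b.mkv first (newer)
by_name = sort_items(items, "name")  # a.mkv first (alphabetical)
```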

View File

@@ -74,7 +74,8 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
if not history:
return schemas.Response(success=False, msg="记录不存在")
# delete the files
TransferChain(db).delete_files(Path(history.dest))
if history.dest:
TransferChain(db).delete_files(Path(history.dest))
# delete the record
TransferHistory.delete(db, history_in.id)
return schemas.Response(success=True)

View File

@@ -56,6 +56,9 @@ async def login_access_token(
user.id, expires_delta=access_token_expires
),
token_type="bearer",
super_user=user.is_superuser,
user_name=user.name,
avatar=user.avatar
)

View File

@@ -17,7 +17,7 @@ from app.schemas import MediaType
router = APIRouter()
@router.get("/recognize", summary="识别媒体信息", response_model=schemas.Context)
@router.get("/recognize", summary="识别媒体信息(种子)", response_model=schemas.Context)
def recognize(title: str,
subtitle: str = None,
db: Session = Depends(get_db),
@@ -32,6 +32,20 @@ def recognize(title: str,
return schemas.Context()
@router.get("/recognize_file", summary="识别媒体信息(文件)", response_model=schemas.Context)
def recognize_file(path: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Recognize media information from the file path
"""
# recognize the media information
context = MediaChain(db).recognize_by_path(path)
if context:
return context.to_dict()
return schemas.Context()
@router.get("/search", summary="搜索媒体信息", response_model=List[schemas.MediaInfo])
def search_by_title(title: str,
page: int = 1,

View File

@@ -64,14 +64,13 @@ def wechat_verify(echostr: str, msg_signature: str,
@router.get("/switchs", summary="查询通知消息渠道开关", response_model=List[NotificationSwitch])
def read_switchs(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query notification channel switches
"""
return_list = []
# read from the database
switchs = SystemConfigOper(db).get(SystemConfigKey.NotificationChannels)
switchs = SystemConfigOper().get(SystemConfigKey.NotificationChannels)
if not switchs:
for noti in NotificationType:
return_list.append(NotificationSwitch(mtype=noti.value, wechat=True, telegram=True, slack=True))
@@ -83,7 +82,6 @@ def read_switchs(db: Session = Depends(get_db),
@router.post("/switchs", summary="设置通知消息渠道开关", response_model=schemas.Response)
def set_switchs(switchs: List[NotificationSwitch],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Set notification channel switches
@@ -92,6 +90,6 @@ def set_switchs(switchs: List[NotificationSwitch],
for switch in switchs:
switch_list.append(switch.dict())
# save to the database
SystemConfigOper(db).set(SystemConfigKey.NotificationChannels, switch_list)
SystemConfigOper().set(SystemConfigKey.NotificationChannels, switch_list)
return schemas.Response(success=True)

View File

@@ -22,28 +22,26 @@ def all_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed_plugins(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def installed_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query the list of plugins installed by the user
"""
return SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
return SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install_plugin(plugin_id: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Install a plugin
"""
# plugins already installed
install_plugins = SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# install the plugin
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
# save the setting
SystemConfigOper(db).set(SystemConfigKey.UserInstalledPlugins, install_plugins)
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# reload the plugin manager
PluginManager().init_config()
return schemas.Response(success=True)
@@ -93,19 +91,18 @@ def set_plugin_config(plugin_id: str, conf: dict,
@router.delete("/{plugin_id}", summary="卸载插件", response_model=schemas.Response)
def uninstall_plugin(plugin_id: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Uninstall a plugin
"""
# remove it from the installed-plugin list
install_plugins = SystemConfigOper(db).get(SystemConfigKey.UserInstalledPlugins) or []
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
for plugin in install_plugins:
if plugin == plugin_id:
install_plugins.remove(plugin)
break
# save
SystemConfigOper(db).set(SystemConfigKey.UserInstalledPlugins, install_plugins)
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# reload the plugin manager
PluginManager().init_config()
return schemas.Response(success=True)

View File

@@ -1,6 +1,6 @@
from typing import List, Any
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
@@ -42,7 +42,7 @@ def search_by_tmdbid(mediaid: str,
# recognize the Douban media info
context = DoubanChain(db).recognize_by_doubanid(doubanid)
if not context or not context.media_info or not context.media_info.tmdb_id:
raise HTTPException(status_code=404, detail="无法识别TMDB媒体信息")
return []
torrents = SearchChain(db).search_by_tmdbid(tmdbid=context.media_info.tmdb_id,
mtype=context.media_info.type,
area=area)

View File

@@ -6,8 +6,8 @@ from starlette.background import BackgroundTasks
from app import schemas
from app.chain.cookiecloud import CookieCloudChain
from app.chain.search import SearchChain
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.security import verify_token
from app.db import get_db
@@ -117,8 +117,9 @@ def cookie_cloud_sync(db: Session = Depends(get_db),
Clear all site data and re-sync the site information from CookieCloud
"""
Site.reset(db)
SystemConfigOper(db).set(SystemConfigKey.IndexerSites, [])
CookieCloudChain(db).process(manual=True)
SystemConfigOper().set(SystemConfigKey.IndexerSites, [])
SystemConfigOper().set(SystemConfigKey.RssSites, [])
CookieCloudChain().process(manual=True)
# site-deletion event for plugins
EventManager().send_event(EventType.SiteDeleted,
{
@@ -190,7 +191,7 @@ def site_icon(site_id: int,
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int, keyword: str = None,
def site_resource(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -202,7 +203,7 @@ def site_resource(site_id: int, keyword: str = None,
status_code=404,
detail=f"站点 {site_id} 不存在",
)
torrents = SearchChain(db).browse(site.domain, keyword)
torrents = TorrentsChain().browse(domain=site.domain)
if not torrents:
return []
return [torrent.to_dict() for torrent in torrents]
@@ -227,6 +228,23 @@ def read_site_by_domain(
return site
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
"""
Get the list of RSS subscription sites
"""
# selected rss site ids
rss_sites = SystemConfigOper().get(SystemConfigKey.RssSites)
# all sites
all_site = Site.list_order_by_pri(db)
if not rss_sites or not all_site:
return []
# filter to the selected rss sites
rss_sites = [site for site in all_site if site and site.id in rss_sites]
return rss_sites
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
site_id: int,

View File

@@ -9,13 +9,16 @@ from fastapi.responses import StreamingResponse
from sqlalchemy.orm import Session
from app import schemas
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.security import verify_token
from app.db import get_db
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.system import SystemUtils
from version import APP_VERSION
router = APIRouter()
@@ -60,24 +63,22 @@ def get_progress(process_type: str, token: str):
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
Query a system setting
"""
return schemas.Response(success=True, data={
"value": SystemConfigOper(db).get(key)
"value": SystemConfigOper().get(key)
})
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, str, int] = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
Update a system setting
"""
SystemConfigOper(db).set(key, value)
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
@@ -166,3 +167,45 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
if ver_json:
return schemas.Response(success=True, data=ver_json)
return schemas.Response(success=False)
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(title: str,
subtitle: str = None,
ruletype: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)):
"""
Test filter rules; rule type: 1 = subscription, 2 = best version
"""
torrent = schemas.TorrentInfo(
title=title,
description=subtitle,
)
if ruletype == "2":
rule_string = SystemConfigOper().get(SystemConfigKey.FilterRules2)
else:
rule_string = SystemConfigOper().get(SystemConfigKey.FilterRules)
if not rule_string:
return schemas.Response(success=False, message="过滤规则未设置!")
# filter
result = SearchChain(db).filter_torrents(rule_string=rule_string,
torrent_list=[torrent])
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=True, data={
"priority": 100 - result[0].pri_order + 1
})
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
"""
Restart the system
"""
if not SystemUtils.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# perform the restart
ret, msg = SystemUtils.restart()
return schemas.Response(success=ret, message=msg)
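`SystemUtils.can_restart` and `restart` are not part of this diff. Given the README's note about mapping docker.sock into the container, one plausible guard is simply checking for the socket — a sketch under that assumption, not the project's actual implementation:

```python
from pathlib import Path

DOCKER_SOCK = Path("/var/run/docker.sock")


def can_restart(sock: Path = DOCKER_SOCK) -> bool:
    """Restart is only supported when the host docker socket is mapped into the container."""
    return sock.exists()
```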

View File

@@ -5,11 +5,7 @@ from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
from app.schemas import MediaType
@@ -19,11 +15,11 @@ router = APIRouter()
@router.post("/manual", summary="手动转移", response_model=schemas.Response)
def manual_transfer(path: str,
tmdbid: int,
type_name: str,
target: str = None,
tmdbid: int = None,
type_name: str = None,
season: int = None,
transfer_type: str = settings.TRANSFER_TYPE,
transfer_type: str = None,
episode_format: str = None,
episode_detail: str = None,
episode_part: str = None,
@@ -52,17 +48,8 @@ def manual_transfer(path: str,
target = Path(target)
if not target.exists():
return schemas.Response(success=False, message=f"目标路径不存在")
# recognize metadata
meta = MetaInfo(in_path.stem)
mtype = MediaType(type_name)
# merge the data
meta.type = mtype
if season:
meta.begin_season = season
# recognize media info
mediainfo: MediaInfo = MediaChain(db).recognize_media(tmdbid=tmdbid, mtype=mtype)
if not mediainfo:
return schemas.Response(success=False, message=f"媒体信息识别失败tmdbid: {tmdbid}")
# media type
mtype = MediaType(type_name) if type_name else None
# custom episode format
epformat = None
if episode_offset or episode_part or episode_detail or episode_format:
@@ -75,15 +62,18 @@ def manual_transfer(path: str,
# start the transfer
state, errormsg = TransferChain(db).manual_transfer(
in_path=in_path,
mediainfo=mediainfo,
transfer_type=transfer_type,
target=target,
meta=meta,
tmdbid=tmdbid,
mtype=mtype,
season=season,
transfer_type=transfer_type,
epformat=epformat,
min_filesize=min_filesize
)
# failure
if not state:
if isinstance(errormsg, list):
errormsg = f"整理完成,{len(errormsg)} 个文件转移失败!"
return schemas.Response(success=False, message=errormsg)
# success
return schemas.Response(success=True)

View File

@@ -132,13 +132,10 @@ def arr_rootfolder(apikey: str) -> Any:
status_code=403,
detail="认证失败!",
)
library_path = "/"
if settings.LIBRARY_PATH:
library_path = settings.LIBRARY_PATH.split(",")[0]
return [
{
"id": 1,
"path": library_path,
"path": "/" if not settings.LIBRARY_PATHS else str(settings.LIBRARY_PATHS[0]),
"accessible": True,
"freeSpace": 0,
"unmappedFolders": []

View File

@@ -18,7 +18,7 @@ from app.core.meta import MetaBase
from app.core.module import ModuleManager
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
WebhookEventInfo, EpisodeFormat
WebhookEventInfo
from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType
from app.utils.object import ObjectUtils
@@ -197,7 +197,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_medias", meta=meta)
def search_torrents(self, site: CommentedMap,
mediainfo: Optional[MediaInfo] = None,
mediainfo: MediaInfo,
keyword: str = None,
page: int = 0,
area: str = "title") -> List[TorrentInfo]:
@@ -235,7 +235,7 @@ class ChainBase(metaclass=ABCMeta):
torrent_list=torrent_list, season_episodes=season_episodes)
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
episodes: Set[int] = None,
episodes: Set[int] = None, category: str = None
) -> Optional[Tuple[Optional[str], str]]:
"""
Select and add a download task based on the torrent file
@@ -243,10 +243,11 @@ class ChainBase(metaclass=ABCMeta):
:param download_dir: download directory
:param cookie: cookie
:param episodes: episode numbers to download
:param category: torrent category
:return: torrent hash, error message
"""
return self.run_module("download", torrent_path=torrent_path, download_dir=download_dir,
cookie=cookie, episodes=episodes, )
cookie=cookie, episodes=episodes, category=category)
def download_added(self, context: Context, torrent_path: Path, download_dir: Path) -> None:
"""
@@ -271,34 +272,27 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("list_torrents", status=status, hashs=hashs)
def transfer(self, path: Path, mediainfo: MediaInfo,
transfer_type: str,
target: Path = None,
meta: MetaBase = None,
epformat: EpisodeFormat = None,
min_filesize: int = 0) -> Optional[TransferInfo]:
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None) -> Optional[TransferInfo]:
"""
File transfer
:param path: file path
:param meta: pre-recognized metadata
:param mediainfo: recognized media info
:param transfer_type: transfer mode
:param target: transfer target path
:param meta: pre-recognized metadata, only passed for single-file transfers
:param epformat: custom episode recognition format
:param min_filesize: minimum file size
:return: {path, target_path, message}
"""
return self.run_module("transfer", path=path, mediainfo=mediainfo,
transfer_type=transfer_type, target=target, meta=meta,
epformat=epformat, min_filesize=min_filesize)
return self.run_module("transfer", path=path, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target=target)
def transfer_completed(self, hashs: Union[str, list], transinfo: TransferInfo = None) -> None:
def transfer_completed(self, hashs: Union[str, list], path: Path = None) -> None:
"""
Post-transfer processing
:param hashs: torrent hash(es)
:param transinfo: transfer info
:param path: source directory
"""
return self.run_module("transfer_completed", hashs=hashs, transinfo=transinfo)
return self.run_module("transfer_completed", hashs=hashs, path=path)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
"""
@@ -402,7 +396,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("scrape_metadata", path=path, mediainfo=mediainfo)
return None
def register_commands(self, commands: dict) -> None:
def register_commands(self, commands: Dict[str, dict]) -> None:
"""
Register menu commands
"""

View File

@@ -8,6 +8,7 @@ from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo, TorrentInfo, Context
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.mediaserver_oper import MediaServerOper
from app.helper.torrent import TorrentHelper
@@ -104,39 +105,70 @@ class DownloadChain(ChainBase):
_folder_name = ""
if not torrent_file:
# download the torrent file
torrent_file, _folder_name, _ = self.download_torrent(_torrent, userid=userid)
torrent_file, _folder_name, _file_list = self.download_torrent(_torrent, userid=userid)
if not torrent_file:
return
else:
# get the folder name and file list from the torrent file
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
# download directory
if not save_path:
if settings.DOWNLOAD_CATEGORY and _media and _media.category:
# secondary download categories enabled
if _media.type == MediaType.MOVIE:
# movie
download_dir = Path(settings.DOWNLOAD_MOVIE_PATH or settings.DOWNLOAD_PATH) / _media.category
else:
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH) / _media.category
if settings.DOWNLOAD_ANIME_PATH \
and _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# anime
download_dir = Path(settings.DOWNLOAD_ANIME_PATH)
else:
# tv series
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH) / _media.category
elif _media:
# secondary download categories disabled
if _media.type == MediaType.MOVIE:
# movie
download_dir = Path(settings.DOWNLOAD_MOVIE_PATH or settings.DOWNLOAD_PATH)
else:
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH)
if settings.DOWNLOAD_ANIME_PATH \
and _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# anime
download_dir = Path(settings.DOWNLOAD_ANIME_PATH)
else:
# tv series
download_dir = Path(settings.DOWNLOAD_TV_PATH or settings.DOWNLOAD_PATH)
else:
# unrecognized
download_dir = Path(settings.DOWNLOAD_PATH)
else:
# custom download directory
download_dir = Path(save_path)
# add the download
result: Optional[tuple] = self.download(torrent_path=torrent_file,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir)
download_dir=download_dir,
category=_media.category)
if result:
_hash, error_msg = result
else:
_hash, error_msg = None, "未知错误"
if _hash:
# downloaded file path
if _folder_name:
download_path = download_dir / _folder_name
else:
download_path = download_dir / _file_list[0] if _file_list else download_dir
# record the download history
self.downloadhis.add(
path=_folder_name or _torrent.title,
path=str(download_path),
type=_media.type.value,
title=_media.title,
year=_media.year,
@@ -152,6 +184,27 @@ class DownloadChain(ChainBase):
torrent_description=_torrent.description,
torrent_site=_torrent.site_name
)
# record the downloaded files
files_to_add = []
for file in _file_list:
if episodes:
# recognize the file's episode numbers
file_meta = MetaInfo(Path(file).stem)
if not file_meta.begin_episode \
or file_meta.begin_episode not in episodes:
continue
files_to_add.append({
"download_hash": _hash,
"downloader": settings.DOWNLOADER,
"fullpath": str(download_dir / _folder_name / file),
"savepath": str(download_dir / _folder_name),
"filepath": file,
"torrentname": _meta.org_string,
})
if files_to_add:
self.downloadhis.add_files(files_to_add)
# send a message
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel)
# post-download handling
@@ -298,18 +351,25 @@ class DownloadChain(ChainBase):
if not torrent_path:
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
if torrent_episodes \
and len(torrent_episodes) >= __get_season_episodes(need_tmdbid,
torrent_season[0]):
logger.info(f"{meta.org_string} 解析文件集数为 {torrent_episodes}")
if not torrent_episodes:
continue
# total episode count
need_total = __get_season_episodes(need_tmdbid, torrent_season[0])
if len(torrent_episodes) < need_total:
# update the episode range
begin_ep = min(torrent_episodes)
end_ep = max(torrent_episodes)
meta.set_episodes(begin=begin_ep, end=end_ep)
logger.info(
f"{meta.org_string} 解析文件集数发现不是完整合集")
continue
else:
# download
download_id = self.download_single(context=context,
torrent_file=torrent_path,
save_path=save_path,
userid=userid)
else:
logger.info(
f"{meta.org_string} 解析文件集数为 {len(torrent_episodes)},未含所需集数")
continue
else:
# download
download_id = self.download_single(context, save_path=save_path, userid=userid)
@@ -431,11 +491,13 @@ class DownloadChain(ChainBase):
continue
# all episodes in the torrent
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{torrent.site_name} - {meta.org_string} 解析文件集数:{torrent_episodes}")
# selected episodes
selected_episodes = set(torrent_episodes).intersection(set(need_episodes))
if not selected_episodes:
logger.info(f"{torrent.site_name} - {torrent.title} 没有需要的集,跳过...")
continue
logger.info(f"{torrent.site_name} - {torrent.title} 选中集数:{selected_episodes}")
# 添加下载
download_id = self.download_single(context=context,
torrent_file=torrent_path,
@@ -460,13 +522,15 @@ class DownloadChain(ChainBase):
def get_no_exists_info(self, meta: MetaBase,
mediainfo: MediaInfo,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
totals: Dict[int, int] = None
) -> Tuple[bool, Dict[int, Dict[int, NotExistMediaInfo]]]:
"""
检查媒体库,查询是否存在,对于剧集同时返回不存在的季集信息
:param meta: 元数据
:param mediainfo: 已识别的媒体信息
:param no_exists: 在调用该方法前已经存储的不存在的季集信息,有传入时该函数搜索的内容将会叠加后输出
:param totals: 电视剧每季的总集数
:return: 当前媒体是否缺失,各标题总的季集和缺失的季集
"""
@@ -499,6 +563,10 @@ class DownloadChain(ChainBase):
if not no_exists:
no_exists = {}
if not totals:
totals = {}
if mediainfo.type == MediaType.MOVIE:
# 电影
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
@@ -523,40 +591,54 @@ class DownloadChain(ChainBase):
itemid = self.mediaserver.get_item_id(mtype=mediainfo.type.value,
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season)
# 媒体库已存在的剧集
exists_tvs: Optional[ExistMediaInfo] = self.media_exists(mediainfo=mediainfo, itemid=itemid)
if not exists_tvs:
# 所有集均缺失
# 所有集均缺失
for season, episodes in mediainfo.seasons.items():
if not episodes:
continue
# 全季不存在
if meta.begin_season \
if meta.season_list \
and season not in meta.season_list:
continue
__append_no_exists(_season=season, _episodes=[], _total=len(episodes), _start=min(episodes))
# 总集数
total_ep = totals.get(season) or len(episodes)
__append_no_exists(_season=season, _episodes=[],
_total=total_ep, _start=min(episodes))
return False, no_exists
else:
# 存在一些,检查缺失的季集
# 存在一些,检查每季缺失的季集
for season, episodes in mediainfo.seasons.items():
if meta.begin_season \
and season not in meta.season_list:
continue
if not episodes:
continue
exist_seasons = exists_tvs.seasons
if exist_seasons.get(season):
# 取差
lack_episodes = list(set(episodes).difference(set(exist_seasons[season])))
# 该季总集数
season_total = totals.get(season) or len(episodes)
# 该季已存在的
exist_episodes = exists_tvs.seasons.get(season)
if exist_episodes:
# 已存在取差集
if totals.get(season):
# 按总集数计算缺失集,开始集为TMDB中的最小集
lack_episodes = list(set(range(min(episodes),
season_total + min(episodes))
).difference(set(exist_episodes)))
else:
# 按TMDB集数计算缺失集
lack_episodes = list(set(episodes).difference(set(exist_episodes)))
if not lack_episodes:
# 全部集存在
continue
# 添加不存在的季集信息
__append_no_exists(_season=season, _episodes=lack_episodes,
_total=len(episodes), _start=min(episodes))
_total=season_total, _start=min(lack_episodes))
else:
# 全季不存在
__append_no_exists(_season=season, _episodes=[],
_total=len(episodes), _start=min(episodes))
_total=season_total, _start=min(episodes))
# 存在不完整的剧集
if no_exists:
logger.debug(f"媒体库中已存在部分剧集,缺失:{no_exists}")
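The missing-episode computation above has two paths: with an explicit per-season total, the wanted set is a range anchored at the season's first TMDB episode; without one, it is the TMDB episode list itself. A standalone sketch of that rule (function name is illustrative):

```python
def lack_episodes(tmdb_episodes, exist_episodes, season_total=None):
    """Missing episodes for one season: with an explicit total, compare an
    episode range starting at the season's first TMDB episode; otherwise
    compare against the TMDB episode list directly."""
    start = min(tmdb_episodes)
    if season_total:
        wanted = set(range(start, start + season_total))
    else:
        wanted = set(tmdb_episodes)
    return sorted(wanted - set(exist_episodes))

# TMDB only lists 4 episodes so far, but the subscription says the season has 6
print(lack_episodes([1, 2, 3, 4], [1, 2], season_total=6))  # → [3, 4, 5, 6]
```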
@@ -573,7 +655,8 @@ class DownloadChain(ChainBase):
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Download,
title="没有正在下载的任务!"))
title="没有正在下载的任务!",
userid=userid))
return
# 发送消息
title = f"{len(torrents)} 个任务正在下载:"
@@ -582,7 +665,7 @@ class DownloadChain(ChainBase):
for torrent in torrents:
messages.append(f"{index}. {torrent.title} "
f"{StringUtils.str_filesize(torrent.size)} "
f"{round(torrent.progress * 100, 1)}%")
f"{round(torrent.progress, 1)}%")
index += 1
self.post_message(Notification(
channel=channel, mtype=NotificationType.Download,


@@ -1,3 +1,4 @@
from pathlib import Path
from typing import Optional, List, Tuple
from app.chain import ChainBase
@@ -31,6 +32,29 @@ class MediaChain(ChainBase):
# 返回上下文
return Context(meta_info=metainfo, media_info=mediainfo)
def recognize_by_path(self, path: str) -> Optional[Context]:
"""
根据文件路径识别媒体信息
"""
logger.info(f'开始识别媒体信息,文件:{path} ...')
file_path = Path(path)
# 上级目录元数据
dir_meta = MetaInfo(title=file_path.parent.name)
# 文件元数据,不包含后缀
file_meta = MetaInfo(title=file_path.stem)
# 合并元数据
file_meta.merge(dir_meta)
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta)
if not mediainfo:
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.info(f'{path} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 返回上下文
return Context(meta_info=file_meta, media_info=mediainfo)
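`recognize_by_path` merges metadata parsed from the parent directory with metadata from the file stem, so a season folder can supply the season while the filename supplies the episode. A toy sketch of that merge, with naive regexes standing in for `MetaInfo`:

```python
import re
from pathlib import Path

def parse_path_meta(path: str):
    """Merge directory-level and file-level hints: season from the parent
    folder (e.g. 'Season 1'), episode from the filename (e.g. 'E03').
    A simplified stand-in for MetaInfo(...).merge(...)."""
    p = Path(path)
    season = None
    m = re.search(r"[Ss]eason[ ._]?(\d{1,2})", p.parent.name)
    if m:
        season = int(m.group(1))
    episode = None
    m = re.search(r"[Ee](\d{1,3})", p.stem)
    if m:
        episode = int(m.group(1))
    return {"season": season, "episode": episode}

print(parse_path_meta("/media/Show/Season 1/Show.E03.mkv"))
# → {'season': 1, 'episode': 3}
```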
def search(self, title: str) -> Tuple[MetaBase, List[MediaInfo]]:
"""
搜索媒体信息


@@ -7,6 +7,7 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.db import SessionFactory
from app.db.mediaserver_oper import MediaServerOper
from app.log import logger
from app.schemas import MessageChannel, Notification
@@ -21,7 +22,6 @@ class MediaServerChain(ChainBase):
def __init__(self, db: Session = None):
super().__init__(db)
self.mediaserverdb = MediaServerOper(db)
def librarys(self) -> List[schemas.MediaServerLibrary]:
"""
@@ -56,11 +56,14 @@ class MediaServerChain(ChainBase):
同步媒体库所有数据到本地数据库
"""
with lock:
# 媒体服务器同步使用独立的会话
_db = SessionFactory()
_dbOper = MediaServerOper(_db)
logger.info("开始同步媒体库数据 ...")
# 汇总统计
total_count = 0
# 清空登记薄
self.mediaserverdb.empty(server=settings.MEDIASERVER)
_dbOper.empty(server=settings.MEDIASERVER)
for library in self.librarys():
logger.info(f"正在同步媒体库 {library.name} ...")
library_count = 0
@@ -83,8 +86,11 @@ class MediaServerChain(ChainBase):
item_dict = item.dict()
item_dict['seasoninfo'] = json.dumps(seasoninfo)
item_dict['item_type'] = item_type
self.mediaserverdb.add(**item_dict)
_dbOper.add(**item_dict)
logger.info(f"媒体库 {library.name} 同步完成,共同步数量:{library_count}")
# 总数累加
total_count += library_count
# 关闭数据库连接
if _db:
_db.close()
logger.info("【MediaServer】媒体库数据同步完成,同步数量:%s" % total_count)
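The sync above opens its own session from `SessionFactory` and closes it when done, instead of borrowing the chain's request-scoped session for a long-running job. A generic sketch of that lifecycle (names here are hypothetical, not the project's API):

```python
from contextlib import contextmanager

@contextmanager
def independent_session(session_factory):
    """Open a dedicated DB session for a long-running job and always close it,
    mirroring the SessionFactory usage above (a sketch; error handling elided)."""
    db = session_factory()
    try:
        yield db
    finally:
        db.close()

class FakeSession:
    """Stand-in for a SQLAlchemy session, just recording close()."""
    closed = False
    def close(self):
        self.closed = True

with independent_session(FakeSession) as db:
    pass  # long-running sync work would happen here
print(db.closed)  # → True
```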


@@ -30,7 +30,7 @@ class RssChain(ChainBase):
super().__init__(db)
self.rssoper = RssOper(self._db)
self.sites = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
self.downloadchain = DownloadChain(self._db)
self.message = MessageHelper()
@@ -42,6 +42,7 @@ class RssChain(ChainBase):
识别媒体信息并添加订阅
"""
logger.info(f'开始添加自定义订阅,标题:{title} ...')
# 识别元数据
metainfo = MetaInfo(title)
if year:
@@ -51,13 +52,16 @@ class RssChain(ChainBase):
if season:
metainfo.type = MediaType.TV
metainfo.begin_season = season
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
if not mediainfo:
logger.warn(f'{title} 未识别到媒体信息')
return None, "未识别到媒体信息"
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 总集数
if mediainfo.type == MediaType.TV:
if not season:
@@ -81,6 +85,7 @@ class RssChain(ChainBase):
kwargs.update({
'total_episode': total_episode
})
# 检查是否存在
if self.rssoper.exists(tmdbid=mediainfo.tmdb_id, season=season):
logger.warn(f'{mediainfo.title} 已存在')
@@ -96,6 +101,7 @@ class RssChain(ChainBase):
"vote": mediainfo.vote_average,
"description": mediainfo.overview,
})
# 添加订阅
sid = self.rssoper.add(title=title, year=year, season=season, **kwargs)
if not sid:
@@ -119,33 +125,41 @@ class RssChain(ChainBase):
continue
if not rss_task.url:
continue
# 下载Rss报文
items = RssHelper.parse(rss_task.url, True if rss_task.proxy else False)
if not items:
logger.error(f"RSS未下载到数据:{rss_task.url}")
logger.info(f"{rss_task.name} RSS下载到数据:{len(items)}")
# 检查站点
domain = StringUtils.get_url_domain(rss_task.url)
site_info = self.sites.get_indexer(domain) or {}
# 过滤规则
if rss_task.best_version:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
else:
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
# 处理RSS条目
matched_contexts = []
# 处理过的title
processed_data = json.loads(rss_task.note) if rss_task.note else {
"titles": [],
"season_episodes": []
}
for item in items:
if not item.get("title"):
continue
# 标题是否已处理过
if item.get("title") in processed_data.get('titles'):
logger.info(f"{item.get('title')} 已处理过")
continue
# 基本要素匹配
if rss_task.include \
and not re.search(r"%s" % rss_task.include, item.get("title")):
@@ -155,6 +169,7 @@ class RssChain(ChainBase):
and re.search(r"%s" % rss_task.exclude, item.get("title")):
logger.info(f"{item.get('title')} 包含 {rss_task.exclude}")
continue
# 识别媒体信息
meta = MetaInfo(title=item.get("title"), subtitle=item.get("description"))
if not meta.name:
@@ -167,10 +182,12 @@ class RssChain(ChainBase):
if mediainfo.tmdb_id != rss_task.tmdbid:
logger.error(f"{item.get('title')} 不匹配")
continue
# 季集是否已处理过
if meta.season_episode in processed_data.get('season_episodes'):
logger.info(f"{meta.season_episode} 已处理过")
logger.info(f"{meta.org_string} {meta.season_episode} 已处理过")
continue
# 种子
torrentinfo = TorrentInfo(
site=site_info.get("id"),
@@ -186,6 +203,7 @@ class RssChain(ChainBase):
size=item.get("size"),
pubdate=item["pubdate"].strftime("%Y-%m-%d %H:%M:%S") if item.get("pubdate") else None,
)
# 过滤种子
if rss_task.filter:
result = self.filter_torrents(
@@ -195,23 +213,24 @@ class RssChain(ChainBase):
if not result:
logger.info(f"{rss_task.name} 不匹配过滤规则")
continue
# 更新已处理数据
processed_data['titles'].append(item.get("title"))
processed_data['season_episodes'].append(meta.season_episode)
# 清除多条数据
# 清除多余数据
mediainfo.clear()
# 匹配到的数据
matched_contexts.append(Context(
meta_info=meta,
media_info=mediainfo,
torrent_info=torrentinfo
))
# 更新已处理过的title
self.rssoper.update(rssid=rss_task.id, note=json.dumps(processed_data))
# 匹配结果
if not matched_contexts:
logger.info(f"{rss_task.name} 未匹配到数据")
continue
logger.info(f"{rss_task.name} 匹配到 {len(matched_contexts)} 条数据")
# 查询本地存在情况
if not rss_task.best_version:
# 查询缺失的媒体信息
@@ -219,6 +238,15 @@ class RssChain(ChainBase):
rss_meta.year = rss_task.year
rss_meta.begin_season = rss_task.season
rss_meta.type = MediaType(rss_task.type)
# 每季总集数
totals = {}
if rss_task.season and rss_task.total_episode:
totals = {
rss_task.season: rss_task.total_episode
}
# 检查缺失
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=rss_meta,
mediainfo=MediaInfo(
@@ -227,6 +255,7 @@ class RssChain(ChainBase):
tmdb_id=rss_task.tmdbid,
season=rss_task.season
),
totals=totals
)
if exist_flag:
logger.info(f'{rss_task.name} 媒体库中已存在,完成订阅')
@@ -253,24 +282,36 @@ class RssChain(ChainBase):
}
else:
no_exists = {}
# 开始下载
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
no_exists=no_exists,
save_path=rss_task.save_path)
if downloads and not lefts:
if not rss_task.best_version:
# 非洗版结束订阅
self.rssoper.delete(rss_task.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'自定义订阅 {rss_task.name} 已完成',
image=rss_task.backdrop))
# 未完成下载
logger.info(f'{rss_task.name} 未下载完整,继续订阅 ...')
if downloads:
# 更新最后更新时间和已处理数量
self.rssoper.update(rssid=rss_task.id,
processed=(rss_task.processed or 0) + len(downloads),
last_update=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
else:
# 未完成下载
logger.info(f'{rss_task.name} 未下载完整,继续订阅 ...')
if downloads:
for download in downloads:
meta = download.meta_info
# 更新已处理数据
processed_data['titles'].append(meta.org_string)
processed_data['season_episodes'].append(meta.season_episode)
# 更新已处理过的数据
self.rssoper.update(rssid=rss_task.id, note=json.dumps(processed_data))
# 更新最后更新时间和已处理数量
self.rssoper.update(rssid=rss_task.id,
processed=(rss_task.processed or 0) + len(downloads),
last_update=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
logger.info("刷新RSS订阅数据完成")
if manual:
if len(rss_tasks) == 1:


@@ -29,7 +29,7 @@ class SearchChain(ChainBase):
super().__init__(db)
self.siteshelper = SitesHelper()
self.progress = ProgressHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
def search_by_tmdbid(self, tmdbid: int, mtype: MediaType = None, area: str = "title") -> List[Context]:
@@ -76,22 +76,6 @@ class SearchChain(ChainBase):
print(str(e))
return []
def browse(self, domain: str, keyword: str = None) -> List[TorrentInfo]:
"""
浏览站点首页内容
:param domain: 站点域名
:param keyword: 关键词,有值时为搜索
"""
if not keyword:
logger.info(f'开始浏览站点首页内容,站点:{domain} ...')
else:
logger.info(f'开始搜索资源,关键词:{keyword},站点:{domain} ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.search_torrents(site=site, keyword=keyword)
def process(self, mediainfo: MediaInfo,
keyword: str = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,


@@ -91,7 +91,8 @@ class SiteChain(ChainBase):
if not site_list:
self.post_message(Notification(
channel=channel,
title="没有维护任何站点信息!"))
title="没有维护任何站点信息!",
userid=userid))
title = f"共有 {len(site_list)} 个站点,回复对应指令操作:" \
f"\n- 禁用站点:/site_disable [id]" \
f"\n- 启用站点:/site_enable [id]" \
@@ -221,8 +222,8 @@ class SiteChain(ChainBase):
title=f"站点编号 {site_id} 不存在!", userid=userid))
return
self.post_message(Notification(
channel=channel,
title=f"开始更新【{site_info.name}】Cookie&UA ...", userid=userid))
channel=channel,
title=f"开始更新【{site_info.name}】Cookie&UA ...", userid=userid))
# 用户名
username = args[1]
# 密码


@@ -33,7 +33,7 @@ class SubscribeChain(ChainBase):
self.subscribeoper = SubscribeOper(self._db)
self.torrentschain = TorrentsChain()
self.message = MessageHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
def add(self, title: str, year: str,
mtype: MediaType = None,
@@ -185,6 +185,13 @@ class SubscribeChain(ChainBase):
subscribes = self.subscribeoper.list(state)
# 遍历订阅
for subscribe in subscribes:
# 校验当前时间减订阅创建时间是否大于1分钟,否则跳过,先留出编辑订阅的时间
if subscribe.date:
now = datetime.now()
subscribe_time = datetime.strptime(subscribe.date, '%Y-%m-%d %H:%M:%S')
if (now - subscribe_time).total_seconds() < 60:
logger.debug(f"订阅标题:{subscribe.name} 新增小于1分钟,暂不搜索 ...")
continue
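The grace-period check above skips subscriptions younger than one minute so the user still has time to edit them before the first search. Sketched standalone (the helper name is illustrative):

```python
from datetime import datetime

def within_grace_period(created: str, now: datetime, seconds: int = 60) -> bool:
    """True while a subscription is younger than the grace period, matching
    the strptime/total_seconds comparison in the hunk above."""
    created_time = datetime.strptime(created, "%Y-%m-%d %H:%M:%S")
    return (now - created_time).total_seconds() < seconds

now = datetime(2023, 9, 10, 12, 0, 30)
print(within_grace_period("2023-09-10 12:00:00", now))  # → True: only 30s old
```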
logger.info(f'开始搜索订阅,标题:{subscribe.name} ...')
# 如果状态为N则更新为R
if subscribe.state == 'N':
@@ -202,8 +209,18 @@ class SubscribeChain(ChainBase):
# 非洗版状态
if not subscribe.best_version:
# 每季总集数
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=meta, mediainfo=mediainfo)
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在,完成订阅')
self.subscribeoper.delete(subscribe.id)
@@ -358,14 +375,14 @@ class SubscribeChain(ChainBase):
def refresh(self):
"""
刷新订阅
订阅刷新
"""
# 查询所有订阅
subscribes = self.subscribeoper.list('R')
if not subscribes:
# 没有订阅不运行
return
# 刷新站点资源,从缓存中匹配订阅
# 触发刷新站点资源,从缓存中匹配订阅
self.match(
self.torrentschain.refresh()
)
@@ -394,8 +411,18 @@ class SubscribeChain(ChainBase):
continue
# 非洗版
if not subscribe.best_version:
# 每季总集数
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=meta, mediainfo=mediainfo)
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在,完成订阅')
self.subscribeoper.delete(subscribe.id)
@@ -537,6 +564,55 @@ class SubscribeChain(ChainBase):
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
def check(self):
"""
定时检查订阅,更新订阅信息
"""
# 查询所有订阅
subscribes = self.subscribeoper.list()
if not subscribes:
# 没有订阅不运行
return
# 遍历订阅
for subscribe in subscribes:
logger.info(f'开始检查订阅:{subscribe.name} ...')
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
meta.type = MediaType(subscribe.type)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type, tmdbid=subscribe.tmdbid)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid}')
continue
if not mediainfo.seasons:
continue
# 获取当前季的总集数
episodes = mediainfo.seasons.get(subscribe.season) or []
if len(episodes) > (subscribe.total_episode or 0):
total_episode = len(episodes)
lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
logger.info(f'订阅 {subscribe.name} 总集数变化,更新总集数为{total_episode},缺失集数为{lack_episode} ...')
else:
total_episode = subscribe.total_episode
lack_episode = subscribe.lack_episode
logger.info(f'订阅 {subscribe.name} 总集数未变化')
# 更新TMDB信息
self.subscribeoper.update(subscribe.id, {
"name": mediainfo.title,
"year": mediainfo.year,
"vote": mediainfo.vote_average,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"description": mediainfo.overview,
"imdbid": mediainfo.imdb_id,
"tvdbid": mediainfo.tvdb_id,
"total_episode": total_episode,
"lack_episode": lack_episode
})
logger.info(f'订阅 {subscribe.name} 更新完成')
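The total-episode refresh in `check()` boils down to: when TMDB now lists more episodes than recorded, grow both the total and the lack count by the difference. Note that the expression `len(episodes) > subscribe.total_episode or 0` parses as `(len(episodes) > subscribe.total_episode) or 0` in Python; the sketch below parenthesises the `or 0` as presumably intended:

```python
def refresh_totals(tmdb_episode_count, total_episode, lack_episode):
    """Update (total, lack) when TMDB reports more episodes than recorded.
    `tmdb_episode_count > (total_episode or 0)` needs the parentheses —
    without them, `or 0` binds to the comparison result instead."""
    if tmdb_episode_count > (total_episode or 0):
        new_total = tmdb_episode_count
        new_lack = lack_episode + (new_total - (total_episode or 0))
        return new_total, new_lack
    return total_episode, lack_episode

print(refresh_totals(12, 10, 2))  # season grew by 2 episodes → (12, 4)
```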
def __update_subscribe_note(self, subscribe: Subscribe, downloads: List[Context]):
"""
更新已下载集数到note字段
@@ -677,32 +753,39 @@ class SubscribeChain(ChainBase):
if no_exists \
and no_exists.get(tmdb_id) \
and (total_episode or start_episode):
# 该季原缺失信息
no_exist_season = no_exists.get(tmdb_id).get(begin_season)
if no_exist_season:
# 原集列表
# 原集列表
episode_list = no_exist_season.episodes
# 原总集数
total = no_exist_season.total_episode
# 原开始集数
start = no_exist_season.start_episode
# 更新剧集列表、开始集数、总集数
if not episode_list and not start_episode:
# 整季缺失且没有开始集
if not episode_list:
# 整季缺失
episodes = []
start_episode = 1
start_episode = start_episode or start
total_episode = total_episode or total
elif total_episode and start_episode:
# 有开始集和总集数
episodes = list(range(start_episode, total_episode + 1))
elif not start_episode:
# 有总集数没有开始集
episodes = list(range(min(episode_list or [1]), total_episode + 1))
start_episode = min(episode_list or [1])
elif not total_episode:
# 有开始集没有总集数
episodes = list(range(start_episode, max(episode_list or [total]) + 1))
total_episode = no_exist_season.total_episode
else:
return no_exists
# 处理集合
# 部分缺失
if not start_episode \
and not total_episode:
# 无需调整
return no_exists
if not start_episode:
# 没有自定义开始集
start_episode = start
if not total_episode:
# 没有自定义总集数
total_episode = total
# 新的集列表
new_episodes = list(range(max(start_episode, start), total_episode + 1))
# 与原集列表取交集
episodes = list(set(episode_list).intersection(set(new_episodes)))
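The adjustment above applies a user-defined start/total to a season's missing-episode list: defaults come from the stored season info, then the new range is intersected with the original missing episodes. A condensed sketch (function name is illustrative):

```python
def adjust_lack(episode_list, start, total, start_episode=None, total_episode=None):
    """Apply a custom start/total to a season's missing-episode list, as in
    the hunk above: fill defaults from the stored season info, build the new
    range, intersect with the original missing episodes."""
    if not start_episode and not total_episode:
        return episode_list  # nothing customised, no adjustment needed
    start_episode = start_episode or start
    total_episode = total_episode or total
    new_range = range(max(start_episode, start), total_episode + 1)
    return sorted(set(episode_list) & set(new_range))

# Season is missing E03-E06; the user only wants from E05 onwards
print(adjust_lack([3, 4, 5, 6], start=1, total=6, start_episode=5))  # → [5, 6]
```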
# 更新集合
no_exists[tmdb_id][begin_season] = NotExistMediaInfo(
season=begin_season,
episodes=episodes,


@@ -1,30 +1,33 @@
from typing import Dict, List, Union
from requests import Session
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.metainfo import MetaInfo
from app.db import SessionFactory
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class TorrentsChain(ChainBase):
class TorrentsChain(ChainBase, metaclass=Singleton):
"""
种子刷新处理链
站点首页种子处理链,服务于订阅、刷流等
"""
_cache_file = "__torrents_cache__"
def __init__(self, db: Session = None):
super().__init__(db)
def __init__(self):
self._db = SessionFactory()
super().__init__(self._db)
self.siteshelper = SitesHelper()
self.systemconfig = SystemConfigOper(self._db)
self.systemconfig = SystemConfigOper()
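Declaring `metaclass=Singleton` makes every construction of `TorrentsChain` return the same instance, which is what lets the torrent cache be shared process-wide across subscriptions and brushing. A common implementation of such a metaclass (the project's own `app.utils.singleton.Singleton` may differ in detail):

```python
class Singleton(type):
    """Metaclass returning one shared instance per class — a common pattern;
    MoviePilot's own implementation may differ."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class TorrentsCache(metaclass=Singleton):
    """Illustrative holder for the shared torrent cache."""
    def __init__(self):
        self.torrents = {}

a = TorrentsCache()
b = TorrentsCache()
print(a is b)  # → True: both names refer to the one shared cache
```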
def remote_refresh(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -43,25 +46,38 @@ class TorrentsChain(ChainBase):
# 读取缓存
return self.load_cache(self._cache_file) or {}
@cached(cache=TTLCache(maxsize=128, ttl=600))
def browse(self, domain: str) -> List[TorrentInfo]:
"""
浏览站点首页内容,返回种子清单,TTL缓存10分钟
:param domain: 站点域名
"""
logger.info(f'开始获取站点 {domain} 最新种子 ...')
site = self.siteshelper.get_indexer(domain)
if not site:
logger.error(f'站点 {domain} 不存在!')
return []
return self.refresh_torrents(site=site)
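The `@cached(cache=TTLCache(maxsize=128, ttl=600))` decorator memoises `browse` per domain for 10 minutes, so repeated `refresh()` runs hit each site at most once per window. A dependency-free sketch of the same idea (a minimal stand-in for cachetools, illustrative only):

```python
import time
from functools import wraps

def ttl_cache(ttl: float):
    """Minimal stand-in for cachetools' @cached(TTLCache(...)): memoise by
    positional arguments and expire entries after `ttl` seconds."""
    def deco(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[0] < ttl:
                return hit[1]  # fresh enough, serve from cache
            value = fn(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return deco

calls = []

@ttl_cache(ttl=600)
def browse(domain):
    """Pretend to fetch a site's latest torrents."""
    calls.append(domain)
    return [f"{domain}-torrent-1"]

browse("example.org")
browse("example.org")  # second call served from cache
print(len(calls))  # → 1: the site was only hit once
```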
def refresh(self) -> Dict[str, List[Context]]:
"""
刷新站点最新资源
刷新站点最新资源,识别并缓存起来
"""
# 读取缓存
torrents_cache = self.get_torrents()
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 配置的索引站点
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.IndexerSites) or []]
# 配置的Rss站点
config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.RssSites) or []]
# 遍历站点缓存资源
for indexer in indexers:
# 未开启的站点不搜索
if config_indexers and str(indexer.get("id")) not in config_indexers:
continue
logger.info(f'开始刷新 {indexer.get("name")} 最新种子 ...')
domain = StringUtils.get_url_domain(indexer.get("domain"))
torrents: List[TorrentInfo] = self.refresh_torrents(site=indexer)
torrents: List[TorrentInfo] = self.browse(domain=domain)
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条


@@ -1,12 +1,13 @@
import json
import re
import shutil
import threading
from pathlib import Path
from typing import List, Optional, Tuple, Union
from typing import List, Optional, Tuple, Union, Dict
from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.media import MediaChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
@@ -14,11 +15,14 @@ from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.db.systemconfig_oper import SystemConfigOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.format import FormatParser
from app.helper.progress import ProgressHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
@@ -35,6 +39,8 @@ class TransferChain(ChainBase):
self.downloadhis = DownloadHistoryOper(self._db)
self.transferhis = TransferHistoryOper(self._db)
self.progress = ProgressHelper()
self.mediachain = MediaChain(self._db)
self.systemconfig = SystemConfigOper()
def process(self) -> bool:
"""
@@ -51,132 +57,339 @@ class TransferChain(ChainBase):
return False
logger.info(f"获取到 {len(torrents)} 个已完成的下载任务")
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
# 总数
total_num = len(torrents)
# 已处理数量
processed_num = 0
self.progress.update(value=0,
text=f"开始转移下载任务文件,共 {total_num} 个任务 ...",
key=ProgressKey.FileTransfer)
for torrent in torrents:
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"正在转移 {torrent.title} ...",
key=ProgressKey.FileTransfer)
# 识别元数据
meta: MetaBase = MetaInfo(title=torrent.title)
if not meta.name:
logger.error(f'未识别到元数据,标题:{torrent.title}')
continue
# 查询下载记录识别情况
downloadhis: DownloadHistory = self.downloadhis.get_by_hash(torrent.hash)
if downloadhis:
# 类型
mtype = MediaType(downloadhis.type)
# 补充剧集信息
if mtype == MediaType.TV \
and ((not meta.season_list and downloadhis.seasons)
or (not meta.episode_list and downloadhis.episodes)):
meta = MetaInfo(f"{torrent.title} {downloadhis.seasons} {downloadhis.episodes}")
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid)
else:
mediainfo = self.recognize_media(meta=meta)
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 执行转移
self.do_transfer(path=torrent.path, mediainfo=mediainfo,
download_hash=torrent.hash)
# 设置下载任务状态
self.transfer_completed(hashs=torrent.hash, path=torrent.path)
# 结束
logger.info("下载器文件转移执行完成")
return True
def do_transfer(self, path: Path, meta: MetaBase = None,
mediainfo: MediaInfo = None, download_hash: str = None,
target: Path = None, transfer_type: str = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0, force: bool = False) -> Tuple[bool, str]:
"""
执行一个复杂目录的转移操作
:param path: 待转移目录或文件
:param meta: 元数据
:param mediainfo: 媒体信息
:param download_hash: 下载记录hash
:param target: 目标路径
:param transfer_type: 转移类型
:param season: 季
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param force: 是否强制转移
返回:成功标识,错误信息
"""
if not transfer_type:
transfer_type = settings.TRANSFER_TYPE
# 获取待转移路径清单
trans_paths = self.__get_trans_paths(path)
if not trans_paths:
logger.warn(f"{path.name} 没有找到可转移的媒体文件")
return False, f"{path.name} 没有找到可转移的媒体文件"
# 汇总错误信息
err_msgs: List[str] = []
# 汇总季集清单
season_episodes: Dict[Tuple, List[int]] = {}
# 汇总元数据
metas: Dict[Tuple, MetaBase] = {}
# 汇总媒体信息
medias: Dict[Tuple, MediaInfo] = {}
# 汇总转移信息
transfers: Dict[Tuple, TransferInfo] = {}
# 有集自定义格式
formaterHandler = FormatParser(eformat=epformat.format,
details=epformat.detail,
part=epformat.part,
offset=epformat.offset) if epformat else None
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
# 总数
transfer_files = SystemUtils.list_files(directory=path,
extensions=settings.RMT_MEDIAEXT,
min_filesize=min_filesize)
if formaterHandler:
# 有集自定义格式,过滤文件
transfer_files = [f for f in transfer_files if formaterHandler.match(f.name)]
# 总数
total_num = len(transfer_files)
# 已处理数量
processed_num = 0
self.progress.update(value=0,
text=f"开始转移 {path},共 {total_num} 个文件 ...",
key=ProgressKey.FileTransfer)
# 整理屏蔽词
transfer_exclude_words = self.systemconfig.get(SystemConfigKey.TransferExcludeWords)
# 处理所有待转移目录或文件,默认一个转移路径或文件只有一个媒体信息
for trans_path in trans_paths:
# 如果是目录且不是蓝光原盘,获取所有文件并转移
if (not trans_path.is_file()
and not SystemUtils.is_bluray_dir(trans_path)):
# 遍历获取下载目录所有文件
file_paths = SystemUtils.list_files(directory=trans_path,
extensions=settings.RMT_MEDIAEXT,
min_filesize=min_filesize)
else:
file_paths = [trans_path]
if formaterHandler:
# 有集自定义格式,过滤文件
file_paths = [f for f in file_paths if formaterHandler.match(f.name)]
# 转移所有文件
for file_path in file_paths:
# 回收站及隐藏的文件不处理
file_path_str = str(file_path)
if file_path_str.find('/@Recycle/') != -1 \
or file_path_str.find('/#recycle/') != -1 \
or file_path_str.find('/.') != -1 \
or file_path_str.find('/@eaDir') != -1:
logger.debug(f"{file_path_str} 是回收站或隐藏的文件")
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_path_str, re.IGNORECASE):
logger.info(f"{file_path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
err_msgs.append(f"{file_path.name} 命中整理屏蔽词")
continue
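The screening above treats each configured exclusion word as a case-insensitive regex against the full file path. Sketched standalone (function name is illustrative):

```python
import re

def is_blocked(path: str, exclude_words):
    """Return the first exclusion word matching the path (case-insensitive
    regex, as in the transfer screening above), or None when allowed."""
    for keyword in exclude_words or []:
        if keyword and re.search(keyword, path, re.IGNORECASE):
            return keyword
    return None

print(is_blocked("/downloads/Show.S01E01.SAMPLE.mkv", ["sample", "trailer"]))
# → 'sample'
```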
# 转移成功的不再处理
if not force:
transferd = self.transferhis.get_by_src(file_path_str)
if transferd and transferd.status:
logger.info(f"{file_path} 已成功转移过,如需重新处理,请删除历史记录。")
continue
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"正在转移 {processed_num + 1}/{total_num}:{file_path.name} ...",
key=ProgressKey.FileTransfer)
if not meta:
# 上级目录元数据
dir_meta = MetaInfo(title=file_path.parent.name)
# 文件元数据,不包含后缀
file_meta = MetaInfo(title=file_path.stem)
# 合并元数据
file_meta.merge(dir_meta)
else:
file_meta = meta
# 合并季
if season is not None:
file_meta.begin_season = season
if not file_meta:
logger.error(f"{file_path} 无法识别有效信息")
err_msgs.append(f"{file_path} 无法识别有效信息")
continue
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_path.stem)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
if not mediainfo:
logger.warn(f'识别媒体信息,标题:{torrent.title}')
# 识别媒体信息
file_mediainfo = self.recognize_media(meta=file_meta)
else:
file_mediainfo = mediainfo
if not file_mediainfo:
logger.warn(f'{file_path} 未识别到媒体信息')
# 新增转移失败历史记录
his = self.__insert_fail_history(
src_path=torrent.path,
download_hash=torrent.hash,
meta=meta
his = self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
meta=file_meta,
download_hash=download_hash
)
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{torrent.title} 未识别到媒体信息,无法入库!\n"
title=f"{file_path.name} 未识别到媒体信息,无法入库!\n"
f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别转移。"
))
# 设置种子状态,避免一直报错
self.transfer_completed(hashs=torrent.hash)
continue
logger.info(f"{torrent.title} 识别为:{mediainfo.type.value} {mediainfo.title_year}")
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_historys = self.transferhis.get_by(tmdbid=file_mediainfo.tmdb_id,
mtype=file_mediainfo.type.value)
if transfer_historys:
file_mediainfo.title = transfer_historys[0].title
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 电视剧没有集无法转移
if file_mediainfo.type == MediaType.TV and not file_meta.episode:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:未识别到集数")
err_msgs.append(f"{file_path.name} 未识别到集数")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo
)
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 入库失败!",
text=f"原因:未识别到集数",
image=file_mediainfo.get_message_image()
))
continue
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
self.obtain_images(mediainfo=file_mediainfo)
# 获取待转移路径清单
trans_paths = self.__get_trans_paths(torrent.path)
if not trans_paths:
logger.warn(f"{torrent.title} 对应目录没有找到媒体文件")
continue
if not download_hash:
download_file = self.downloadhis.get_file_by_fullpath(file_path_str)
if download_file:
download_hash = download_file.download_hash
# 转移所有文件
for trans_path in trans_paths:
transferinfo: TransferInfo = self.transfer(mediainfo=mediainfo,
path=trans_path,
transfer_type=settings.TRANSFER_TYPE)
if not transferinfo:
logger.error("文件转移模块运行失败")
continue
if not transferinfo.target_path:
# 转移失败
logger.warn(f"{torrent.title} 入库失败:{transferinfo.message}")
# 新增转移失败历史记录
self.__insert_fail_history(
src_path=trans_path,
download_hash=torrent.hash,
meta=meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
# 发送消息
self.post_message(Notification(
title=f"{mediainfo.title_year} {meta.season_episode} 入库失败!",
text=f"原因:{transferinfo.message or '未知'}",
image=mediainfo.get_message_image()
))
continue
# 新增转移成功历史记录
self.__insert_sucess_history(
src_path=trans_path,
download_hash=torrent.hash,
meta=meta,
mediainfo=mediainfo,
# 执行转移
transferinfo: TransferInfo = self.transfer(meta=file_meta,
mediainfo=file_mediainfo,
path=file_path,
transfer_type=transfer_type,
target=target)
if not transferinfo:
logger.error("文件转移模块运行失败")
return False, "文件转移模块运行失败"
if not transferinfo.target_path:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
err_msgs.append(f"{file_path.name} {transferinfo.message}")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 刮削元数据
self.scrape_metadata(path=transferinfo.target_path, mediainfo=mediainfo)
# 刷新媒体库
self.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 发送通知
self.send_transfer_message(meta=meta, mediainfo=mediainfo, transferinfo=transferinfo)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': meta,
'mediainfo': mediainfo,
'transferinfo': transferinfo
})
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_mediainfo.title_year} {file_meta.season_episode} 入库失败!",
text=f"原因:{transferinfo.message or '未知'}",
image=file_mediainfo.get_message_image()
))
continue
# 转移完成
self.transfer_completed(hashs=torrent.hash, transinfo=transferinfo)
# 汇总信息
mkey = (file_mediainfo.tmdb_id, file_meta.begin_season)
if mkey not in medias:
# 新增信息
metas[mkey] = file_meta
medias[mkey] = file_mediainfo
season_episodes[mkey] = file_meta.episode_list
transfers[mkey] = transferinfo
else:
# 合并季集清单
season_episodes[mkey] = list(set(season_episodes[mkey] + file_meta.episode_list))
# 合并转移数据
transfers[mkey].file_count += transferinfo.file_count
transfers[mkey].total_size += transferinfo.total_size
transfers[mkey].file_list.extend(transferinfo.file_list)
transfers[mkey].file_list_new.extend(transferinfo.file_list_new)
transfers[mkey].fail_list.extend(transferinfo.fail_list)
# 计数
processed_num += 1
# 新增转移成功历史记录
self.transferhis.add_success(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 刮削单个文件
self.scrape_metadata(path=transferinfo.target_path, mediainfo=file_mediainfo)
# 更新进度
processed_num += 1
self.progress.update(value=processed_num / total_num * 100,
text=f"{torrent.title} 转移完成",
text=f"{file_path.name} 转移完成",
key=ProgressKey.FileTransfer)
# 目录或文件转移完成
self.progress.update(value=100,
text=f"所有文件转移完成,正在执行后续处理 ...",
key=ProgressKey.FileTransfer)
# 执行后续处理
for mkey, media in medias.items():
transfer_meta = metas[mkey]
transfer_info = transfers[mkey]
# 媒体目录
if transfer_info.target_path.is_file():
transfer_info.target_path = transfer_info.target_path.parent
# 刷新媒体库,根目录或季目录
self.refresh_mediaserver(mediainfo=media, file_path=transfer_info.target_path)
# 发送通知
se_str = None
if media.type == MediaType.TV:
se_str = f"{transfer_meta.season} {StringUtils.format_ep(season_episodes[mkey])}"
self.send_transfer_message(meta=transfer_meta,
mediainfo=media,
transferinfo=transfer_info,
season_episode=se_str)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': transfer_meta,
'mediainfo': media,
'transferinfo': transfer_info
})
# 结束进度
logger.info(f"{path} 转移完成,共 {total_num} 个文件,"
f"成功 {total_num - len(err_msgs)} 个,失败 {len(err_msgs)}")
self.progress.end(ProgressKey.FileTransfer)
logger.info("下载器文件转移执行完成")
return True
return True, "\n".join(err_msgs)
@staticmethod
def __get_trans_paths(directory: Path):
@@ -201,10 +414,12 @@ class TransferChain(ChainBase):
# 先检查当前目录的下级目录,以支持合集的情况
for sub_dir in SystemUtils.list_sub_directory(directory):
# 如果是蓝光原盘
if SystemUtils.is_bluray_dir(sub_dir):
trans_paths.append(sub_dir)
# 没有媒体文件的目录跳过
if not SystemUtils.list_files(sub_dir, extensions=settings.RMT_MEDIAEXT):
continue
trans_paths.append(sub_dir)
elif SystemUtils.list_files(sub_dir, extensions=settings.RMT_MEDIAEXT):
trans_paths.append(sub_dir)
if not trans_paths:
# 没有有效子目录,直接转移当前目录
@@ -272,10 +487,6 @@ class TransferChain(ChainBase):
src_path = Path(history.src)
if not src_path.exists():
return False, f"源目录不存在:{src_path}"
# 识别元数据
meta = MetaInfo(title=src_path.stem)
if not meta.name:
return False, f"未识别到元数据,标题:{src_path.stem}"
# 查询媒体信息
mediainfo = self.recognize_media(mtype=mtype, tmdbid=tmdbid)
if not mediainfo:
@@ -285,174 +496,79 @@ class TransferChain(ChainBase):
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 转移
transferinfo: TransferInfo = self.transfer(mediainfo=mediainfo,
path=src_path,
transfer_type=settings.TRANSFER_TYPE)
if not transferinfo:
logger.error("文件转移模块运行失败")
return False, "文件转移模块运行失败"
# 删除旧的已整理文件
if history.dest:
self.delete_files(Path(history.dest))
# 删除旧历史记录
self.transferhis.delete(logid)
if not transferinfo.target_path:
# 转移失败
logger.warn(f"{src_path} 入库失败:{transferinfo.message}")
# 新增转移失败历史记录
self.__insert_fail_history(
src_path=src_path,
download_hash=history.download_hash,
meta=meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
return False, transferinfo.message
# 新增转移成功历史记录
self.__insert_sucess_history(
src_path=src_path,
download_hash=history.download_hash,
meta=meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
# 刮削元数据
self.scrape_metadata(path=transferinfo.target_path, mediainfo=mediainfo)
# 刷新媒体库
self.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 发送通知
self.send_transfer_message(meta=meta, mediainfo=mediainfo, transferinfo=transferinfo)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': meta,
'mediainfo': mediainfo,
'transferinfo': transferinfo
})
# 强制转移
state, errmsg = self.do_transfer(path=src_path,
mediainfo=mediainfo,
download_hash=history.download_hash,
force=True)
if not state:
return False, errmsg
return True, ""
def manual_transfer(self, in_path: Path,
mediainfo: MediaInfo,
transfer_type: str = settings.TRANSFER_TYPE,
target: Path = None,
meta: MetaBase = None,
tmdbid: int = None,
mtype: MediaType = None,
season: int = None,
transfer_type: str = None,
epformat: EpisodeFormat = None,
min_filesize: int = 0) -> Tuple[bool, str]:
min_filesize: int = 0) -> Tuple[bool, Union[str, list]]:
"""
手动转移
:param in_path: 源文件路径
:param mediainfo: 媒体信息
:param transfer_type: 转移类型
:param target: 目标路径
:param meta: 元数据
:param tmdbid: TMDB ID
:param mtype: 媒体类型
:param season: 季度
:param transfer_type: 转移类型
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
"""
# 开始转移
transferinfo: TransferInfo = self.transfer(
path=in_path,
mediainfo=mediainfo,
transfer_type=transfer_type,
target=target,
meta=meta,
epformat=epformat,
min_filesize=min_filesize
)
if not transferinfo:
return False, "文件转移模块运行失败"
if not transferinfo.target_path:
return False, transferinfo.message
logger.info(f"手动转移:{in_path} ...")
# 新增转移成功历史记录
self.__insert_sucess_history(
src_path=in_path,
meta=meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
# 刮削元数据
self.scrape_metadata(path=transferinfo.target_path, mediainfo=mediainfo)
# 刷新媒体库
self.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 发送通知
self.send_transfer_message(meta=meta, mediainfo=mediainfo, transferinfo=transferinfo)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': meta,
'mediainfo': mediainfo,
'transferinfo': transferinfo
})
return True, ""
def __insert_sucess_history(self, src_path: Path, meta: MetaBase,
mediainfo: MediaInfo, transferinfo: TransferInfo,
download_hash: str = None):
"""
新增转移成功历史记录
"""
self.transferhis.add(
src=str(src_path),
dest=str(transferinfo.target_path),
mode=settings.TRANSFER_TYPE,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=1,
files=json.dumps(transferinfo.file_list)
)
def __insert_fail_history(self, src_path: Path, download_hash: str, meta: MetaBase,
transferinfo: TransferInfo = None, mediainfo: MediaInfo = None):
"""
新增转移失败历史记录不能按download_hash判重
"""
if mediainfo and transferinfo:
his = self.transferhis.add(
src=str(src_path),
dest=str(transferinfo.target_path),
mode=settings.TRANSFER_TYPE,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title or meta.name,
year=mediainfo.year or meta.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=0,
errmsg=transferinfo.message or '未知错误',
files=json.dumps(transferinfo.file_list)
if tmdbid:
# 有输入TMDBID时单个识别
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, mtype=mtype)
if not mediainfo:
return False, f"媒体信息识别失败tmdbid: {tmdbid}, type: {mtype.value}"
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
self.progress.update(value=0,
text=f"开始转移 {in_path} ...",
key=ProgressKey.FileTransfer)
# 开始转移
state, errmsg = self.do_transfer(
path=in_path,
mediainfo=mediainfo,
target=target,
season=season,
epformat=epformat,
min_filesize=min_filesize
)
if not state:
return False, errmsg
self.progress.end(ProgressKey.FileTransfer)
logger.info(f"{in_path} 转移完成")
return True, ""
else:
his = self.transferhis.add(
title=meta.name,
year=meta.year,
src=str(src_path),
mode=settings.TRANSFER_TYPE,
seasons=meta.season,
episodes=meta.episode,
download_hash=download_hash,
status=0,
errmsg="未识别到媒体信息"
)
return his
# 没有输入TMDBID时按文件识别
state, errmsg = self.do_transfer(path=in_path,
target=target,
transfer_type=transfer_type,
season=season,
epformat=epformat,
min_filesize=min_filesize)
return state, errmsg
def send_transfer_message(self, meta: MetaBase, mediainfo: MediaInfo, transferinfo: TransferInfo,
season_episode: str = None):
def send_transfer_message(self, meta: MetaBase, mediainfo: MediaInfo,
transferinfo: TransferInfo, season_episode: str = None):
"""
发送入库成功的消息
"""
@@ -481,22 +597,27 @@ class TransferChain(ChainBase):
"""
logger.info(f"开始删除文件以及空目录:{path} ...")
if not path.exists():
logger.error(f"{path} 不存在")
return
elif path.is_file():
if path.is_file():
# 删除文件
path.unlink()
logger.warn(f"文件 {path} 已删除")
# 判断目录是否为空, 为空则删除
if str(path.parent.parent) != str(path.root):
# 父目录非根目录,删除父目录
files = SystemUtils.list_files(path.parent, settings.RMT_MEDIAEXT)
if not files:
shutil.rmtree(path.parent)
logger.warn(f"目录 {path.parent} 已删除")
# 需要删除父目录
elif str(path.parent) == str(path.root):
# 根目录,删除
logger.warn(f"根目录 {path} 不能删除!")
return
else:
if str(path.parent) != str(path.root):
# 父目录非根目录,才删除目录
shutil.rmtree(path)
# 删除目录
logger.warn(f"目录 {path} 已删除")
# 非根目录,才删除目录
shutil.rmtree(path)
# 删除目录
logger.warn(f"目录 {path} 已删除")
# 需要删除父目录
# 判断父目录是否为空, 为空则删除
for parent_path in path.parents:
if str(parent_path.parent) != str(path.root):
# 父目录非根目录,才删除父目录
files = SystemUtils.list_files(parent_path, settings.RMT_MEDIAEXT)
if not files:
shutil.rmtree(parent_path)
logger.warn(f"目录 {parent_path} 已删除")
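The reworked `delete_files` cleanup above walks `path.parents` and removes each parent directory that no longer holds media files, stopping before the filesystem root. A simplified standalone sketch of that loop (hypothetical names; `media_exts` stands in for `settings.RMT_MEDIAEXT`, and it breaks early once a parent still contains media, since every ancestor of that parent would too):

```python
import shutil
from pathlib import Path


def remove_empty_parents(path: Path, media_exts: set) -> list:
    """Walk up from `path` (a file that was just deleted), removing each
    parent directory that contains no media files. Stops before the root."""
    removed = []
    for parent in path.parents:
        if str(parent.parent) == str(path.root):
            # parent sits directly under the root: never delete it
            break
        has_media = any(p.is_file() and p.suffix.lower() in media_exts
                        for p in parent.rglob("*"))
        if has_media:
            # a surviving media file protects this dir and all its ancestors
            break
        shutil.rmtree(parent, ignore_errors=True)
        removed.append(parent)
    return removed
```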


@@ -13,11 +13,12 @@ from app.chain.transfer import TransferChain
from app.core.event import Event as ManagerEvent
from app.core.event import eventmanager, EventManager
from app.core.plugin import PluginManager
from app.db import SessionLocal
from app.db import SessionFactory
from app.log import logger
from app.schemas.types import EventType, MessageChannel
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
class CommandChian(ChainBase):
@@ -41,7 +42,7 @@ class Command(metaclass=Singleton):
def __init__(self):
# 数据库连接
self._db = SessionLocal()
self._db = SessionFactory()
# 事件管理器
self.eventmanager = EventManager()
# 插件管理器
@@ -53,11 +54,13 @@ class Command(metaclass=Singleton):
"/cookiecloud": {
"func": CookieCloudChain(self._db).remote_sync,
"description": "同步站点",
"category": "站点",
"data": {}
},
"/sites": {
"func": SiteChain(self._db).remote_list,
"description": "查询站点",
"category": "站点",
"data": {}
},
"/site_cookie": {
@@ -78,21 +81,25 @@ class Command(metaclass=Singleton):
"/mediaserver_sync": {
"func": MediaServerChain(self._db).remote_sync,
"description": "同步媒体服务器",
"category": "管理",
"data": {}
},
"/subscribes": {
"func": SubscribeChain(self._db).remote_list,
"description": "查询订阅",
"category": "订阅",
"data": {}
},
"/subscribe_refresh": {
"func": SubscribeChain(self._db).remote_refresh,
"description": "刷新订阅",
"category": "订阅",
"data": {}
},
"/subscribe_search": {
"func": SubscribeChain(self._db).remote_search,
"description": "搜索订阅",
"category": "订阅",
"data": {}
},
"/subscribe_delete": {
@@ -103,11 +110,13 @@ class Command(metaclass=Singleton):
"/downloading": {
"func": DownloadChain(self._db).remote_downloading,
"description": "正在下载",
"category": "管理",
"data": {}
},
"/transfer": {
"func": TransferChain(self._db).process,
"description": "下载文件整理",
"category": "管理",
"data": {}
},
"/redo": {
@@ -118,6 +127,13 @@ class Command(metaclass=Singleton):
"/clear_cache": {
"func": SystemChain(self._db).remote_clear_cache,
"description": "清理缓存",
"category": "管理",
"data": {}
},
"/restart": {
"func": SystemUtils.restart,
"description": "重启系统",
"category": "管理",
"data": {}
}
}
@@ -128,6 +144,7 @@ class Command(metaclass=Singleton):
cmd=command.get('cmd'),
func=Command.send_plugin_event,
desc=command.get('desc'),
category=command.get('category'),
data={
'etype': command.get('event'),
'data': command.get('data')
@@ -164,6 +181,8 @@ class Command(metaclass=Singleton):
"""
self._event.set()
self._thread.join()
if self._db:
self._db.close()
def get_commands(self):
"""
@@ -171,13 +190,15 @@ class Command(metaclass=Singleton):
"""
return self._commands
def register(self, cmd: str, func: Any, data: dict = None, desc: str = None) -> None:
def register(self, cmd: str, func: Any, data: dict = None,
desc: str = None, category: str = None) -> None:
"""
注册命令
"""
self._commands[cmd] = {
"func": func,
"description": desc,
"category": category,
"data": data or {}
}
@@ -242,7 +263,3 @@ class Command(metaclass=Singleton):
args = " ".join(event_str.split()[1:])
if self.get(cmd):
self.execute(cmd, args, event_channel, event_user)
def __del__(self):
if self._db:
self._db.close()
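The change above moves session cleanup from `__del__` into an explicit `stop()` hook, since `__del__` runs at an unpredictable time (if at all). A minimal stdlib sketch of the same ownership pattern, using `sqlite3` as a stand-in for the SQLAlchemy `SessionFactory`:

```python
import sqlite3


def session_factory():
    # stand-in for the SessionFactory introduced in this diff
    return sqlite3.connect(":memory:")


class Service:
    """A long-lived singleton that owns one session for its lifetime and
    releases it from an explicit stop() hook rather than __del__."""

    def __init__(self):
        self._db = session_factory()

    def stop(self):
        # deterministic cleanup point, called once on shutdown
        if self._db is not None:
            self._db.close()
            self._db = None
```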


@@ -1,5 +1,6 @@
import secrets
from pathlib import Path
from typing import List
from pydantic import BaseSettings
@@ -39,6 +40,8 @@ class Settings(BaseSettings):
SEARCH_SOURCE: str = "themoviedb"
# 刮削入库的媒体文件
SCRAP_METADATA: bool = True
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# 刮削来源
SCRAP_SOURCE: str = "themoviedb"
# TMDB图片地址
@@ -63,6 +66,8 @@ class Settings(BaseSettings):
RMT_AUDIO_TRACK_EXT: list = ['.mka']
# 索引器
INDEXER: str = "builtin"
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 用户认证站点 hhclub/audiences/hddolby/zmpt/freefarm/hdfans/wintersakura/leaves/1ptba/icc2022/iyuu
AUTH_SITE: str = ""
# 交互搜索自动下载用户ID使用,分割
@@ -107,6 +112,8 @@ class Settings(BaseSettings):
QB_USER: str = None
# Qbittorrent密码
QB_PASSWORD: str = None
# Qbittorrent分类自动管理
QB_CATEGORY: bool = False
# Transmission地址IP:PORT
TR_HOST: str = None
# Transmission用户名
@@ -121,6 +128,8 @@ class Settings(BaseSettings):
DOWNLOAD_MOVIE_PATH: str = None
# 电视剧下载保存目录,容器内映射路径需要一致
DOWNLOAD_TV_PATH: str = None
# 动漫下载保存目录,容器内映射路径需要一致
DOWNLOAD_ANIME_PATH: str = None
# 下载目录二级分类
DOWNLOAD_CATEGORY: bool = False
# 下载站点字幕
@@ -153,16 +162,22 @@ class Settings(BaseSettings):
COOKIECLOUD_PASSWORD: str = None
# CookieCloud同步间隔分钟
COOKIECLOUD_INTERVAL: int = 60 * 24
# OCR服务器地址
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录
# 媒体库目录,多个目录使用,分隔
LIBRARY_PATH: str = None
# 电影媒体库目录名,默认"电影"
LIBRARY_MOVIE_NAME: str = None
# 电视剧媒体库目录名,默认"电视剧"
LIBRARY_TV_NAME: str = None
# 动漫媒体库目录名,默认"电视剧/动漫"
LIBRARY_ANIME_NAME: str = None
# 二级分类
LIBRARY_CATEGORY: bool = True
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS = [16]
# 电影重命名格式
MOVIE_RENAME_FORMAT: str = "{{title}}{% if year %} ({{year}}){% endif %}" \
"/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}" \
@@ -237,6 +252,12 @@ class Settings(BaseSettings):
"server": self.PROXY_HOST
}
@property
def LIBRARY_PATHS(self) -> List[Path]:
if self.LIBRARY_PATH:
return [Path(path) for path in self.LIBRARY_PATH.split(",")]
return []
def __init__(self):
super().__init__()
with self.CONFIG_PATH as p:
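The new `LIBRARY_PATHS` property above derives a list of `Path` objects from the comma-separated `LIBRARY_PATH` string. A minimal sketch of that behavior outside the pydantic `BaseSettings` context (plain class, illustrative only):

```python
from pathlib import Path
from typing import List, Optional


class Settings:
    # 媒体库目录,多个目录使用,分隔
    LIBRARY_PATH: Optional[str] = None

    @property
    def LIBRARY_PATHS(self) -> List[Path]:
        if self.LIBRARY_PATH:
            return [Path(path) for path in self.LIBRARY_PATH.split(",")]
        return []
```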


@@ -148,6 +148,8 @@ class MediaInfo:
vote_average: int = 0
# 描述
overview: str = None
# 风格ID
genre_ids: list = field(default_factory=list)
# 所有别名和译名
names: list = field(default_factory=list)
# 各季的剧集清单信息
@@ -250,6 +252,15 @@ class MediaInfo:
"""
setattr(self, f"{name}_path", image)
def get_image(self, name: str):
"""
获取图片地址
"""
try:
return getattr(self, f"{name}_path")
except AttributeError:
return None
def set_category(self, cat: str):
"""
设置二级分类
@@ -338,6 +349,8 @@ class MediaInfo:
self.vote_average = round(float(info.get('vote_average')), 1) if info.get('vote_average') else 0
# 描述
self.overview = info.get('overview')
# 风格
self.genre_ids = info.get('genre_ids') or []
# 原语种
self.original_language = info.get('original_language')
if self.type == MediaType.MOVIE:
@@ -442,6 +455,8 @@ class MediaInfo:
self.poster_path = info.get("pic", {}).get("large")
if not self.poster_path and info.get("cover_url"):
self.poster_path = info.get("cover_url")
if not self.poster_path and info.get("cover"):
self.poster_path = info.get("cover").get("url")
# 简介
if not self.overview:
self.overview = info.get("intro") or info.get("card_subtitle") or ""
@@ -549,7 +564,6 @@ class MediaInfo:
dicts["type"] = self.type.value if self.type else None
dicts["detail_link"] = self.detail_link
dicts["title_year"] = self.title_year
dicts["tmdb_info"]["media_type"] = self.type.value if self.type else None
return dicts
def clear(self):


@@ -15,9 +15,9 @@ class MetaBase(object):
"""
# 是否处理的文件
isfile: bool = False
# 原标题字符串
# 原标题字符串(未经过识别词处理)
title: str = ""
# 识别用字符串
# 识别用字符串(经过识别词处理后)
org_string: Optional[str] = None
# 副标题
subtitle: Optional[str] = None
@@ -440,9 +440,21 @@ class MetaBase(object):
elif len(ep) > 1 and str(ep[0]).isdigit() and str(ep[-1]).isdigit():
self.begin_episode = int(ep[0])
self.end_episode = int(ep[-1])
self.total_episode = (self.end_episode - self.begin_episode) + 1
elif str(ep).isdigit():
self.begin_episode = int(ep)
self.end_episode = None
def set_episodes(self, begin: int, end: int):
"""
设置开始集结束集
"""
if begin:
self.begin_episode = begin
if end:
self.end_episode = end
if self.begin_episode and self.end_episode:
self.total_episode = (self.end_episode - self.begin_episode) + 1
def merge(self, meta: Self):
"""


@@ -371,6 +371,8 @@ class MetaVideo(MetaBase):
self.type = MediaType.TV
elif token.upper() == "SEASON" and self.begin_season is None:
self._last_token_type = "SEASON"
elif self.type == MediaType.TV and self.begin_season is None:
self.begin_season = 1
def __init_episode(self, token: str):
re_res = re.findall(r"%s" % self._episode_re, token, re.IGNORECASE)


@@ -3,6 +3,7 @@ from typing import List, Any, Dict, Tuple
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.module import ModuleHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
@@ -23,6 +24,7 @@ class PluginManager(metaclass=Singleton):
_config_key: str = "plugin.%s"
def __init__(self):
self.siteshelper = SitesHelper()
self.init_config()
def init_config(self):
@@ -37,6 +39,7 @@ class PluginManager(metaclass=Singleton):
"""
启动加载插件
"""
# 扫描插件目录
plugins = ModuleHelper.load(
"app.plugins",
@@ -80,8 +83,15 @@ class PluginManager(metaclass=Singleton):
"""
# 停止所有插件
for plugin in self._running_plugins.values():
# 关闭数据库
if hasattr(plugin, "close"):
plugin.close()
# 关闭插件
if hasattr(plugin, "stop_service"):
plugin.stop_service()
# 清空对像
self._plugins = {}
self._running_plugins = {}
def get_plugin_config(self, pid: str) -> dict:
"""
@@ -176,6 +186,8 @@ class PluginManager(metaclass=Singleton):
# 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
for pid, plugin in self._plugins.items():
# 运行状插件
plugin_obj = self._running_plugins.get(pid)
# 基本属性
conf = {}
# ID
@@ -186,11 +198,20 @@ class PluginManager(metaclass=Singleton):
else:
conf.update({"installed": False})
# 运行状态
if pid in self._running_plugins.keys() and hasattr(plugin, "get_state"):
plugin_obj = self._running_plugins.get(pid)
if plugin_obj and hasattr(plugin, "get_state"):
conf.update({"state": plugin_obj.get_state()})
else:
conf.update({"state": False})
# 是否有详情页面
if hasattr(plugin, "get_page"):
if ObjectUtils.check_method(plugin.get_page):
conf.update({"has_page": True})
else:
conf.update({"has_page": False})
# 权限
if hasattr(plugin, "auth_level"):
if self.siteshelper.auth_level < plugin.auth_level:
continue
# 名称
if hasattr(plugin, "plugin_name"):
conf.update({"plugin_name": plugin.plugin_name})


@@ -1,5 +1,5 @@
from sqlalchemy import create_engine, QueuePool
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.orm import sessionmaker, Session, scoped_session
from app.core.config import settings
@@ -8,11 +8,16 @@ Engine = create_engine(f"sqlite:///{settings.CONFIG_PATH}/user.db",
pool_pre_ping=True,
echo=False,
poolclass=QueuePool,
pool_size=1000,
pool_recycle=60 * 10,
max_overflow=0)
# 数据库会话
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=Engine)
pool_size=1024,
pool_recycle=600,
pool_timeout=180,
max_overflow=0,
connect_args={"timeout": 60})
# 会话工厂
SessionFactory = sessionmaker(autocommit=False, autoflush=False, bind=Engine)
# 多线程全局使用的数据库会话
ScopedSession = scoped_session(SessionFactory)
def get_db():
@@ -22,7 +27,7 @@ def get_db():
"""
db = None
try:
db = SessionLocal()
db = SessionFactory()
yield db
finally:
if db:
@@ -30,15 +35,10 @@ def get_db():
class DbOper:
_db: Session = None
def __init__(self, db: Session = None):
if db:
self._db = db
else:
self._db = SessionLocal()
def __del__(self):
if self._db:
self._db.close()
self._db = ScopedSession()
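The switch from per-`DbOper` sessions (closed in `__del__`) to `scoped_session(SessionFactory)` means every thread transparently reuses its own single session. The core of `scoped_session` is just a thread-local registry; a stdlib sketch of that idea (illustrative class, not SQLAlchemy's implementation):

```python
import threading


class ScopedRegistry:
    """Minimal model of scoped_session: one object per thread, created lazily
    on first call and returned on every later call from the same thread."""

    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def __call__(self):
        if not hasattr(self._local, "obj"):
            self._local.obj = self._factory()
        return self._local.obj
```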


@@ -1,8 +1,8 @@
from pathlib import Path
from typing import Any
from typing import List
from app.db import DbOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.downloadhistory import DownloadHistory, DownloadFiles
class DownloadHistoryOper(DbOper):
@@ -10,28 +10,71 @@ class DownloadHistoryOper(DbOper):
下载历史管理
"""
def get_by_path(self, path: Path) -> Any:
def get_by_path(self, path: Path) -> DownloadHistory:
"""
按路径查询下载记录
:param path: 数据key
"""
return DownloadHistory.get_by_path(self._db, str(path))
def get_by_hash(self, download_hash: str) -> Any:
def get_by_hash(self, download_hash: str) -> DownloadHistory:
"""
按Hash查询下载记录
:param download_hash: 数据key
"""
return DownloadHistory.get_by_hash(self._db, download_hash)
def add(self, **kwargs):
def add(self, **kwargs) -> DownloadHistory:
"""
新增下载历史
"""
downloadhistory = DownloadHistory(**kwargs)
return downloadhistory.create(self._db)
def list_by_page(self, page: int = 1, count: int = 30):
def add_files(self, file_items: List[dict]):
"""
新增下载历史文件
"""
for file_item in file_items:
downloadfile = DownloadFiles(**file_item)
downloadfile.create(self._db)
def truncate_files(self):
"""
清空下载历史文件记录
"""
DownloadFiles.truncate(self._db)
def get_files_by_hash(self, download_hash: str, state: int = None) -> List[DownloadFiles]:
"""
按Hash查询下载文件记录
:param download_hash: 数据key
:param state: 删除状态
"""
return DownloadFiles.get_by_hash(self._db, download_hash, state)
def get_file_by_fullpath(self, fullpath: str) -> DownloadFiles:
"""
按fullpath查询下载文件记录
:param fullpath: 数据key
"""
return DownloadFiles.get_by_fullpath(self._db, fullpath)
def get_files_by_savepath(self, fullpath: str) -> List[DownloadFiles]:
"""
按savepath查询下载文件记录
:param fullpath: 数据key
"""
return DownloadFiles.get_by_savepath(self._db, fullpath)
def delete_file_by_fullpath(self, fullpath: str):
"""
按fullpath删除下载文件记录
:param fullpath: 数据key
"""
DownloadFiles.delete_by_fullpath(self._db, fullpath)
def list_by_page(self, page: int = 1, count: int = 30) -> List[DownloadHistory]:
"""
分页查询下载历史
"""
@@ -44,7 +87,7 @@ class DownloadHistoryOper(DbOper):
DownloadHistory.truncate(self._db)
def get_last_by(self, mtype=None, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid=None):
season: str = None, episode: str = None, tmdbid=None) -> List[DownloadHistory]:
"""
按类型、标题、年份、季集查询下载记录
"""


@@ -6,7 +6,7 @@ from alembic.config import Config
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import Engine, SessionLocal
from app.db import Engine, SessionFactory
from app.db.models import Base
from app.db.models.user import User
from app.log import logger
@@ -22,15 +22,16 @@ def init_db():
# 全量建表
Base.metadata.create_all(bind=Engine)
# 初始化超级管理员
_db = SessionLocal()
user = User.get_by_name(db=_db, name=settings.SUPERUSER)
db = SessionFactory()
user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not user:
user = User(
name=settings.SUPERUSER,
hashed_password=get_password_hash(settings.SUPERUSER_PASSWORD),
is_superuser=True,
)
user.create(_db)
user.create(db)
db.close()
def update_db():


@@ -1,6 +1,6 @@
from typing import Any
from sqlalchemy.orm import as_declarative, declared_attr
from sqlalchemy.orm import as_declarative, declared_attr, Session
@as_declarative()
@@ -8,33 +8,41 @@ class Base:
id: Any
__name__: str
def create(self, db):
@staticmethod
def commit(db: Session):
try:
db.commit()
except Exception as err:
db.rollback()
raise err
def create(self, db: Session):
db.add(self)
db.commit()
self.commit(db)
return self
@classmethod
def get(cls, db, rid: int):
def get(cls, db: Session, rid: int):
return db.query(cls).filter(cls.id == rid).first()
def update(self, db, payload: dict):
def update(self, db: Session, payload: dict):
payload = {k: v for k, v in payload.items() if v is not None}
for key, value in payload.items():
setattr(self, key, value)
db.commit()
Base.commit(db)
@classmethod
def delete(cls, db, rid):
def delete(cls, db: Session, rid):
db.query(cls).filter(cls.id == rid).delete()
db.commit()
Base.commit(db)
@classmethod
def truncate(cls, db):
def truncate(cls, db: Session):
db.query(cls).delete()
db.commit()
Base.commit(db)
@classmethod
def list(cls, db):
def list(cls, db: Session):
return db.query(cls).all()
def to_dict(self):
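The new `Base.commit` above wraps every write in a commit-with-rollback guard, so a failed flush no longer leaves the shared session in a broken state. The same guard, sketched against a plain `sqlite3` connection (hypothetical helper name):

```python
import sqlite3


def safe_commit(conn: sqlite3.Connection):
    """Commit; on failure roll back so the connection stays usable,
    then re-raise for the caller (mirrors the diff's Base.commit)."""
    try:
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```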


@@ -52,7 +52,7 @@ class DownloadHistory(Base):
@staticmethod
def get_last_by(db: Session, mtype: str = None, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
episode: str = None, tmdbid: int = None):
"""
据tmdbid、season、season_episode查询转移记录
"""
@@ -89,3 +89,51 @@ class DownloadHistory(Base):
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
class DownloadFiles(Base):
"""
下载文件记录
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 下载任务Hash
download_hash = Column(String, index=True)
# 下载器
downloader = Column(String)
# 完整路径
fullpath = Column(String, index=True)
# 保存路径
savepath = Column(String, index=True)
# 文件相对路径/名称
filepath = Column(String)
# 种子名称
torrentname = Column(String)
# 状态 0-已删除 1-正常
state = Column(Integer, nullable=False, default=1)
@staticmethod
def get_by_hash(db: Session, download_hash: str, state: int = None):
if state:
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash,
DownloadFiles.state == state).all()
else:
return db.query(DownloadFiles).filter(DownloadFiles.download_hash == download_hash).all()
@staticmethod
def get_by_fullpath(db: Session, fullpath: str):
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath).order_by(
DownloadFiles.id.desc()).first()
@staticmethod
def get_by_savepath(db: Session, savepath: str):
return db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
@staticmethod
def delete_by_fullpath(db: Session, fullpath: str):
db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,
DownloadFiles.state == 1).update(
{
"state": 0
}
)
Base.commit(db)
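`DownloadFiles.delete_by_fullpath` above is a soft delete: rows are flagged `state=0` instead of removed, so `get_by_hash(state=1)` can still filter to live files while history stays queryable. The same pattern in raw SQL (illustrative table and function names):

```python
import sqlite3


def delete_file_by_fullpath(conn: sqlite3.Connection, fullpath: str):
    """Soft delete: flag matching live rows (state=1) as deleted (state=0)
    rather than removing them from the table."""
    conn.execute(
        "UPDATE download_files SET state = 0 WHERE fullpath = ? AND state = 1",
        (fullpath,),
    )
    conn.commit()
```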


@@ -47,7 +47,7 @@ class MediaServerItem(Base):
@staticmethod
def empty(db: Session, server: str):
db.query(MediaServerItem).filter(MediaServerItem.server == server).delete()
db.commit()
Base.commit(db)
@staticmethod
def exist_by_tmdbid(db: Session, tmdbid: int, mtype: str):


@@ -23,7 +23,8 @@ class PluginData(Base):
@staticmethod
def del_plugin_data_by_key(db: Session, plugin_id: str, key: str):
return db.query(PluginData).filter(PluginData.plugin_id == plugin_id, PluginData.key == key).delete()
db.query(PluginData).filter(PluginData.plugin_id == plugin_id, PluginData.key == key).delete()
Base.commit(db)
@staticmethod
def get_plugin_data_by_plugin_id(db: Session, plugin_id: str):


@@ -61,4 +61,4 @@ class Site(Base):
@staticmethod
def reset(db: Session):
db.query(Site).delete()
db.commit()
Base.commit(db)


@@ -49,6 +49,8 @@ class Subscribe(Base):
state = Column(String, nullable=False, index=True, default='N')
# 最后更新时间
last_update = Column(String)
# 创建时间
date = Column(String)
# 订阅用户
username = Column(String)
# 订阅站点


@@ -25,7 +25,7 @@ class TransferHistory(Base):
title = Column(String, index=True)
# 年份
year = Column(String)
tmdbid = Column(Integer)
tmdbid = Column(Integer, index=True)
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String)
@@ -85,32 +85,54 @@ class TransferHistory(Base):
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
@staticmethod
def list_by(db: Session, title: str = None, year: int = None, season: str = None,
episode: str = None, tmdbid: str = None):
def list_by(db: Session, mtype: str = None, title: str = None, year: str = None, season: str = None,
episode: str = None, tmdbid: int = None):
"""
据tmdbid、season、season_episode查询转移记录
tmdbid + mtype 或 title + year 必输
"""
if tmdbid and not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid).all()
if tmdbid and season and not episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.seasons == season).all()
if tmdbid and season and episode:
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# 电视剧所有季集|电影
if not season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
# 电视剧某季
if season and not episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
# 电视剧某季某集
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# TMDBID + 类型
if tmdbid and mtype:
if season and episode:
# 查询一集
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
elif season:
# 查询一季
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype,
TransferHistory.seasons == season).all()
else:
# 查询所有
return db.query(TransferHistory).filter(TransferHistory.tmdbid == tmdbid,
TransferHistory.type == mtype).all()
# 标题 + 年份
elif title and year:
# 电视剧某季某集
if season and episode:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season,
TransferHistory.episodes == episode).all()
# 电视剧某季
elif season:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year,
TransferHistory.seasons == season).all()
# 电视剧所有季集|电影
else:
return db.query(TransferHistory).filter(TransferHistory.title == title,
TransferHistory.year == year).all()
return []
@staticmethod
def update_download_hash(db: Session, historyid: int = None, download_hash: str = None):
db.query(TransferHistory).filter(TransferHistory.id == historyid).update(
{
"download_hash": download_hash
}
)
Base.commit(db)


@@ -2,7 +2,6 @@ import json
from typing import Any
from app.db import DbOper
from app.db.models import Base
from app.db.models.plugin import PluginData
from app.utils.object import ObjectUtils
@@ -12,7 +11,7 @@ class PluginDataOper(DbOper):
插件数据管理
"""
def save(self, plugin_id: str, key: str, value: Any) -> Base:
def save(self, plugin_id: str, key: str, value: Any) -> PluginData:
"""
保存插件数据
:param plugin_id: 插件id


@@ -19,7 +19,7 @@ class SiteOper(DbOper):
return True, "新增站点成功"
return False, "站点已存在"
def get(self, sid: int):
def get(self, sid: int) -> Site:
"""
查询单个站点
"""
@@ -31,7 +31,7 @@ class SiteOper(DbOper):
"""
return Site.list(self._db)
def list_active(self):
def list_active(self) -> List[Site]:
"""
按状态获取站点列表
"""
@@ -41,9 +41,9 @@ class SiteOper(DbOper):
"""
删除站点
"""
return Site.delete(self._db, sid)
Site.delete(self._db, sid)
def update(self, sid: int, payload: dict):
def update(self, sid: int, payload: dict) -> Site:
"""
更新站点
"""


@@ -1,3 +1,4 @@
import time
from typing import Tuple, List
from app.core.context import MediaInfo
@@ -26,13 +27,14 @@ class SubscribeOper(DbOper):
backdrop=mediainfo.get_backdrop_image(),
vote=mediainfo.vote_average,
description=mediainfo.overview,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
**kwargs)
subscribe.create(self._db)
return subscribe.id, "新增订阅成功"
else:
return subscribe.id, "订阅已存在"
def exists(self, tmdbid: int, season: int):
def exists(self, tmdbid: int, season: int) -> bool:
"""
判断是否存在
"""
@@ -61,7 +63,7 @@ class SubscribeOper(DbOper):
"""
Subscribe.delete(self._db, rid=sid)
def update(self, sid: int, payload: dict):
def update(self, sid: int, payload: dict) -> Subscribe:
"""
更新订阅
"""


@@ -1,9 +1,7 @@
import json
from typing import Any, Union
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db import DbOper, SessionFactory
from app.db.models.systemconfig import SystemConfig
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
@@ -14,11 +12,12 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
# 配置对象
__SYSTEMCONF: dict = {}
def __init__(self, db: Session = None):
def __init__(self):
"""
加载配置到内存
"""
super().__init__(db)
self._db = SessionFactory()
super().__init__(self._db)
for item in SystemConfig.list(self._db):
if ObjectUtils.is_obj(item.value):
self.__SYSTEMCONF[item.key] = json.loads(item.value)
@@ -35,18 +34,20 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
self.__SYSTEMCONF[key] = value
# 写入数据库
if ObjectUtils.is_obj(value):
if value is not None:
value = json.dumps(value)
else:
value = ''
value = json.dumps(value)
elif value is None:
value = ''
conf = SystemConfig.get_by_key(self._db, key)
if conf:
conf.update(self._db, {"value": value})
if value:
conf.update(self._db, {"value": value})
else:
conf.delete(self._db, conf.id)
else:
conf = SystemConfig(key=key, value=value)
conf.create(self._db)
def get(self, key: Union[str, SystemConfigKey] = None):
def get(self, key: Union[str, SystemConfigKey] = None) -> Any:
"""
获取系统设置
"""
@@ -55,3 +56,7 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
if not key:
return self.__SYSTEMCONF
return self.__SYSTEMCONF.get(key)
def __del__(self):
if self._db:
self._db.close()

View File

@@ -1,8 +1,13 @@
import json
import time
from typing import Any
from pathlib import Path
from typing import Any, List
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.db import DbOper
from app.db.models.transferhistory import TransferHistory
from app.schemas import TransferInfo
class TransferHistoryOper(DbOper):
@@ -10,28 +15,28 @@ class TransferHistoryOper(DbOper):
转移历史管理
"""
def get(self, historyid: int) -> Any:
def get(self, historyid: int) -> TransferHistory:
"""
获取转移历史
:param historyid: 转移历史id
"""
return TransferHistory.get(self._db, historyid)
def get_by_title(self, title: str) -> Any:
def get_by_title(self, title: str) -> List[TransferHistory]:
"""
按标题查询转移记录
:param title: 数据key
"""
return TransferHistory.list_by_title(self._db, title)
def get_by_src(self, src: str) -> Any:
def get_by_src(self, src: str) -> TransferHistory:
"""
按源查询转移记录
:param src: 数据key
"""
return TransferHistory.get_by_src(self._db, src)
def add(self, **kwargs):
def add(self, **kwargs) -> TransferHistory:
"""
新增转移历史
"""
@@ -40,18 +45,19 @@ class TransferHistoryOper(DbOper):
})
return TransferHistory(**kwargs).create(self._db)
def statistic(self, days: int = 7):
def statistic(self, days: int = 7) -> List[Any]:
"""
统计最近days天的下载历史数量
"""
return TransferHistory.statistic(self._db, days)
def get_by(self, title: str = None, year: str = None,
season: str = None, episode: str = None, tmdbid: str = None) -> Any:
def get_by(self, title: str = None, year: str = None, mtype: str = None,
season: str = None, episode: str = None, tmdbid: int = None) -> List[TransferHistory]:
"""
按类型、标题、年份、季集查询转移记录
"""
return TransferHistory.list_by(db=self._db,
mtype=mtype,
title=title,
year=year,
season=season,
@@ -70,7 +76,7 @@ class TransferHistoryOper(DbOper):
"""
TransferHistory.truncate(self._db)
def add_force(self, **kwargs):
def add_force(self, **kwargs) -> TransferHistory:
"""
新增转移历史,相同源目录的记录会被删除
"""
@@ -78,4 +84,79 @@ class TransferHistoryOper(DbOper):
transferhistory = TransferHistory.get_by_src(self._db, kwargs.get("src"))
if transferhistory:
transferhistory.delete(self._db, transferhistory.id)
kwargs.update({
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
})
return TransferHistory(**kwargs).create(self._db)
def update_download_hash(self, historyid, download_hash):
"""
补充转移记录download_hash
"""
TransferHistory.update_download_hash(self._db, historyid, download_hash)
def add_success(self, src_path: Path, mode: str, meta: MetaBase,
mediainfo: MediaInfo, transferinfo: TransferInfo,
download_hash: str = None):
"""
新增转移成功历史记录
"""
self.add_force(
src=str(src_path),
dest=str(transferinfo.target_path),
mode=mode,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=1,
files=json.dumps(transferinfo.file_list)
)
def add_fail(self, src_path: Path, mode: str, meta: MetaBase, mediainfo: MediaInfo = None,
transferinfo: TransferInfo = None, download_hash: str = None):
"""
新增转移失败历史记录
"""
if mediainfo and transferinfo:
his = self.add_force(
src=str(src_path),
dest=str(transferinfo.target_path),
mode=mode,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title or meta.name,
year=mediainfo.year or meta.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=meta.season,
episodes=meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=0,
errmsg=transferinfo.message or '未知错误',
files=json.dumps(transferinfo.file_list)
)
else:
his = self.add_force(
title=meta.name,
year=meta.year,
src=str(src_path),
mode=mode,
seasons=meta.season,
episodes=meta.episode,
download_hash=download_hash,
status=0,
errmsg="未识别到媒体信息"
)
return his


@@ -1,11 +1,12 @@
import base64
from app.core.config import settings
from app.utils.http import RequestUtils
class OcrHelper:
_ocr_b64_url = "https://movie-pilot.org/captcha/base64"
_ocr_b64_url = f"{settings.OCR_HOST}/captcha/base64"
def get_captcha_text(self, image_url=None, image_b64=None, cookie=None, ua=None):
"""

Binary file not shown.


@@ -130,21 +130,34 @@ class TorrentHelper:
"""
获取种子文件的文件夹名和文件清单
:param torrent_path: 种子文件路径
:return: 文件夹名、文件清单
:return: 文件夹名、文件清单,单文件种子返回空文件夹名
"""
if not torrent_path or not torrent_path.exists():
return "", []
try:
torrentinfo = Torrent.from_file(torrent_path)
# 获取目录名
folder_name = torrentinfo.name
# 获取文件清单
if len(torrentinfo.files) <= 1:
if (not torrentinfo.files
or (len(torrentinfo.files) == 1
and torrentinfo.files[0].name == torrentinfo.name)):
# 单文件种子目录名返回空
folder_name = ""
# 单文件种子
file_list = [torrentinfo.name]
else:
file_list = [fileinfo.name for fileinfo in torrentinfo.files]
logger.debug(f"{torrent_path.stem} -> 目录:{folder_name},文件清单:{file_list}")
# 目录名
folder_name = torrentinfo.name
# 文件清单,如果一级目录与种子名相同则去掉
file_list = []
for fileinfo in torrentinfo.files:
file_path = Path(fileinfo.name)
# 根路径
root_path = file_path.parts[0]
if root_path == folder_name:
file_list.append(str(file_path.relative_to(root_path)))
else:
file_list.append(fileinfo.name)
logger.info(f"解析种子:{torrent_path.name} => 目录:{folder_name},文件清单:{file_list}")
return folder_name, file_list
except Exception as err:
logger.error(f"种子文件解析失败:{err}")
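The single-file/multi-file rule in the hunk above can be sketched as a pure function. This is a minimal sketch: the actual torrent parsing (`Torrent.from_file` from `torrentool`) is elided, and `parse_contents` is a hypothetical name covering only the name/file-list normalization shown above.

```python
from pathlib import Path
from typing import List, Tuple


def parse_contents(name: str, files: List[str]) -> Tuple[str, List[str]]:
    """Normalize a torrent's folder name and file list.

    Single-file torrents (no file entries, or one entry equal to the
    torrent name) get an empty folder name; for multi-file torrents the
    root folder is stripped from each entry when it matches the name.
    """
    if not files or (len(files) == 1 and files[0] == name):
        # 单文件种子:目录名为空,清单只含种子名
        return "", [name]
    file_list = []
    for entry in files:
        path = Path(entry)
        root = path.parts[0]
        # 一级目录与种子名相同则去掉
        file_list.append(str(path.relative_to(root)) if root == name else entry)
    return name, file_list
```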
@@ -254,9 +267,11 @@ class TorrentHelper:
for file in files:
if not file:
continue
if Path(file).suffix not in settings.RMT_MEDIAEXT:
file_path = Path(file)
if file_path.suffix not in settings.RMT_MEDIAEXT:
continue
meta = MetaInfo(file)
# 只使用文件名识别
meta = MetaInfo(file_path.stem)
if not meta.begin_episode:
continue
episodes = list(set(episodes).union(set(meta.episode_list)))
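The episode-collection change above (identify from the file stem only, then union the episode lists) can be sketched as follows. A simple `SxxEyy` regex stands in for the real `MetaInfo` parser, and the extension set is an illustrative stand-in for `settings.RMT_MEDIAEXT`.

```python
import re
from pathlib import Path

# 假设的媒体文件扩展名,实际取自 settings.RMT_MEDIAEXT
MEDIA_EXTS = {".mkv", ".mp4", ".ts"}


def collect_episodes(files):
    """Union the episode numbers recognized from each media file,
    matching on the stem only (not the full path or extension)."""
    episodes = set()
    for file in files:
        path = Path(file)
        if path.suffix.lower() not in MEDIA_EXTS:
            continue
        # 只使用文件名识别
        match = re.search(r"S\d{1,2}E(\d{1,3})", path.stem, re.IGNORECASE)
        if match:
            episodes.add(int(match.group(1)))
    return sorted(episodes)
```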


@@ -166,22 +166,42 @@ class DoubanModule(_ModuleBase):
"""
if settings.SCRAP_SOURCE != "douban":
return None
# 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
try:
meta = MetaInfo(file.stem)
if not meta.name:
if SystemUtils.is_bluray_dir(path):
# 蓝光原盘
logger.info(f"开始刮削蓝光原盘:{path} ...")
meta = MetaInfo(path.stem)
if not meta.name:
return
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title, year=mediainfo.year, season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
return
scrape_path = path / path.name
self.scraper.gen_scraper_files(meta=meta,
mediainfo=MediaInfo(douban_info=doubaninfo),
file_path=scrape_path)
else:
# 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title, year=mediainfo.year, season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
break
# 刮削
self.scraper.gen_scraper_files(meta, MediaInfo(douban_info=doubaninfo), file)
except Exception as e:
logger.error(f"刮削文件 {file} 失败,原因:{e}")
logger.info(f"{file} 刮削完成")
logger.info(f"开始刮削媒体库文件:{file} ...")
try:
meta = MetaInfo(file.stem)
if not meta.name:
continue
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title,
year=mediainfo.year,
season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
break
# 刮削
self.scraper.gen_scraper_files(meta=meta,
mediainfo=MediaInfo(douban_info=doubaninfo),
file_path=file)
except Exception as e:
logger.error(f"刮削文件 {file} 失败,原因:{e}")
logger.info(f"{path} 刮削完成")


@@ -17,7 +17,7 @@ class DoubanScraper:
生成刮削文件
:param meta: 元数据
:param mediainfo: 媒体信息
:param file_path: 文件路径
:param file_path: 文件路径或者目录路径
"""
try:


@@ -23,6 +23,14 @@ class EmbyModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "emby"
def scheduler_job(self) -> None:
"""
        定时任务,每10分钟调用一次
"""
# 定时重连
if not self.emby.is_inactive():
self.emby = Emby()
def user_authenticate(self, name: str, password: str) -> Optional[str]:
"""
使用Emby用户辅助完成用户认证


@@ -24,8 +24,16 @@ class Emby(metaclass=Singleton):
if not self._host.startswith("http"):
self._host = "http://" + self._host
self._apikey = settings.EMBY_API_KEY
self._user = self.get_user()
self._folders = self.get_emby_folders()
self.user = self.get_user()
self.folders = self.get_emby_folders()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._apikey:
return False
return True if not self.user else False
def get_emby_folders(self) -> List[dict]:
"""
@@ -51,7 +59,7 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return []
req_url = f"{self._host}emby/Users/{self._user}/Views?api_key={self._apikey}"
req_url = f"{self._host}emby/Users/{self.user}/Views?api_key={self._apikey}"
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -318,7 +326,7 @@ class Emby(metaclass=Singleton):
if not item_id:
return {}
# 验证tmdbid是否相同
item_tmdbid = self.get_iteminfo(item_id).get("ProviderIds", {}).get("Tmdb")
item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
@@ -452,32 +460,16 @@ class Emby(metaclass=Singleton):
return None
# 查找需要刷新的媒体库ID
item_path = Path(item.target_path)
for folder in self._folders:
# 找同级路径最多的媒体库(要求容器内映射路径与实际一致)
max_comm_path = ""
match_num = 0
match_id = None
for folder in self.folders:
# 匹配子目录
for subfolder in folder.get("SubFolders"):
try:
# 查询最大公共路径
# 匹配子目录
subfolder_path = Path(subfolder.get("Path"))
item_path_parents = list(item_path.parents)
subfolder_path_parents = list(subfolder_path.parents)
common_path = next(p1 for p1, p2 in zip(reversed(item_path_parents),
reversed(subfolder_path_parents)
) if p1 == p2)
if len(common_path) > len(max_comm_path):
max_comm_path = common_path
match_id = subfolder.get("Id")
match_num += 1
except StopIteration:
continue
if item_path.is_relative_to(subfolder_path):
return subfolder.get("Id")
except Exception as err:
print(str(err))
# 检查匹配情况
if match_id:
return match_id if match_num == 1 else folder.get("Id")
# 如果找不到,只要路径中有分类目录名就命中
for subfolder in folder.get("SubFolders"):
if subfolder.get("Path") and re.search(r"[/\\]%s" % item.category,
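The rewritten matching above replaces the longest-common-path search with a simple containment test. The core check can be sketched as below; the `folders` structure is assumed from the fields used above, and `Path.is_relative_to` requires Python 3.9+.

```python
from pathlib import Path


def match_library_id(item_path: Path, folders) -> str:
    """Return the Id of the first media-library subfolder whose path
    contains item_path, mirroring the is_relative_to() check above."""
    for folder in folders:
        for subfolder in folder.get("SubFolders") or []:
            sub = subfolder.get("Path")
            # 匹配子目录:路径包含即命中
            if sub and item_path.is_relative_to(Path(sub)):
                return subfolder.get("Id")
    return ""
```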
@@ -494,7 +486,7 @@ class Emby(metaclass=Singleton):
return {}
if not self._host or not self._apikey:
return {}
req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self._user, itemid, self._apikey)
req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self.user, itemid, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -511,7 +503,7 @@ class Emby(metaclass=Singleton):
yield {}
if not self._host or not self._apikey:
yield {}
req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self._user, parent, self._apikey)
req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -778,6 +770,7 @@ class Emby(metaclass=Singleton):
}
"""
message = json.loads(message_str)
        logger.info(f"接收到emby webhook:{message}")
eventItem = WebhookEventInfo(event=message.get('Event', ''), channel="emby")
if message.get('Item'):
if message.get('Item', {}).get('Type') == 'Episode':
@@ -806,9 +799,9 @@ class Emby(metaclass=Singleton):
eventItem.item_type = "MOV"
eventItem.item_name = "%s %s" % (
message.get('Item', {}).get('Name'), "(" + str(message.get('Item', {}).get('ProductionYear')) + ")")
eventItem.item_path = message.get('Item', {}).get('Path')
eventItem.item_id = message.get('Item', {}).get('Id')
eventItem.item_path = message.get('Item', {}).get('Path')
eventItem.tmdb_id = message.get('Item', {}).get('ProviderIds', {}).get('Tmdb')
if message.get('Item', {}).get('Overview') and len(message.get('Item', {}).get('Overview')) > 100:
eventItem.overview = str(message.get('Item', {}).get('Overview'))[:100] + "..."
@@ -849,9 +842,9 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host)\
.replace("{APIKEY}", self._apikey)\
.replace("{USER}", self._user)
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
try:
return RequestUtils().get_res(url=url)
except Exception as e:


@@ -11,6 +11,299 @@ from app.schemas.types import MediaType
class FanartModule(_ModuleBase):
"""
{
"name": "The Wheel of Time",
"thetvdb_id": "355730",
"tvposter": [
{
"id": "174068",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64b009de9548d.jpg",
"lang": "en",
"likes": "3"
},
{
"id": "176424",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64de44fe42073.jpg",
"lang": "00",
"likes": "3"
},
{
"id": "176407",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64dde63c7c941.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "177321",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-64eda10599c3d.jpg",
"lang": "cz",
"likes": "0"
},
{
"id": "155050",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-6313adbd1fd58.jpg",
"lang": "pl",
"likes": "0"
},
{
"id": "140198",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-61a0d7b11952e.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "140034",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvposter/the-wheel-of-time-619e65b73871d.jpg",
"lang": "en",
"likes": "0"
}
],
"hdtvlogo": [
{
"id": "139835",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6197d9392faba.png",
"lang": "en",
"likes": "3"
},
{
"id": "140039",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-619e87941a128.png",
"lang": "pt",
"likes": "3"
},
{
"id": "140092",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-619fa2347bada.png",
"lang": "en",
"likes": "3"
},
{
"id": "164312",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-63c8185cb8824.png",
"lang": "hu",
"likes": "1"
},
{
"id": "139827",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6197539658a9e.png",
"lang": "en",
"likes": "1"
},
{
"id": "177214",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-64ebae44c23a6.png",
"lang": "cz",
"likes": "0"
},
{
"id": "177215",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-64ebae472deef.png",
"lang": "cz",
"likes": "0"
},
{
"id": "156163",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-63316bef1ff9d.png",
"lang": "cz",
"likes": "0"
},
{
"id": "155051",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-6313add04ca92.png",
"lang": "pl",
"likes": "0"
},
{
"id": "152668",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-62ced3775a40a.png",
"lang": "pl",
"likes": "0"
},
{
"id": "142266",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdtvlogo/the-wheel-of-time-61ccd93eeac2b.png",
"lang": "de",
"likes": "0"
}
],
"hdclearart": [
{
"id": "164313",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-63c81871c982c.png",
"lang": "en",
"likes": "3"
},
{
"id": "140284",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61a2128ed1df2.png",
"lang": "pt",
"likes": "3"
},
{
"id": "139828",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61975401e894c.png",
"lang": "en",
"likes": "1"
},
{
"id": "164314",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-63c8188488a5f.png",
"lang": "hu",
"likes": "1"
},
{
"id": "177322",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-64eda135933b6.png",
"lang": "cz",
"likes": "0"
},
{
"id": "142267",
"url": "http://assets.fanart.tv/fanart/tv/355730/hdclearart/the-wheel-of-time-61ccda9918c5c.png",
"lang": "de",
"likes": "0"
}
],
"seasonposter": [
{
"id": "140199",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-61a0d7c2976de.jpg",
"lang": "en",
"likes": "1",
"season": "1"
},
{
"id": "176395",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-64dd80b3d79a9.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "140035",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonposter/the-wheel-of-time-619e65c4d5357.jpg",
"lang": "en",
"likes": "0",
"season": "1"
}
],
"tvthumb": [
{
"id": "140242",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-61a1813035506.jpg",
"lang": "en",
"likes": "1"
},
{
"id": "177323",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-64eda15b6dce6.jpg",
"lang": "cz",
"likes": "0"
},
{
"id": "176399",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-64dd85c9b618c.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "152669",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-62ced53d16574.jpg",
"lang": "pl",
"likes": "0"
},
{
"id": "141983",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvthumb/the-wheel-of-time-61c6d04a6d701.jpg",
"lang": "en",
"likes": "0"
}
],
"showbackground": [
{
"id": "177324",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-64eda1833ccb1.jpg",
"lang": "",
"likes": "0",
"season": "all"
},
{
"id": "141986",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-61c6d08f7c7e2.jpg",
"lang": "",
"likes": "0",
"season": "all"
},
{
"id": "139868",
"url": "http://assets.fanart.tv/fanart/tv/355730/showbackground/the-wheel-of-time-6198ce358b98a.jpg",
"lang": "",
"likes": "0",
"season": "all"
}
],
"seasonthumb": [
{
"id": "176396",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonthumb/the-wheel-of-time-64dd80c8593f9.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "176400",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonthumb/the-wheel-of-time-64dd85da7c5e9.jpg",
"lang": "en",
"likes": "0",
"season": "0"
}
],
"tvbanner": [
{
"id": "176397",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-64dd80da9a255.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "176401",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-64dd85e8904ea.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "141988",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-61c6d34bceb5f.jpg",
"lang": "en",
"likes": "0"
},
{
"id": "141984",
"url": "http://assets.fanart.tv/fanart/tv/355730/tvbanner/the-wheel-of-time-61c6d06c1c21c.jpg",
"lang": "en",
"likes": "0"
}
],
"seasonbanner": [
{
"id": "176398",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonbanner/the-wheel-of-time-64dd80e7dbd9f.jpg",
"lang": "en",
"likes": "0",
"season": "1"
},
{
"id": "176402",
"url": "http://assets.fanart.tv/fanart/tv/355730/seasonbanner/the-wheel-of-time-64dd85fb4f1b1.jpg",
"lang": "en",
"likes": "0",
"season": "0"
}
]
}
"""
# 代理
_proxies: dict = settings.PROXY
@@ -40,6 +333,7 @@ class FanartModule(_ModuleBase):
if not result or result.get('status') == 'error':
logger.warn(f"没有获取到 {mediainfo.title_year} 的Fanart图片数据")
return
# 获取所有图片
for name, images in result.items():
if not images:
continue
@@ -47,7 +341,17 @@ class FanartModule(_ModuleBase):
continue
# 按欢迎程度倒排
images.sort(key=lambda x: int(x.get('likes', 0)), reverse=True)
mediainfo.set_image(self.__name(name), images[0].get('url'))
# 取第一张图片
image_obj = images[0]
# 图片属性xx_path
image_name = self.__name(name)
image_season = image_obj.get('season')
# 设置图片
if image_name.startswith("season") and image_season:
# 季图片格式 seasonxx-poster
image_name = f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"
if not mediainfo.get_image(image_name):
mediainfo.set_image(image_name, image_obj.get('url'))
return mediainfo
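The season-image naming rule added above can be sketched on its own. This assumes the name returned by the (elided) `__name` mapping keeps its `season` prefix, as the code implies; `season_image_key` is a hypothetical helper name.

```python
def season_image_key(name: str, season) -> str:
    """Map an image category plus season number to the attribute key
    used above, e.g. ("seasonposter", "1") -> "season01-poster"."""
    if name.startswith("season") and season:
        # 季图片格式 seasonxx-poster,季号补齐两位
        return f"season{str(season).rjust(2, '0')}-{name[6:]}"
    return name
```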


@@ -1,20 +1,19 @@
import re
from pathlib import Path
from threading import Lock
from typing import Optional, List, Tuple, Union
from typing import Optional, List, Tuple, Union, Dict
from jinja2 import Template
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.filetransfer.format_parser import FormatParser
from app.schemas import TransferInfo, EpisodeFormat
from app.utils.system import SystemUtils
from app.schemas import TransferInfo, ExistMediaInfo
from app.schemas.types import MediaType
from app.utils.system import SystemUtils
lock = Lock()
@@ -30,20 +29,15 @@ class FileTransferModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def transfer(self, path: Path, mediainfo: MediaInfo,
transfer_type: str, target: Path = None,
meta: MetaBase = None,
epformat: EpisodeFormat = None,
min_filesize: int = 0) -> TransferInfo:
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None) -> TransferInfo:
"""
文件转移
:param path: 文件路径
:param meta: 预识别的元数据,仅单文件转移时传递
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移方式
:param target: 目标路径
:param meta: 预识别的元数据,仅单文件转移时传递
:param epformat: 集识别格式
:param min_filesize: 最小文件大小(MB)
:return: {path, target_path, message}
"""
# 获取目标路径
@@ -54,12 +48,10 @@ class FileTransferModule(_ModuleBase):
return TransferInfo(message="未找到媒体库目录,无法转移文件")
# 转移
return self.transfer_media(in_path=path,
in_meta=meta,
mediainfo=mediainfo,
transfer_type=transfer_type,
target_dir=target,
in_meta=meta,
epformat=epformat,
min_filesize=min_filesize)
target_dir=target)
@staticmethod
def __transfer_command(file_item: Path, target_file: Path, transfer_type: str) -> int:
@@ -244,9 +236,9 @@ class FileTransferModule(_ModuleBase):
logger.error(f"音轨文件 {file_name} {transfer_type}失败:{reason}")
return 0
def __transfer_bluray_dir(self, file_path: Path, new_path: Path, transfer_type: str) -> int:
def __transfer_dir(self, file_path: Path, new_path: Path, transfer_type: str) -> int:
"""
转移蓝光文件夹
转移整个文件夹
:param file_path: 原路径
:param new_path: 新路径
:param transfer_type: RmtMode转移方式
@@ -265,13 +257,16 @@ class FileTransferModule(_ModuleBase):
def __transfer_dir_files(self, src_dir: Path, target_dir: Path, transfer_type: str) -> int:
"""
按目录结构转移所有文件
按目录结构转移目录下所有文件
:param src_dir: 原路径
:param target_dir: 新路径
:param transfer_type: RmtMode转移方式
"""
retcode = 0
for file in src_dir.glob("**/*"):
# 过滤掉目录
if file.is_dir():
continue
# 使用target_dir的父目录作为新的父目录
new_file = target_dir.joinpath(file.relative_to(src_dir))
if new_file.exists():
@@ -288,7 +283,7 @@ class FileTransferModule(_ModuleBase):
return retcode
def __transfer_file(self, file_item: Path, new_file: Path, transfer_type: str,
over_flag: bool = False, old_file: Path = None) -> int:
over_flag: bool = False) -> int:
"""
转移一个文件,同时处理其他相关文件
:param file_item: 原文件路径
@@ -296,12 +291,13 @@ class FileTransferModule(_ModuleBase):
:param transfer_type: RmtMode转移方式
        :param over_flag: 是否覆盖,为True时会先删除再转移
"""
if not over_flag and new_file.exists():
logger.warn(f"文件已存在:{new_file}")
return 0
if over_flag and old_file and old_file.exists():
logger.info(f"正在删除已存在的文件:{old_file}")
old_file.unlink()
if new_file.exists():
if not over_flag:
logger.warn(f"文件已存在:{new_file}")
return 0
else:
logger.info(f"正在删除已存在的文件:{new_file}")
new_file.unlink()
        logger.info(f"正在转移文件:{file_item} 到 {new_file}")
# 创建父目录
new_file.parent.mkdir(parents=True, exist_ok=True)
@@ -319,24 +315,48 @@ class FileTransferModule(_ModuleBase):
transfer_type=transfer_type,
over_flag=over_flag)
@staticmethod
def __get_library_dir(mediainfo: MediaInfo, target_dir: Path) -> Path:
"""
        根据设置拼装媒体库目录
"""
if mediainfo.type == MediaType.MOVIE:
# 电影
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
# 电视剧
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# 电视剧
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
return target_dir
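The routing performed by `__get_library_dir` above can be sketched without the settings object. The top-folder names, the `"movie"`/`"tv"` type labels, and the genre-id set are illustrative assumptions standing in for `settings.LIBRARY_MOVIE_NAME`, `settings.LIBRARY_TV_NAME`, `settings.LIBRARY_ANIME_NAME`, `settings.ANIME_GENREIDS`, and the `MediaType` enum.

```python
from pathlib import Path

# 假设的媒体库顶层目录配置
LIBRARY_MOVIE_NAME = "Movies"
LIBRARY_TV_NAME = "TV"
LIBRARY_ANIME_NAME = "Anime"
ANIME_GENREIDS = {16}


def get_library_dir(target_dir: Path, mtype: str, category: str,
                    genre_ids=()) -> Path:
    """Assemble the library path as __get_library_dir does: movies and
    TV go under their configured top folders, TV with an anime genre
    goes under the anime folder, plus the second-level category."""
    if mtype == "movie":
        return target_dir / (LIBRARY_MOVIE_NAME or mtype) / category
    if LIBRARY_ANIME_NAME and set(genre_ids) & ANIME_GENREIDS:
        # 动漫
        return target_dir / LIBRARY_ANIME_NAME / category
    return target_dir / (LIBRARY_TV_NAME or mtype) / category
```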
def transfer_media(self,
in_path: Path,
in_meta: MetaBase,
mediainfo: MediaInfo,
transfer_type: str,
target_dir: Path = None,
in_meta: MetaBase = None,
epformat: EpisodeFormat = None,
min_filesize: int = 0
target_dir: Path,
) -> TransferInfo:
"""
识别并转移一个文件、多个文件或者目录
识别并转移一个文件或者一个目录下的所有文件
:param in_path: 转移的路径,可能是一个文件也可以是一个目录
        :param in_meta: 预识别的元数据
:param mediainfo: 媒体信息
:param target_dir: 目的文件夹,非空的转移到该文件夹,为空时则按类型转移到配置文件中的媒体库文件夹
:param transfer_type: 文件转移方式
        :param in_meta: 预识别的元数据,为空则重新识别
:param epformat: 识别的剧集格式
        :param min_filesize: 最小文件大小(MB),小于该值的文件不转移
:return: TransferInfo、错误信息
"""
# 检查目录路径
@@ -346,178 +366,91 @@ class FileTransferModule(_ModuleBase):
if not target_dir.exists():
return TransferInfo(message=f"{target_dir} 目标路径不存在")
if mediainfo.type == MediaType.MOVIE:
if settings.LIBRARY_MOVIE_NAME:
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
if mediainfo.type == MediaType.TV:
if settings.LIBRARY_TV_NAME:
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
# 媒体库目录
target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 总大小
total_filesize = 0
# 处理文件清单
file_list = []
# 目标文件清单
file_list_new = []
# 失败文件清单
fail_list = []
# 错误信息
err_msgs = []
# 判断是否为蓝光原盘
bluray_flag = SystemUtils.is_bluray_dir(in_path)
if bluray_flag:
# 识别目录名称,不包括后缀
meta = MetaInfo(in_path.stem)
# 判断是否为文件夹
if in_path.is_dir():
# 转移整个目录
# 是否蓝光原盘
bluray_flag = SystemUtils.is_bluray_dir(in_path)
if bluray_flag:
logger.info(f"{in_path} 是蓝光原盘文件夹")
# 目的路径
new_path = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=meta,
rename_dict=self.__get_naming_dict(meta=in_meta,
mediainfo=mediainfo)
).parent
# 转移蓝光原盘
retcode = self.__transfer_bluray_dir(file_path=in_path,
new_path=new_path,
transfer_type=transfer_type)
retcode = self.__transfer_dir(file_path=in_path,
new_path=new_path,
transfer_type=transfer_type)
if retcode != 0:
return TransferInfo(message=f"{retcode},蓝光原盘转移失败")
else:
# 计算大小
total_filesize += in_path.stat().st_size
# 返回转移后的路径
return TransferInfo(path=in_path,
target_path=new_path,
total_size=total_filesize,
is_bluray=bluray_flag,
file_list=[],
file_list_new=[])
else:
# 获取文件清单
transfer_files: List[Path] = SystemUtils.list_files(
directory=in_path,
extensions=settings.RMT_MEDIAEXT,
min_filesize=min_filesize
)
if len(transfer_files) == 0:
return TransferInfo(message=f"{in_path} 目录下没有找到可转移的文件")
# 有集自定义格式
formaterHandler = FormatParser(eformat=epformat.format,
details=epformat.detail,
part=epformat.part,
offset=epformat.offset) if epformat else None
# 过滤出符合自定义剧集格式的文件
if formaterHandler:
transfer_files = [x for x in transfer_files if formaterHandler.match(x.name)]
if len(transfer_files) == 0:
return TransferInfo(message=f"{in_path} 目录下没有找到符合自定义剧集格式的文件")
if not in_meta:
# 识别目录名称,不包括后缀
meta = MetaInfo(in_path.stem)
else:
meta = in_meta
# 目的路径
new_path = target_dir / (self.get_rename_path(
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=meta,
mediainfo=mediainfo)).parents[-2].name)
# 转移所有文件
for transfer_file in transfer_files:
try:
if not in_meta:
# 识别文件元数据,不包含后缀
file_meta = MetaInfo(transfer_file.stem)
# 合并元数据
file_meta.merge(meta)
else:
file_meta = in_meta
# 文件结束季为空
file_meta.end_season = None
# 文件总季数为1
if file_meta.total_season:
file_meta.total_season = 1
# 文件不可能有多集
if file_meta.total_episode > 2:
file_meta.total_episode = 1
file_meta.end_episode = None
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(transfer_file.stem)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 目的文件名
new_file = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=file_meta,
mediainfo=mediainfo,
file_ext=transfer_file.suffix)
)
# 判断是否要覆盖
overflag = False
if new_file.exists():
if new_file.stat().st_size < transfer_file.stat().st_size:
logger.info(f"目标文件已存在,但文件大小更小,将覆盖:{new_file}")
overflag = True
# 转移文件
retcode = self.__transfer_file(file_item=transfer_file,
new_file=new_file,
transfer_type=transfer_type,
over_flag=overflag)
if retcode != 0:
logger.error(f"{transfer_file} 转移文件失败,错误码:{retcode}")
err_msgs.append(f"{transfer_file.name}:错误码 {retcode}")
fail_list.append(transfer_file)
continue
# 源文件清单
file_list.append(str(transfer_file))
# 目的文件清单
file_list_new.append(str(new_file))
# 计算总大小
total_filesize += new_file.stat().st_size
except Exception as err:
err_msgs.append(f"{transfer_file.name}{err}")
logger.error(f"{transfer_file}转移失败:{err}")
fail_list.append(transfer_file)
if not file_list:
# 没有成功的
return TransferInfo(message="\n".join(err_msgs))
logger.error(f"文件夹 {in_path} 转移失败,错误码:{retcode}")
return TransferInfo(message=f"文件夹 {in_path} 转移失败,错误码:{retcode}")
logger.info(f"文件夹 {in_path} 转移成功")
# 返回转移后的路径
return TransferInfo(path=in_path,
target_path=new_path,
message="\n".join(err_msgs),
file_count=len(file_list),
total_size=total_filesize,
fail_list=fail_list,
is_bluray=bluray_flag,
file_list=file_list,
file_list_new=file_list_new)
total_size=new_path.stat().st_size,
is_bluray=bluray_flag)
else:
# 转移单个文件
# 文件结束季为空
in_meta.end_season = None
# 文件总季数为1
if in_meta.total_season:
in_meta.total_season = 1
# 文件不可能有多集
if in_meta.total_episode > 2:
in_meta.total_episode = 1
in_meta.end_episode = None
# 目的文件名
new_file = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(
meta=in_meta,
mediainfo=mediainfo,
file_ext=in_path.suffix
)
)
# 判断是否要覆盖
overflag = False
if new_file.exists():
if new_file.stat().st_size < in_path.stat().st_size:
logger.info(f"目标文件已存在,但文件大小更小,将覆盖:{new_file}")
overflag = True
# 转移文件
retcode = self.__transfer_file(file_item=in_path,
new_file=new_file,
transfer_type=transfer_type,
over_flag=overflag)
if retcode != 0:
logger.error(f"文件 {in_path} 转移失败,错误码:{retcode}")
return TransferInfo(message=f"文件 {in_path.name} 转移失败,错误码:{retcode}",
fail_list=[str(in_path)])
logger.info(f"文件 {in_path} 转移成功")
return TransferInfo(path=in_path,
target_path=new_file,
file_count=1,
total_size=new_file.stat().st_size,
is_bluray=False,
file_list=[str(in_path)],
file_list_new=[str(new_file)])
@staticmethod
def __get_naming_dict(meta: MetaBase, mediainfo: MediaInfo, file_ext: str = None) -> dict:
@@ -587,13 +520,13 @@ class FileTransferModule(_ModuleBase):
计算一个最好的目的目录:有in_path时,找与in_path同路径的;没有in_path时,顺序查找1个符合大小要求的;没有in_path和size时,返回第1个
:param in_path: 源目录
"""
if not settings.LIBRARY_PATH:
if not settings.LIBRARY_PATHS:
return None
# 目的路径,多路径以,分隔
dest_paths = str(settings.LIBRARY_PATH).split(",")
dest_paths = settings.LIBRARY_PATHS
# 只有一个路径,直接返回
if len(dest_paths) == 1:
return Path(dest_paths[0])
return dest_paths[0]
# 匹配有最长共同上级路径的目录
max_length = 0
target_path = None
@@ -605,14 +538,81 @@ class FileTransferModule(_ModuleBase):
max_length = len(relative)
target_path = path
except Exception as e:
logger.debug(f"计算目标路径时出错:{e}")
continue
if target_path:
return Path(target_path)
return target_path
# 顺序匹配第1个满足空间存储要求的目录
if in_path.exists():
file_size = in_path.stat().st_size
for path in dest_paths:
if SystemUtils.free_space(Path(path)) > file_size:
return Path(path)
if SystemUtils.free_space(path) > file_size:
return path
# 默认返回第1个
return Path(dest_paths[0])
return dest_paths[0]
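The three-step destination selection above can be sketched as a standalone function. This is one plausible reading of the matching rule (deepest library path that is an ancestor of `in_path` wins); `free_space` stands in for `SystemUtils.free_space` and is injectable here for testing:

```python
import os
from pathlib import Path
from typing import Callable, List, Optional


def pick_dest_path(dest_paths: List[Path], in_path: Optional[Path] = None,
                   free_space: Callable[[Path], int] = lambda p: 0) -> Optional[Path]:
    """Mirror the selection order: longest ancestor match with in_path,
    then first path with enough free space, then the first configured path."""
    if not dest_paths:
        return None
    if len(dest_paths) == 1:
        return dest_paths[0]
    # Step 1: pick the deepest library path that contains in_path
    max_length = 0
    target_path = None
    if in_path:
        for path in dest_paths:
            try:
                relative = os.path.relpath(in_path, path)
                if not relative.startswith("..") and len(str(path)) > max_length:
                    max_length = len(str(path))
                    target_path = path
            except ValueError:
                # e.g. different drives on Windows
                continue
        if target_path:
            return target_path
    # Step 2: first path with enough free space for the source file
    if in_path and in_path.exists():
        file_size = in_path.stat().st_size
        for path in dest_paths:
            if free_space(path) > file_size:
                return path
    # Step 3: fall back to the first configured path
    return dest_paths[0]
```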
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在于本地文件系统
:param mediainfo: 识别的媒体信息
:param itemid: 媒体服务器ItemID
:return: 如不存在返回None,存在时返回信息,包括每季已存在的所有集:{type: movie/tv, seasons: {season: [episodes]}}
"""
if not settings.LIBRARY_PATHS:
return None
# 目的路径
dest_paths = settings.LIBRARY_PATHS
# 检查每一个媒体库目录
for dest_path in dest_paths:
# 媒体库路径
target_dir = self.get_target_path(dest_path)
if not target_dir:
continue
# 媒体分类路径
target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 媒体完整路径
media_path = self.get_rename_path(
path=target_dir,
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
mediainfo=mediainfo)
)
if mediainfo.type == MediaType.MOVIE:
# 电影取父目录
media_path = media_path.parent
else:
# 电视剧取上两级目录
media_path = media_path.parent.parent
if not media_path.exists():
continue
# 检索媒体文件
media_files = SystemUtils.list_files(directory=media_path, extensions=settings.RMT_MEDIAEXT)
if not media_files:
continue
if mediainfo.type == MediaType.MOVIE:
# 电影存在任何文件为存在
logger.info(f"文件系统已存在:{mediainfo.title_year}")
return ExistMediaInfo(type=MediaType.MOVIE)
else:
# 电视剧检索集数
seasons: Dict[int, list] = {}
for media_file in media_files:
file_meta = MetaInfo(media_file.stem)
season_index = file_meta.begin_season or 1
episode_index = file_meta.begin_episode
if not episode_index:
continue
if season_index not in seasons:
seasons[season_index] = []
seasons[season_index].append(episode_index)
# 返回剧集情况
logger.info(f"{mediainfo.title_year} 文件系统已存在:{seasons}")
return ExistMediaInfo(type=MediaType.TV, seasons=seasons)
# 不存在
return None
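The episode bookkeeping in the TV branch above can be exercised on its own with a minimal filename parser standing in for the project's `MetaInfo` (the `SxxExx` regex here is an illustrative assumption, not the real parser):

```python
import re
from typing import Dict, List


def collect_seasons(filenames: List[str]) -> Dict[int, List[int]]:
    """Group episode numbers by season (defaulting to season 1),
    skipping names with no recognizable episode number."""
    pattern = re.compile(r"S(\d{1,2})E(\d{1,3})", re.IGNORECASE)
    seasons: Dict[int, List[int]] = {}
    for name in filenames:
        match = pattern.search(name)
        if not match:
            # Mirrors the `if not episode_index: continue` guard
            continue
        season_index = int(match.group(1)) or 1
        episode_index = int(match.group(2))
        seasons.setdefault(season_index, []).append(episode_index)
    return seasons
```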


@@ -53,6 +53,10 @@ class IndexerModule(_ModuleBase):
logger.warn(f"{site.get('name')} 不支持中文搜索")
return []
# 去除搜索关键字中的特殊字符
if search_word:
search_word = StringUtils.clear(search_word, replace_word=" ", allow_space=True)
# 开始索引
result_array = []
# 开始计时


@@ -262,7 +262,12 @@ class TorrentSpider:
# 解码为字符串
page_source = raw_data.decode(encoding)
except Exception as e:
logger.error(f"chardet解码失败:{e}")
logger.debug(f"chardet解码失败:{e}")
# 探测utf-8解码
if re.search(r"charset=\"?utf-8\"?", ret.text, re.IGNORECASE):
ret.encoding = "utf-8"
else:
ret.encoding = ret.apparent_encoding
page_source = ret.text
else:
page_source = ret.text
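The new decoding fallback boils down to one decision: trust an explicit utf-8 charset declaration in the page itself, otherwise use the detected encoding. A minimal sketch of that rule, decoupled from the spider and the `requests` response object:

```python
import re


def choose_encoding(page_text: str, apparent_encoding: str) -> str:
    """Prefer an explicit utf-8 charset declaration in the page,
    otherwise fall back to the detected (apparent) encoding."""
    if re.search(r"charset=\"?utf-8\"?", page_text, re.IGNORECASE):
        return "utf-8"
    return apparent_encoding
```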


@@ -17,12 +17,20 @@ class JellyfinModule(_ModuleBase):
def init_module(self) -> None:
self.jellyfin = Jellyfin()
def stop(self):
pass
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "jellyfin"
def scheduler_job(self) -> None:
"""
定时任务,每10分钟调用一次
"""
# 定时重连
if self.jellyfin.is_inactive():
self.jellyfin = Jellyfin()
def stop(self):
pass
def user_authenticate(self, name: str, password: str) -> Optional[str]:
"""
使用Emby用户辅助完成用户认证


@@ -22,8 +22,16 @@ class Jellyfin(metaclass=Singleton):
if not self._host.startswith("http"):
self._host = "http://" + self._host
self._apikey = settings.JELLYFIN_API_KEY
self._user = self.get_user()
self._serverid = self.get_server_id()
self.user = self.get_user()
self.serverid = self.get_server_id()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._apikey:
return False
return True if not self.user else False
def __get_jellyfin_librarys(self) -> List[dict]:
"""
@@ -31,7 +39,7 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return []
req_url = f"{self._host}Users/{self._user}/Views?api_key={self._apikey}"
req_url = f"{self._host}Users/{self.user}/Views?api_key={self._apikey}"
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -222,10 +230,10 @@ class Jellyfin(metaclass=Singleton):
"""
根据名称查询Jellyfin中剧集的SeriesId
"""
if not self._host or not self._apikey or not self._user:
if not self._host or not self._apikey or not self.user:
return None
req_url = "%sUsers/%s/Items?api_key=%s&searchTerm=%s&IncludeItemTypes=Series&Limit=10&Recursive=true" % (
self._host, self._user, self._apikey, name)
self._host, self.user, self._apikey, name)
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -247,10 +255,10 @@ class Jellyfin(metaclass=Singleton):
:param year: 年份,为空则不过滤
:return: 含title、year属性的字典列表
"""
if not self._host or not self._apikey or not self._user:
if not self._host or not self._apikey or not self.user:
return None
req_url = "%sUsers/%s/Items?api_key=%s&searchTerm=%s&IncludeItemTypes=Movie&Limit=10&Recursive=true" % (
self._host, self._user, self._apikey, title)
self._host, self.user, self._apikey, title)
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -283,7 +291,7 @@ class Jellyfin(metaclass=Singleton):
:param season: 季
:return: 集号的列表
"""
if not self._host or not self._apikey or not self._user:
if not self._host or not self._apikey or not self.user:
return None
# 查TVID
if not item_id:
@@ -293,7 +301,7 @@ class Jellyfin(metaclass=Singleton):
if not item_id:
return {}
# 验证tmdbid是否相同
item_tmdbid = self.get_iteminfo(item_id).get("ProviderIds", {}).get("Tmdb")
item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
@@ -301,7 +309,7 @@ class Jellyfin(metaclass=Singleton):
season = ""
try:
req_url = "%sShows/%s/Episodes?season=%s&&userId=%s&isMissing=false&api_key=%s" % (
self._host, item_id, season, self._user, self._apikey)
self._host, item_id, season, self.user, self._apikey)
res_json = RequestUtils().get_res(req_url)
if res_json:
res_items = res_json.json().get("Items")
@@ -370,24 +378,100 @@ class Jellyfin(metaclass=Singleton):
def get_webhook_message(self, message: dict) -> WebhookEventInfo:
"""
解析Jellyfin报文
{
"ServerId": "d79d3a6261614419a114595a585xxxxx",
"ServerName": "nyanmisaka-jellyfin1",
"ServerVersion": "10.8.10",
"ServerUrl": "http://xxxxxxxx:8098",
"NotificationType": "PlaybackStart",
"Timestamp": "2023-09-10T08:35:25.3996506+00:00",
"UtcTimestamp": "2023-09-10T08:35:25.3996527Z",
"Name": "慕灼华逃婚离开",
"Overview": "慕灼华假装在读书,她害怕大娘子说她不务正业。",
"Tagline": "",
"ItemId": "4b92551344f53b560fb55cd6700xxxxx",
"ItemType": "Episode",
"RunTimeTicks": 27074985984,
"RunTime": "00:45:07",
"Year": 2023,
"SeriesName": "灼灼风流",
"SeasonNumber": 1,
"SeasonNumber00": "01",
"SeasonNumber000": "001",
"EpisodeNumber": 1,
"EpisodeNumber00": "01",
"EpisodeNumber000": "001",
"Provider_tmdb": "229210",
"Video_0_Title": "4K HEVC SDR",
"Video_0_Type": "Video",
"Video_0_Codec": "hevc",
"Video_0_Profile": "Main",
"Video_0_Level": 150,
"Video_0_Height": 2160,
"Video_0_Width": 3840,
"Video_0_AspectRatio": "16:9",
"Video_0_Interlaced": false,
"Video_0_FrameRate": 25,
"Video_0_VideoRange": "SDR",
"Video_0_ColorSpace": "bt709",
"Video_0_ColorTransfer": "bt709",
"Video_0_ColorPrimaries": "bt709",
"Video_0_PixelFormat": "yuv420p",
"Video_0_RefFrames": 1,
"Audio_0_Title": "AAC - Stereo - Default",
"Audio_0_Type": "Audio",
"Audio_0_Language": "und",
"Audio_0_Codec": "aac",
"Audio_0_Channels": 2,
"Audio_0_Bitrate": 125360,
"Audio_0_SampleRate": 48000,
"Audio_0_Default": true,
"PlaybackPositionTicks": 1000000,
"PlaybackPosition": "00:00:00",
"MediaSourceId": "4b92551344f53b560fb55cd6700ebc86",
"IsPaused": false,
"IsAutomated": false,
"DeviceId": "TW96aWxsxxxxxjA",
"DeviceName": "Edge Chromium",
"ClientName": "Jellyfin Web",
"NotificationUsername": "Jeaven",
"UserId": "9783d2432b0d40a8a716b6aa46xxxxx"
}
"""
logger.info(f"接收到jellyfin webhook:{message}")
eventItem = WebhookEventInfo(
event=message.get('NotificationType', ''),
item_id=message.get('ItemId'),
item_name=message.get('Name'),
item_type=message.get('ItemType'),
item_favorite=message.get('Favorite'),
save_reason=message.get('SaveReason'),
tmdb_id=message.get('Provider_tmdb'),
user_name=message.get('NotificationUsername'),
channel="jellyfin"
)
eventItem.item_id = message.get('ItemId')
eventItem.tmdb_id = message.get('Provider_tmdb')
eventItem.overview = message.get('Overview')
eventItem.device_name = message.get('DeviceName')
eventItem.user_name = message.get('NotificationUsername')
eventItem.client = message.get('ClientName')
if message.get("ItemType") == "Episode":
# 剧集
eventItem.item_type = "TV"
eventItem.season_id = message.get('SeasonNumber')
eventItem.episode_id = message.get('EpisodeNumber')
eventItem.item_name = "%s %s%s %s" % (
message.get('SeriesName'),
"S" + str(eventItem.season_id),
"E" + str(eventItem.episode_id),
message.get('Name'))
else:
# 电影
eventItem.item_type = "MOV"
eventItem.item_name = "%s %s" % (
message.get('Name'), "(" + str(message.get('Year')) + ")")
# 获取消息图片
if eventItem.item_id:
# 根据返回的item_id去调用媒体服务器获取
eventItem.image_url = self.get_remote_image_by_id(item_id=eventItem.item_id,
image_type="Backdrop")
eventItem.image_url = self.get_remote_image_by_id(
item_id=eventItem.item_id,
image_type="Backdrop"
)
return eventItem
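The Episode/Movie naming rule in the handler above can be exercised in isolation; the payload keys follow the Jellyfin webhook sample in the docstring:

```python
def format_item_name(message: dict) -> str:
    """Build the display title the webhook handler produces:
    'Series SxEy Episode' for episodes, 'Name (Year)' for movies."""
    if message.get("ItemType") == "Episode":
        return "%s %s%s %s" % (
            message.get("SeriesName"),
            "S" + str(message.get("SeasonNumber")),
            "E" + str(message.get("EpisodeNumber")),
            message.get("Name"))
    return "%s (%s)" % (message.get("Name"), message.get("Year"))
```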
@@ -400,7 +484,7 @@ class Jellyfin(metaclass=Singleton):
if not self._host or not self._apikey:
return {}
req_url = "%sUsers/%s/Items/%s?api_key=%s" % (
self._host, self._user, itemid, self._apikey)
self._host, self.user, itemid, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -417,7 +501,7 @@ class Jellyfin(metaclass=Singleton):
yield {}
if not self._host or not self._apikey:
yield {}
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self._user, parent, self._apikey)
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
@@ -452,9 +536,9 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host)\
.replace("{APIKEY}", self._apikey)\
.replace("{USER}", self._user)
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
try:
return RequestUtils().get_res(url=url)
except Exception as e:


@@ -23,6 +23,14 @@ class PlexModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "plex"
def scheduler_job(self) -> None:
"""
定时任务,每10分钟调用一次
"""
# 定时重连
if self.plex.is_inactive():
self.plex = Plex()
def webhook_parser(self, body: Any, form: Any, args: Any) -> WebhookEventInfo:
"""
解析Webhook报文体


@@ -30,6 +30,14 @@ class Plex(metaclass=Singleton):
self._plex = None
logger.error(f"Plex服务器连接失败:{str(e)}")
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._token:
return False
return True if not self._plex else False
def get_librarys(self):
"""
获取媒体服务器所有媒体库列表
@@ -329,9 +337,104 @@ class Plex(metaclass=Singleton):
item_name TV:琅琊榜 S1E6 剖心明志 虎口脱险
MOV:猪猪侠大冒险(2001)
overview 剧情描述
{
"event": "media.scrobble",
"user": false,
"owner": true,
"Account": {
"id": 31646104,
"thumb": "https://plex.tv/users/xx",
"title": "播放"
},
"Server": {
"title": "Media-Server",
"uuid": "xxxx"
},
"Player": {
"local": false,
"publicAddress": "xx.xx.xx.xx",
"title": "MagicBook",
"uuid": "wu0uoa1ujfq90t0c5p9f7fw0"
},
"Metadata": {
"librarySectionType": "show",
"ratingKey": "40294",
"key": "/library/metadata/40294",
"parentRatingKey": "40291",
"grandparentRatingKey": "40275",
"guid": "plex://episode/615580a9fa828e7f1a0caabd",
"parentGuid": "plex://season/615580a9fa828e7f1a0caab8",
"grandparentGuid": "plex://show/60e81fd8d8000e002d7d2976",
"type": "episode",
"title": "The World's Strongest Senior",
"titleSort": "World's Strongest Senior",
"grandparentKey": "/library/metadata/40275",
"parentKey": "/library/metadata/40291",
"librarySectionTitle": "动漫剧集",
"librarySectionID": 7,
"librarySectionKey": "/library/sections/7",
"grandparentTitle": "范马刃牙",
"parentTitle": "Combat Shadow Fighting Saga / Great Prison Battle Saga",
"originalTitle": "Baki Hanma",
"contentRating": "TV-MA",
"summary": "The world is shaken by news of a man taking down a monstrous elephant with his bare hands. Back in Japan, Baki is confronted by a knife-wielding child.",
"index": 1,
"parentIndex": 1,
"audienceRating": 8.5,
"viewCount": 1,
"lastViewedAt": 1694320444,
"year": 2021,
"thumb": "/library/metadata/40294/thumb/1693544504",
"art": "/library/metadata/40275/art/1693952979",
"parentThumb": "/library/metadata/40291/thumb/1691115271",
"grandparentThumb": "/library/metadata/40275/thumb/1693952979",
"grandparentArt": "/library/metadata/40275/art/1693952979",
"duration": 1500000,
"originallyAvailableAt": "2021-09-30",
"addedAt": 1691115281,
"updatedAt": 1693544504,
"audienceRatingImage": "themoviedb://image.rating",
"Guid": [
{
"id": "imdb://tt14765720"
},
{
"id": "tmdb://3087250"
},
{
"id": "tvdb://8530933"
}
],
"Rating": [
{
"image": "themoviedb://image.rating",
"value": 8.5,
"type": "audience"
}
],
"Director": [
{
"id": 115144,
"filter": "director=115144",
"tag": "Keiya Saito",
"tagKey": "5f401c8d04a86500409ea6c1"
}
],
"Writer": [
{
"id": 115135,
"filter": "writer=115135",
"tag": "Tatsuhiko Urahata",
"tagKey": "5d7768e07a53e9001e6db1ce",
"thumb": "https://metadata-static.plex.tv/f/people/f6f90dc89fa87d459f85d40a09720c05.jpg"
}
]
}
}
"""
message = json.loads(message_str)
eventItem = WebhookEventInfo(event=message.get('Event', ''), channel="plex")
logger.info(f"接收到plex webhook:{message}")
eventItem = WebhookEventInfo(event=message.get('event', ''), channel="plex")
if message.get('Metadata'):
if message.get('Metadata', {}).get('type') == 'episode':
eventItem.item_type = "TV"


@@ -10,7 +10,7 @@ from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.qbittorrent.qbittorrent import Qbittorrent
from app.schemas import TransferInfo, TransferTorrent, DownloadingTorrent
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
@@ -28,14 +28,23 @@ class QbittorrentModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "DOWNLOADER", "qbittorrent"
def scheduler_job(self) -> None:
"""
定时任务,每10分钟调用一次
"""
# 定时重连
if self.qbittorrent.is_inactive():
self.qbittorrent = Qbittorrent()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
episodes: Set[int] = None) -> Optional[Tuple[Optional[str], str]]:
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类
:return: 种子Hash错误信息
"""
if not torrent_path or not torrent_path.exists():
@@ -53,7 +62,8 @@ class QbittorrentModule(_ModuleBase):
download_dir=str(download_dir),
is_paused=is_paused,
tag=tags,
cookie=cookie)
cookie=cookie,
category=category)
if not state:
return None, f"添加种子任务失败:{torrent_path}"
else:
@@ -156,11 +166,11 @@ class QbittorrentModule(_ModuleBase):
return ret_torrents
def transfer_completed(self, hashs: Union[str, list],
transinfo: TransferInfo = None) -> None:
path: Path = None) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param transinfo: 转移信息
:param path: 源目录
"""
self.qbittorrent.set_torrents_tag(ids=hashs, tags=['已整理'])
# 移动模式删除种子
@@ -168,11 +178,11 @@ class QbittorrentModule(_ModuleBase):
if self.remove_torrents(hashs):
logger.info(f"移动模式删除种子成功:{hashs} ")
# 删除残留文件
if transinfo and transinfo.path and transinfo.path.exists():
files = SystemUtils.list_files(transinfo.path, settings.RMT_MEDIAEXT)
if path and path.exists():
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
if not files:
logger.warn(f"删除残留文件夹:{transinfo.path}")
shutil.rmtree(transinfo.path, ignore_errors=True)
logger.warn(f"删除残留文件夹:{path}")
shutil.rmtree(path, ignore_errors=True)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
"""


@@ -27,6 +27,14 @@ class Qbittorrent(metaclass=Singleton):
if self._host and self._port:
self.qbc = self.__login_qbittorrent()
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._port:
return False
return True if not self.qbc else False
def __login_qbittorrent(self) -> Optional[Client]:
"""
连接qbittorrent
@@ -177,13 +185,16 @@ class Qbittorrent(metaclass=Singleton):
is_paused: bool = False,
download_dir: str = None,
tag: Union[str, list] = None,
cookie=None
category: str = None,
cookie=None,
**kwargs
) -> bool:
"""
添加种子
:param content: 种子urls或文件内容
:param is_paused: 添加后暂停
:param tag: 标签
:param category: 种子分类
:param download_dir: 下载路径
:param cookie: 站点Cookie用于辅助下载种子
:return: bool
@@ -191,6 +202,7 @@ class Qbittorrent(metaclass=Singleton):
if not self.qbc or not content:
return False
# 下载内容
if isinstance(content, str):
urls = content
torrent_files = None
@@ -198,20 +210,26 @@ class Qbittorrent(metaclass=Singleton):
urls = None
torrent_files = content
# 保存目录
if download_dir:
save_path = download_dir
is_auto = False
else:
save_path = None
is_auto = None
# 标签
if tag:
tags = tag
else:
tags = None
try:
# 分类自动管理
if category and settings.QB_CATEGORY:
is_auto = True
else:
is_auto = False
category = None
try:
# 添加下载
qbc_ret = self.qbc.torrents_add(urls=urls,
torrent_files=torrent_files,
@@ -219,7 +237,10 @@ class Qbittorrent(metaclass=Singleton):
is_paused=is_paused,
tags=tags,
use_auto_torrent_management=is_auto,
cookie=cookie)
is_sequential_download=True,
cookie=cookie,
category=category,
**kwargs)
return True if qbc_ret and str(qbc_ret).find("Ok") != -1 else False
except Exception as err:
logger.error(f"添加种子出错:{err}")
@@ -320,6 +341,7 @@ class Qbittorrent(metaclass=Singleton):
try:
self.qbc.transfer.upload_limit = int(upload_limit)
self.qbc.transfer.download_limit = int(download_limit)
return True
except Exception as err:
logger.error(f"设置速度限制出错:{err}")
return False
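The new category handling in `add_torrent` comes down to one decision: qBittorrent's automatic torrent management is enabled only when a category is passed *and* the category switch is on. A minimal sketch of that rule (`qb_category_enabled` stands in for `settings.QB_CATEGORY`):

```python
from typing import Optional, Tuple


def resolve_category(category: Optional[str],
                     qb_category_enabled: bool) -> Tuple[bool, Optional[str]]:
    """Return (use_auto_torrent_management, category) as passed to torrents_add."""
    if category and qb_category_enabled:
        return True, category
    # Otherwise a fixed save path is used and no category is set
    return False, None
```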


@@ -146,7 +146,7 @@ class Slack:
# 发送
result = self._client.chat_postMessage(
channel=channel,
text=message_text,
text=message_text[:1000],
blocks=blocks,
mrkdwn=True
)


@@ -50,21 +50,16 @@ class SubtitleModule(_ModuleBase):
logger.info("开始从站点下载字幕:%s" % torrent.page_url)
# 获取种子信息
folder_name, _ = TorrentHelper.get_torrent_info(torrent_path)
# 下载目录,也可能是文件名
download_dir = download_dir / (folder_name or "")
# 等待文件或者目录存在
# 文件保存目录,如果是单文件种子则folder_name是空,此时文件保存目录就是下载目录
download_dir = download_dir / folder_name
# 等待目录存在
for _ in range(30):
if download_dir.exists():
break
time.sleep(1)
# 目录仍然不存在,且是目录则创建目录
if not download_dir.exists() \
and download_dir.suffix not in settings.RMT_MEDIAEXT:
# 目录仍然不存在,且有文件夹名,则创建目录
if not download_dir.exists() and folder_name:
download_dir.mkdir(parents=True, exist_ok=True)
# 不是目录说明是单文件种子,直接使用下载目录
if download_dir.is_file() \
or download_dir.suffix in settings.RMT_MEDIAEXT:
download_dir = download_dir.parent
# 读取网站代码
request = RequestUtils(cookies=torrent.site_cookie, ua=torrent.site_ua)
res = request.get_res(torrent.page_url)


@@ -1,5 +1,5 @@
import json
from typing import Optional, Union, List, Tuple, Any
from typing import Optional, Union, List, Tuple, Any, Dict
from app.core.context import MediaInfo, Context
from app.core.config import settings
@@ -120,7 +120,7 @@ class TelegramModule(_ModuleBase):
"""
return self.telegram.send_torrents_msg(title=message.title, torrents=torrents, userid=message.userid)
def register_commands(self, commands: dict):
def register_commands(self, commands: Dict[str, dict]):
"""
注册命令,实现这个函数接收系统可用的命令菜单
:param commands: 命令字典


@@ -2,7 +2,7 @@ import re
import threading
from pathlib import Path
from threading import Event
from typing import Optional, List
from typing import Optional, List, Dict
import telebot
from telebot import apihelper
@@ -198,7 +198,7 @@ class Telegram(metaclass=Singleton):
return True if ret else False
def register_commands(self, commands: dict):
def register_commands(self, commands: Dict[str, dict]):
"""
注册菜单命令
"""


@@ -132,6 +132,12 @@ class TheMovieDbModule(_ModuleBase):
else:
logger.info(f"{tmdbid} 识别结果:{mediainfo.type.value} "
f"{mediainfo.title_year}")
# 补充剧集年份
if mediainfo.type == MediaType.TV:
episode_years = self.tmdb.get_tv_episode_years(info.get("id"))
if episode_years:
mediainfo.season_years = episode_years
return mediainfo
else:
logger.info(f"{meta.name if meta else tmdbid} 未匹配到媒体信息")
@@ -187,14 +193,27 @@ class TheMovieDbModule(_ModuleBase):
"""
if settings.SCRAP_SOURCE != "themoviedb":
return None
# 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
logger.info(f"开始刮削媒体库文件:{file} ...")
if SystemUtils.is_bluray_dir(path):
# 蓝光原盘
logger.info(f"开始刮削蓝光原盘:{path} ...")
scrape_path = path / path.name
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=file)
logger.info(f"{file} 刮削完成")
file_path=scrape_path)
elif path.is_file():
# 单个文件
logger.info(f"开始刮削媒体库文件:{path} ...")
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=path)
else:
# 目录下的所有文件
logger.info(f"开始刮削目录:{path} ...")
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
if not file:
continue
self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=file)
logger.info(f"{path} 刮削完成")
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str, with_original_language: str,
page: int = 1) -> Optional[List[dict]]:


@@ -2,17 +2,19 @@ import time
from pathlib import Path
from xml.dom import minidom
from requests import RequestException
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.log import logger
from app.schemas.types import MediaType
from app.utils.common import retry
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
class TmdbScraper:
tmdb = None
def __init__(self, tmdb):
@@ -20,9 +22,9 @@ class TmdbScraper:
def gen_scraper_files(self, mediainfo: MediaInfo, file_path: Path):
"""
生成刮削文件
生成刮削文件,包括NFO和图片,传入路径为文件路径
:param mediainfo: 媒体信息
:param file_path: 文件路径
:param file_path: 文件路径或者目录路径
"""
def __get_episode_detail(_seasoninfo: dict, _episode: int):
@@ -35,9 +37,9 @@ class TmdbScraper:
return {}
try:
# 电影
# 电影,路径为文件名 名称/名称.xxx 或者蓝光原盘目录 名称/名称
if mediainfo.type == MediaType.MOVIE:
# 强制或者不已存在时才处理
# 不已存在时才处理
if not file_path.with_name("movie.nfo").exists() \
and not file_path.with_suffix(".nfo").exists():
# 生成电影描述文件
@@ -53,11 +55,11 @@ class TmdbScraper:
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
self.__save_image(url=attr_value,
file_path=file_path.with_name(image_name))
# 电视剧
# 电视剧,路径为每一季的文件名 名称/Season xx/名称 SxxExx.xxx
else:
# 识别
meta = MetaInfo(file_path.stem)
# 不存在时才处理
# 根目录不存在时才处理
if not file_path.parent.with_name("tvshow.nfo").exists():
# 根目录描述文件
self.__gen_tv_nfo_file(mediainfo=mediainfo,
@@ -81,19 +83,25 @@ class TmdbScraper:
self.__gen_tv_season_nfo_file(seasoninfo=seasoninfo,
season=meta.begin_season,
season_path=file_path.parent)
# 季的图片
# TMDB季poster图片
sea_seq = str(meta.begin_season).rjust(2, '0')
if seasoninfo.get("poster_path"):
# 后缀
ext = Path(seasoninfo.get('poster_path')).suffix
# URL
url = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{seasoninfo.get('poster_path')}"
self.__save_image(url, file_path.parent.with_name(f"season{sea_seq}-poster{ext}"))
# 季的其它图片
for attr_name, attr_value in vars(mediainfo).items():
if attr_value \
and attr_name.startswith("season") \
and not attr_name.endswith("poster_path") \
and attr_value \
and isinstance(attr_value, str) \
and attr_value.startswith("http"):
image_name = attr_name.replace("_path",
"").replace("season",
f"{str(meta.begin_season).rjust(2, '0')}-") \
+ Path(attr_value).suffix
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
self.__save_image(url=attr_value,
file_path=file_path.parent.with_name(f"season{image_name}"))
file_path=file_path.parent.with_name(image_name))
# 查询集详情
episodeinfo = __get_episode_detail(seasoninfo, meta.begin_episode)
if episodeinfo:
@@ -105,10 +113,11 @@ class TmdbScraper:
episode=meta.begin_episode,
file_path=file_path)
# 集的图片
if episodeinfo.get('still_path'):
episode_image = episodeinfo.get("still_path")
if episode_image:
self.__save_image(
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{episodeinfo.get('still_path')}",
file_path.with_suffix(Path(episodeinfo.get('still_path')).suffix))
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{episode_image}",
file_path.with_suffix(Path(episode_image).suffix))
except Exception as e:
logger.error(f"{file_path} 刮削失败:{e}")
@@ -312,6 +321,7 @@ class TmdbScraper:
self.__save_nfo(doc, file_path.with_suffix(".nfo"))
@staticmethod
@retry(RequestException, logger=logger)
def __save_image(url: str, file_path: Path):
"""
下载图片并保存
@@ -320,7 +330,7 @@ class TmdbScraper:
return
try:
logger.info(f"正在下载{file_path.stem}图片:{url} ...")
r = RequestUtils().get_res(url=url)
r = RequestUtils().get_res(url=url, raise_exception=True)
if r:
file_path.write_bytes(r.content)
logger.info(f"图片已保存:{file_path}")


@@ -393,7 +393,7 @@ class TmdbHelper:
def match_multi(self, name: str) -> Optional[dict]:
"""
根据名称同时查询电影和电视剧,不带年份
根据名称同时查询电影和电视剧,没有类型也没有年份时使用
:param name: 识别的文件名或种子名
:return: 匹配的媒体信息
"""
@@ -407,35 +407,50 @@ class TmdbHelper:
print(traceback.format_exc())
return None
logger.debug(f"API返回:{str(self.search.total_results)}")
# 返回结果
ret_info = {}
if len(multis) == 0:
logger.debug(f"{name} 未找到相关媒体信息!")
return {}
else:
# 按年份降序排列
# 按年份降序排列,电影在前面
multis = sorted(
multis,
key=lambda x: x.get('release_date') or x.get('first_air_date') or '0000-00-00',
key=lambda x: ("1"
if x.get("media_type") == "movie"
else "0") + (x.get('release_date')
or x.get('first_air_date')
or '0000-00-00'),
reverse=True
)
for multi in multis:
if multi.get("media_type") == "movie":
if self.__compare_names(name, multi.get('title')) \
or self.__compare_names(name, multi.get('original_title')):
return multi
ret_info = multi
break
# 匹配别名、译名
if not multi.get("names"):
multi = self.get_info(mtype=MediaType.MOVIE, tmdbid=multi.get("id"))
if multi and self.__compare_names(name, multi.get("names")):
return multi
ret_info = multi
break
elif multi.get("media_type") == "tv":
if self.__compare_names(name, multi.get('name')) \
or self.__compare_names(name, multi.get('original_name')):
return multi
ret_info = multi
break
# 匹配别名、译名
if not multi.get("names"):
multi = self.get_info(mtype=MediaType.TV, tmdbid=multi.get("id"))
if multi and self.__compare_names(name, multi.get("names")):
return multi
return {}
ret_info = multi
break
# 类型变更
if ret_info:
ret_info['media_type'] = MediaType.MOVIE if ret_info.get("media_type") == "movie" else MediaType.TV
return ret_info
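The new sort key above (movies first, then newest release date) can be checked in isolation; it concatenates a "1"/"0" type prefix with the date string and sorts descending:

```python
def sort_multis(multis: list) -> list:
    """Sort TMDB multi-search results: movies before TV, then date descending."""
    return sorted(
        multis,
        key=lambda x: ("1" if x.get("media_type") == "movie" else "0")
                      + (x.get("release_date") or x.get("first_air_date") or "0000-00-00"),
        reverse=True)
```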
@lru_cache(maxsize=settings.CACHE_CONF.get('tmdb'))
def match_web(self, name: str, mtype: MediaType) -> Optional[dict]:
@@ -1152,3 +1167,35 @@ class TmdbHelper:
清除缓存
"""
self.tmdb.cache_clear()
def get_tv_episode_years(self, tv_id: int):
"""
查询剧集组年份
"""
try:
episode_groups = self.tv.episode_groups(tv_id)
if not episode_groups:
return {}
episode_years = {}
for episode_group in episode_groups:
logger.info(f"正在获取剧集组年份:{episode_group.get('id')}...")
if episode_group.get('type') != 6:
# 只处理剧集部分
continue
group_episodes = self.tv.group_episodes(episode_group.get('id'))
if not group_episodes:
continue
for group_episode in group_episodes:
order = group_episode.get('order')
episodes = group_episode.get('episodes')
if not episodes or not order:
continue
# 当前季第一季时间
first_date = episodes[0].get("air_date")
if not first_date or len(str(first_date).split("-")) != 3:
continue
episode_years[order] = str(first_date).split("-")[0]
return episode_years
except Exception as e:
logger.error(str(e))
return {}
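The per-group year extraction above can be sketched as a pure function over already-fetched episode-group data; the date guard requires a full `YYYY-MM-DD` string before taking the year:

```python
from typing import Dict, List


def extract_group_years(group_episodes: List[dict]) -> Dict[int, str]:
    """Map each season order to the year of its first aired episode,
    skipping groups with no order or no parseable air date."""
    years: Dict[int, str] = {}
    for group in group_episodes:
        order = group.get("order")
        episodes = group.get("episodes")
        if not episodes or not order:
            continue
        first_date = episodes[0].get("air_date")
        # Only accept complete YYYY-MM-DD dates
        if not first_date or len(str(first_date).split("-")) != 3:
            continue
        years[order] = str(first_date).split("-")[0]
    return years
```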


@@ -33,6 +33,7 @@ class TV(TMDb):
"on_the_air": "/tv/on_the_air",
"popular": "/tv/popular",
"top_rated": "/tv/top_rated",
"group_episodes": "/tv/episode_group/%s",
}
def details(self, tv_id, append_to_response="videos,trailers,images,credits,translations"):
@@ -130,6 +131,17 @@ class TV(TMDb):
key="results"
)
def group_episodes(self, group_id):
"""
查询剧集组所有剧集
:param group_id: int
:return:
"""
return self._request_obj(
self._urls["group_episodes"] % group_id,
key="groups"
)
def external_ids(self, tv_id):
"""
Get the external ids for a TV show.


@@ -10,7 +10,7 @@ from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.transmission.transmission import Transmission
from app.schemas import TransferInfo, TransferTorrent, DownloadingTorrent
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
@@ -28,14 +28,23 @@ class TransmissionModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "DOWNLOADER", "transmission"
def scheduler_job(self) -> None:
"""
定时任务,每10分钟调用一次
"""
# 定时重连
if self.transmission.is_inactive():
self.transmission = Transmission()
def download(self, torrent_path: Path, download_dir: Path, cookie: str,
episodes: Set[int] = None) -> Optional[Tuple[Optional[str], str]]:
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param torrent_path: 种子文件地址
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类,TR中未使用
:return: 种子Hash
"""
# 如果要选择文件则先暂停
@@ -122,6 +131,8 @@ class TransmissionModule(_ModuleBase):
torrents = self.transmission.get_downloading_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
meta = MetaInfo(torrent.name)
dlspeed = torrent.rate_download if hasattr(torrent, "rate_download") else torrent.rateDownload
upspeed = torrent.rate_upload if hasattr(torrent, "rate_upload") else torrent.rateUpload
ret_torrents.append(DownloadingTorrent(
hash=torrent.hashString,
title=torrent.name,
@@ -131,19 +142,19 @@ class TransmissionModule(_ModuleBase):
progress=torrent.progress,
size=torrent.total_size,
state="paused" if torrent.status == "stopped" else "downloading",
dlspeed=StringUtils.str_filesize(torrent.download_speed),
ulspeed=StringUtils.str_filesize(torrent.upload_speed),
dlspeed=StringUtils.str_filesize(dlspeed),
upspeed=StringUtils.str_filesize(upspeed),
))
else:
return None
return ret_torrents
def transfer_completed(self, hashs: Union[str, list],
transinfo: TransferInfo = None) -> None:
path: Path = None) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param transinfo: 转移信息
:param path: 源目录
:return: None
"""
self.transmission.set_torrent_tag(ids=hashs, tags=['已整理'])
@@ -152,11 +163,11 @@ class TransmissionModule(_ModuleBase):
if self.remove_torrents(hashs):
logger.info(f"移动模式删除种子成功:{hashs} ")
# 删除残留文件
if transinfo and transinfo.path and transinfo.path.exists():
files = SystemUtils.list_files(transinfo.path, settings.RMT_MEDIAEXT)
if path and path.exists():
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
if not files:
logger.warn(f"删除残留文件夹:{transinfo.path}")
shutil.rmtree(transinfo.path, ignore_errors=True)
logger.warn(f"删除残留文件夹:{path}")
shutil.rmtree(path, ignore_errors=True)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
"""

View File

@@ -28,7 +28,7 @@ class Transmission(metaclass=Singleton):
self._host, self._port = StringUtils.get_domain_address(address=settings.TR_HOST, prefix=False)
self._username = settings.TR_USER
self._password = settings.TR_PASSWORD
if self._host and self._port and self._username and self._password:
if self._host and self._port:
self.trc = self.__login_transmission()
def __login_transmission(self) -> Optional[Client]:
@@ -48,6 +48,14 @@ class Transmission(metaclass=Singleton):
logger.error(f"transmission 连接出错:{err}")
return None
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host or not self._port:
return False
return not self.trc
def get_torrents(self, ids: Union[str, list] = None, status: Union[str, list] = None,
tags: Union[str, list] = None) -> Tuple[List[Torrent], bool]:
"""
@@ -235,14 +243,14 @@ class Transmission(metaclass=Singleton):
logger.error(f"获取传输信息出错:{err}")
return None
def set_speed_limit(self, download_limit: float = None, upload_limit: float = None):
def set_speed_limit(self, download_limit: float = None, upload_limit: float = None) -> bool:
"""
设置速度限制
:param download_limit: 下载速度限制单位KB/s
:param upload_limit: 上传速度限制单位kB/s
"""
if not self.trc:
return
return False
try:
download_limit_enabled = True if download_limit else False
upload_limit_enabled = True if upload_limit else False
@@ -252,6 +260,7 @@ class Transmission(metaclass=Singleton):
speed_limit_down_enabled=download_limit_enabled,
speed_limit_up_enabled=upload_limit_enabled
)
return True
except Exception as err:
logger.error(f"设置速度限制出错:{err}")
return False
@@ -279,3 +288,58 @@ class Transmission(metaclass=Singleton):
except Exception as err:
logger.error(f"添加Tracker出错{err}")
return False
def change_torrent(self,
hash_string: str,
upload_limit=None,
download_limit=None,
ratio_limit=None,
seeding_time_limit=None):
"""
设置种子
:param hash_string: ID
:param upload_limit: 上传限速 Kb/s
:param download_limit: 下载限速 Kb/s
:param ratio_limit: 分享率限制
:param seeding_time_limit: 做种时间限制
:return: bool
"""
if not hash_string:
return False
if upload_limit:
uploadLimited = True
uploadLimit = int(upload_limit)
else:
uploadLimited = False
uploadLimit = 0
if download_limit:
downloadLimited = True
downloadLimit = int(download_limit)
else:
downloadLimited = False
downloadLimit = 0
if ratio_limit:
seedRatioMode = 1
seedRatioLimit = round(float(ratio_limit), 2)
else:
seedRatioMode = 2
seedRatioLimit = 0
if seeding_time_limit:
seedIdleMode = 1
seedIdleLimit = int(seeding_time_limit)
else:
seedIdleMode = 2
seedIdleLimit = 0
try:
self.trc.change_torrent(ids=hash_string,
uploadLimited=uploadLimited,
uploadLimit=uploadLimit,
downloadLimited=downloadLimited,
downloadLimit=downloadLimit,
seedRatioMode=seedRatioMode,
seedRatioLimit=seedRatioLimit,
seedIdleMode=seedIdleMode,
seedIdleLimit=seedIdleLimit)
return True
except Exception as err:
logger.error(f"设置种子出错:{err}")
return False
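The mode values hard-coded in `change_torrent` follow Transmission's RPC spec, where `seedRatioMode`/`seedIdleMode` take 0 (use the global setting), 1 (use the per-torrent limit) or 2 (unlimited). A pure-function sketch of that argument mapping, assuming those spec semantics:

```python
# Sketch of the limit-to-RPC-arguments mapping above: an unset limit maps to
# mode 2 (unlimited), a set one to mode 1 with the value attached.
def build_limit_args(upload_limit=None, download_limit=None,
                     ratio_limit=None, seeding_time_limit=None) -> dict:
    args = {
        "uploadLimited": bool(upload_limit),
        "uploadLimit": int(upload_limit or 0),
        "downloadLimited": bool(download_limit),
        "downloadLimit": int(download_limit or 0),
    }
    # 1 = per-torrent limit, 2 = unlimited (0 would mean "follow global")
    args["seedRatioMode"] = 1 if ratio_limit else 2
    args["seedRatioLimit"] = round(float(ratio_limit), 2) if ratio_limit else 0
    args["seedIdleMode"] = 1 if seeding_time_limit else 2
    args["seedIdleLimit"] = int(seeding_time_limit) if seeding_time_limit else 0
    return args

print(build_limit_args(ratio_limit=1.5)["seedRatioMode"])  # 1
```

Note that mode 0 ("follow global settings") is never emitted here, so clearing a limit this way makes the torrent unlimited rather than returning it to the client's defaults.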

View File

@@ -1,5 +1,5 @@
import xml.dom.minidom
from typing import Optional, Union, List, Tuple, Any
from typing import Optional, Union, List, Tuple, Any, Dict
from app.core.config import settings
from app.core.context import Context, MediaInfo
@@ -96,16 +96,23 @@ class WechatModule(_ModuleBase):
# 解析消息内容
if msg_type == "event" and event == "click":
# 校验用户有权限执行交互命令
wechat_admins = settings.WECHAT_ADMINS.split(',')
if wechat_admins and not any(
user_id == admin_user for admin_user in wechat_admins):
self.wechat.send_msg(title="用户无权限执行菜单命令", userid=user_id)
return None
if settings.WECHAT_ADMINS:
wechat_admins = settings.WECHAT_ADMINS.split(',')
if wechat_admins and not any(
user_id == admin_user for admin_user in wechat_admins):
self.wechat.send_msg(title="用户无权限执行菜单命令", userid=user_id)
return None
# 根据EventKey执行命令
content = DomUtils.tag_value(root_node, "EventKey")
logger.info(f"收到微信事件userid={user_id}, event={content}")
elif msg_type == "text":
# 文本消息
content = DomUtils.tag_value(root_node, "Content", default="")
if content:
logger.info(f"收到微信消息userid={user_id}, text={content}")
logger.info(f"收到微信消息userid={user_id}, text={content}")
else:
return None
if content:
# 处理消息内容
return CommingMessage(channel=MessageChannel.Wechat,
userid=user_id, username=user_id, text=content)
@@ -145,3 +152,10 @@ class WechatModule(_ModuleBase):
:return: 成功或失败
"""
return self.wechat.send_torrents_msg(title=message.title, torrents=torrents, userid=message.userid)
def register_commands(self, commands: Dict[str, dict]):
"""
注册命令,实现这个函数接收系统可用的命令菜单
:param commands: 命令字典
"""
self.wechat.create_menus(commands)

View File

@@ -2,7 +2,7 @@ import json
import re
import threading
from datetime import datetime
from typing import Optional, List
from typing import Optional, List, Dict
from app.core.config import settings
from app.core.context import MediaInfo, Context
@@ -33,6 +33,8 @@ class WeChat(metaclass=Singleton):
_send_msg_url = f"{settings.WECHAT_PROXY}/cgi-bin/message/send?access_token=%s"
# 企业微信获取TokenURL
_token_url = f"{settings.WECHAT_PROXY}/cgi-bin/gettoken?corpid=%s&corpsecret=%s"
# 企业微信创建菜单URL
_create_menu_url = f"{settings.WECHAT_PROXY}/cgi-bin/menu/create?access_token=%s&agentid=%s"
def __init__(self):
"""
@@ -69,6 +71,10 @@ class WeChat(metaclass=Singleton):
self._access_token = ret_json.get('access_token')
self._expires_in = ret_json.get('expires_in')
self._access_token_time = datetime.now()
elif res is not None:
logger.error(f"获取微信access_token失败,错误码:{res.status_code},错误原因:{res.reason}")
else:
logger.error("获取微信access_token失败,未获取到返回信息")
except Exception as e:
logger.error(f"获取微信access_token失败,错误信息:{e}")
return None
@@ -255,14 +261,87 @@ class WeChat(metaclass=Singleton):
else:
if ret_json.get('errcode') == 42001:
self.__get_access_token(force=True)
logger.error(f"发送消息失败,错误信息:{ret_json.get('errmsg')}")
logger.error(f"发送请求失败,错误信息:{ret_json.get('errmsg')}")
return False
elif res is not None:
logger.error(f"发送消息失败,错误码:{res.status_code},错误原因:{res.reason}")
logger.error(f"发送请求失败,错误码:{res.status_code},错误原因:{res.reason}")
return False
else:
logger.error(f"发送消息失败,未获取到返回信息")
logger.error(f"发送请求失败,未获取到返回信息")
return False
except Exception as err:
logger.error(f"发送消息失败,错误信息:{err}")
logger.error(f"发送请求失败,错误信息:{err}")
return False
def create_menus(self, commands: Dict[str, dict]):
"""
自动注册微信菜单
:param commands: 命令字典
命令字典:
{
"/cookiecloud": {
"func": CookieCloudChain(self._db).remote_sync,
"description": "同步站点",
"category": "站点",
"data": {}
}
}
注册报文格式,一级菜单最多只有3条,子菜单最多只有5条:
{
"button":[
{
"type":"click",
"name":"今日歌曲",
"key":"V1001_TODAY_MUSIC"
},
{
"name":"菜单",
"sub_button":[
{
"type":"view",
"name":"搜索",
"url":"http://www.soso.com/"
},
{
"type":"click",
"name":"赞一下我们",
"key":"V1001_GOOD"
}
]
}
]
}
"""
# 请求URL
req_url = self._create_menu_url % (self.__get_access_token(), self._appid)
# 对commands按category分组
category_dict = {}
for key, value in commands.items():
category: str = value.get("category")
if category:
if not category_dict.get(category):
category_dict[category] = {}
category_dict[category][key] = value
# 一级菜单
buttons = []
for category, menu in category_dict.items():
# 二级菜单
sub_buttons = []
for key, value in menu.items():
sub_buttons.append({
"type": "click",
"name": value.get("description"),
"key": key
})
buttons.append({
"name": category,
"sub_button": sub_buttons[:5]
})
if buttons:
# 发送请求
self.__post_request(req_url, {
"button": buttons[:3]
})
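The grouping above can be isolated as a pure function: commands keyed by `/cmd` carry a `category` and `description`; each category becomes a first-level button (capped at 3) and each command a `click` sub-button (capped at 5), matching the limits stated in the docstring. A sketch:

```python
# Sketch of create_menus' grouping step, without the HTTP call.
def build_menu(commands: dict) -> dict:
    grouped = {}
    for key, value in commands.items():
        category = value.get("category")
        if category:
            # commands without a category are silently skipped, as above
            grouped.setdefault(category, {})[key] = value
    buttons = [
        {
            "name": category,
            "sub_button": [
                {"type": "click", "name": v.get("description"), "key": k}
                for k, v in menu.items()
            ][:5],  # WeChat allows at most 5 sub-buttons
        }
        for category, menu in grouped.items()
    ]
    return {"button": buttons[:3]}  # and at most 3 first-level buttons

menu = build_menu({
    "/cookiecloud": {"description": "同步站点", "category": "站点"},
    "/site_signin": {"description": "站点签到", "category": "站点"},
})
print(menu["button"][0]["name"])  # 站点
```

Because the caps are applied by slicing, any commands beyond the 3x5 grid are dropped silently; a warning log at that point would make the truncation visible.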

View File

@@ -5,7 +5,7 @@ from typing import Any, List, Dict, Tuple
from app.chain import ChainBase
from app.core.config import settings
from app.core.event import EventManager
from app.db import SessionLocal
from app.db import SessionFactory
from app.db.models import Base
from app.db.plugindata_oper import PluginDataOper
from app.db.systemconfig_oper import SystemConfigOper
@@ -17,9 +17,7 @@ class PluginChian(ChainBase):
"""
插件处理链
"""
def process(self, *args, **kwargs):
pass
pass
class _PluginBase(metaclass=ABCMeta):
@@ -39,7 +37,7 @@ class _PluginBase(metaclass=ABCMeta):
def __init__(self):
# 数据库连接
self.db = SessionLocal()
self.db = SessionFactory()
# 插件数据
self.plugindata = PluginDataOper(self.db)
# 处理链
@@ -67,7 +65,8 @@ class _PluginBase(metaclass=ABCMeta):
[{
"cmd": "/xx",
"event": EventType.xx,
"desc": "xxxx",
"desc": "名称",
"category": "分类需要注册到Wechat时必须有分类",
"data": {}
}]
"""
@@ -185,3 +184,10 @@ class _PluginBase(metaclass=ABCMeta):
channel=channel, mtype=mtype, title=title, text=text,
image=image, link=link, userid=userid
))
def close(self):
"""
关闭数据库连接
"""
if self.db:
self.db.close()

View File

@@ -74,10 +74,10 @@ class AutoBackup(_PluginBase):
logger.error(f"定时任务配置错误:{err}")
if self._onlyonce:
logger.info(f"Cloudflare CDN优选服务启动,立即运行一次")
logger.info(f"自动备份服务启动,立即运行一次")
self._scheduler.add_job(func=self.__backup, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name="Cloudflare优选")
name="自动备份")
# 关闭一次性开关
self._onlyonce = False
self.update_config({

View File

@@ -1,3 +1,4 @@
import re
import traceback
from datetime import datetime, timedelta
from multiprocessing.dummy import Pool as ThreadPool
@@ -5,6 +6,7 @@ from multiprocessing.pool import ThreadPool
from typing import Any, List, Dict, Tuple, Optional
from urllib.parse import urljoin
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from ruamel.yaml import CommentedMap
@@ -29,13 +31,13 @@ class AutoSignIn(_PluginBase):
# 插件名称
plugin_name = "站点自动签到"
# 插件描述
plugin_desc = "自动模拟登录站点签到。"
plugin_desc = "自动模拟登录站点签到。"
# 插件图标
plugin_icon = "signin.png"
# 主题色
plugin_color = "#4179F4"
# 插件版本
plugin_version = "1.0"
plugin_version = "1.1"
# 插件作者
plugin_author = "thsrite"
# 作者主页
@@ -63,6 +65,11 @@ class AutoSignIn(_PluginBase):
_notify: bool = False
_queue_cnt: int = 5
_sign_sites: list = []
_login_sites: list = []
_retry_keyword = None
_clean: bool = False
_start_time: int = None
_end_time: int = None
def init_plugin(self, config: dict = None):
self.sites = SitesHelper()
@@ -79,6 +86,9 @@ class AutoSignIn(_PluginBase):
self._notify = config.get("notify")
self._queue_cnt = config.get("queue_cnt") or 5
self._sign_sites = config.get("sign_sites")
self._login_sites = config.get("login_sites")
self._retry_keyword = config.get("retry_keyword")
self._clean = config.get("clean")
# 加载模块
if self._enabled or self._onlyonce:
@@ -89,41 +99,82 @@ class AutoSignIn(_PluginBase):
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
try:
self._scheduler.add_job(func=self.sign_in,
trigger=CronTrigger.from_crontab(self._cron),
name="站点自动签到")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误:{err}")
else:
# 随机时间
triggers = TimerUtils.random_scheduler(num_executions=2,
begin_hour=9,
end_hour=23,
max_interval=12 * 60,
min_interval=6 * 60)
for trigger in triggers:
self._scheduler.add_job(self.sign_in, "cron",
hour=trigger.hour, minute=trigger.minute,
name="站点自动签到")
# 立即运行一次
if self._onlyonce:
logger.info("站点自动签到服务启动,立即运行一次")
self._scheduler.add_job(func=self.sign_in, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name="站点自动签到")
# 关闭一次性开关
self._onlyonce = False
# 保存配置
self.update_config(
{
"enabled": self._enabled,
"notify": self._notify,
"cron": self._cron,
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": self._sign_sites
}
)
self.__update_config()
# 周期运行
if self._enabled:
if self._cron:
try:
if str(self._cron).strip().count(" ") == 4:
self._scheduler.add_job(func=self.sign_in,
trigger=CronTrigger.from_crontab(self._cron),
name="站点自动签到")
logger.info(f"站点自动签到服务启动,执行周期 {self._cron}")
else:
# 2.3/9-23
crons = str(self._cron).strip().split("/")
if len(crons) == 2:
# 2.3
cron = crons[0]
# 9-23
times = crons[1].split("-")
if len(times) == 2:
# 9
self._start_time = int(times[0])
# 23
self._end_time = int(times[1])
if self._start_time and self._end_time:
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(str(cron).strip()),
name="站点自动签到")
logger.info(
f"站点自动签到服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{cron}小时执行一次")
else:
logger.error("站点自动签到服务启动失败,周期格式错误")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误")
self._cron = ""
self._enabled = False
self.__update_config()
else:
# 默认0-24 按照周期运行
self._start_time = 0
self._end_time = 24
self._scheduler.add_job(func=self.sign_in,
trigger="interval",
hours=float(str(self._cron).strip()),
name="站点自动签到")
logger.info(
f"站点自动签到服务启动,执行周期 {self._start_time}点-{self._end_time}点 每{self._cron}小时执行一次")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 推送实时消息
self.systemmessage.put(f"执行周期配置错误:{err}")
self._cron = ""
self._enabled = False
self.__update_config()
else:
# 随机时间
triggers = TimerUtils.random_scheduler(num_executions=2,
begin_hour=9,
end_hour=23,
max_interval=12 * 60,
min_interval=6 * 60)
for trigger in triggers:
self._scheduler.add_job(self.sign_in, "cron",
hour=trigger.hour, minute=trigger.minute,
name="站点自动签到")
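The branching above implements the plugin's custom period syntax: a 5-field value is treated as crontab, `N/H1-H2` as "every N hours between H1 and H2 o'clock", and a bare number as "every N hours, 0-24". A standalone sketch of that parse (the crontab branch is represented by returning `None`):

```python
# Sketch of the execution-period syntax handled above.
from typing import Optional, Tuple

def parse_period(cron: str) -> Optional[Tuple[float, int, int]]:
    """Return (interval_hours, start_hour, end_hour); None means 5-field crontab."""
    cron = cron.strip()
    if cron.count(" ") == 4:           # e.g. "0 9 * * *" -> handle as crontab
        return None
    if "/" in cron:                    # e.g. "2.3/9-23"
        interval, _, window = cron.partition("/")
        start, _, end = window.partition("-")
        if not (start and end):
            return None
        return float(interval), int(start), int(end)
    return float(cron), 0, 24          # bare interval, whole day

print(parse_period("2.3/9-23"))  # (2.3, 9, 23)
print(parse_period("0 9 * * *"))  # None
```

As in the plugin, malformed numeric parts (e.g. `abc/9-23`) would raise `ValueError`, which the caller's `except` block turns into the "执行周期配置错误" message.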
# 启动任务
if self._scheduler.get_jobs():
@@ -133,6 +184,22 @@ class AutoSignIn(_PluginBase):
def get_state(self) -> bool:
return self._enabled
def __update_config(self):
# 保存配置
self.update_config(
{
"enabled": self._enabled,
"notify": self._notify,
"cron": self._cron,
"onlyonce": self._onlyonce,
"queue_cnt": self._queue_cnt,
"sign_sites": self._sign_sites,
"login_sites": self._login_sites,
"retry_keyword": self._retry_keyword,
"clean": self._clean,
}
)
@staticmethod
def get_command() -> List[Dict[str, Any]]:
"""
@@ -143,6 +210,7 @@ class AutoSignIn(_PluginBase):
"cmd": "/site_signin",
"event": EventType.SiteSignin,
"desc": "站点签到",
"category": "站点",
"data": {}
}]
@@ -182,7 +250,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 3
},
'content': [
{
@@ -198,7 +266,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 3
},
'content': [
{
@@ -214,7 +282,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
'md': 3
},
'content': [
{
@@ -225,6 +293,22 @@ class AutoSignIn(_PluginBase):
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 3
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clean',
'label': '清理本日缓存',
}
}
]
}
]
},
@@ -235,7 +319,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
'md': 4
},
'content': [
{
@@ -252,7 +336,7 @@ class AutoSignIn(_PluginBase):
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
'md': 4
},
'content': [
{
@@ -263,6 +347,23 @@ class AutoSignIn(_PluginBase):
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'retry_keyword',
'label': '重试关键词',
'placeholder': '支持正则表达式,命中才重签'
}
}
]
}
]
},
@@ -285,6 +386,49 @@ class AutoSignIn(_PluginBase):
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'content': [
{
'component': 'VSelect',
'props': {
'chips': True,
'multiple': True,
'model': 'login_sites',
'label': '登录站点',
'items': site_options
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '执行周期支持:'
'1、5位cron表达式'
'2、配置间隔小时如2.3/9-239-23点之间每隔2.3小时执行一次);'
'3、周期不填默认9-23点随机执行2次。'
'每天首次全量执行,其余执行命中重试关键词的站点。'
}
}
]
}
]
}
]
}
@@ -293,8 +437,11 @@ class AutoSignIn(_PluginBase):
"notify": True,
"cron": "",
"onlyonce": False,
"clean": False,
"queue_cnt": 5,
"sign_sites": []
"sign_sites": [],
"login_sites": [],
"retry_keyword": "错误|失败"
}
def get_page(self) -> List[dict]:
@@ -322,7 +469,7 @@ class AutoSignIn(_PluginBase):
{
'component': 'td',
'props': {
'class': 'whitespace-nowrap break-keep'
'class': 'whitespace-nowrap break-keep text-high-emphasis'
},
'text': current_day
},
@@ -400,30 +547,91 @@ class AutoSignIn(_PluginBase):
@eventmanager.register(EventType.SiteSignin)
def sign_in(self, event: Event = None):
"""
自动签到
自动签到|模拟登陆
"""
# 日期
today = datetime.today()
if self._start_time and self._end_time:
if int(datetime.today().hour) < self._start_time or int(datetime.today().hour) > self._end_time:
logger.error(
f"当前时间 {int(datetime.today().hour)} 不在 {self._start_time}-{self._end_time} 范围内,暂不执行任务")
return
if event:
logger.info("收到命令,开始站点签到 ...")
self.post_message(channel=event.event_data.get("channel"),
title="开始站点签到 ...",
userid=event.event_data.get("user"))
# 查询签到站点
sign_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
# 过滤掉没有选中的站点
if self._sign_sites:
sign_sites = [site for site in sign_sites if site.get("id") in self._sign_sites]
if not sign_sites:
logger.info("没有需要签到的站点")
if self._sign_sites:
self.__do(today=today, type="签到", do_sites=self._sign_sites, event=event)
if self._login_sites:
self.__do(today=today, type="登录", do_sites=self._login_sites, event=event)
def __do(self, today: datetime, type: str, do_sites: list, event: Event = None):
"""
签到逻辑
"""
yesterday = today - timedelta(days=1)
yesterday_str = yesterday.strftime('%Y-%m-%d')
# 删除昨天历史
self.del_data(key=type + "-" + yesterday_str)
# 查看今天有没有签到|登录历史
today = today.strftime('%Y-%m-%d')
today_history = self.get_data(key=type + "-" + today)
# 查询所有站点
all_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
# 过滤掉没有选中的站点
if do_sites:
do_sites = [site for site in all_sites if site.get("id") in do_sites]
else:
do_sites = all_sites
# 今日没数据
if not today_history or self._clean:
logger.info(f"今日 {today} 未{type},开始{type}已选站点")
# 过滤删除的站点
if type == "签到":
self._sign_sites = [site.get("id") for site in do_sites if site]
if type == "登录":
self._login_sites = [site.get("id") for site in do_sites if site]
if self._clean:
# 关闭开关
self._clean = False
else:
# 需要重试站点
retry_sites = today_history.get("retry")
# 今天已签到|登录站点
already_sites = today_history.get("sign")
# 今日未签|登录站点
no_sites = [site for site in do_sites if
site.get("id") not in already_sites or site.get("id") in retry_sites]
if not no_sites:
logger.info(f"今日 {today} 已{type},无重新{type}站点,本次任务结束")
return
# 任务站点 = 需要重试+今日未do
do_sites = no_sites
logger.info(f"今日 {today} 已{type},开始重试命中关键词站点")
if not do_sites:
logger.info(f"没有需要{type}的站点")
return
# 执行签到
logger.info("开始执行签到任务 ...")
with ThreadPool(min(len(sign_sites), int(self._queue_cnt))) as p:
status = p.map(self.signin_site, sign_sites)
logger.info(f"开始执行{type}任务 ...")
if type == "签到":
with ThreadPool(min(len(do_sites), int(self._queue_cnt))) as p:
status = p.map(self.signin_site, do_sites)
else:
with ThreadPool(min(len(do_sites), int(self._queue_cnt))) as p:
status = p.map(self.login_site, do_sites)
if status:
logger.info("站点签到任务完成!")
logger.info(f"站点{type}任务完成!")
# 获取今天的日期
key = f"{datetime.now().month}{datetime.now().day}"
# 保存数据
@@ -431,19 +639,88 @@ class AutoSignIn(_PluginBase):
"site": s[0],
"status": s[1]
} for s in status])
# 命中重试词的站点id
retry_sites = []
# 命中重试词的站点签到msg
retry_msg = []
# 登录成功
login_success_msg = []
# 签到成功
sign_success_msg = []
# 已签到
already_sign_msg = []
# 仿真签到成功
fz_sign_msg = []
# 失败|错误
failed_msg = []
sites = {site.get('name'): site.get("id") for site in self.sites.get_indexers() if not site.get("public")}
for s in status:
site_name = s[0]
site_id = None
if site_name:
site_id = sites.get(site_name)
# 记录本次命中重试关键词的站点
if self._retry_keyword:
if site_id:
match = re.search(self._retry_keyword, s[1])
if match:
logger.debug(f"站点 {site_name} 命中重试关键词 {self._retry_keyword}")
retry_sites.append(site_id)
# 命中的站点
retry_msg.append(s)
continue
if "登录成功" in s[1]:
login_success_msg.append(s)
elif "仿真签到成功" in s[1]:
fz_sign_msg.append(s)
elif "签到成功" in s[1]:
sign_success_msg.append(s)
elif '已签到' in s[1]:
already_sign_msg.append(s)
else:
failed_msg.append(s)
if not self._retry_keyword:
# 没设置重试关键词则重试已选站点
retry_sites = self._sign_sites
logger.debug(f"下次{type}重试站点 {retry_sites}")
# 存入历史
self.save_data(key=type + "-" + today,
value={
"sign": self._sign_sites,
"retry": retry_sites
})
# 发送通知
if self._notify:
self.post_message(title="站点自动签到",
# 签到详细信息 登录成功、签到成功、已签到、仿真签到成功、失败--命中重试
signin_message = login_success_msg + sign_success_msg + already_sign_msg + fz_sign_msg + failed_msg
if len(retry_msg) > 0:
signin_message += retry_msg
signin_message = "\n".join([f'{s[0]} {s[1]}' for s in signin_message if s])
self.post_message(title=f"【站点自动{type}",
mtype=NotificationType.SiteMessage,
text="\n".join([f'{s[0]}{s[1]}' for s in status if s]))
text=f"全部{type}数量: {len(list(self._sign_sites))} \n"
f"本次{type}数量: {len(do_sites)} \n"
f"下次{type}数量: {len(retry_sites) if self._retry_keyword else 0} \n"
f"{signin_message}"
)
if event:
self.post_message(channel=event.event_data.get("channel"),
title="站点签到完成!", userid=event.event_data.get("user"))
title=f"站点{type}完成!", userid=event.event_data.get("user"))
else:
logger.error("站点签到任务失败!")
logger.error(f"站点{type}任务失败!")
if event:
self.post_message(channel=event.event_data.get("channel"),
title="站点签到任务失败!", userid=event.event_data.get("user"))
title=f"站点{type}任务失败!", userid=event.event_data.get("user"))
# 保存配置
self.__update_config()
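The retry logic above partitions each `(site, message)` result by whether the message matches the retry regex; matched sites are queued for the next run. A minimal sketch of that bucketing, using the plugin's default keyword `错误|失败` from the form defaults:

```python
# Sketch of the retry bucketing above: results whose message matches the
# retry regex are re-queued; the rest count as settled for today.
import re

def split_retry(status, retry_keyword: str):
    retry, done = [], []
    for site, message in status:
        if retry_keyword and re.search(retry_keyword, message):
            retry.append((site, message))
        else:
            done.append((site, message))
    return retry, done

retry, done = split_retry(
    [("SiteA", "签到成功"), ("SiteB", "签到失败,Cookie已失效")],
    "错误|失败",
)
print([s for s, _ in retry])  # ['SiteB']
```

Since `re.search` is used, the keyword matches anywhere in the message, and `|` alternation works without anchors; an empty keyword disables retrying entirely, mirroring the `if not self._retry_keyword` fallback in the plugin.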
def __build_class(self, url) -> Any:
for site_schema in self._site_schema:
@@ -523,6 +800,8 @@ class AutoSignIn(_PluginBase):
if under_challenge(page_source):
return f"无法通过Cloudflare"
return f"仿真登录失败Cookie已失效"
else:
return "仿真签到成功"
else:
res = RequestUtils(cookies=site_cookie,
ua=ua,
@@ -559,6 +838,77 @@ class AutoSignIn(_PluginBase):
traceback.print_exc()
return f"签到失败:{str(e)}"
def login_site(self, site_info: CommentedMap) -> Tuple[str, str]:
"""
模拟登陆一个站点
"""
return site_info.get("name"), self.__login_base(site_info)
@staticmethod
def __login_base(site_info: CommentedMap) -> str:
"""
模拟登陆通用处理
:param site_info: 站点信息
:return: 登录结果信息
"""
if not site_info:
return ""
site = site_info.get("name")
site_url = site_info.get("url")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
render = site_info.get("render")
proxies = settings.PROXY if site_info.get("proxy") else None
proxy_server = settings.PROXY_SERVER if site_info.get("proxy") else None
if not site_url or not site_cookie:
logger.warn(f"未配置 {site} 的站点地址或Cookie,无法登录")
return ""
# 模拟登录
try:
# 访问链接
site_url = str(site_url).replace("attendance.php", "")
logger.info(f"开始站点模拟登陆:{site},地址:{site_url}...")
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
if not SiteUtils.is_logged_in(page_source):
if under_challenge(page_source):
return f"无法通过Cloudflare"
return f"仿真登录失败Cookie已失效"
else:
return "模拟登陆成功"
else:
res = RequestUtils(cookies=site_cookie,
ua=ua,
proxies=proxies
).get_res(url=site_url)
# 判断登录状态
if res and res.status_code in [200, 500, 403]:
if not SiteUtils.is_logged_in(res.text):
if under_challenge(res.text):
msg = "站点被Cloudflare防护请打开站点浏览器仿真"
elif res.status_code == 200:
msg = "Cookie已失效"
else:
msg = f"状态码:{res.status_code}"
logger.warn(f"{site} 模拟登陆失败,{msg}")
return f"模拟登陆失败,{msg}"
else:
logger.info(f"{site} 模拟登陆成功")
return f"模拟登陆成功"
elif res is not None:
logger.warn(f"{site} 模拟登陆失败,状态码:{res.status_code}")
return f"模拟登陆失败,状态码:{res.status_code}"
else:
logger.warn(f"{site} 模拟登陆失败,无法打开网站")
return f"模拟登陆失败,无法打开网站!"
except Exception as e:
logger.warn("%s 模拟登陆失败:%s" % (site, str(e)))
traceback.print_exc()
return f"模拟登陆失败:{str(e)}"
def stop_service(self):
"""
退出插件

File diff suppressed because it is too large Load Diff

View File

@@ -79,7 +79,7 @@ class CloudflareSpeedTest(_PluginBase):
if self.get_state() or self._onlyonce:
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
if self.get_state() and self._cron:
logger.info(f"Cloudflare CDN优选服务启动周期{self._cron}")
self._scheduler.add_job(func=self.__cloudflareSpeedTest,
trigger=CronTrigger.from_crontab(self._cron),
@@ -390,7 +390,7 @@ class CloudflareSpeedTest(_PluginBase):
})
def get_state(self) -> bool:
return self._cf_ip and True if self._cron else False
return True if self._cf_ip and self._cron else False
@staticmethod
def get_command() -> List[Dict[str, Any]]:

View File

@@ -1,7 +1,6 @@
import re
import shutil
import threading
import time
import traceback
from datetime import datetime
from pathlib import Path
@@ -20,11 +19,10 @@ from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.log import logger
from app.modules.qbittorrent import Qbittorrent
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.schemas import Notification, NotificationType, TransferInfo
from app.schemas.types import EventType, MediaType
from app.schemas.types import EventType, MediaType, SystemConfigKey
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
lock = threading.Lock()
@@ -71,9 +69,6 @@ class DirMonitor(_PluginBase):
# 可使用的用户级别
auth_level = 1
# 已处理的文件清单
_synced_files = []
# 私有属性
_scheduler = None
transferhis = None
@@ -90,8 +85,6 @@ class DirMonitor(_PluginBase):
_exclude_keywords = ""
# 存储源目录与目的目录关系
_dirconf: Dict[str, Path] = {}
qb = None
tr = None
_medias = {}
# 退出事件
_event = Event()
@@ -117,8 +110,6 @@ class DirMonitor(_PluginBase):
self.stop_service()
if self._enabled:
self.qb = Qbittorrent()
self.tr = Transmission()
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
# 启动任务
@@ -131,7 +122,14 @@ class DirMonitor(_PluginBase):
continue
# 存储目的目录
paths = mon_path.split(":")
if SystemUtils.is_windows():
if mon_path.count(":") > 1:
paths = [mon_path.split(":")[0] + ":" + mon_path.split(":")[1],
mon_path.split(":")[2] + ":" + mon_path.split(":")[3]]
else:
paths = [mon_path]
else:
paths = mon_path.split(":")
target_path = None
if len(paths) > 1:
mon_path = paths[0]
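The Windows branch added above exists because the monitor config packs "source:target" into one string: on POSIX a single split on `:` works, but a Windows value like `D:\Media\in:E:\Media\out` contains drive-letter colons, so the four colon-separated pieces are re-joined pairwise. A standalone sketch:

```python
# Sketch of the monitor-path parsing above, split out as a pure function.
def split_monitor_conf(mon_path: str, is_windows: bool):
    if is_windows:
        if mon_path.count(":") > 1:
            # "D:\in:E:\out" -> ["D", "\in", "E", "\out"] -> rejoin pairwise
            parts = mon_path.split(":")
            return [parts[0] + ":" + parts[1], parts[2] + ":" + parts[3]]
        return [mon_path]          # a lone drive path, no target configured
    return mon_path.split(":")     # POSIX: plain "src:dst" split

print(split_monitor_conf(r"D:\Media\in:E:\Media\out", True))
print(split_monitor_conf("/downloads:/media", False))
```

Note this assumes both sides are drive-qualified on Windows; a mixed value such as `D:\in:\\nas\out` (one drive colon, so three pieces) would not fit the pairwise rejoin and is outside what the code above handles.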
@@ -199,19 +197,11 @@ class DirMonitor(_PluginBase):
# 全程加锁
with lock:
if event_path not in self._synced_files:
self._synced_files.append(event_path)
else:
transfer_history = self.transferhis.get_by_src(event_path)
if transfer_history:
logger.debug("文件已处理过:%s" % event_path)
return
# 命中过滤关键字不处理
if self._exclude_keywords:
for keyword in self._exclude_keywords.split("\n"):
if keyword and re.findall(keyword, event_path):
logger.debug(f"{event_path} 命中过滤关键字 {keyword}")
return
# 回收站及隐藏的文件不处理
if event_path.find('/@Recycle/') != -1 \
or event_path.find('/#recycle/') != -1 \
@@ -220,6 +210,23 @@ class DirMonitor(_PluginBase):
logger.debug(f"{event_path} 是回收站或隐藏的文件")
return
# 命中过滤关键字不处理
if self._exclude_keywords:
for keyword in self._exclude_keywords.split("\n"):
if keyword and re.findall(keyword, event_path):
logger.info(f"{event_path} 命中过滤关键字 {keyword},不处理")
return
# 整理屏蔽词不处理
transfer_exclude_words = self.systemconfig.get(SystemConfigKey.TransferExcludeWords)
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, event_path, re.IGNORECASE):
logger.info(f"{event_path} 命中整理屏蔽词 {keyword},不处理")
return
# 不是媒体文件不处理
if file_path.suffix not in settings.RMT_MEDIAEXT:
logger.debug(f"{event_path} 不是媒体文件")
@@ -254,24 +261,27 @@ class DirMonitor(_PluginBase):
title=f"{file_path.name} 未识别到媒体信息,无法入库!"
))
# 新增转移成功历史记录
self.transferhis.add_force(
src=event_path,
dest=str(target),
self.transferhis.add_fail(
src_path=file_path,
mode=self._transfer_type,
title=meta.name,
year=meta.year,
seasons=file_meta.season,
episodes=file_meta.episode,
status=0,
errmsg="未识别到媒体信息",
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
meta=file_meta
)
return
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_historys = self.transferhis.get_by(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_historys:
mediainfo.title = transfer_historys[0].title
logger.info(f"{file_path.name} 识别为:{mediainfo.type.value} {mediainfo.title_year}")
# 更新媒体图片
self.chain.obtain_images(mediainfo=mediainfo)
# 获取downloadhash
download_hash = self.get_download_hash(src=str(file_path))
# 转移
transferinfo: TransferInfo = self.chain.transfer(mediainfo=mediainfo,
path=file_path,
@@ -285,6 +295,15 @@ class DirMonitor(_PluginBase):
if not transferinfo.target_path:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=self._transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
if self._notify:
self.chain.post_message(Notification(
title=f"{mediainfo.title_year}{file_meta.season_episode} 入库失败!",
@@ -293,36 +312,19 @@ class DirMonitor(_PluginBase):
))
return
# 获取downloadhash
download_hash = self.get_download_hash(src=file_path,
tmdb_id=mediainfo.tmdb_id)
target_path = str(transferinfo.file_list_new[0]) if transferinfo.file_list_new else str(
transferinfo.target_path)
# 新增转移成功历史记录
self.transferhis.add_force(
src=event_path,
dest=target_path,
self.transferhis.add_success(
src_path=file_path,
mode=self._transfer_type,
type=mediainfo.type.value,
category=mediainfo.category,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
seasons=file_meta.season,
episodes=file_meta.episode,
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=1,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
meta=file_meta,
mediainfo=mediainfo,
transferinfo=transferinfo
)
# 刮削元数据
self.chain.scrape_metadata(path=Path(target_path), mediainfo=mediainfo)
# 刮削单个文件
self.chain.scrape_metadata(path=transferinfo.target_path,
mediainfo=mediainfo)
"""
{
@@ -383,8 +385,8 @@ class DirMonitor(_PluginBase):
}
self._medias[mediainfo.title_year + " " + meta.season] = media_list
# 刷新媒体库
self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=target_path)
# 汇总刷新媒体库
self.chain.refresh_mediaserver(mediainfo=mediainfo, file_path=transferinfo.target_path)
# 广播事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
@@ -430,8 +432,8 @@ class DirMonitor(_PluginBase):
transferinfo = media_files[0].get("transferinfo")
file_meta = media_files[0].get("file_meta")
mediainfo = media_files[0].get("mediainfo")
# 判断最后更新时间距现在是已超过3秒,超过则发送消息
if (datetime.now() - last_update_time).total_seconds() > 3:
# 判断最后更新时间距现在是已超过5秒,超过则发送消息
if (datetime.now() - last_update_time).total_seconds() > 5:
# 发送通知
if self._notify:
@@ -457,29 +459,10 @@ class DirMonitor(_PluginBase):
# 剧集季集信息 S01 E01-E04 || S01 E01、E02、E04
season_episode = None
# 处理文件多,说明是剧集,显示季入库消息
if mediainfo.type == MediaType.TV and len(episodes) > 1:
# 剧集
season = "S%s" % str(file_meta.begin_season).rjust(2, "0")
# 剧集按照升序排序
episodes.sort()
# 开始、结束index
start = int(episodes[0])
end = int(episodes[len(episodes) - 1])
# 开始结束间所有的元素 1,2,3,4
all_ele = [i for i in range(start, end + 1)]
# 本次剧集组所有的元素 1,2,4
episode_ele = [int(e) for e in episodes]
# 如果本次剧集组所有元素=开始结束间所有元素,则表示区间内 S01 E01-E04
if all_ele == episode_ele:
season_episode = f"{season} E{str(episodes[start - 1]).rjust(2, '0')}-E{str(episodes[end - 1]).rjust(2, '0')}"
else:
# 否则所有剧集组逗号分隔显示 S01 E01、E02、E04
episodes = ["E%s" % str(episode).rjust(2, "0") for episode in episodes]
season_episode = f"{season} {''.join(episodes)}"
if mediainfo.type == MediaType.TV:
# 季集文本
season_episode = f"{file_meta.season} {StringUtils.format_ep(episodes)}"
# 发送消息
self.transferchian.send_transfer_message(meta=file_meta,
mediainfo=mediainfo,
transferinfo=transferinfo,
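The removed inline block above is replaced by `StringUtils.format_ep`; its assumed behavior, reconstructed from that deleted logic, is that a contiguous run of episodes collapses to a range ("E01-E04") while gapped lists are enumerated ("E01、E02、E04"). A hedged sketch of that contract:

```python
# Sketch of what StringUtils.format_ep is assumed to do, based on the inline
# logic removed in the diff above (the helper's real signature may differ).
def format_ep(episodes) -> str:
    eps = sorted(int(e) for e in episodes)
    if not eps:
        return ""
    # contiguous run -> range form
    if len(eps) > 1 and eps == list(range(eps[0], eps[-1] + 1)):
        return f"E{eps[0]:02d}-E{eps[-1]:02d}"
    # otherwise enumerate each episode
    return "、".join(f"E{e:02d}" for e in eps)

print(format_ep([1, 2, 3, 4]))  # E01-E04
print(format_ep([1, 2, 4]))     # E01、E02、E04
```

This also sidesteps the off-by-one indexing (`episodes[start - 1]`, `episodes[end - 1]`) that the deleted inline version used to build the range endpoints.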
@@ -488,65 +471,13 @@ class DirMonitor(_PluginBase):
del self._medias[medis_title_year_season]
continue
def get_download_hash(self, src: Path, tmdb_id: int):
def get_download_hash(self, src: str):
"""
获取download_hash
从表中获取download_hash,避免连接下载器
"""
file_name = src.name
downloadHis = self.downloadhis.get_last_by(tmdbid=tmdb_id)
downloadHis = self.downloadhis.get_file_by_fullpath(src)
if downloadHis:
for his in downloadHis:
# qb
if settings.DOWNLOADER == "qbittorrent":
files = self.qb.get_files(tid=his.download_hash)
if files:
for file in files:
torrent_file_name = file.get("name")
if file_name == Path(torrent_file_name).name:
return his.download_hash
# tr
if settings.DOWNLOADER == "transmission":
files = self.tr.get_files(tid=his.download_hash)
if files:
for file in files:
torrent_file_name = file.name
if file_name == Path(torrent_file_name).name:
return his.download_hash
# 尝试获取下载任务补充download_hash
logger.debug(f"转移记录 {src} 缺失download_hash尝试补充……")
# 获取tr、qb所有种子
qb_torrents, _ = self.qb.get_torrents()
tr_torrents, _ = self.tr.get_torrents()
# 种子名称
torrent_name = str(src).split("/")[-1]
torrent_name2 = str(src).split("/")[-2]
# 处理下载器
for torrent in qb_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
files = self.qb.get_files(tid=torrent.get("hash"))
if files:
for file in files:
torrent_file_name = file.get("name")
if file_name == Path(torrent_file_name).name:
return torrent.get("hash")
# 处理辅种器 遍历所有种子,按照添加时间升序
if len(tr_torrents) > 0:
tr_torrents = sorted(tr_torrents, key=lambda x: x.added_date)
for torrent in tr_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
files = self.tr.get_files(tid=torrent.get("hashString"))
if files:
for file in files:
torrent_file_name = file.name
if file_name == Path(torrent_file_name).name:
return torrent.get("hashString")
return downloadHis.download_hash
return None
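The new `get_download_hash` resolves the hash from the download-history table by full file path instead of enumerating torrents in qBittorrent/Transmission. A minimal sketch of that lookup, with a plain dict standing in for the `downloadhis` table (names here are illustrative, not the project's API):

```python
# Hypothetical in-memory stand-in for the download-history table,
# keyed by the transferred file's full path.
_download_history = {
    "/downloads/Show.S01E01.mkv": "abc123",
}

def get_download_hash(src: str):
    """Return the download hash recorded for a full file path, or None."""
    return _download_history.get(src)
```

Reading the hash from a table keyed by path avoids a round-trip to the downloader for every transferred file.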
def get_state(self) -> bool:
@@ -561,149 +492,149 @@ class DirMonitor(_PluginBase):
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '监控模式',
'items': [
{'title': '兼容模式', 'value': 'compatibility'},
{'title': '性能模式', 'value': 'fast'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'transfer_type',
'label': '转移方式',
'items': [
{'title': '移动', 'value': 'move'},
{'title': '复制', 'value': 'copy'},
{'title': '硬链接', 'value': 'link'},
{'title': '软链接', 'value': 'softlink'}
]
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'monitor_dirs',
'label': '监控目录',
'rows': 5,
'placeholder': '每一行一个目录,支持两种配置方式:\n'
'监控目录\n'
'监控目录:转移目的目录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'exclude_keywords',
'label': '排除关键词',
'rows': 2,
'placeholder': '每一行一个关键词'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": False,
"mode": "fast",
"transfer_type": settings.TRANSFER_TYPE,
"monitor_dirs": "",
"exclude_keywords": ""
}
def get_page(self) -> List[dict]:
pass


@@ -133,6 +133,7 @@ class DoubanSync(_PluginBase):
"cmd": "/douban_sync",
"event": EventType.DoubanSync,
"desc": "同步豆瓣想看",
"category": "订阅",
"data": {}
}]


@@ -397,6 +397,11 @@ class IYUUAutoSeed(_PluginBase):
if not self.iyuuhelper:
return
logger.info("开始辅种任务 ...")
# 排除已删除站点
self._sites = [site.get("id") for site in self.sites.get_indexers() if
site.get("id") in self._sites]
# 计数器初始化
self.total = 0
self.realtotal = 0
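The list comprehension added above prunes configured site ids that no longer exist among the active indexers. As a standalone sketch (hypothetical helper name):

```python
def prune_selected_sites(selected_ids, indexers):
    """Keep only the selected site ids that still exist among active indexers."""
    active = {idx.get("id") for idx in indexers}
    return [sid for sid in selected_ids if sid in active]
```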


@@ -8,8 +8,8 @@ from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.nfo import NfoReader
from app.log import logger
from app.plugins import _PluginBase
@@ -41,12 +41,14 @@ class LibraryScraper(_PluginBase):
user_level = 1
# 私有属性
transferhis = None
_scheduler = None
_scraper = None
# 限速开关
_enabled = False
_onlyonce = False
_cron = None
_mode = ""
_scraper_paths = ""
_exclude_paths = ""
# 退出事件
@@ -58,6 +60,7 @@ class LibraryScraper(_PluginBase):
self._enabled = config.get("enabled")
self._onlyonce = config.get("onlyonce")
self._cron = config.get("cron")
self._mode = config.get("mode") or ""
self._scraper_paths = config.get("scraper_paths") or ""
self._exclude_paths = config.get("exclude_paths") or ""
@@ -66,6 +69,7 @@ class LibraryScraper(_PluginBase):
# 启动定时任务 & 立即运行一次
if self._enabled or self._onlyonce:
self.transferhis = TransferHistoryOper(self.db)
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
logger.info(f"媒体库刮削服务启动,周期:{self._cron}")
@@ -92,6 +96,7 @@ class LibraryScraper(_PluginBase):
"onlyonce": False,
"enabled": self._enabled,
"cron": self._cron,
"mode": self._mode,
"scraper_paths": self._scraper_paths,
"exclude_paths": self._exclude_paths
})
@@ -155,6 +160,28 @@ class LibraryScraper(_PluginBase):
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'mode',
'label': '刮削模式',
'items': [
{'title': '仅刮削缺失元数据和图片', 'value': ''},
{'title': '覆盖所有元数据和图片', 'value': 'force_all'},
{'title': '覆盖所有元数据', 'value': 'force_nfo'},
{'title': '覆盖所有图片', 'value': 'force_image'},
]
}
}
]
},
{
'component': 'VCol',
'props': {
@@ -189,7 +216,7 @@ class LibraryScraper(_PluginBase):
'model': 'scraper_paths',
'label': '削刮路径',
'rows': 5,
'placeholder': '每一行一个目录'
'placeholder': '每一行一个目录,需配置到媒体文件的上级目录,即开了二级分类时需要配置到二级分类目录'
}
}
]
@@ -223,6 +250,7 @@ class LibraryScraper(_PluginBase):
], {
"enabled": False,
"cron": "0 0 */7 * *",
"mode": "",
"scraper_paths": "",
"err_hosts": ""
}
@@ -236,75 +264,122 @@ class LibraryScraper(_PluginBase):
"""
if not self._scraper_paths:
return
# 排除目录
exclude_paths = self._exclude_paths.split("\n")
# 已选择的目录
paths = self._scraper_paths.split("\n")
for path in paths:
if not path:
continue
if not Path(path).exists():
scraper_path = Path(path)
if not scraper_path.exists():
logger.warning(f"媒体库刮削路径不存在:{path}")
continue
logger.info(f"开始刮削媒体库:{path} ...")
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 刮削目录
self.__scrape_dir(Path(path))
logger.info(f"媒体库刮削完成")
# 遍历一层文件夹
for sub_path in scraper_path.iterdir():
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 排除目录
exclude_flag = False
for exclude_path in exclude_paths:
try:
if sub_path.is_relative_to(Path(exclude_path)):
exclude_flag = True
break
except Exception as err:
print(str(err))
if exclude_flag:
logger.debug(f"{sub_path} 在排除目录中,跳过 ...")
continue
# 开始刮削目录
if sub_path.is_dir():
logger.info(f"开始刮削目录:{sub_path} ...")
self.__scrape_dir(sub_path)
logger.info(f"目录 {sub_path} 刮削完成")
logger.info(f"媒体库 {path} 刮削完成")
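The exclusion check above relies on `Path.is_relative_to` (Python 3.9+) wrapped in a try/except. A compact sketch of the same filter as a standalone predicate:

```python
from pathlib import Path

def is_excluded(path: Path, exclude_paths):
    """True if path lies under any configured exclude root (needs Python 3.9+)."""
    for root in exclude_paths:
        if not root:
            # Empty lines from splitting the textarea config are skipped
            continue
        try:
            if path.is_relative_to(Path(root)):
                return True
        except (ValueError, OSError):
            continue
    return False
```

Skipping empty entries matters because `self._exclude_paths.split("\n")` yields `""` when the setting is blank, and every path is trivially relative to `Path("")`'s current directory semantics.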
def __scrape_dir(self, path: Path):
"""
刮削一个目录
刮削一个目录,该目录必须是媒体文件目录
"""
exclude_paths = self._exclude_paths.split("\n")
# 目录识别
dir_meta = MetaInfo(path.name)
# 媒体信息
mediainfo = None
# 查找目录下所有的文件
files = SystemUtils.list_files(path, settings.RMT_MEDIAEXT)
for file in files:
if self._event.is_set():
logger.info(f"媒体库刮削服务停止")
return
# 排除目录
exclude_flag = False
for exclude_path in exclude_paths:
if file.is_relative_to(Path(exclude_path)):
exclude_flag = True
break
if exclude_flag:
logger.debug(f"{file} 在排除目录中,跳过 ...")
continue
# 识别媒体文件
# 识别元数据
meta_info = MetaInfo(file.name)
if meta_info.type == MediaType.TV:
dir_info = MetaInfo(file.parent.parent.name)
else:
dir_info = MetaInfo(file.parent.name)
meta_info.merge(dir_info)
# 优先读取本地nfo文件
tmdbid = None
if meta_info.type == MediaType.MOVIE:
# 电影
movie_nfo = file.parent / "movie.nfo"
if movie_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(movie_nfo)
file_nfo = file.with_suffix(".nfo")
if not tmdbid and file_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(file_nfo)
else:
# 电视剧
tv_nfo = file.parent.parent / "tvshow.nfo"
if tv_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(tv_nfo)
if tmdbid:
logger.info(f"读取到本地nfo文件的tmdbid:{tmdbid}")
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(tmdbid=tmdbid, mtype=meta_info.type)
else:
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(meta=meta_info)
# 合并
meta_info.merge(dir_meta)
# 识别媒体信息
if not mediainfo:
logger.warn(f"未识别到媒体信息:{file}")
continue
# 开始刮削
# 优先读取本地nfo文件
tmdbid = None
if meta_info.type == MediaType.MOVIE:
# 电影
movie_nfo = file.parent / "movie.nfo"
if movie_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(movie_nfo)
file_nfo = file.with_suffix(".nfo")
if not tmdbid and file_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(file_nfo)
else:
# 电视剧
tv_nfo = file.parent.parent / "tvshow.nfo"
if tv_nfo.exists():
tmdbid = self.__get_tmdbid_from_nfo(tv_nfo)
if tmdbid:
# 按TMDBID识别
logger.info(f"读取到本地nfo文件的tmdbid:{tmdbid}")
mediainfo = self.chain.recognize_media(tmdbid=tmdbid, mtype=meta_info.type)
else:
# 按名称识别
mediainfo = self.chain.recognize_media(meta=meta_info)
if not mediainfo:
logger.warn(f"未识别到媒体信息:{file}")
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_historys = self.transferhis.get_by(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_historys:
mediainfo.title = transfer_historys[0].title
# 覆盖模式时提前删除nfo
if self._mode in ["force_all", "force_nfo"]:
nfo_files = SystemUtils.list_files(path, [".nfo"])
for nfo_file in nfo_files:
try:
logger.warn(f"删除nfo文件{nfo_file}")
nfo_file.unlink()
except Exception as err:
print(str(err))
# 覆盖模式时,提前删除图片文件
if self._mode in ["force_all", "force_image"]:
image_files = SystemUtils.list_files(path, [".jpg", ".png"])
for image_file in image_files:
if ".actors" in str(image_file):
continue
try:
logger.warn(f"删除图片文件:{image_file}")
image_file.unlink()
except Exception as err:
print(str(err))
# 刮削单个文件
self.chain.scrape_metadata(path=file, mediainfo=mediainfo)
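The force-mode branches above decide which metadata files to delete before re-scraping, always sparing actor thumbnails under `.actors`. A pure-function sketch of that selection (hypothetical helper, operating on path strings rather than the filesystem):

```python
def select_metadata_to_purge(files, mode):
    """Pick which metadata files a forced re-scrape should delete first.

    files: iterable of path strings
    mode: '', 'force_all', 'force_nfo' or 'force_image'
    Actor thumbnails under a '.actors' folder are always kept.
    """
    purge_nfo = mode in ("force_all", "force_nfo")
    purge_image = mode in ("force_all", "force_image")
    targets = []
    for f in files:
        low = f.lower()
        if ".actors" in low:
            continue
        if purge_nfo and low.endswith(".nfo"):
            targets.append(f)
        elif purge_image and low.endswith((".jpg", ".png")):
            targets.append(f)
    return targets
```

Deleting the old `.nfo`/image files first is what forces the scraper chain to regenerate them instead of skipping files that already exist.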
@staticmethod


@@ -12,6 +12,7 @@ from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.transferhistory import TransferHistory
from app.db.transferhistory_oper import TransferHistoryOper
from app.log import logger
@@ -21,7 +22,7 @@ from app.modules.qbittorrent import Qbittorrent
from app.modules.themoviedb.tmdbv3api import Episode
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.schemas.types import NotificationType, EventType
from app.schemas.types import NotificationType, EventType, MediaType
from app.utils.path_utils import PathUtils
@@ -29,13 +30,13 @@ class MediaSyncDel(_PluginBase):
# 插件名称
plugin_name = "媒体库同步删除"
# 插件描述
plugin_desc = "媒体库删除媒体后同步删除历史记录源文件。"
plugin_desc = "媒体库删除媒体后同步删除历史记录源文件和下载任务"
# 插件图标
plugin_icon = "mediasyncdel.png"
# 主题色
plugin_color = "#ff1a1a"
# 插件版本
plugin_version = "1.0"
plugin_version = "1.1"
# 插件作者
plugin_author = "thsrite"
# 作者主页
@@ -57,11 +58,13 @@ class MediaSyncDel(_PluginBase):
_del_source = False
_exclude_path = None
_transferhis = None
_downloadhis = None
qb = None
tr = None
def init_plugin(self, config: dict = None):
self._transferhis = TransferHistoryOper(self.db)
self._downloadhis = DownloadHistoryOper(self.db)
self.episode = Episode()
self.qb = Qbittorrent()
self.tr = Transmission()
@@ -103,12 +106,7 @@ class MediaSyncDel(_PluginBase):
定义远程控制命令
:return: 命令关键字、事件、描述、附带数据
"""
return [{
"cmd": "/sync_del",
"event": EventType.HistoryDeleted,
"desc": "媒体库同步删除",
"data": {}
}]
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
@@ -118,150 +116,154 @@ class MediaSyncDel(_PluginBase):
拼装插件配置页面需要返回两块数据1、页面配置2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'del_source',
'label': '删除源文件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'items': [
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '5位cron表达式留空自动'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'exclude_path',
'label': '排除路径'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '同步方式分为日志同步和Scripter X。日志同步需要配置执行周期默认30分钟执行一次。'
'Scripter X方式需要emby安装并配置Scripter X插件无需配置执行周期'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"del_source": False,
"sync_type": "log",
"cron": "*/30 * * * *",
"exclude_path": "",
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '发送通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'del_source',
'label': '删除源文件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'items': [
{'title': 'webhook', 'value': 'webhook'},
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '5位cron表达式留空自动'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'exclude_path',
'label': '排除路径'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '同步方式分为webhook、日志同步和Scripter X'
'webhook需要Emby4.8.0.45及以上开启媒体删除的webhook'
'(建议使用媒体库刮削插件覆盖元数据重新刮削剧集路径)。'
'日志同步需要配置执行周期默认30分钟执行一次。'
'Scripter X方式需要emby安装并配置Scripter X插件无需配置执行周期。'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"notify": True,
"del_source": False,
"sync_type": "webhook",
"cron": "*/30 * * * *",
"exclude_path": "",
}
def get_page(self) -> List[dict]:
"""
@@ -415,23 +417,62 @@ class MediaSyncDel(_PluginBase):
}
]
@eventmanager.register(EventType.WebhookMessage)
def sync_del_by_webhook(self, event: Event):
"""
emby删除媒体库同步删除历史记录
webhook
"""
if not self._enabled or str(self._sync_type) != "webhook":
return
event_data = event.event_data
event_type = event_data.event
# Emby Webhook event_type = library.deleted
if not event_type or str(event_type) != 'library.deleted':
return
# 媒体类型
media_type = event_data.item_type
# 媒体名称
media_name = event_data.item_name
# 媒体路径
media_path = event_data.item_path
# tmdb_id
tmdb_id = event_data.tmdb_id
# 季数
season_num = event_data.season_id
# 集数
episode_num = event_data.episode_id
self.__sync_del(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
@eventmanager.register(EventType.WebhookMessage)
def sync_del_by_plugin(self, event):
"""
emby删除媒体库同步删除历史记录
Scripter X插件
"""
if not self._enabled:
if not self._enabled or str(self._sync_type) != "plugin":
return
event_data = event.event_data
event_type = event_data.event
# Scripter X插件 event_type = media_del
if not event_type or str(event_type) != 'media_del':
return
# 是否虚拟标识
# Scripter X插件 需要是否虚拟标识
item_isvirtual = event_data.item_isvirtual
if not item_isvirtual:
logger.error("item_isvirtual参数未配置为防止误删除暂停插件运行")
logger.error("Scripter X插件方式item_isvirtual参数未配置为防止误删除暂停插件运行")
self.update_config({
"enabled": False,
"del_source": self._del_source,
@@ -446,9 +487,6 @@ class MediaSyncDel(_PluginBase):
if item_isvirtual == 'True':
return
# 读取历史记录
history = self.get_data('history') or []
# 媒体类型
media_type = event_data.item_type
# 媒体名称
@@ -459,13 +497,21 @@ class MediaSyncDel(_PluginBase):
tmdb_id = event_data.tmdb_id
# 季数
season_num = event_data.season_id
if season_num and str(season_num).isdigit() and int(season_num) < 10:
season_num = f'0{season_num}'
# 集数
episode_num = event_data.episode_id
if episode_num and str(episode_num).isdigit() and int(episode_num) < 10:
episode_num = f'0{episode_num}'
self.__sync_del(media_type=media_type,
media_name=media_name,
media_path=media_path,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
def __sync_del(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
"""
执行删除逻辑
"""
if not media_type:
logger.error(f"{media_name} 同步删除失败,未获取到媒体类型")
return
@@ -479,38 +525,17 @@ class MediaSyncDel(_PluginBase):
logger.info(f"媒体路径 {media_path} 已被排除,暂不处理")
return
# 删除电影
if media_type == "Movie":
msg = f'电影 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
# 删除电视剧
elif media_type == "Series":
msg = f'剧集 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id)
# 删除季 S02
elif media_type == "Season":
if not season_num or not str(season_num).isdigit():
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}')
# 删除剧集S02E02
elif media_type == "Episode":
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
season=f'S{season_num}',
episode=f'E{episode_num}')
else:
return
# 查询转移记录
msg, transfer_history = self.__get_transfer_his(media_type=media_type,
media_name=media_name,
tmdb_id=tmdb_id,
season_num=season_num,
episode_num=episode_num)
logger.info(f"正在同步删除{msg}")
if not transfer_history:
logger.warn(f"{media_type} {media_name} 未获取到可删除数据")
logger.warn(f"{media_type} {media_name} 未获取到可删除数据,可使用媒体库刮削插件覆盖所有元数据")
return
# 开始删除
@@ -518,13 +543,21 @@ class MediaSyncDel(_PluginBase):
year = None
del_cnt = 0
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
title = transferhis.title
if title not in media_name:
logger.warn(
f"当前转移记录 {transferhis.id} {title} {transferhis.tmdbid} 与删除媒体{media_name}不符,防误删,暂不自动删除")
continue
image = transferhis.image
year = transferhis.year
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
# 删除种子任务
if self._del_source:
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
# 1、直接删除源文件
if transferhis.src and Path(transferhis.src).suffix in settings.RMT_MEDIAEXT:
source_name = os.path.basename(transferhis.src)
@@ -534,12 +567,15 @@ class MediaSyncDel(_PluginBase):
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, stop_flag = self.handle_torrent(src=source_path,
torrent_hash=transferhis.download_hash)
if delete_flag:
del_cnt += 1
if stop_flag:
stop_cnt += 1
delete_flag, success_flag, handle_cnt = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += handle_cnt
else:
stop_cnt += handle_cnt
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
@@ -554,18 +590,30 @@ class MediaSyncDel(_PluginBase):
episode_num=episode_num)
if images:
image = self.get_tmdbimage_url(images[-1].get("file_path"), prefix="original")
torrent_cnt_msg = ""
if del_cnt:
torrent_cnt_msg += f"删除种子{del_cnt}\n"
if stop_cnt:
torrent_cnt_msg += f"暂停种子{stop_cnt}\n"
if error_cnt:
torrent_cnt_msg += f"删种失败{error_cnt}\n"
# 发送通知
self.post_message(
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
image=image,
text=f"{msg}\n"
f"数量 删除{del_cnt}个 暂停{stop_cnt}\n"
f"删除记录{len(transfer_history)}\n"
f"{torrent_cnt_msg}"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}"
)
# 读取历史记录
history = self.get_data('history') or []
history.append({
"type": "电影" if media_type == "Movie" else "电视剧",
"type": "电影" if media_type in ("Movie", "MOV") else "电视剧",
"title": media_name,
"year": year,
"path": media_path,
@@ -578,6 +626,56 @@ class MediaSyncDel(_PluginBase):
# 保存历史
self.save_data("history", history)
def __get_transfer_his(self, media_type: str, media_name: str,
tmdb_id: int, season_num: int, episode_num: int):
"""
查询转移记录
"""
# 季数
if season_num:
season_num = str(season_num).rjust(2, '0')
# 集数
if episode_num:
episode_num = str(episode_num).rjust(2, '0')
# 类型
mtype = MediaType.MOVIE if media_type in ["Movie", "MOV"] else MediaType.TV
# 删除电影
if mtype == MediaType.MOVIE:
msg = f'电影 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value)
# 删除电视剧
elif mtype == MediaType.TV and not season_num and not episode_num:
msg = f'剧集 {media_name} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value)
# 删除季 S02
elif mtype == MediaType.TV and season_num and not episode_num:
if not season_num or not str(season_num).isdigit():
logger.error(f"{media_name} 季同步删除失败,未获取到具体季")
return
msg = f'剧集 {media_name} S{season_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
season=f'S{season_num}')
# 删除剧集S02E02
elif mtype == MediaType.TV and season_num and episode_num:
if not season_num or not str(season_num).isdigit() or not episode_num or not str(episode_num).isdigit():
logger.error(f"{media_name} 集同步删除失败,未获取到具体集")
return
msg = f'剧集 {media_name} S{season_num}E{episode_num} {tmdb_id}'
transfer_history: List[TransferHistory] = self._transferhis.get_by(tmdbid=tmdb_id,
mtype=mtype.value,
season=f'S{season_num}',
episode=f'E{episode_num}')
else:
return "", []
return msg, transfer_history
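`__get_transfer_his` zero-pads the season/episode numbers before building the `SXX`/`EXX` query keys. That padding step in isolation (hypothetical helper name):

```python
def season_episode_keys(season_num, episode_num):
    """Zero-pad season/episode numbers into the 'SXX'/'EXX' keys used to query transfer history."""
    season = f"S{str(season_num).rjust(2, '0')}" if season_num else None
    episode = f"E{str(episode_num).rjust(2, '0')}" if episode_num else None
    return season, episode
```

Using `rjust(2, '0')` keeps already-padded or multi-digit inputs intact, unlike the older `f'0{num}'` prefixing removed from `sync_del_by_plugin`, which would turn `12` into `012` if applied twice.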
def sync_del_by_log(self):
"""
emby删除媒体库同步删除历史记录
@@ -668,13 +766,19 @@ class MediaSyncDel(_PluginBase):
image = 'https://emby.media/notificationicon.png'
del_cnt = 0
stop_cnt = 0
error_cnt = 0
for transferhis in transfer_history:
title = transferhis.title
if title not in media_name:
logger.warn(
f"当前转移记录 {transferhis.id} {title} {transferhis.tmdbid} 与删除媒体{media_name}不符,防误删,暂不自动删除")
continue
image = transferhis.image
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
# 删除种子任务
if self._del_source:
# 0、删除转移记录
self._transferhis.delete(transferhis.id)
# 1、直接删除源文件
if transferhis.src and Path(transferhis.src).suffix in settings.RMT_MEDIAEXT:
source_name = os.path.basename(transferhis.src)
@@ -684,12 +788,15 @@ class MediaSyncDel(_PluginBase):
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
delete_flag, stop_flag = self.handle_torrent(src=source_path,
torrent_hash=transferhis.download_hash)
if delete_flag:
del_cnt += 1
if stop_flag:
stop_cnt += 1
delete_flag, success_flag, handle_cnt = self.handle_torrent(src=transferhis.src,
torrent_hash=transferhis.download_hash)
if not success_flag:
error_cnt += 1
else:
if delete_flag:
del_cnt += handle_cnt
else:
stop_cnt += handle_cnt
except Exception as e:
logger.error("删除种子失败,尝试删除源文件:%s" % str(e))
@@ -697,11 +804,19 @@ class MediaSyncDel(_PluginBase):
# 发送消息
if self._notify:
torrent_cnt_msg = ""
if del_cnt:
torrent_cnt_msg += f"删除种子{del_cnt}\n"
if stop_cnt:
torrent_cnt_msg += f"暂停种子{stop_cnt}\n"
if error_cnt:
torrent_cnt_msg += f"删种失败{error_cnt}\n"
self.post_message(
mtype=NotificationType.MediaServer,
title="媒体库同步删除任务完成",
text=f"{msg}\n"
f"数量 删除{del_cnt}个 暂停{stop_cnt}\n"
f"删除记录{len(transfer_history)}\n"
f"{torrent_cnt_msg}"
f"时间 {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))}",
image=image)
@@ -735,126 +850,98 @@ class MediaSyncDel(_PluginBase):
plugin_id=plugin_id)
logger.info(f"查询到 {history_key} 转种历史 {transfer_history}")
# 删除历史标志
del_history = False
# 删除种子标志
delete_flag = True
# 是否需要暂停源下载器种子
stop_flag = False
handle_cnt = 0
try:
# 删除本次种子记录
self._downloadhis.delete_file_by_fullpath(fullpath=src)
# 如果有转种记录,则删除转种后的下载任务
if transfer_history and isinstance(transfer_history, dict):
download = transfer_history['to_download']
download_id = transfer_history['to_download_id']
delete_source = transfer_history['delete_source']
del_history = True
# 根据种子hash查询所有下载器文件记录
download_files = self._downloadhis.get_files_by_hash(download_hash=torrent_hash)
if not download_files:
logger.error(
f"未查询到种子任务 {torrent_hash} 存在文件记录,未执行下载器文件同步或该种子已被删除")
return False, False, 0
# 转种后未删除源种时,同步删除源种
if not delete_source:
logger.info(f"{history_key} 转种时未删除源下载任务,开始删除源下载任务…")
# 查询未删除数
no_del_cnt = 0
for download_file in download_files:
if download_file and download_file.state and int(download_file.state) == 1:
no_del_cnt += 1
try:
dl_files = self.chain.torrent_files(tid=torrent_hash)
if not dl_files:
logger.info(f"未获取到 {settings.DOWNLOADER} - {torrent_hash} 种子文件,种子已被删除")
else:
for dl_file in dl_files:
dl_file_name = dl_file.get("name")
torrent_file = os.path.join(src, os.path.basename(dl_file_name))
if Path(torrent_file).exists():
logger.warn(f"种子有文件被删除,种子文件{torrent_file}暂未删除,暂停种子")
delete_flag = False
stop_flag = True
break
if delete_flag:
logger.info(f"删除下载任务:{settings.DOWNLOADER} - {torrent_hash}")
if no_del_cnt > 0:
logger.info(
f"查询种子任务 {torrent_hash} 存在 {no_del_cnt} 个未删除文件,执行暂停种子操作")
delete_flag = False
else:
logger.info(
f"查询种子任务 {torrent_hash} 文件已全部删除,执行删除种子操作")
delete_flag = True
# 如果有转种记录,则删除转种后的下载任务
if transfer_history and isinstance(transfer_history, dict):
download = transfer_history['to_download']
download_id = transfer_history['to_download_id']
delete_source = transfer_history['delete_source']
# 删除种子
if delete_flag:
# 删除转种记录
self.del_data(key=history_key, plugin_id=plugin_id)
# 转种后未删除源种时,同步删除源种
if not delete_source:
logger.info(f"{history_key} 转种时未删除源下载任务,开始删除源下载任务…")
# 删除源种子
logger.info(f"删除源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.remove_torrents(torrent_hash)
except Exception as e:
logger.error(f"删除源下载任务 {history_key} 失败: {str(e)}")
handle_cnt += 1
# 如果是False则说明种子文件没有完全被删除暂停种子暂不处理
if delete_flag:
try:
# 转种download
if download == "transmission":
dl_files = self.tr.get_files(tid=download_id)
if not dl_files:
logger.info(f"未获取到 {download} - {download_id} 种子文件,种子已被删除")
else:
for dl_file in dl_files:
dl_file_name = dl_file.name
if not transfer_history or not stop_flag:
torrent_file = os.path.join(src, os.path.basename(dl_file_name))
if Path(torrent_file).exists():
logger.info(f"种子有文件被删除,种子文件{torrent_file}暂未删除,暂停种子")
delete_flag = False
stop_flag = True
break
if delete_flag:
# 删除源下载任务或转种后下载任务
logger.info(f"删除下载任务:{download} - {download_id}")
# 删除转种后任务
logger.info(f"删除转种后下载任务:{download} - {download_id}")
# 删除转种后下载任务
if download == "transmission":
self.tr.delete_torrents(delete_file=True,
ids=download_id)
# 删除转种记录
if del_history:
self.del_data(key=history_key, plugin_id=plugin_id)
# 处理辅种
self.__del_seed(download=download, download_id=download_id, action_flag="del")
else:
dl_files = self.qb.get_files(tid=download_id)
if not dl_files:
logger.info(f"未获取到 {download} - {download_id} 种子文件,种子已被删除")
else:
for dl_file in dl_files:
dl_file_name = dl_file.get("name")
if not transfer_history or not stop_flag:
torrent_file = os.path.join(src, os.path.basename(dl_file_name))
if Path(torrent_file).exists():
logger.info(f"种子有文件被删除,种子文件{torrent_file}暂未删除,暂停种子")
delete_flag = False
stop_flag = True
break
if delete_flag:
# 删除源下载任务或转种后下载任务
logger.info(f"删除下载任务:{download} - {download_id}")
self.qb.delete_torrents(delete_file=True,
ids=download_id)
handle_cnt += 1
else:
# 暂停种子
# 转种后未删除源种时,同步暂停源种
if not delete_source:
logger.info(f"{history_key} 转种时未删除源下载任务,开始暂停源下载任务…")
# 删除转种记录
if del_history:
self.del_data(key=history_key, plugin_id=plugin_id)
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{settings.DOWNLOADER} - {torrent_hash}")
self.chain.stop_torrents(torrent_hash)
handle_cnt += 1
# 处理辅种
self.__del_seed(download=download, download_id=download_id, action_flag="del")
except Exception as e:
logger.error(f"删除转种辅种下载任务失败: {str(e)}")
else:
# 未转种的情况
if delete_flag:
# 删除源种子
logger.info(f"删除源下载器下载任务:{download} - {download_id}")
self.chain.remove_torrents(download_id)
else:
# 暂停源种子
logger.info(f"暂停源下载器下载任务:{download} - {download_id}")
self.chain.stop_torrents(download_id)
handle_cnt += 1
# 判断是否暂停
if not delete_flag:
logger.error("开始暂停种子")
# 暂停种子
if stop_flag:
# 暂停源种
self.chain.stop_torrents(torrent_hash)
logger.info(f"种子:{settings.DOWNLOADER} - {torrent_hash} 暂停")
# 处理辅种
handle_cnt = self.__del_seed(download=download,
download_id=download_id,
action_flag="del" if delete_flag else 'stop',
handle_cnt=handle_cnt)
# 暂停转种
if del_history:
if download == "qbittorrent":
self.qb.stop_torrents(download_id)
logger.info(f"转种:{download} - {download_id} 暂停")
else:
self.tr.stop_torrents(download_id)
logger.info(f"转种:{download} - {download_id} 暂停")
# 暂停辅种
self.__del_seed(download=download, download_id=download_id, action_flag="stop")
return delete_flag, True, handle_cnt
except Exception as e:
logger.error(f"删种失败: {e}")
return False, False, 0
return delete_flag, stop_flag
def __del_seed(self, download, download_id, action_flag):
def __del_seed(self, download, download_id, action_flag, handle_cnt):
"""
删除辅种
"""
@@ -876,9 +963,10 @@ class MediaSyncDel(_PluginBase):
torrents = [torrents]
# 删除辅种历史中与本下载器相同的辅种记录
if int(downloader) == download:
if str(downloader) == str(download):
for torrent in torrents:
if download == "qbittorrent":
handle_cnt += 1
if str(download) == "qbittorrent":
# 删除辅种
if action_flag == "del":
logger.info(f"删除辅种:{downloader} - {torrent}")
@@ -908,6 +996,8 @@ class MediaSyncDel(_PluginBase):
value=seed_history,
plugin_id=plugin_id)
return handle_cnt
@staticmethod
def parse_emby_log(last_time):
log_url = "{HOST}System/Logs/embyserver.txt?api_key={APIKEY}"
@@ -981,7 +1071,7 @@ class MediaSyncDel(_PluginBase):
return del_medias
@staticmethod
def parse_jellyfin_log(last_time):
def parse_jellyfin_log(last_time: datetime):
# Sorted by date added, descending
log_url = "{HOST}System/Logs/Log?name=log_%s.log&api_key={APIKEY}" % datetime.date.today().strftime("%Y%m%d")
log_res = Jellyfin().get_data(log_url)
@@ -1054,7 +1144,7 @@ class MediaSyncDel(_PluginBase):
return del_medias
@staticmethod
def delete_media_file(filedir, filename):
def delete_media_file(filedir: str, filename: str):
"""
Delete the media file; directories left empty are removed as well
"""
@@ -1122,7 +1212,7 @@ class MediaSyncDel(_PluginBase):
title="媒体库同步删除完成!", userid=event.event_data.get("user"))
@staticmethod
def get_tmdbimage_url(path, prefix="w500"):
def get_tmdbimage_url(path: str, prefix="w500"):
if not path:
return ""
tmdb_image_url = f"https://{settings.TMDB_IMAGE_DOMAIN}"
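The method body is truncated by the diff here, but TMDB's documented image URL format is the base domain plus `/t/p/`, a size prefix, and the poster path. A self-contained sketch of that convention (the real method reads the domain from `settings.TMDB_IMAGE_DOMAIN`; the default below is TMDB's public image host):

```python
def get_tmdbimage_url(path: str, prefix: str = "w500",
                      domain: str = "image.tmdb.org") -> str:
    """Build a TMDB image URL from a poster path such as '/abc123.jpg'."""
    if not path:
        return ""
    # TMDB serves images as https://<domain>/t/p/<size><path>
    return f"https://{domain}/t/p/{prefix}{path}"
```

Passing `prefix="original"` selects the full-resolution variant instead of the 500px-wide one.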


@@ -31,7 +31,7 @@ class MessageForward(_PluginBase):
# 加载顺序
plugin_order = 16
# 可使用的用户级别
auth_level = 2
auth_level = 1
# Private attributes
_enabled = False
@@ -69,81 +69,81 @@ class MessageForward(_PluginBase):
Assemble the plugin configuration page; returns two pieces of data: 1) page layout, 2) data structure
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '开启转发'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'wechat',
'rows': '3',
'label': '应用配置',
'placeholder': 'appid:corpid:appsecret一行一个配置'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'pattern',
'rows': '3',
'label': '正则配置',
'placeholder': '对应上方应用配置,一行一个,一一对应'
}
}
]
}
]
},
]
}
], {
"enabled": False,
"wechat": "",
"pattern": ""
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '开启转发'
}
}
]
},
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'wechat',
'rows': '3',
'label': '应用配置',
'placeholder': 'appid:corpid:appsecret一行一个配置'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'pattern',
'rows': '3',
'label': '正则配置',
'placeholder': '对应上方应用配置,一行一个,一一对应'
}
}
]
}
]
},
]
}
], {
"enabled": False,
"wechat": "",
"pattern": ""
}
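The two textareas above collect `appid:corpid:appsecret` lines and regex lines that are matched one-to-one by position. A hedged sketch of that pairing (hypothetical helper name; the line formats are taken from the form placeholders):

```python
import re
from typing import Dict, List, Tuple


def parse_forward_config(wechat: str, pattern: str) -> List[Tuple[re.Pattern, Dict[str, str]]]:
    """Pair each regex line with the app-config line at the same index.

    Lines that do not split into exactly three parts, or that have no
    matching regex line, are skipped.
    """
    apps = []
    app_lines = wechat.split("\n")
    pattern_lines = pattern.split("\n")
    for index, line in enumerate(app_lines):
        parts = line.split(":")
        if len(parts) != 3 or index >= len(pattern_lines):
            continue
        appid, corpid, appsecret = parts
        apps.append((re.compile(pattern_lines[index]),
                     {"appid": appid, "corpid": corpid, "appsecret": appsecret}))
    return apps
```

A message whose title matches the regex at index *i* would then be forwarded through the app configured at the same index.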
def get_page(self) -> List[dict]:
pass
@@ -169,29 +169,27 @@ class MessageForward(_PluginBase):
# Regex matching
patterns = self._pattern.split("\n")
for i, pattern in enumerate(patterns):
for index, pattern in enumerate(patterns):
msg_match = re.search(pattern, title)
if msg_match:
access_token, appid = self.__flush_access_token(i)
access_token, appid = self.__flush_access_token(index)
if not access_token:
logger.error("未获取到有效token请检查配置")
continue
# Send the message
if image:
self.__send_image_message(title, text, image, userid, access_token, appid, i)
self.__send_image_message(title, text, image, userid, access_token, appid, index)
else:
self.__send_message(title, text, userid, access_token, appid, i)
self.__send_message(title, text, userid, access_token, appid, index)
def __save_wechat_token(self):
"""
Fetch and store WeChat tokens
"""
# Look up stored token history
wechat_token_history = self.get_data("wechat_token") or {}
# Parse the configuration
wechats = self._wechat.split("\n")
for i, wechat in enumerate(wechats):
for index, wechat in enumerate(wechats):
wechat_config = wechat.split(":")
if len(wechat_config) != 3:
logger.error(f"{wechat} 应用配置不正确")
@@ -200,53 +198,30 @@ class MessageForward(_PluginBase):
corpid = wechat_config[1]
appsecret = wechat_config[2]
# Check whether a token is already stored for this appid
wechat_config = wechat_token_history.get(appid)
access_token = None
expires_in = None
access_token_time = None
if wechat_config:
access_token_time = wechat_config['access_token_time']
expires_in = wechat_config['expires_in']
# Check whether the token has expired
if (datetime.now() - access_token_time).seconds < expires_in:
# Refresh the token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
# Expired; fetch a new token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
if not access_token:
# Fetch a token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
if access_token:
wechat_token_history[appid] = {
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": str(access_token_time),
"corpid": corpid,
"appsecret": appsecret
}
self._pattern_token[i] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": access_token_time,
}
else:
# No token obtained; report the failure
logger.error(f"wechat配置 appid = {appid} 获取token失败请检查配置")
continue
# Persist the WeChat tokens
if wechat_token_history:
self.save_data("wechat_token", wechat_token_history)
self._pattern_token[index] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
"access_token": access_token,
"expires_in": expires_in,
"access_token_time": access_token_time,
}
def __flush_access_token(self, i: int):
def __flush_access_token(self, index: int, force: bool = False):
"""
Get the WeChat token for the configuration at the given index
"""
wechat_token = self._pattern_token[i]
wechat_token = self._pattern_token[index]
if not wechat_token:
logger.error(f"未获取到第 {i} 条正则对应的wechat应用token请检查配置")
logger.error(f"未获取到第 {index} 条正则对应的wechat应用token请检查配置")
return None, None
access_token = wechat_token['access_token']
expires_in = wechat_token['expires_in']
@@ -256,7 +231,7 @@ class MessageForward(_PluginBase):
appsecret = wechat_token['appsecret']
# Check token validity
if (datetime.now() - access_token_time).seconds < expires_in:
if force or (datetime.now() - access_token_time).seconds >= expires_in:
# Refresh the token
access_token, expires_in, access_token_time = self.__get_access_token(corpid=corpid,
appsecret=appsecret)
@@ -264,7 +239,7 @@ class MessageForward(_PluginBase):
logger.error(f"wechat配置 appid = {appid} 获取token失败请检查配置")
return None, None
self._pattern_token[i] = {
self._pattern_token[index] = {
"appid": appid,
"corpid": corpid,
"appsecret": appsecret,
@@ -275,8 +250,7 @@ class MessageForward(_PluginBase):
return access_token, appid
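The expiry check above compares `(datetime.now() - access_token_time).seconds` against `expires_in`. Note that `timedelta.seconds` holds only the sub-day remainder, so a token fetched more than a day ago can look fresh; `total_seconds()` avoids that. A minimal sketch of the check (hypothetical helper, using `total_seconds()`):

```python
from datetime import datetime, timedelta


def token_expired(access_token_time: datetime, expires_in: int,
                  force: bool = False) -> bool:
    """Return True when the cached WeChat token should be refreshed.

    Uses total_seconds() rather than .seconds: .seconds discards whole
    days, so a token fetched 25 hours ago would look only 1 hour old.
    """
    if force:
        return True
    age = (datetime.now() - access_token_time).total_seconds()
    return age >= expires_in
```

With WeChat's typical `expires_in` of 7200 seconds, a three-hour-old token is reported as expired, while a ten-minute-old one is kept.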
def __send_message(self, title: str, text: str = None, userid: str = None, access_token: str = None,
appid: str = None, i: int = None) -> \
Optional[bool]:
appid: str = None, index: int = None) -> Optional[bool]:
"""
Send a text message
:param title: message title
@@ -284,7 +258,6 @@ class MessageForward(_PluginBase):
:param userid: recipient ID; empty sends to all users
:return: send status, error message
"""
message_url = self._send_msg_url % access_token
if text:
conent = "%s\n%s" % (title, text.replace("\n\n", "\n"))
else:
@@ -303,10 +276,10 @@ class MessageForward(_PluginBase):
"enable_id_trans": 0,
"enable_duplicate_check": 0
}
return self.__post_request(message_url, req_json, i, title)
return self.__post_request(access_token=access_token, req_json=req_json, index=index, title=title)
def __send_image_message(self, title: str, text: str, image_url: str, userid: str = None, access_token: str = None,
appid: str = None, i: int = None) -> Optional[bool]:
def __send_image_message(self, title: str, text: str, image_url: str, userid: str = None,
access_token: str = None, appid: str = None, index: int = None) -> Optional[bool]:
"""
发送图文消息
:param title: 消息标题
@@ -315,7 +288,6 @@ class MessageForward(_PluginBase):
:param userid: recipient ID; empty sends to all users
:return: send status, error message
"""
message_url = self._send_msg_url % access_token
if text:
text = text.replace("\n\n", "\n")
if not userid:
@@ -335,9 +307,10 @@ class MessageForward(_PluginBase):
]
}
}
return self.__post_request(message_url, req_json, i, title)
return self.__post_request(access_token=access_token, req_json=req_json, index=index, title=title)
def __post_request(self, message_url: str, req_json: dict, i: int, title: str) -> bool:
def __post_request(self, access_token: str, req_json: dict, index: int, title: str, retry: int = 0) -> bool:
"""
Send a request to WeChat
"""
message_url = self._send_msg_url % access_token
@@ -352,10 +325,21 @@ class MessageForward(_PluginBase):
logger.info(f"转发消息 {title} 成功")
return True
else:
if ret_json.get('errcode') == 42001:
# Refresh the token
self.__flush_access_token(i)
logger.error(f"转发消息 {title} 失败,错误信息:{ret_json}")
if ret_json.get('errcode') == 42001 or ret_json.get('errcode') == 40014:
logger.info("token已过期正在重新刷新token重试")
# Refresh the token
access_token, appid = self.__flush_access_token(index=index,
force=True)
if access_token:
retry += 1
# Resend the request
if retry <= 3:
return self.__post_request(access_token=access_token,
req_json=req_json,
index=index,
title=title,
retry=retry)
return False
elif res is not None:
logger.error(f"转发消息 {title} 失败,错误码:{res.status_code},错误原因:{res.reason}")
@@ -364,10 +348,10 @@ class MessageForward(_PluginBase):
logger.error(f"转发消息 {title} 失败,未获取到返回信息")
return False
except Exception as err:
logger.error(f"转发消息 {title} 失败,错误信息:{err}")
logger.error(f"转发消息 {title} 异常,错误信息:{err}")
return False
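The new retry logic above refreshes the token on errcodes 42001 (token expired) and 40014 (invalid token) and resends up to three times. The same pattern, extracted as a standalone sketch (hypothetical function and callback names; only the errcodes come from the diff):

```python
from typing import Callable, Dict, Optional

# WeChat Work errcodes treated as "refresh the access_token and retry"
TOKEN_ERRCODES = {42001, 40014}


def post_with_token_retry(send: Callable[[str], Dict],
                          refresh_token: Callable[[], Optional[str]],
                          access_token: str,
                          max_retries: int = 3) -> bool:
    """Send a request; on a token errcode, refresh the token and retry."""
    for attempt in range(max_retries + 1):
        ret = send(access_token)
        if ret.get("errcode") == 0:
            return True
        if ret.get("errcode") in TOKEN_ERRCODES and attempt < max_retries:
            new_token = refresh_token()
            if not new_token:
                # Could not refresh; give up rather than loop
                return False
            access_token = new_token
            continue
        # Any other errcode is a hard failure
        return False
    return False
```

Bounding the retries (here via `max_retries`) is what keeps a permanently-invalid credential from recursing forever, which is the role the `retry <= 3` guard plays in the diff.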
def __get_access_token(self, corpid, appsecret):
def __get_access_token(self, corpid: str, appsecret: str):
"""
Get the WeChat access token
:return: WeChat access token


@@ -4,11 +4,8 @@ import sqlite3
from datetime import datetime
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.plugin import PluginData
from app.db.plugindata_oper import PluginDataOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.modules.qbittorrent import Qbittorrent
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple
from app.log import logger
@@ -34,7 +31,7 @@ class NAStoolSync(_PluginBase):
# 加载顺序
plugin_order = 15
# 可使用的用户级别
auth_level = 2
auth_level = 1
# Private attributes
_transferhistory = None
@@ -45,10 +42,7 @@ class NAStoolSync(_PluginBase):
_path = None
_site = None
_downloader = None
_supp = False
_transfer = False
qb = None
tr = None
def init_plugin(self, config: dict = None):
self._transferhistory = TransferHistoryOper(self.db)
@@ -61,15 +55,26 @@ class NAStoolSync(_PluginBase):
self._path = config.get("path")
self._site = config.get("site")
self._downloader = config.get("downloader")
self._supp = config.get("supp")
self._transfer = config.get("transfer")
if self._nt_db_path and self._transfer:
self.qb = Qbittorrent()
self.tr = Transmission()
# Read the sqlite data
gradedb = sqlite3.connect(self._nt_db_path)
try:
gradedb = sqlite3.connect(self._nt_db_path)
except Exception as e:
self.update_config(
{
"transfer": False,
"clear": False,
"nt_db_path": None,
"path": self._path,
"downloader": self._downloader,
"site": self._site,
}
)
logger.error(f"无法打开数据库文件 {self._nt_db_path},请检查路径是否正确:{e}")
return
# Create a cursor to run execute statements
cursor = gradedb.cursor()
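Wrapping `sqlite3.connect` in try/except as the new code does catches an unusable path, but note that `connect` happily creates a missing file. Opening NAStool's `user.db` read-only via a URI is a stricter check; a sketch of that alternative (an assumption, not what the plugin does):

```python
import sqlite3


def open_nt_db(nt_db_path: str) -> sqlite3.Connection:
    """Open NAStool's user.db read-only; fails if the file does not exist."""
    # mode=ro makes sqlite3 raise OperationalError instead of silently
    # creating an empty database at a mistyped path
    return sqlite3.connect(f"file:{nt_db_path}?mode=ro", uri=True)
```

Read-only mode also guarantees the sync can never write back into the NAStool database by accident.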
@@ -100,7 +105,6 @@ class NAStoolSync(_PluginBase):
"path": self._path,
"downloader": self._downloader,
"site": self._site,
"supp": self._supp,
}
)
@@ -128,6 +132,7 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot插件记录已清空")
self._plugindata.truncate()
cnt = 0
for history in plugin_history:
plugin_id = history[1]
plugin_key = history[2]
@@ -137,7 +142,12 @@ class NAStoolSync(_PluginBase):
if self._downloader:
downloaders = self._downloader.split("\n")
for downloader in downloaders:
if not downloader:
continue
sub_downloaders = downloader.split(":")
if not str(sub_downloaders[0]).isdigit():
logger.error(f"下载器映射配置错误NAStool下载器id 应为数字!")
continue
# Rewrite transfer records
if str(plugin_id) == "TorrentTransfer":
keys = str(plugin_key).split("-")
@@ -156,7 +166,11 @@ class NAStoolSync(_PluginBase):
if str(plugin_id) == "IYUUAutoSeed":
if isinstance(plugin_value, str):
plugin_value = json.loads(plugin_value)
if not isinstance(plugin_value, list):
plugin_value = [plugin_value]
for value in plugin_value:
if not str(value.get("downloader")).isdigit():
continue
if str(value.get("downloader")).isdigit() and int(value.get("downloader")) == int(
sub_downloaders[0]):
value["downloader"] = sub_downloaders[1]
@@ -164,6 +178,9 @@ class NAStoolSync(_PluginBase):
self._plugindata.save(plugin_id=plugin_id,
key=plugin_key,
value=plugin_value)
cnt += 1
if cnt % 100 == 0:
logger.info(f"插件记录同步进度 {cnt} / {len(plugin_history)}")
# Compute elapsed time
end_time = datetime.now()
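The new validation above requires each mapping line to start with a numeric NAStool downloader id. As a standalone sketch (hypothetical helper; the `id:name` line format comes from the form placeholder), the same parsing and validation can be expressed as:

```python
from typing import Dict


def parse_downloader_map(config: str) -> Dict[int, str]:
    """Parse 'id:name' lines into {NAStool downloader id: MoviePilot downloader}."""
    mapping = {}
    for line in (config or "").split("\n"):
        if not line:
            continue
        parts = line.split(":")
        if len(parts) != 2 or not parts[0].isdigit():
            # Mirrors the plugin's check: the NAStool downloader id must be numeric
            continue
        mapping[int(parts[0])] = parts[1]
    return mapping
```

Malformed lines are skipped rather than aborting the whole sync, matching the `continue` behavior in the loop above.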
@@ -182,6 +199,7 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot下载记录已清空")
self._downloadhistory.truncate()
cnt = 0
for history in download_history:
mpath = history[0]
mtype = history[1]
@@ -218,6 +236,9 @@ class NAStoolSync(_PluginBase):
torrent_description=mdesc,
torrent_site=msite
)
cnt += 1
if cnt % 100 == 0:
logger.info(f"下载记录同步进度 {cnt} / {len(download_history)}")
# Compute elapsed time
end_time = datetime.now()
@@ -237,33 +258,8 @@ class NAStoolSync(_PluginBase):
logger.info("MoviePilot转移记录已清空")
self._transferhistory.truncate()
# Torrent hashes after transfer
transfer_hash = []
# Fetch all transfer data
transfer_datas = self._plugindata.get_data_all("TorrentTransfer")
if transfer_datas:
if not isinstance(transfer_datas, list):
transfer_datas = [transfer_datas]
for transfer_data in transfer_datas:
if not transfer_data or not isinstance(transfer_data, PluginData):
continue
# Torrent hash after transfer
transfer_value = transfer_data.value
transfer_value = json.loads(transfer_value)
if not isinstance(transfer_value, dict):
transfer_value = json.loads(transfer_value)
to_hash = transfer_value.get("to_download_id")
# Collect the post-transfer hash
transfer_hash.append(to_hash)
# Fetch all torrents from tr and qb
qb_torrents, _ = self.qb.get_torrents()
tr_torrents, _ = self.tr.get_torrents(ids=transfer_hash)
tr_torrents_all, _ = self.tr.get_torrents()
# Process the data and store it in the MP database
cnt = 0
for history in transfer_history:
msrc_path = history[0]
msrc_filename = history[1]
@@ -278,8 +274,7 @@ class NAStoolSync(_PluginBase):
mseasons = history[10]
mepisodes = history[11]
mimage = history[12]
mdownload_hash = history[13]
mdate = history[14]
mdate = history[13]
if not msrc_path or not mdest_path:
continue
@@ -287,78 +282,6 @@ class NAStoolSync(_PluginBase):
msrc = msrc_path + "/" + msrc_filename
mdest = mdest_path + "/" + mdest_filename
# Try to back-fill download_id
if self._supp and not mdownload_hash:
logger.debug(f"转移记录 {mtitle} 缺失download_hash尝试补充……")
# Torrent name
torrent_name = str(msrc_path).split("/")[-1]
torrent_name2 = str(msrc_path).split("/")[-2]
# Handle the downloader
for torrent in qb_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mdownload_hash = torrent.get("hash")
torrent_name = str(torrent.get("name"))
break
# Handle cross-seeded torrents
if not mdownload_hash:
for torrent in tr_torrents:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mdownload_hash = torrent.get("hashString")
torrent_name = str(torrent.get("name"))
break
# Keep back-filling: iterate all torrents in ascending added order; the first is the original torrent
if not mdownload_hash:
mate_torrents = []
for torrent in tr_torrents_all:
if str(torrent.get("name")) == str(torrent_name) \
or str(torrent.get("name")) == str(torrent_name2):
mate_torrents.append(torrent)
# On a match, sort by added time ascending
if mate_torrents:
if len(mate_torrents) > 1:
mate_torrents = sorted(mate_torrents, key=lambda x: x.added_date)
# The earliest-added hash is the download hash
mdownload_hash = mate_torrents[0].get("hashString")
torrent_name = str(mate_torrents[0].get("name"))
# Back-fill the transfer record
self._plugindata.save(plugin_id="TorrentTransfer",
key=f"qbittorrent-{mdownload_hash}",
value={
"to_download": "transmission",
"to_download_id": mdownload_hash,
"delete_source": True}
)
# Back-fill the cross-seed record
if len(mate_torrents) > 1:
self._plugindata.save(plugin_id="IYUUAutoSeed",
key=mdownload_hash,
value=[{"downloader": "transmission",
"torrents": [torrent.get("hashString") for torrent in
mate_torrents[1:]]}]
)
# Back-fill the download history
self._downloadhistory.add(
path=msrc_filename,
type=mtype,
title=mtitle,
year=myear,
tmdbid=mtmdbid,
seasons=mseasons,
episodes=mepisodes,
image=mimage,
download_hash=mdownload_hash,
torrent_name=torrent_name,
torrent_description="",
torrent_site=""
)
# Apply path mappings
if self._path:
paths = self._path.split("\n")
@@ -368,7 +291,7 @@ class NAStoolSync(_PluginBase):
mdest = mdest.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# Save to database
self._transferhistory.add_force(
self._transferhistory.add(
src=msrc,
dest=mdest,
mode=mmode,
@@ -380,11 +303,14 @@ class NAStoolSync(_PluginBase):
seasons=mseasons,
episodes=mepisodes,
image=mimage,
download_hash=mdownload_hash,
date=mdate
)
logger.debug(f"{mtitle} {myear} {mtmdbid} {mseasons} {mepisodes} 已同步")
cnt += 1
if cnt % 100 == 0:
logger.info(f"转移记录同步进度 {cnt} / {len(transfer_history)}")
# Compute elapsed time
end_time = datetime.now()
@@ -488,7 +414,6 @@ class NAStoolSync(_PluginBase):
NULL ELSE substr( t.SEASON_EPISODE, instr ( t.SEASON_EPISODE, ' ' ) + 1 )
END AS episodes,
d.POSTER AS image,
d.DOWNLOAD_ID AS download_hash,
t.DATE AS date
FROM
TRANSFER_HISTORY t
@@ -520,180 +445,163 @@ class NAStoolSync(_PluginBase):
Assemble the plugin configuration page; returns two pieces of data: 1) page layout, 2) data structure
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'transfer',
'label': '同步记录'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clear',
'label': '清空记录'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'supp',
'label': '补充数据'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'nt_db_path',
'label': 'NAStool数据库user.db路径',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'path',
'rows': '2',
'label': '历史记录路径映射',
'placeholder': 'NAStool路径:MoviePilot路径一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'downloader',
'rows': '2',
'label': '插件数据下载器映射',
'placeholder': 'NAStool下载器id:qbittorrent|transmission一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'site',
'label': '下载历史站点映射',
'placeholder': 'NAStool站点名:MoviePilot站点名一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '开启清空记录时会在导入历史数据之前删除MoviePilot之前的记录。'
'如果转移记录很多同步时间可能会长3-10分钟'
'所以点击确定后页面没反应是正常现象,后台正在处理。'
'如果开启补充数据会获取tr、qb种子补充转移记录中download_hash缺失的情况同步删除需要'
}
}
]
}
]
}
]
}
], {
"transfer": False,
"clear": False,
"supp": False,
"nt_db_path": "",
"path": "",
"downloader": "",
"site": "",
}
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'transfer',
'label': '同步记录'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'clear',
'label': '清空记录'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'nt_db_path',
'label': 'NAStool数据库user.db路径',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'path',
'rows': '2',
'label': '历史记录路径映射',
'placeholder': 'NAStool路径:MoviePilot路径一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'downloader',
'rows': '2',
'label': '插件数据下载器映射',
'placeholder': 'NAStool下载器id:qbittorrent|transmission一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextarea',
'props': {
'model': 'site',
'label': '下载历史站点映射',
'placeholder': 'NAStool站点名:MoviePilot站点名一行一个'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VAlert',
'props': {
'text': '开启清空记录时会在导入历史数据之前删除MoviePilot之前的记录。'
'如果转移记录很多同步时间可能会长3-10分钟'
'所以点击确定后页面没反应是正常现象,后台正在处理。'
}
}
]
}
]
}
]
}
], {
"transfer": False,
"clear": False,
"supp": False,
"nt_db_path": "",
"path": "",
"downloader": "",
"site": "",
}
def get_page(self) -> List[dict]:
pass

Some files were not shown because too many files have changed in this diff.