Compare commits


210 Commits

Author SHA1 Message Date
jxxghp
0fc7d883c0 v1.9.3
- Search fully upgraded: multiple data types can be searched, and site resources support fuzzy search (non-matching results are no longer filtered out)
- Directory settings support browsing to pick a path
- Support configuring a Github proxy address to speed up version and plugin update downloads
- Improved plugin de-duplication display
- Improved the same-path-first logic in directory matching
- Hints in forms such as Settings are now always shown
- Fixed some plugins disappearing after installation

Note: the frontend has changed; clear the browser cache after upgrading (no need to clear cookies)
2024-06-03 17:38:31 +08:00
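The Github proxy mentioned in these notes is a prefix-style mirror. As a rough sketch (the function name and prefixing behavior are assumptions based on how mirrors such as `https://mirror.ghproxy.com/` typically work, not MoviePilot's actual code), applying it amounts to prepending the proxy base to the original download URL:

```python
# Hypothetical helper: prefix-style Github proxies are applied by
# prepending the proxy base URL to the original download URL.
def proxied_url(url: str, github_proxy: str = "") -> str:
    """Route a Github download URL through the configured proxy, if any."""
    if github_proxy and url.startswith("https://github.com/"):
        return f"{github_proxy}{url}"
    return url
```

For example, `proxied_url("https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip", "https://mirror.ghproxy.com/")` yields the mirrored address.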
jxxghp
95b480af6d fix #2235 2024-06-03 12:40:35 +08:00
jxxghp
abe7795105 Merge pull request #2259 from thsrite/main 2024-06-03 09:59:52 +08:00
thsrite
74c71390c9 fix unhashable type: 'dict' 2024-06-03 09:54:32 +08:00
jxxghp
1ddd844c17 fix 2024-06-03 08:20:16 +08:00
jxxghp
de3ff2db2e remove api async 2024-06-03 08:17:53 +08:00
jxxghp
655e73f829 fix dir match 2024-06-03 08:03:40 +08:00
jxxghp
2232e51509 add GITHUB_PROXY 2024-06-03 07:09:30 +08:00
jxxghp
44f1a321d2 fix #2249 2024-06-02 21:15:58 +08:00
jxxghp
c05223846f fix api 2024-06-02 21:09:15 +08:00
jxxghp
45945bd025 feat: add a delete-download-task event; when source files are deleted from history, the main program also deletes the torrent and emits this event (so cross-seeding plugins and the like can react) 2024-06-01 07:47:00 +08:00
jxxghp
acff7e0610 fix log 2024-05-31 20:03:48 +08:00
jxxghp
e97ae488fd fix online plugin de-duplication 2024-05-31 16:03:27 +08:00
jxxghp
a7689e1e10 fix 2024-05-31 15:08:22 +08:00
jxxghp
9a4d537543 Merge pull request #2237 from hotlcc/develop-20240531 2024-05-31 15:05:22 +08:00
Allen
1b09bb8d22 Remove the logic that proactively creates download directories, decoupling from the downloader and avoiding directory permission errors in container environments when the downloader's path mapping differs from MP's 2024-05-31 15:00:47 +08:00
jxxghp
13832a51e0 add listdir api 2024-05-31 13:55:36 +08:00
jxxghp
a09b2fa88a fix log 2024-05-31 09:10:26 +08:00
jxxghp
6361f8654c fix README 2024-05-30 14:13:15 +08:00
jxxghp
db4bda3b73 Merge remote-tracking branch 'origin/main' 2024-05-30 12:39:02 +08:00
jxxghp
3f557ee43c fix README 2024-05-30 12:38:55 +08:00
jxxghp
9e7e0a8730 Merge pull request #2223 from thsrite/main 2024-05-30 10:42:16 +08:00
thsrite
07de1eaa0d fix plugin version comparison 2024-05-30 10:02:26 +08:00
jxxghp
c872043bf4 Merge pull request #2226 from InfinityPacer/main 2024-05-30 06:33:48 +08:00
InfinityPacer
7ed194a62c fix: load submodules first 2024-05-30 00:58:20 +08:00
jxxghp
882da68903 Merge pull request #2224 from InfinityPacer/main 2024-05-29 23:06:31 +08:00
InfinityPacer
2798700f71 fix: support reloading first-level submodules when a plugin reloads 2024-05-29 23:01:58 +08:00
thsrite
34e70adabb fix: keep the highest-version plugin when repositories share a plugin ID 2024-05-29 20:40:04 +08:00
jxxghp
fe999aa346 fix dir match 2024-05-29 17:30:00 +08:00
jxxghp
f7ca4abb01 fix dir match 2024-05-29 17:28:49 +08:00
jxxghp
8a4202cee5 fix dir match 2024-05-29 17:16:35 +08:00
jxxghp
55a85b87dd fix README 2024-05-29 16:47:03 +08:00
jxxghp
3470f96e39 feat: emit an event on system errors 2024-05-29 16:29:47 +08:00
jxxghp
74980911fe feat: emit an event on system errors 2024-05-29 16:28:17 +08:00
jxxghp
4c5366f8b4 fix subscription type error logging 2024-05-29 13:41:19 +08:00
jxxghp
8eb89eec86 fix message 2024-05-28 15:39:17 +08:00
jxxghp
cfd7208cda Merge remote-tracking branch 'origin/main' 2024-05-28 12:25:39 +08:00
jxxghp
0c6684a572 fix #2208 tolerate bad download-history data 2024-05-28 12:25:33 +08:00
jxxghp
f0692b2fb8 Update __init__.py 2024-05-28 11:50:11 +08:00
jxxghp
c29ee4fb07 Merge pull request #2203 from BrettDean/main 2024-05-28 11:49:07 +08:00
jxxghp
dd40ef54c0 fix #1838 2024-05-28 08:16:51 +08:00
Dean
84d5e2a6b3 fix: Plex media library refresh having no effect 2024-05-28 02:11:42 +08:00
jxxghp
7defcff0e5 v1.9.2
- Fixed same-path priority not working during directory matching
- Fixed the file manager default path still using an old variable
- Fixed empty media library directories being deleted
- Fixed the TR downloader overwriting a task's existing tags after organizing
- Fixed the Windows build, and bundled several major plugin repositories by default
- Improved the episode-filtering experience on the resource search page
- In hardlink mode, if a hardlink fails (and a copy is made instead) a notification is sent, and the transfer method in history shows as copy
- Added a plugin message type, dedicated to plugins that need to send messages
2024-05-27 12:46:58 +08:00
jxxghp
d9e767f87d fix display of the new message type 2024-05-27 12:31:48 +08:00
jxxghp
2b82173fba fix windows build 2024-05-27 12:27:15 +08:00
jxxghp
1425b15333 fix #2192 2024-05-27 11:16:41 +08:00
jxxghp
8d82d0f4fd Merge remote-tracking branch 'origin/main' 2024-05-27 10:26:21 +08:00
jxxghp
d352f09d4e fix #2193 2024-05-27 10:26:09 +08:00
jxxghp
aebd121939 Merge pull request #2195 from hotlcc/develop-20240527
Develop 20240527
2024-05-27 10:13:39 +08:00
jxxghp
81eed0d06d fix windows build 2024-05-27 10:11:41 +08:00
Allen
bacb7aaeb4 Add a module-management method to fetch a module exactly by module id, so plugins can precisely obtain system-maintained modules such as qb/tr instead of each rolling its own 2024-05-27 10:09:18 +08:00
Allen
b238c6ad11 Add plugin message to the message types 2024-05-27 10:06:59 +08:00
jxxghp
5c8b843030 fix directory health check 2024-05-27 08:46:44 +08:00
jxxghp
58acc62e16 feat: send a system notification when a hardlink falls back to copy 2024-05-27 08:11:44 +08:00
jxxghp
ca5a240fc4 fix same-path priority 2024-05-27 07:52:36 +08:00
jxxghp
dd5887d18d fix same-path priority 2024-05-26 13:28:32 +08:00
jxxghp
97669405d0 fix #2185 2024-05-26 13:15:34 +08:00
jxxghp
bf2ea271b6 fix hardlink detection 2024-05-26 12:48:52 +08:00
jxxghp
afd91bf760 v1.9.2-beta
- Fixed episodes sometimes exceeding the download range in rare cases
- Fixed duplicate entries in the manual-organize destination and subscription save-directory lists
- When picking a destination directory from the dropdown during manual organizing, the metadata-scraping option now follows the directory's setting
- When a destination directory is specified, the matching directory configuration (if any) is looked up and its category and scraping settings are applied
2024-05-26 10:03:52 +08:00
jxxghp
7e982eaf4d fix duplicate full-season downloads for subscriptions 2024-05-26 09:34:50 +08:00
jxxghp
5f13824aa6 fix #2184 when manually selecting a download/library directory, look up the matching configuration and categorize (no auto-categorization if none is found) 2024-05-26 08:39:59 +08:00
jxxghp
9ca8e3f4a8 Update category.py 2024-05-25 09:28:48 +08:00
jxxghp
9b749035c9 fix windows build 2024-05-25 07:21:26 +08:00
jxxghp
b8e09a6b06 fix 2024-05-25 07:14:24 +08:00
jxxghp
4bb95d519d v1.9.1-1
- The directory settings can now be fine-tuned per second-level category, so the top-level anime category no longer serves a purpose and has been removed; this also fixes anime mis-categorization. If anime needs its own top-level directory, set a dedicated directory for the anime second-level category under Movies/TV in the directory settings and raise its priority.
- Added ToSky as an indexed site.
- The dashboard supports multiple widgets from a single plugin.
2024-05-25 06:37:28 +08:00
jxxghp
04280021b4 fix anime categorization 2024-05-24 17:21:03 +08:00
jxxghp
355dad9205 Merge pull request #2167 from hotlcc/develop-20240524-插件支持多仪表板组件 2024-05-24 15:49:00 +08:00
Allen
a6714d3712 Allow a single plugin to expose multiple dashboard widgets, with backward compatibility 2024-05-24 14:54:46 +08:00
jxxghp
fe53819a81 fix directory helper 2024-05-24 13:26:20 +08:00
jxxghp
6965415c52 v1.9.1
- Rolled back multithreaded priority filtering to temporarily fix intermittent filter errors (if searches are slow, trim a level off your filter rules)
- Discovery/recommendations now use a TTL cache, fixing rankings not refreshing for long periods
- Fixed duplicate category folders being created when a directory specifies a category
- Subscription save paths can be picked from configured download directories via a dropdown
- Reduced INFO log output; set `DEBUG` to `true` for verbose logs (heavy logging slows responses)
- Added a same-disk-first option to directory settings; when enabled, library directory matching prefers same-disk/same-root paths
- Fixed a subscription matching issue

Note: due to the restructured library directory settings, the `目录监控` (directory monitor) plugin no longer auto-creates category subdirectories when a destination path is specified. It is recommended not to set a destination in the plugin; instead configure the directory structure under `设定->目录` (Settings -> Directories) and let the system match automatically (enable same-disk-first as needed). If a category can determine the directory, use the built-in downloader monitor instead: directory monitoring is only a plugin, intended for organizing files not downloaded through MoviePilot.
2024-05-24 12:47:34 +08:00
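A TTL cache, as adopted here for discovery/recommendations, serves cached results until an expiry elapses and then refetches. A minimal stdlib sketch of the idea (the decorator and TTL value are illustrative, not the project's actual implementation):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds: float):
    """Memoize a function, expiring each cached entry after ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (value, timestamp)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]      # still fresh: serve the cached value
            value = fn(*args)      # missing or expired: recompute
            store[args] = (value, now)
            return value
        return wrapper
    return decorator
```

Decorating a hypothetical `tmdb_discover(page)` fetcher with `@ttl_cache(1800)` would keep rankings for at most 30 minutes, which is how a TTL cache fixes the never-refreshes symptom of a plain unbounded cache.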
jxxghp
9be671fa2c fix 82226f1956 2024-05-24 12:36:51 +08:00
jxxghp
27b4f206a1 fix subscription refresh covering too many sites 2024-05-24 12:31:11 +08:00
jxxghp
a2b0c9bd3a fix #2164 2024-05-24 11:37:26 +08:00
jxxghp
ebc46d7d3b fix #2165 2024-05-24 11:31:37 +08:00
jxxghp
eb4e4b5141 feat: add same-disk-first setting 2024-05-24 11:19:21 +08:00
jxxghp
be11ef72a9 Reduce INFO log output && roll back multithreaded filtering 2024-05-24 10:48:56 +08:00
jxxghp
a278c80951 fix #2151 switch TMDB discovery and trending to a TTL cache 2024-05-24 09:29:51 +08:00
jxxghp
6ee6de48ff Attempt to fix concurrent filter errors 2024-05-24 08:56:32 +08:00
jxxghp
671bdad77c v1.9.1-beta
- Improved subscription matching logic
- Directory settings fully reworked; multi-directory support is more flexible, and existing directory configurations are migrated to the new format automatically
- Manual organizing supports picking configured directories from a dropdown
- Metadata scraping no longer has a global switch; it is configured per library directory, and plugins such as directory monitoring gained a scraping switch that must be enabled manually
- Site management supports drag-and-drop ordering
- Fixed hidden dashboard widgets still refreshing

Note: the frontend has changed; clear the browser cache after upgrading (cache files only, no need to clear cookies)
2024-05-23 19:43:08 +08:00
jxxghp
a9ff8ec96d Update README.md 2024-05-23 14:28:40 +08:00
jxxghp
d1678355f1 fix #2099 2024-05-23 12:45:27 +08:00
jxxghp
ea399daef9 Update __init__.py 2024-05-23 11:46:30 +08:00
jxxghp
e1122af97c Update __init__.py 2024-05-23 10:05:56 +08:00
jxxghp
21861111e6 Update download.py 2024-05-23 10:05:07 +08:00
jxxghp
bd1e83ee8a fix 2024-05-23 08:53:20 +08:00
jxxghp
43da33bc50 fix 2024-05-23 08:41:20 +08:00
jxxghp
a09a207407 fix manual_transfer api 2024-05-23 08:09:36 +08:00
jxxghp
0aa3aa8521 Merge pull request #2146 from zhu0823/main 2024-05-23 07:10:48 +08:00
jxxghp
d9c6375252 feat: major directory-structure overhaul; update to dev with caution 2024-05-22 20:02:14 +08:00
jxxghp
f1f187fc77 add media category api 2024-05-22 18:02:28 +08:00
zhu0823
99d22554a1 fix: Plex recently-added items of type show 2024-05-22 13:34:03 +08:00
jxxghp
4835f6c6c9 Update subscribe.py 2024-05-21 23:17:59 +08:00
jxxghp
5be2bf0633 feat: make the server address a variable 2024-05-21 17:31:12 +08:00
jxxghp
7c7bc0b504 fix log 2024-05-21 17:14:16 +08:00
jxxghp
e0939fee75 fix https://github.com/jxxghp/MoviePilot/issues/2134 2024-05-21 16:36:09 +08:00
jxxghp
82226f1956 fix https://github.com/jxxghp/MoviePilot/issues/2125 adjust subscription matching logic 2024-05-21 11:57:12 +08:00
jxxghp
cfb43b4b04 fix torrent match 2024-05-21 09:54:06 +08:00
jxxghp
ebe2795eae Merge pull request #2128 from jeblove/main 2024-05-21 06:48:54 +08:00
jeblove
f7f747278d fix: Jellyfin webhook percentage sometimes None, causing errors 2024-05-20 23:47:37 +08:00
jxxghp
58f17e89b6 v1.9.0
- Support user-customizable theme styles
- The resource search list view supports sorting
- Added `YemaPT` as an indexed site
- Fixed subscription queries spinning indefinitely in some cases
- Fixed the default best-version (洗版) settings not taking effect
- Fixed the system failing to shut down while active connections existed
- Reverted the last small change of the previous version (it broke the app-like experience when adding a desktop icon on mobile)
2024-05-20 18:24:39 +08:00
jxxghp
433ca2ec28 fix YemaPT 2024-05-20 17:41:46 +08:00
jxxghp
ffac57ad4d Support YemaPT 2024-05-20 16:55:36 +08:00
jxxghp
0d2a4c50d6 fix error message 2024-05-20 15:39:43 +08:00
jxxghp
02c2edc30e fix service failing to stop while WEB pages were active 2024-05-20 11:50:49 +08:00
jxxghp
65975235d4 fix scheduler 2024-05-20 10:16:20 +08:00
jxxghp
07a6abde0e fix json exception 2024-05-20 09:13:51 +08:00
jxxghp
fa47d9adeb fix 2024-05-19 14:17:34 +08:00
jxxghp
18d08c3672 fix 2024-05-19 12:35:21 +08:00
jxxghp
cf20049b7f Merge pull request #2117 from InfinityPacer/main
fix: when DEBUG is enabled, the LOG file also records DEBUG logs
2024-05-19 12:28:08 +08:00
InfinityPacer
e3ce3302da fix: when DEBUG is enabled, the LOG file also records DEBUG logs 2024-05-19 12:18:43 +08:00
jxxghp
d20951e7a0 Merge pull request #2116 from InfinityPacer/main 2024-05-19 12:08:03 +08:00
InfinityPacer
8a565bb79f fix #2113 2024-05-19 11:53:18 +08:00
jxxghp
cfdc8fb2c3 Merge pull request #2115 from InfinityPacer/main 2024-05-19 11:02:19 +08:00
InfinityPacer
111f830664 fix #2104 2024-05-19 10:58:09 +08:00
jxxghp
2821d6a9dc fix #2076 2024-05-19 09:54:52 +08:00
jxxghp
495d98c2b2 fix #2086 subscriptions failing to open when no sites exist && default best-version not taking effect 2024-05-19 09:52:00 +08:00
jxxghp
e1e2779e48 fix #2101 add index.php to 1ptba check-in; auth and torrent browsing use specific URIs, which the site may need to allow if problems arise. 2024-05-19 09:42:22 +08:00
jxxghp
363318f4f0 Add logging to smart download 2024-05-19 09:19:44 +08:00
jxxghp
521b960364 catch command exception 2024-05-19 08:35:18 +08:00
jxxghp
d2bcb197eb v1.8.9
- Sites gained an independent timeout setting (default 15 seconds); tune per-site timeouts based on connectivity to shorten search time
- Multithreading now speeds up priority filtering of search results
- Greatly improved page load speed for returning logins (requires re-pulling the image)
- The dashboard drag handles are hidden by default, so they no longer affect borderless widgets
- When the site is installed as a desktop App, forward/back navigation buttons are supported
2024-05-17 14:51:02 +08:00
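The per-site timeout described above is effectively an override of the 15-second default. A sketch of the lookup (the function name and overrides structure are assumptions for illustration, not the project's actual settings model):

```python
DEFAULT_SITE_TIMEOUT = 15  # seconds, per the release notes

def site_timeout(site: str, overrides: dict) -> int:
    """Return the timeout configured for a site, or the global default.

    A search request would then bound each site individually, e.g.
    requests.get(url, timeout=site_timeout(site, overrides)), so one
    slow tracker no longer stretches the whole search.
    """
    return overrides.get(site, DEFAULT_SITE_TIMEOUT)
```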
jxxghp
b0f9ca52e3 fix nginx cache 2024-05-17 14:37:25 +08:00
jxxghp
01a3efd402 fix nginx cache 2024-05-17 13:58:29 +08:00
jxxghp
a50427948a feat: multithreaded filtering 2024-05-17 12:20:39 +08:00
jxxghp
5614f10962 fix get_dashboard 2024-05-17 10:50:03 +08:00
jxxghp
5ff80dbe89 fix get_dashboard 2024-05-17 10:49:47 +08:00
jxxghp
278835c5d4 fix 2024-05-17 07:46:27 +08:00
jxxghp
92cdd67f3a Merge pull request #2093 from thsrite/main 2024-05-16 20:53:57 +08:00
thsrite
c56b58cc56 fix 2024-05-16 20:50:42 +08:00
thsrite
8bd4c21511 fix 2024-05-16 20:49:28 +08:00
thsrite
b94f201667 fix: in split-pack scenarios, the download notification title shows the episodes actually downloaded 2024-05-16 20:44:57 +08:00
jxxghp
125e9eb30a Merge pull request #2091 from happyTonakai/main
Add a playback percentage field to the Jellyfin webhook
2024-05-16 20:36:24 +08:00
happyTonakai
ea09d8c8d4 Add a playback percentage field to the Jellyfin webhook 2024-05-16 20:20:24 +08:00
jxxghp
de0237f348 dashboard passes the UA 2024-05-16 17:32:35 +08:00
jxxghp
62143bf7b6 fix db 2024-05-16 16:18:22 +08:00
jxxghp
3088bbb2f8 Per-site timeout settings 2024-05-16 14:42:10 +08:00
jxxghp
43647e59a4 fix module name 2024-05-16 14:20:18 +08:00
jxxghp
a740330e66 Update README.md 2024-05-15 19:25:53 +08:00
jxxghp
10d4766353 fix bug 2024-05-15 07:42:03 +08:00
jxxghp
0a845fe8b6 Merge remote-tracking branch 'origin/main' 2024-05-14 20:37:18 +08:00
jxxghp
90dc52bb70 v1.8.8
- Added system-level notifications
- Improved the theme-switching experience
- Fixed the dashboard storage-space card not being movable
- Fixed auto-refreshing dashboard widgets being duplicated
- Fixed plugin chart font colors being too dark in the dark theme
- Dashboard widgets support a borderless mode
- Bundled several major third-party plugin repositories by default
- Priority rules support drag-and-drop ordering
2024-05-14 20:37:03 +08:00
jxxghp
3816e2fba8 Merge pull request #2073 from developer-wlj/wlj0409 2024-05-14 20:35:22 +08:00
mayun110
20d92ca577 fix: missing Jellyfin webhook fields for favorite-triggered best-version 2024-05-14 20:11:49 +08:00
jxxghp
db92761964 fix ide warning 2024-05-14 18:04:10 +08:00
jxxghp
3af5870733 Merge pull request #2071 from hotlcc/develop-20240514-修复一些发现的问题 2024-05-14 18:00:08 +08:00
Allen
53d01267b8 Fix module loading: a failure in a prerequisite module no longer prevents subsequent modules from loading 2024-05-14 17:25:22 +08:00
jxxghp
5ff21641f9 Bundle some third-party plugin repositories 2024-05-14 15:27:50 +08:00
jxxghp
643f2e3e66 Merge remote-tracking branch 'origin/main' 2024-05-14 10:08:15 +08:00
jxxghp
ae839235eb feat: push a frontend message on service errors 2024-05-14 10:08:07 +08:00
jxxghp
5af94144ce Merge pull request #2067 from thsrite/main 2024-05-14 09:48:05 +08:00
thsrite
4281692321 fix: add an env-var switch for plugin hot-reload (keeping the DEV switch) 2024-05-14 09:24:02 +08:00
jxxghp
bd6d6b6882 fix 2024-05-14 08:30:29 +08:00
jxxghp
fabb02a8a0 fix README 2024-05-13 20:25:52 +08:00
jxxghp
223e655b6f feat: plugin APIs take effect immediately 2024-05-13 20:15:47 +08:00
jxxghp
f0cb5b3e85 Merge pull request #2059 from EkkoG/fix_min_seeders_time 2024-05-13 11:25:10 +08:00
EkkoG
7579aae823 Fix min_seeders_time not being read for subscriptions, so the minimum-seeders grace period never took effect 2024-05-13 11:06:49 +08:00
jxxghp
36a8b6d780 Update download.py 2024-05-13 07:38:14 +08:00
jxxghp
0471167b74 fix dev 2024-05-12 20:12:59 +08:00
jxxghp
28b996e54b Merge pull request #2056 from DDS-Derek/bump 2024-05-12 11:17:13 +08:00
DDSRem
d0989f72a9 bump: debian bullseye to bookworm 2024-05-12 09:55:35 +08:00
jxxghp
663e61e3a1 fix message api 2024-05-11 16:24:25 +08:00
jxxghp
71b0090947 Merge remote-tracking branch 'origin/main' 2024-05-11 12:53:40 +08:00
jxxghp
8ff0f81f47 fix message api 2024-05-11 12:53:32 +08:00
jxxghp
4186613a86 Merge pull request #2051 from InfinityPacer/main 2024-05-11 06:25:50 +08:00
InfinityPacer
4e1be23317 fix: add double debouncing to hot-reload 2024-05-11 01:23:33 +08:00
jxxghp
0e9e626ab6 fix 2024-05-10 20:09:45 +08:00
jxxghp
3d5761157a Merge pull request #2047 from InfinityPacer/main
fix hot-reload paths across platforms and plugin instantiation
2024-05-10 19:57:48 +08:00
InfinityPacer
c407800b30 fix hot-reload paths across platforms and plugin instantiation 2024-05-10 19:27:44 +08:00
jxxghp
c888a37aba fix README 2024-05-10 18:29:20 +08:00
jxxghp
d43998efee fix plugin hot-reload de-duplication 2024-05-10 18:27:31 +08:00
jxxghp
8813e84053 feat: in DEV mode, plugins automatically reload when their files are modified 2024-05-10 13:33:00 +08:00
jxxghp
cf5a746f53 fix #2037 2024-05-10 08:22:05 +08:00
jxxghp
f9f58fc559 v1.8.7
- Added DiscFan to the authentication sites
- Plugins can now render widgets in the dashboard; the site-statistics and site-brushing plugins already support this, others await adaptation by their developers; plugin development notes: https://github.com/jxxghp/MoviePilot-Plugins
- Dashboard widgets can be reordered by drag-and-drop
- Adjusted the display style of subscriber counts for popular subscriptions
2024-05-09 20:06:19 +08:00
jxxghp
f59b5b6d27 fix plugin dashboard 2024-05-09 19:12:43 +08:00
jxxghp
30b3ad4a99 fix plugin dashboard 2024-05-09 19:12:12 +08:00
jxxghp
dfb9ce7520 fix tmdb match api 2024-05-09 18:49:58 +08:00
jxxghp
6c365f552e Merge pull request #2038 from Devinaille/fix_subtitle_match 2024-05-09 11:40:03 +08:00
tianyf
a81ee7d89a Fix titles failing to match when the subtitle contains 【】 2024-05-09 11:18:00 +08:00
jxxghp
5c9039e6d0 feat: plugin dashboard API 2024-05-08 20:58:28 +08:00
jxxghp
cce2e13e21 fix README.md 2024-05-08 16:46:35 +08:00
jxxghp
0da87abc71 Merge remote-tracking branch 'origin/main' 2024-05-08 15:54:56 +08:00
jxxghp
6a2eecc744 fix #2016 2024-05-08 15:54:50 +08:00
jxxghp
c049e13c1c Merge pull request #2032 from z3shan33/main 2024-05-08 13:04:13 +08:00
z3shan33
7ec49ce076 fix #2031 2024-05-08 11:09:36 +08:00
jxxghp
5be2fc35b5 Merge pull request #2026 from zhu0823/main 2024-05-07 18:31:12 +08:00
zhu0823
0b84312559 fix: errors when parentThumb is None for some episodes 2024-05-07 18:28:53 +08:00
jxxghp
8bb43b52bc Improve popular-subscription statistics; popular subscriptions now show subscriber counts 2024-05-07 16:19:05 +08:00
jxxghp
bd348f118c fix subscription statistics cleanup 2024-05-07 16:01:52 +08:00
jxxghp
4a3a3483d0 fix api 2024-05-07 12:31:24 +08:00
jxxghp
fd6314f19f fix 2024-05-06 22:48:24 +08:00
jxxghp
17a9f3a626 Update version.py 2024-05-06 19:08:49 +08:00
jxxghp
75c898e6eb Merge pull request #2018 from zhu0823/main 2024-05-06 18:37:00 +08:00
zhu0823
089d4785aa fix: cover URL for recently added Plex episodes 2024-05-06 18:35:10 +08:00
jxxghp
4d48295f72 fix bug 2024-05-06 16:46:03 +08:00
jxxghp
ed119b7beb fix data-sharing switch 2024-05-06 12:37:51 +08:00
jxxghp
90d5a8b0c9 add data-sharing switch 2024-05-06 11:54:32 +08:00
jxxghp
dd5c0de7b1 feat: subscription statistics 2024-05-06 11:19:41 +08:00
jxxghp
73bdca282c fix api desc 2024-05-05 13:51:49 +08:00
jxxghp
360a54581f add api 2024-05-05 13:43:14 +08:00
jxxghp
1fc7587cbb fix Douban & Bangumi searches being too broad 2024-05-05 12:14:58 +08:00
jxxghp
dcd46f1627 fix #2001 2024-05-05 11:54:42 +08:00
jxxghp
d8644a20c0 fix #2011 2024-05-05 11:22:23 +08:00
jxxghp
23b47f98c1 Merge pull request #2009 from DDS-Derek/main 2024-05-05 09:55:00 +08:00
DDSRem
347c91fa0b bump: ca-certificates version to bookworm 2024-05-04 19:14:30 +08:00
jxxghp
ac961b37b4 Merge pull request #2004 from thsrite/main 2024-05-03 13:52:45 +08:00
thsrite
068c49a79a fix minimum-seeders grace period 2024-05-03 13:09:08 +08:00
jxxghp
e7174b402c Merge pull request #1988 from zhu0823/main 2024-04-30 19:20:33 +08:00
zhu0823
d21267090a Sync upstream api updates 2024-04-30 19:14:29 +08:00
zhu0823
51dc2c33a0 feat: blacklist filter for Plex recently added 2024-04-30 19:14:08 +08:00
jxxghp
8aef488ab6 Merge pull request #1986 from Sxnan/tmdbid-type 2024-04-30 16:03:44 +08:00
sxnan
0cbf45f9b9 Cast tmdbid to int type after getting metainfo from title 2024-04-30 15:55:10 +08:00
110 changed files with 3223 additions and 1367 deletions


@@ -80,11 +80,13 @@ jobs:
- name: Prepare Frontend
run: |
# Download nginx
Invoke-WebRequest -Uri "http://nginx.org/download/nginx-1.25.2.zip" -OutFile "nginx.zip"
Expand-Archive -Path "nginx.zip" -DestinationPath "nginx-1.25.2"
Move-Item -Path "nginx-1.25.2/nginx-1.25.2" -Destination "nginx"
Remove-Item -Path "nginx.zip"
Remove-Item -Path "nginx-1.25.2" -Recurse -Force
# Download the frontend
$FRONTEND_VERSION = (Invoke-WebRequest -Uri "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest" | ConvertFrom-Json).tag_name
Invoke-WebRequest -Uri "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/$FRONTEND_VERSION/dist.zip" -OutFile "dist.zip"
Expand-Archive -Path "dist.zip" -DestinationPath "dist"
@@ -96,11 +98,31 @@ jobs:
New-Item -Path "nginx/temp/__keep__.txt" -ItemType File -Force
New-Item -Path "nginx/logs" -ItemType Directory -Force
New-Item -Path "nginx/logs/__keep__.txt" -ItemType File -Force
# Download plugins: jxxghp
Invoke-WebRequest -Uri "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" -OutFile "MoviePilot-Plugins-main.zip"
Expand-Archive -Path "MoviePilot-Plugins-main.zip" -DestinationPath "MoviePilot-Plugins-main"
Move-Item -Path "MoviePilot-Plugins-main/MoviePilot-Plugins-main/plugins/*" -Destination "app/plugins/" -Force
Move-Item -Path "MoviePilot-Plugins-main/MoviePilot-Plugins-main/plugins/*" -Destination "app/plugins/" -Force -ErrorAction SilentlyContinue
Remove-Item -Path "MoviePilot-Plugins-main.zip"
Remove-Item -Path "MoviePilot-Plugins-main" -Recurse -Force
# Download plugins: thsrite
Invoke-WebRequest -Uri "https://github.com/thsrite/MoviePilot-Plugins/archive/refs/heads/main.zip" -OutFile "MoviePilot-Plugins-main.zip"
Expand-Archive -Path "MoviePilot-Plugins-main.zip" -DestinationPath "MoviePilot-Plugins-main"
Move-Item -Path "MoviePilot-Plugins-main/MoviePilot-Plugins-main/plugins/*" -Destination "app/plugins/" -Force -ErrorAction SilentlyContinue
Remove-Item -Path "MoviePilot-Plugins-main.zip"
Remove-Item -Path "MoviePilot-Plugins-main" -Recurse -Force
# Download plugins: honue
Invoke-WebRequest -Uri "https://github.com/honue/MoviePilot-Plugins/archive/refs/heads/main.zip" -OutFile "MoviePilot-Plugins-main.zip"
Expand-Archive -Path "MoviePilot-Plugins-main.zip" -DestinationPath "MoviePilot-Plugins-main"
Move-Item -Path "MoviePilot-Plugins-main/MoviePilot-Plugins-main/plugins/*" -Destination "app/plugins/" -Force -ErrorAction SilentlyContinue
Remove-Item -Path "MoviePilot-Plugins-main.zip"
Remove-Item -Path "MoviePilot-Plugins-main" -Recurse -Force
# Download plugins: InfinityPacer
Invoke-WebRequest -Uri "https://github.com/InfinityPacer/MoviePilot-Plugins/archive/refs/heads/main.zip" -OutFile "MoviePilot-Plugins-main.zip"
Expand-Archive -Path "MoviePilot-Plugins-main.zip" -DestinationPath "MoviePilot-Plugins-main"
Move-Item -Path "MoviePilot-Plugins-main/MoviePilot-Plugins-main/plugins/*" -Destination "app/plugins/" -Force -ErrorAction SilentlyContinue
Remove-Item -Path "MoviePilot-Plugins-main.zip"
Remove-Item -Path "MoviePilot-Plugins-main" -Recurse -Force
# Download resources
Invoke-WebRequest -Uri "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" -OutFile "MoviePilot-Resources-main.zip"
Expand-Archive -Path "MoviePilot-Resources-main.zip" -DestinationPath "MoviePilot-Resources-main"
Move-Item -Path "MoviePilot-Resources-main/MoviePilot-Resources-main/resources/*" -Destination "app/helper/" -Force
@@ -137,6 +159,7 @@ jobs:
python -m pip install --upgrade pip
pip install wheel pyinstaller
pip install -r requirements.txt
find app/plugins -name requirements.txt -exec pip install -r {} \;
- name: Prepare Frontend
run: |

.gitignore

@@ -15,4 +15,5 @@ config/user.db
config/sites/**
*.pyc
*.log
.vscode
.vscode
venv


@@ -1,4 +1,4 @@
FROM python:3.11.4-slim-bullseye
FROM python:3.11.4-slim-bookworm
ARG MOVIEPILOT_VERSION
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
@@ -16,6 +16,7 @@ ENV LANG="C.UTF-8" \
IYUU_SIGN=""
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install \
musl-dev \
nginx \


@@ -6,6 +6,8 @@
Release channel: https://t.me/moviepilot_channel
Wiki: https://wiki.movie-pilot.org
## Key Features
- Separate frontend and backend based on FastApi + Vue3; frontend project: [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend); API docs: http://localhost:3001/docs
- Focuses on core needs, simplifying features and settings; many settings can simply keep their defaults.
@@ -48,7 +50,7 @@ MoviePilot requires a companion downloader and media server.
- Windows
1. Standalone executable: download [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases), double-click to run; the config directory is generated automatically; visit http://localhost:3000
2. Installer version: [Windows-MoviePilot](https://github.com/developer-wlj/Windows-MoviePilot)
2. Installer version [recommended]: [Windows-MoviePilot](https://github.com/developer-wlj/Windows-MoviePilot)
- Synology package
@@ -82,7 +84,7 @@ MoviePilot requires a companion downloader and media server.
- **❗AUTH_SITE:** authentication site (site features only work after authentication succeeds); multiple sites may be configured, separated by `,`, e.g. `iyuu,hhclub`; authentication is attempted in order until one site succeeds.
After configuring `AUTH_SITE`, configure that site's authentication parameters per the table below.
Authentication resources `v1.2.4+` support: `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba` /`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`/`qingwa`
Authentication resources `v1.2.8+` support: `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba` /`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`/`qingwa`/`discfan`
| Site | Parameters |
|:------------:|:-----------------------------------------------------:|
@@ -103,6 +105,7 @@ MoviePilot requires a companion downloader and media server.
| agsvpt | `AGSVPT_UID`: user ID<br/>`AGSVPT_PASSKEY`: passkey |
| hdkyl | `HDKYL_UID`: user ID<br/>`HDKYL_PASSKEY`: passkey |
| qingwa | `QINGWA_UID`: user ID<br/>`QINGWA_PASSKEY`: passkey |
| discfan | `DISCFAN_UID`: user ID<br/>`DISCFAN_PASSKEY`: passkey |
### 2. **Environment variables / config file**
@@ -115,7 +118,8 @@ MoviePilot requires a companion downloader and media server.
- **DOH_ENABLE:** DNS over HTTPS switch, `true`/`false`, default `true`; when enabled, domains such as api.themoviedb.org are resolved over DoH to reduce DNS poisoning and improve connectivity
- **META_CACHE_EXPIRE:** metadata recognition cache expiry in hours (numeric); unset or 0 uses the system default (7 days in big-memory mode, otherwise 3 days); raising it reduces themoviedb requests
- **GITHUB_TOKEN:** Github token; raises the Github API rate limit for auto-updates, plugin installs, etc.; format: ghp_****
- **DEV:** developer mode, `true`/`false`, default `false`; when enabled, all scheduled tasks are paused
- **GITHUB_PROXY:** Github proxy address used to speed up version and plugin upgrades/installs, format: `https://mirror.ghproxy.com/`
- **DEV:** developer mode, `true`/`false`, default `false`; for local development only; pauses all scheduled tasks, and plugin code changes take effect automatically without a restart
- **AUTO_UPDATE_RESOURCE:** auto-detect and update resource packages (site indexes, authentication, etc.) at startup, `true`/`false`, default `true`; requires Github connectivity; Docker image only
---
- **TMDB_API_DOMAIN:** TMDB API address, default `api.themoviedb.org`; may also be `api.tmdb.org`, `tmdb.movie-pilot.org`, or another relay proxy, as long as it is reachable
@@ -127,13 +131,13 @@ MoviePilot requires a companion downloader and media server.
- **SCRAP_FOLLOW_TMDB:** whether already-added library media follow TMDB info changes, `true`/`false`, default `true`; when `false`, scraping keeps the info recorded in history even if TMDB changes
---
- **AUTO_DOWNLOAD_USER:** user IDs (on the notification channel) for which remote interactive search auto-picks the best resource; separate multiple users with `,`; set to all for every user; if unset, resources must be chosen manually, or auto-pick happens only after replying `0`
- **DOWNLOAD_SUBTITLE:** download site subtitles, `true`/`false`, default `true`
- **SEARCH_MULTIPLE_NAME:** whether to search with multiple names, `true`/`false`, default `false`; when enabled, multiple names are searched for more complete results at the cost of time; when disabled, searching stops once one name returns results or all names are exhausted
- **SUBSCRIBE_STATISTIC_SHARE:** anonymously share subscription data to compute and display popular subscriptions, `true`/`false`, default `true`
- **PLUGIN_STATISTIC_SHARE:** anonymously share plugin installation statistics to count and display plugin installs, `true`/`false`, default `true`
---
- **OCR_HOST:** OCR server address, format `http(s)://ip:port`, used to recognize site captchas for auto-login, Cookie retrieval, etc.; defaults to the built-in server `https://movie-pilot.org`; you can self-host with [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr).
---
- **DOWNLOAD_SUBTITLE:** download site subtitles, `true`/`false`, default `true`
---
- **SEARCH_MULTIPLE_NAME:** whether to search with multiple names, `true`/`false`, default `false`; when enabled, multiple names are searched for more complete results at the cost of time; when disabled, searching stops once one name returns results or all names are exhausted
---
- **MOVIE_RENAME_FORMAT:** movie renaming format, based on jinja2 syntax
Supported fields for `MOVIE_RENAME_FORMAT`:
@@ -190,8 +194,12 @@ MoviePilot requires a companion downloader and media server.
### 4. **Plugin extensions**
- **PLUGIN_MARKET:** plugin market repository addresses (Github repositories only, `main` branch), multiple addresses separated by `,`, defaulting to the official repository: `https://github.com/jxxghp/MoviePilot-Plugins`; see forks of the [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins) project or the pinned channel posts for more third-party repositories
- **PLUGIN_MARKET:** plugin market repository addresses (Github repositories only, `main` branch), multiple addresses separated by `,`; see forks of the [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins) project or the pinned channel posts for more third-party repositories; `130+` plugins are available so far.
The following plugin repositories are built in by default:
1. https://github.com/jxxghp/MoviePilot-Plugins
2. https://github.com/thsrite/MoviePilot-Plugins
3. https://github.com/honue/MoviePilot-Plugins
4. https://github.com/InfinityPacer/MoviePilot-Plugins
## Usage
@@ -202,16 +210,17 @@ MoviePilot requires a companion downloader and media server.
- Quickly add sites by syncing through CookieCloud; disable or delete unneeded sites in the WEB UI; sites that cannot be synced can be added manually.
- User authentication info must be set via environment variables; site features (including site-related plugins) only become available after successful authentication.
### 3. **File organizing**
- By default, completed downloads are organized into the library and scraped automatically by monitoring the downloader; the `下载器监控` (downloader monitor) switch must be enabled, and only tasks added through MoviePilot are handled.
- By default, completed downloads are organized into the library and scraped automatically by monitoring the downloader; the `下载器监控` switch must be enabled, download and library directories must be configured in Settings, and only tasks added through MoviePilot (tagged `MOVIEPILOT`) are handled.
- The downloader monitor polls every 5 minutes by default; with qbittorrent, fill `curl "http://localhost:3000/api/v1/transfer/now?token=moviepilot"` into `QB设置` -> `下载完成时运行外部程序` (run external program on completion) to organize immediately after a download finishes instead of waiting for the poll (adjust address, port, and token; curl may also be replaced with wget).
- Use plugins such as `目录监控` (directory monitor) for more flexible automatic organizing.
- Use plugins such as `目录监控` (directory monitor) for more flexible automatic organizing (for organizing resources downloaded outside MoviePilot).
### 4. **Notifications and interaction**
- Remote management and subscription downloads are supported via channels such as `微信` (WeChat)/`Telegram`/`Slack`/`SynologyChat`/`VoceChat`; WeChat/Telegram automatically add an operations menu (WeChat limits the number of menu entries, so some may not show).
- The `微信` callback and `SynologyChat` incoming relative path is `/api/v1/message/`; the `VoceChat` Webhook relative path is `/api/v1/message/?token=moviepilot`, where moviepilot is your configured `API_TOKEN`.
- The plugin market has notification plugins for other channels (one-way only); install them as needed.
### 5. **Subscriptions and search**
- Search and subscribe from the MoviePilot management UI.
- Add MoviePilot to `Overseerr` or `Jellyseerr` as a `Radarr` or `Sonarr` server, then browse and add subscriptions from `Overseerr/Jellyseerr`.
- Install plugins such as `豆瓣榜单订阅` and `猫眼订阅` to automatically subscribe to Douban, Maoyan, and other rankings
- Install plugins such as `豆瓣榜单订阅`, `猫眼订阅`, and `热门订阅` to automatically subscribe to various rankings
### 6. **Other**
- Point the media server's Webhook at MoviePilot (relative path `/api/v1/webhook?token=moviepilot`, where `moviepilot` is your configured `API_TOKEN`) to have MoviePilot send playback notifications and enable plugins such as playback rate limiting.
- Map the host's `docker.sock` to `/var/run/docker.sock` in the container to enable in-app restart. Example: `-v /var/run/docker.sock:/var/run/docker.sock:ro`.
@@ -259,9 +268,15 @@ sudo sysctl -p
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f2654b09-26f3-464f-a0af-1de3f97832ee)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/fcb87529-56dd-43df-8337-6e34b8582819)
![mp1](https://github.com/jxxghp/MoviePilot/assets/51039935/123a39cd-11b6-4bd3-b3a3-e3e1103416b9)
![mp2](https://github.com/jxxghp/MoviePilot/assets/51039935/230de330-8162-4314-a897-0bce9b1e5d00)
![mp3](https://github.com/jxxghp/MoviePilot/assets/51039935/de678f45-bbe4-4bdc-aefc-cc5ae774a726)
![mp4](https://github.com/jxxghp/MoviePilot/assets/51039935/8bc6a0a5-7c57-464f-ae63-d6241310dc43)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/bfa77c71-510a-46a6-9c1e-cf98cb101e3a)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/51cafd09-e38c-47f9-ae62-1e83ab8bf89b)


@@ -1,3 +1,4 @@
from pathlib import Path
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
@@ -5,10 +6,10 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.dashboard import DashboardChain
from app.core.config import settings
from app.core.security import verify_token, verify_uri_token
from app.db import get_db
from app.db.models.transferhistory import TransferHistory
from app.helper.directory import DirectoryHelper
from app.scheduler import Scheduler
from app.utils.system import SystemUtils
@@ -47,7 +48,8 @@ def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query storage space information
"""
total_storage, free_storage = SystemUtils.space_usage(settings.LIBRARY_PATHS)
library_dirs = DirectoryHelper().get_library_dirs()
total_storage, free_storage = SystemUtils.space_usage([Path(d.path) for d in library_dirs if d.path])
return schemas.Storage(
total_storage=total_storage,
used_storage=total_storage - free_storage
@@ -75,6 +77,10 @@ def downloader(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query downloader information
"""
# Download directory space
download_dirs = DirectoryHelper().get_download_dirs()
_, free_space = SystemUtils.space_usage([Path(d.path) for d in download_dirs if d.path])
# Downloader information
downloader_info = schemas.DownloaderInfo()
transfer_infos = DashboardChain().downloader_info()
if transfer_infos:
@@ -83,7 +89,7 @@ def downloader(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
downloader_info.upload_speed += transfer_info.upload_speed
downloader_info.download_size += transfer_info.download_size
downloader_info.upload_size += transfer_info.upload_size
downloader_info.free_space = SystemUtils.free_space(settings.SAVE_PATH)
downloader_info.free_space = free_space
return downloader_info


@@ -23,13 +23,13 @@ def read(
return DownloadChain().downloading()
@router.post("/", summary="添加下载", response_model=schemas.Response)
@router.post("/", summary="添加下载(含媒体信息)", response_model=schemas.Response)
def download(
media_in: schemas.MediaInfo,
torrent_in: schemas.TorrentInfo,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
Add a download task
Add a download task (with media info)
"""
# Metadata
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
@@ -46,17 +46,19 @@ def download(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name)
return schemas.Response(success=True if did else False, data={
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
"download_id": did
})
@router.post("/add", summary="添加下载", response_model=schemas.Response)
@router.post("/add", summary="添加下载(不含媒体信息)", response_model=schemas.Response)
def add(
torrent_in: schemas.TorrentInfo,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
Add a download task
Add a download task (without media info)
"""
# Metadata
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
@@ -74,7 +76,9 @@ def add(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name)
return schemas.Response(success=True if did else False, data={
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
"download_id": did
})


@@ -49,7 +49,7 @@ def list_path(path: str,
# Walk the directory
path_obj = Path(path)
if not path_obj.exists():
logger.error(f"目录不存在:{path}")
logger.warn(f"目录不存在:{path}")
return []
# If it is a file
@@ -98,6 +98,47 @@ def list_path(path: str,
return ret_items
@router.get("/listdir", summary="所有目录(不含文件)", response_model=List[schemas.FileItem])
def list_dir(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query all directories under the current path
"""
# Result list
ret_items = []
if not path or path == "/":
if SystemUtils.is_windows():
partitions = SystemUtils.get_windows_drives() or ["C:/"]
for partition in partitions:
ret_items.append(schemas.FileItem(
type="dir",
path=partition + "/",
name=partition,
children=[]
))
return ret_items
else:
path = "/"
else:
if not SystemUtils.is_windows() and not path.startswith("/"):
path = "/" + path
# Walk the directory
path_obj = Path(path)
if not path_obj.exists():
logger.warn(f"目录不存在:{path}")
return []
# Walk all subdirectories
for item in SystemUtils.list_sub_directory(path_obj):
ret_items.append(schemas.FileItem(
type="dir",
path=str(item).replace("\\", "/") + "/",
name=item.name,
children=[]
))
return ret_items
@router.get("/mkdir", summary="创建目录", response_model=schemas.Response)
def mkdir(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""


@@ -9,8 +9,10 @@ from app.chain.transfer import TransferChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.db.userauth import get_current_active_superuser
from app.schemas.types import EventType
router = APIRouter()
@@ -103,3 +105,13 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
# Delete the record
TransferHistory.delete(db, history_in.id)
return schemas.Response(success=True)
@router.get("/empty/transfer", summary="清空转移历史记录", response_model=schemas.Response)
def delete_transfer_history(db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
"""
Clear the transfer history
"""
TransferHistory.truncate(db)
return schemas.Response(success=True)

View File

@@ -119,6 +119,14 @@ def scrape(path: str,
return schemas.Response(success=True, message="刮削完成")
@router.get("/category", summary="查询自动分类配置", response_model=dict)
def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询自动分类配置
"""
return MediaChain().media_category() or {}
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
def media_info(mediaid: str, type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:

View File

@@ -1,13 +1,13 @@
from typing import Any, List
from typing import Any, List, Dict
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.chain.mediaserver import MediaServerChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
@@ -27,7 +27,10 @@ def play_item(itemid: str) -> schemas.Response:
return schemas.Response(success=False, msg="参数错误")
if not settings.MEDIASERVER:
return schemas.Response(success=False, msg="未配置媒体服务器")
mediaserver = settings.MEDIASERVER.split(",")[0]
# 查找一个不为空的值
mediaserver = next((server for server in settings.MEDIASERVER.split(",") if server), None)
if not mediaserver:
return schemas.Response(success=False, msg="未配置媒体服务器")
play_url = MediaServerChain().get_play_url(server=mediaserver, item_id=itemid)
# 重定向到play_url
if not play_url:
@@ -37,14 +40,14 @@ def play_item(itemid: str) -> schemas.Response:
})
@router.get("/exists", summary="本地是否存在", response_model=schemas.Response)
def exists(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/exists", summary="查询本地是否存在(数据库)", response_model=schemas.Response)
def exists_local(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
判断本地是否存在
"""
@@ -66,11 +69,30 @@ def exists(title: str = None,
})
@router.post("/notexists", summary="查询缺失媒体信息", response_model=List[schemas.NotExistMediaInfo])
@router.post("/exists_remote", summary="查询已存在的剧集信息(媒体服务器)", response_model=Dict[int, list])
def exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体信息查询媒体库已存在的剧集信息
"""
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
existsinfo: schemas.ExistMediaInfo = MediaServerChain().media_exists(mediainfo=mediainfo)
if not existsinfo:
return []
if media_in.season:
return {
media_in.season: existsinfo.seasons.get(media_in.season) or []
}
return existsinfo.seasons
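The season-filtering rule of the new `exists_remote` handler can be isolated as a small pure function (a sketch over plain dicts, not the actual schema objects):

```python
def filter_seasons(seasons: dict, season=None) -> dict:
    # A requested season maps to its episode list (empty when the
    # media server has no copy); without a season, return everything.
    if season:
        return {season: seasons.get(season) or []}
    return seasons
```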
@router.post("/notexists", summary="查询媒体库缺失信息(媒体服务器)", response_model=List[schemas.NotExistMediaInfo])
def not_exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询缺失媒体信息
根据媒体信息查询缺失电影/剧集
"""
# 媒体信息
meta = MetaInfo(title=media_in.title)
@@ -82,18 +104,13 @@ def not_exists(media_in: schemas.MediaInfo,
meta.type = MediaType.TV
if media_in.year:
meta.year = media_in.year
if media_in.tmdb_id or media_in.douban_id:
mediainfo = MediaChain().recognize_media(meta=meta, mtype=mtype,
tmdbid=media_in.tmdb_id, doubanid=media_in.douban_id)
else:
mediainfo = MediaChain().recognize_by_meta(metainfo=meta)
# 查询缺失信息
if not mediainfo:
raise HTTPException(status_code=404, detail="媒体信息不存在")
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if mediainfo.type == MediaType.MOVIE:
# 电影已存在时返回空列表,不存在时返回空对象列表
return [] if exist_flag else [NotExistMediaInfo()]
elif no_exists and no_exists.get(mediakey):
# 电视剧返回缺失的剧集

View File

@@ -134,6 +134,11 @@ def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
else:
for switch in switchs:
return_list.append(NotificationSwitch(**switch))
for noti in NotificationType:
if not any([x.mtype == noti.value for x in return_list]):
return_list.append(NotificationSwitch(mtype=noti.value, wechat=True,
telegram=True, slack=True,
synologychat=True, vocechat=True))
return return_list
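The default-filling loop added above pads the stored switches with an all-on entry for every notification type that has no saved configuration. A minimal sketch (the enum members here are an illustrative subset, and the switch entries are plain dicts rather than `NotificationSwitch`):

```python
from enum import Enum

class NotificationType(Enum):
    # Illustrative subset of the real enum.
    Download = "下载"
    Organize = "整理"

def fill_switch_defaults(switches: list) -> list:
    # Types without a stored switch get an all-channels-on default,
    # so newly added notification types appear without migration.
    result = list(switches)
    for noti in NotificationType:
        if not any(s.get("mtype") == noti.value for s in result):
            result.append({"mtype": noti.value, "wechat": True,
                           "telegram": True, "slack": True})
    return result
```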

View File

@@ -1,6 +1,6 @@
from typing import Any, List
from typing import Any, List, Annotated
from fastapi import APIRouter, Depends
from fastapi import APIRouter, Depends, Header
from app import schemas
from app.core.plugin import PluginManager
@@ -13,6 +13,29 @@ from app.schemas.types import SystemConfigKey
router = APIRouter()
def register_plugin_api(plugin_id: str = None):
"""
注册插件API,先删除后新增
"""
for api in PluginManager().get_plugin_apis(plugin_id):
for r in router.routes:
if r.path == api.get("path"):
router.routes.remove(r)
break
router.add_api_route(**api)
def remove_plugin_api(plugin_id: str):
"""
移除插件API
"""
for api in PluginManager().get_plugin_apis(plugin_id):
for r in router.routes:
if r.path == api.get("path"):
router.routes.remove(r)
break
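`register_plugin_api` and `remove_plugin_api` both rely on the same remove-then-add pattern over the router's route list. The idea can be shown without FastAPI (a `Route` stand-in replaces the real route objects):

```python
class Route:
    # Minimal stand-in for a FastAPI route entry.
    def __init__(self, path):
        self.path = path

def upsert_route(routes: list, path: str) -> list:
    # Drop any existing route on the same path before adding,
    # so re-registering a plugin API never duplicates endpoints.
    for r in routes:
        if r.path == path:
            routes.remove(r)
            break
    routes.append(Route(path))
    return routes
```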
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(verify_token), state: str = "all") -> List[schemas.Plugin]:
"""
@@ -101,6 +124,8 @@ def install(plugin_id: str,
PluginManager().reload_plugin(plugin_id)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@@ -125,6 +150,32 @@ def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token))
return PluginManager().get_plugin_page(plugin_id)
@router.get("/dashboard/meta", summary="获取所有插件仪表板元信息")
def plugin_dashboard_meta(_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
"""
获取所有插件仪表板元信息
"""
return PluginManager().get_plugin_dashboard_meta()
@router.get("/dashboard/{plugin_id}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, key=None, user_agent=user_agent)
@router.get("/dashboard/{plugin_id}/{key}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, key: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, key=key, user_agent=user_agent)
@router.get("/reset/{plugin_id}", summary="重置插件配置", response_model=schemas.Response)
def reset_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -139,6 +190,8 @@ def reset_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)
})
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@@ -162,6 +215,8 @@ def set_plugin_config(plugin_id: str, conf: dict,
PluginManager().init_plugin(plugin_id, conf)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@@ -183,9 +238,10 @@ def uninstall_plugin(plugin_id: str,
PluginManager().remove_plugin(plugin_id)
# 移除插件服务
Scheduler().remove_plugin_job(plugin_id)
# 移除插件API
remove_plugin_api(plugin_id)
return schemas.Response(success=True)
# 注册插件API
for api in PluginManager().get_plugin_apis():
router.add_api_route(**api)
# 注册全部插件API
register_plugin_api()

View File

@@ -13,7 +13,7 @@ router = APIRouter()
@router.get("/last", summary="查询搜索结果", response_model=List[schemas.Context])
async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询搜索结果
"""
@@ -85,13 +85,15 @@ def search_by_id(mediaid: str,
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
@router.get("/title", summary="模糊搜索资源", response_model=List[schemas.TorrentInfo])
async def search_by_title(keyword: str = None,
page: int = 0,
site: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/title", summary="模糊搜索资源", response_model=schemas.Response)
def search_by_title(keyword: str = None,
page: int = 0,
site: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据名称模糊搜索站点资源,支持分页,关键词为空时返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page, site=site)
return [torrent.to_dict() for torrent in torrents]
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
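The response-model change above replaces a bare list with a wrapped success/failure payload. A sketch of that wrapping (plain dicts standing in for `schemas.Response`):

```python
def wrap_results(items: list) -> dict:
    # Empty search results become an explicit failure with a message,
    # rather than an empty 200 list the frontend must special-case.
    if not items:
        return {"success": False, "message": "未搜索到任何资源"}
    return {"success": True, "data": items}
```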

View File

@@ -10,10 +10,12 @@ from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.site import Site
from app.db.models.siteicon import SiteIcon
from app.db.models.sitestatistic import SiteStatistic
from app.db.systemconfig_oper import SystemConfigOper
from app.db.userauth import get_current_active_superuser
from app.helper.sites import SitesHelper
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey, EventType
@@ -92,24 +94,6 @@ def update_site(
return schemas.Response(success=True)
@router.delete("/{site_id}", summary="删除站点", response_model=schemas.Response)
def delete_site(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除站点
"""
Site.delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
return schemas.Response(success=True)
@router.get("/cookiecloud", summary="CookieCloud同步", response_model=schemas.Response)
def cookie_cloud_sync(background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@@ -122,7 +106,7 @@ def cookie_cloud_sync(background_tasks: BackgroundTasks,
@router.get("/reset", summary="重置站点", response_model=schemas.Response)
def reset(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: User = Depends(get_current_active_superuser)) -> Any:
"""
清空所有站点数据并重新同步CookieCloud站点信息
"""
@@ -139,6 +123,21 @@ def reset(db: Session = Depends(get_db),
return schemas.Response(success=True, message="站点已重置!")
@router.post("/priorities", summary="批量更新站点优先级", response_model=schemas.Response)
def update_sites_priority(
priorities: List[dict],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
批量更新站点优先级
"""
for priority in priorities:
site = Site.get(db, priority.get("id"))
if site:
site.update(db, {"pri": priority.get("pri")})
return schemas.Response(success=True)
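The batch priority update above skips ids that do not resolve to a site. A sketch with an in-memory site table instead of the SQLAlchemy model:

```python
def apply_priorities(sites: dict, priorities: list) -> dict:
    # Each {"id", "pri"} entry updates a known site; unknown ids are
    # skipped silently, as when Site.get returns None above.
    for priority in priorities:
        site = sites.get(priority.get("id"))
        if site is not None:
            site["pri"] = priority.get("pri")
    return sites
```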
@router.get("/cookie/{site_id}", summary="更新站点Cookie&UA", response_model=schemas.Response)
def update_cookie(
site_id: int,
@@ -291,3 +290,21 @@ def read_site(
detail=f"站点 {site_id} 不存在",
)
return site
@router.delete("/{site_id}", summary="删除站点", response_model=schemas.Response)
def delete_site(
site_id: int,
db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
删除站点
"""
Site.delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
return schemas.Response(success=True)

View File

@@ -1,12 +1,14 @@
import json
from typing import List, Any
import cn2an
from fastapi import APIRouter, Request, BackgroundTasks, Depends, HTTPException, Header
from sqlalchemy.orm import Session
from app import schemas
from app.chain.subscribe import SubscribeChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token, verify_uri_token
from app.db import get_db
@@ -14,6 +16,7 @@ from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
from app.db.models.user import User
from app.db.userauth import get_current_active_user
from app.helper.subscribe import SubscribeHelper
from app.scheduler import Scheduler
from app.schemas.types import MediaType
@@ -39,7 +42,12 @@ def read_subscribes(
subscribes = Subscribe.list(db)
for subscribe in subscribes:
if subscribe.sites:
subscribe.sites = json.loads(subscribe.sites)
try:
subscribe.sites = json.loads(str(subscribe.sites))
except json.JSONDecodeError:
subscribe.sites = []
else:
subscribe.sites = []
return subscribes
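The `try/except json.JSONDecodeError` pattern added throughout this file can be factored into one tolerant parser (a sketch; the extra list check is an addition for safety, not in the original):

```python
import json

def parse_sites(raw) -> list:
    # Malformed or empty JSON in the sites column degrades to [],
    # instead of raising and failing the whole listing request.
    if not raw:
        return []
    try:
        value = json.loads(str(raw))
    except json.JSONDecodeError:
        return []
    return value if isinstance(value, list) else []
```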
@@ -163,7 +171,10 @@ def subscribe_mediaid(
meta.begin_season = season
result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
if result and result.sites:
result.sites = json.loads(result.sites)
try:
result.sites = json.loads(result.sites)
except json.JSONDecodeError:
result.sites = []
return result if result else Subscribe()
@@ -317,7 +328,10 @@ def read_subscribe(
historys = SubscribeHistory.list_by_type(db, mtype=mtype, page=page, count=count)
for history in historys:
if history and history.sites:
history.sites = json.loads(history.sites)
try:
history.sites = json.loads(history.sites)
except json.JSONDecodeError:
history.sites = []
return historys
@@ -334,6 +348,51 @@ def delete_subscribe(
return schemas.Response(success=True)
@router.get("/popular", summary="热门订阅(基于用户共享数据)", response_model=List[schemas.MediaInfo])
def popular_subscribes(
stype: str,
page: int = 1,
count: int = 30,
min_sub: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询热门订阅
"""
subscribes = SubscribeHelper().get_statistic(stype=stype, page=page, count=count)
if subscribes:
ret_medias = []
for sub in subscribes:
# 订阅人数
count = sub.get("count")
if min_sub and count < min_sub:
continue
media = MediaInfo()
media.type = MediaType(sub.get("type"))
media.tmdb_id = sub.get("tmdbid")
# 处理标题
title = sub.get("name")
season = sub.get("season")
if season and int(season) > 1 and media.tmdb_id:
# 小写数据转大写
season_str = cn2an.an2cn(season, "low")
title = f"{title}{season_str}"
media.title = title
media.year = sub.get("year")
media.douban_id = sub.get("doubanid")
media.bangumi_id = sub.get("bangumiid")
media.tvdb_id = sub.get("tvdbid")
media.imdb_id = sub.get("imdbid")
media.season = sub.get("season")
media.overview = sub.get("description")
media.vote_average = sub.get("vote")
media.poster_path = sub.get("poster")
media.backdrop_path = sub.get("backdrop")
media.popularity = count
ret_medias.append(media)
return [media.to_dict() for media in ret_medias]
return []
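The title handling in `popular_subscribes` converts the season number to a lower-case Chinese numeral via `cn2an.an2cn(season, "low")`. A dependency-free approximation for the common 2-9 range (the " 第X季" suffix formatting here is an assumption about the final title, not taken from the diff):

```python
CN_DIGITS = "零一二三四五六七八九"

def season_title(title: str, season, tmdb_id) -> str:
    # Seasons after the first get a lower-case Chinese numeral suffix,
    # approximating cn2an.an2cn(season, "low") for seasons 2-9.
    if season and tmdb_id and 2 <= int(season) <= 9:
        return f"{title} 第{CN_DIGITS[int(season)]}季"
    return title
```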
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
subscribe_id: int,
@@ -346,7 +405,10 @@ def read_subscribe(
return Subscribe()
subscribe = Subscribe.get(db, subscribe_id)
if subscribe and subscribe.sites:
subscribe.sites = json.loads(subscribe.sites)
try:
subscribe.sites = json.loads(subscribe.sites)
except json.JSONDecodeError:
subscribe.sites = []
return subscribe
@@ -359,5 +421,12 @@ def delete_subscribe(
"""
删除订阅信息
"""
Subscribe.delete(db, subscribe_id)
subscribe = Subscribe.get(db, subscribe_id)
if subscribe:
subscribe.delete(db, subscribe_id)
# 统计订阅
SubscribeHelper().sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
return schemas.Response(success=True)

View File

@@ -11,10 +11,12 @@ from fastapi.responses import StreamingResponse
from app import schemas
from app.chain.search import SearchChain
from app.chain.system import SystemChain
from app.core.config import settings
from app.core.config import settings, global_vars
from app.core.module import ModuleManager
from app.core.security import verify_token
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.userauth import get_current_active_superuser
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.sites import SitesHelper
@@ -44,7 +46,7 @@ def get_img(imgurl: str, proxy: bool = False) -> Any:
@router.get("/env", summary="查询系统环境变量", response_model=schemas.Response)
def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
def get_env_setting(_: User = Depends(get_current_active_superuser)):
"""
查询系统环境变量,包括当前版本号
"""
@@ -63,7 +65,7 @@ def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
@router.post("/env", summary="更新系统环境变量", response_model=schemas.Response)
def set_env_setting(env: dict,
_: schemas.TokenPayload = Depends(verify_token)):
_: User = Depends(get_current_active_superuser)):
"""
更新系统环境变量
"""
@@ -97,6 +99,8 @@ def get_progress(process_type: str, token: str):
def event_generator():
while True:
if global_vars.is_system_stopped():
break
detail = progress.get(process_type)
yield 'data: %s\n\n' % json.dumps(detail)
time.sleep(0.2)
@@ -106,7 +110,7 @@ def get_progress(process_type: str, token: str):
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
_: schemas.TokenPayload = Depends(verify_token)):
_: User = Depends(get_current_active_superuser)):
"""
查询系统设置
"""
@@ -121,7 +125,7 @@ def get_setting(key: str,
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
_: schemas.TokenPayload = Depends(verify_token)):
_: User = Depends(get_current_active_superuser)):
"""
更新系统设置
"""
@@ -140,7 +144,7 @@ def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
@router.get("/message", summary="实时消息")
def get_message(token: str, role: str = "sys"):
def get_message(token: str, role: str = "system"):
"""
实时获取系统消息,返回格式为SSE
"""
@@ -154,6 +158,8 @@ def get_message(token: str, role: str = "sys"):
def event_generator():
while True:
if global_vars.is_system_stopped():
break
detail = message.get(role)
yield 'data: %s\n\n' % (detail or '')
time.sleep(3)
@@ -182,6 +188,8 @@ def get_logging(token: str, length: int = 50, logfile: str = "moviepilot.log"):
for line in f.readlines()[-max(length, 50):]:
yield 'data: %s\n\n' % line
while True:
if global_vars.is_system_stopped():
break
for t in tailer.follow(open(log_path, 'r', encoding='utf-8')):
yield 'data: %s\n\n' % (t or '')
time.sleep(1)
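The three SSE generators in this file share the same shape: frame each payload as `data: ...\n\n` and bail out once the shutdown flag flips. A sketch with a callable standing in for `global_vars.is_system_stopped`:

```python
def sse_frame(data) -> str:
    # A server-sent-events frame is "data: ..." plus a blank line.
    return 'data: %s\n\n' % (data or '')

def stream(messages, stopped) -> list:
    # Stop emitting as soon as the shutdown flag flips, mirroring
    # the global_vars.is_system_stopped() checks in the generators.
    frames = []
    for msg in messages:
        if stopped():
            break
        frames.append(sse_frame(msg))
    return frames
```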
@@ -278,9 +286,12 @@ def modulelist(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询已加载的模块ID列表
"""
module_ids = [module.__name__ for module in ModuleManager().get_modules("test")]
modules = [{
"id": k,
"name": v.get_name(),
} for k, v in ModuleManager().get_modules().items()]
return schemas.Response(success=True, data={
"ids": module_ids
"modules": modules
})
@@ -294,19 +305,21 @@ def moduletest(moduleid: str, _: schemas.TokenPayload = Depends(verify_token)):
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
def restart_system(_: User = Depends(get_current_active_superuser)):
"""
重启系统
"""
if not SystemUtils.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# 标识停止事件
global_vars.stop_system()
# 执行重启
ret, msg = SystemUtils.restart()
return schemas.Response(success=ret, message=msg)
@router.get("/reload", summary="重新加载模块", response_model=schemas.Response)
def reload_module(_: schemas.TokenPayload = Depends(verify_token)):
def reload_module(_: User = Depends(get_current_active_superuser)):
"""
重新加载模块
"""
@@ -317,7 +330,7 @@ def reload_module(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/runscheduler", summary="运行服务", response_model=schemas.Response)
def execute_command(jobid: str,
_: schemas.TokenPayload = Depends(verify_token)):
_: User = Depends(get_current_active_superuser)):
"""
执行命令
"""

View File

@@ -19,6 +19,7 @@ def manual_transfer(path: str = None,
logid: int = None,
target: str = None,
tmdbid: int = None,
doubanid: str = None,
type_name: str = None,
season: int = None,
transfer_type: str = None,
@@ -27,6 +28,7 @@ def manual_transfer(path: str = None,
episode_part: str = None,
episode_offset: int = 0,
min_filesize: int = 0,
scrape: bool = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -36,6 +38,7 @@ def manual_transfer(path: str = None,
:param target: 目标路径
:param type_name: 媒体类型、电影/电视剧
:param tmdbid: tmdbid
:param doubanid: 豆瓣ID
:param season: 剧集季号
:param transfer_type: 转移类型move/copy 等
:param episode_format: 剧集识别格式
@@ -43,6 +46,7 @@ def manual_transfer(path: str = None,
:param episode_part: 剧集识别分集信息
:param episode_offset: 剧集识别偏移量
:param min_filesize: 最小文件大小(MB)
:param scrape: 是否刮削元数据
:param db: 数据库
:param _: Token校验
"""
@@ -66,10 +70,6 @@ def manual_transfer(path: str = None,
if history.dest and str(history.dest) != "None":
# 删除旧的已整理文件
transfer.delete_files(Path(history.dest))
if not target:
target = transfer.get_root_path(path=history.dest,
type_name=history.type,
category=history.category)
elif path:
in_path = Path(path)
else:
@@ -91,11 +91,13 @@ def manual_transfer(path: str = None,
in_path=in_path,
target=target,
tmdbid=tmdbid,
doubanid=doubanid,
mtype=mtype,
season=season,
transfer_type=transfer_type,
epformat=epformat,
min_filesize=min_filesize,
scrape=scrape,
force=force
)
# 失败

View File

@@ -87,8 +87,8 @@ def read_current_user(
@router.post("/avatar/{user_id}", summary="上传用户头像", response_model=schemas.Response)
async def upload_avatar(user_id: int, db: Session = Depends(get_db),
file: UploadFile = File(...)):
def upload_avatar(user_id: int, db: Session = Depends(get_db), file: UploadFile = File(...),
_: User = Depends(get_current_active_user)):
"""
上传用户头像
"""

View File

@@ -32,8 +32,8 @@ async def webhook_message(background_tasks: BackgroundTasks,
@router.get("/", summary="Webhook消息响应", response_model=schemas.Response)
async def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: str = Depends(verify_uri_token)) -> Any:
def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: str = Depends(verify_uri_token)) -> Any:
"""
Webhook响应
"""

View File

@@ -6,7 +6,6 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.subscribe import SubscribeChain
from app.core.config import settings
from app.core.metainfo import MetaInfo
from app.core.security import verify_uri_apikey
from app.db import get_db
@@ -121,7 +120,7 @@ def arr_rootfolder(_: str = Depends(verify_uri_apikey)) -> Any:
return [
{
"id": 1,
"path": "/" if not settings.LIBRARY_PATHS else str(settings.LIBRARY_PATHS[0]),
"path": "/",
"accessible": True,
"freeSpace": 0,
"unmappedFolders": []

View File

@@ -92,8 +92,14 @@ class ChainBase(metaclass=ABCMeta):
logger.debug(f"请求模块执行:{method} ...")
result = None
modules = self.modulemanager.get_modules(method)
modules = self.modulemanager.get_running_modules(method)
for module in modules:
module_id = module.__class__.__name__
try:
module_name = module.get_name()
except Exception as err:
logger.error(f"获取模块名称出错:{str(err)}")
module_name = module_id
try:
func = getattr(module, method)
if is_result_empty(result):
@@ -112,7 +118,21 @@ class ChainBase(metaclass=ABCMeta):
break
except Exception as err:
logger.error(
f"运行模块 {method} 出错:{module.__class__.__name__} - {str(err)}\n{traceback.format_exc()}")
f"运行模块 {module_id}.{method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{module_name}发生了错误",
message=str(err),
role="system")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "module",
"module_id": module_id,
"module_name": module_name,
"module_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
return result
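The error handling added to `run_module` records the failing module's name and traceback and keeps iterating instead of aborting the chain. The dispatch-and-capture core, stripped of the messaging and event plumbing:

```python
import traceback

def run_with_capture(modules: list, method: str) -> list:
    # Call `method` on every module; a failure is recorded against the
    # module's class name and the loop continues, as in run_module.
    errors = []
    for module in modules:
        try:
            getattr(module, method)()
        except Exception as err:
            errors.append({
                "module_id": module.__class__.__name__,
                "error": str(err),
                "traceback": traceback.format_exc(),
            })
    return errors
```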
def recognize_media(self, meta: MetaBase = None,
@@ -350,7 +370,8 @@ class ChainBase(metaclass=ABCMeta):
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None,
episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
episodes_info: List[TmdbEpisode] = None,
scrape: bool = None) -> Optional[TransferInfo]:
"""
文件转移
:param path: 文件路径
@@ -359,12 +380,14 @@ class ChainBase(metaclass=ABCMeta):
:param transfer_type: 转移模式
:param target: 转移目标路径
:param episodes_info: 当前季的全部集信息
:param scrape: 是否刮削元数据
:return: {path, target_path, message}
"""
return self.run_module("transfer", path=path, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target=target, episodes_info=episodes_info)
transfer_type=transfer_type, target=target, episodes_info=episodes_info,
scrape=scrape)
def transfer_completed(self, hashs: Union[str, list], path: Path = None,
def transfer_completed(self, hashs: str, path: Path = None,
downloader: str = settings.DEFAULT_DOWNLOADER) -> None:
"""
转移完成后的处理
@@ -498,6 +521,13 @@ class ChainBase(metaclass=ABCMeta):
self.run_module("scrape_metadata", path=path, mediainfo=mediainfo, metainfo=metainfo,
transfer_type=transfer_type, force_nfo=force_nfo, force_img=force_img)
def media_category(self) -> Optional[Dict[str, list]]:
"""
获取媒体分类
:return: 获取二级分类配置字典项,需包括电影、电视剧
"""
return self.run_module("media_category")
def register_commands(self, commands: Dict[str, dict]) -> None:
"""
注册菜单命令

View File

@@ -6,6 +6,7 @@ import time
from pathlib import Path
from typing import List, Optional, Tuple, Set, Dict, Union
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo, TorrentInfo, Context
@@ -14,6 +15,8 @@ from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.mediaserver_oper import MediaServerOper
from app.helper.directory import DirectoryHelper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification
@@ -32,9 +35,12 @@ class DownloadChain(ChainBase):
self.torrent = TorrentHelper()
self.downloadhis = DownloadHistoryOper()
self.mediaserver = MediaServerOper()
self.directoryhelper = DirectoryHelper()
self.messagehelper = MessageHelper()
def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
channel: MessageChannel = None, userid: str = None, username: str = None):
channel: MessageChannel = None, userid: str = None, username: str = None,
download_episodes: str = None):
"""
发送添加下载的消息
:param meta: 元数据
@@ -43,6 +49,7 @@ class DownloadChain(ChainBase):
:param channel: 通知渠道
:param userid: 用户ID,指定时精确发送给对应用户
:param username: 通知显示的下载用户信息
:param download_episodes: 下载的集数
"""
msg_text = ""
if username:
@@ -80,7 +87,7 @@ class DownloadChain(ChainBase):
mtype=NotificationType.Download,
userid=userid,
title=f"{mediainfo.title_year} "
f"{meta.season_episode} 开始下载",
f"{'%s %s' % (meta.season, download_episodes) if download_episodes else meta.season_episode} 开始下载",
text=msg_text,
image=mediainfo.get_message_image()))
@@ -207,6 +214,9 @@ class DownloadChain(ChainBase):
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
# 实际下载的集数
download_episodes = StringUtils.format_ep(list(episodes)) if episodes else None
_folder_name = ""
if not torrent_file:
# 下载种子文件,得到的可能是文件也可能是磁力链
@@ -221,39 +231,35 @@ class DownloadChain(ChainBase):
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
# 下载目录
if not save_path:
if settings.DOWNLOAD_CATEGORY and _media and _media.category:
# 开启下载二级目录
if _media.type == MediaType.MOVIE:
# 电影
download_dir = settings.SAVE_MOVIE_PATH / _media.category
else:
if _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = settings.SAVE_ANIME_PATH / _media.category
else:
# 电视剧
download_dir = settings.SAVE_TV_PATH / _media.category
elif _media:
# 未开启下载二级目录
if _media.type == MediaType.MOVIE:
# 电影
download_dir = settings.SAVE_MOVIE_PATH
else:
if _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = settings.SAVE_ANIME_PATH
else:
# 电视剧
download_dir = settings.SAVE_TV_PATH
else:
# 未识别
download_dir = settings.SAVE_PATH
if save_path:
# 有自定义下载目录时,尝试匹配目录配置
dir_info = self.directoryhelper.get_download_dir(_media, to_path=Path(save_path))
else:
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_download_dir(_media)
# 拼装子目录
if dir_info:
# 一级目录
if not dir_info.media_type and dir_info.auto_category:
# 一级自动分类
download_dir = Path(dir_info.path) / _media.type.value
else:
# 一级不分类
download_dir = Path(dir_info.path)
# 二级目录
if not dir_info.category and dir_info.auto_category and _media and _media.category:
# 二级自动分类
download_dir = download_dir / _media.category
elif save_path:
# 自定义下载目录
download_dir = Path(save_path)
else:
# 未找到下载目录,且没有自定义下载目录
logger.error(f"未找到下载目录:{_media.type.value} {_media.title_year}")
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
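The new directory resolution replaces the hard-coded movie/TV/anime paths with a two-level rule driven by the matched directory config. A sketch of that rule (plain dicts standing in for the `DirectoryHelper` result):

```python
from pathlib import Path

def resolve_download_dir(info: dict, media_type: str, category=None) -> Path:
    # First level: append the media type only when the directory is
    # not already bound to one and auto-categorisation is enabled.
    path = Path(info["path"])
    if not info.get("media_type") and info.get("auto_category"):
        path = path / media_type
    # Second level: same rule for the content category.
    if not info.get("category") and info.get("auto_category") and category:
        path = path / category
    return path
```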
# 添加下载
result: Optional[tuple] = self.download(content=content,
@@ -284,7 +290,7 @@ class DownloadChain(ChainBase):
tvdbid=_media.tvdb_id,
doubanid=_media.douban_id,
seasons=_meta.season,
episodes=_meta.episode,
episodes=download_episodes or _meta.episode,
image=_media.get_backdrop_image(),
download_hash=_hash,
torrent_name=_torrent.title,
@@ -321,7 +327,8 @@ class DownloadChain(ChainBase):
self.downloadhis.add_files(files_to_add)
# 发送消息,群发不带channel和userid
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, username=username)
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent,
username=username, download_episodes=download_episodes)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# 广播事件
@@ -429,12 +436,15 @@ class DownloadChain(ChainBase):
# 如果是电影,直接下载
for context in contexts:
if context.media_info.type == MediaType.MOVIE:
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
userid=userid, username=username):
# 下载成功
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
# 电视剧整季匹配
logger.info(f"开始匹配电视剧整季:{no_exists}")
if no_exists:
# 先把整季缺失的拿出来,看是否刚好有所有季都满足的种子 {tmdbid: [seasons]}
need_seasons: Dict[int, list] = {}
@@ -447,6 +457,7 @@ class DownloadChain(ChainBase):
if not need_seasons.get(need_mid):
need_seasons[need_mid] = []
need_seasons[need_mid].append(tv.season or 1)
logger.info(f"缺失整季:{need_seasons}")
# 查找整季包含的种子,只处理整季没集的种子或者是集数超过季的种子
for need_mid, need_season in need_seasons.items():
# 循环种子
@@ -462,23 +473,31 @@ class DownloadChain(ChainBase):
continue
# 种子的季清单
torrent_season = meta.season_list
# 没有季的默认为第1季
if not torrent_season:
torrent_season = [1]
# 种子有集的不要
if meta.episode_list:
continue
# 匹配TMDBID
if need_mid == media.tmdb_id or need_mid == media.douban_id:
# 不重复添加
if context in downloaded_list:
continue
# 种子季是需要季或者子集
if set(torrent_season).issubset(set(need_season)):
if len(torrent_season) == 1:
# 只有一季的可能是命名错误,需要打开种子鉴别,只有实际集数大于等于总集数才下载
logger.info(f"开始下载种子 {torrent.title} ...")
content, _, torrent_files = self.download_torrent(torrent)
if not content:
logger.warn(f"{torrent.title} 种子下载失败!")
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法确定种子文件集数")
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{meta.org_string} 解析文件集数为 {torrent_episodes}")
logger.info(f"{meta.org_string} 解析种子文件集数为 {torrent_episodes}")
if not torrent_episodes:
continue
# 更新集数范围
@@ -489,10 +508,11 @@ class DownloadChain(ChainBase):
need_total = __get_season_episodes(need_mid, torrent_season[0])
if len(torrent_episodes) < need_total:
logger.info(
f"{meta.org_string} 解析文件集数发现不是完整合集")
f"{meta.org_string} 解析文件集数发现不是完整合集,先放弃这个种子")
continue
else:
# 下载
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
@@ -503,21 +523,25 @@ class DownloadChain(ChainBase):
)
else:
# 下载
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(context,
save_path=save_path, channel=channel,
userid=userid, username=username)
if download_id:
# 下载成功
logger.info(f"{torrent.title} 添加下载成功")
downloaded_list.append(context)
# 更新仍需季集
need_season = __update_seasons(_mid=need_mid,
_need=need_season,
_current=torrent_season)
logger.info(f"{need_mid} 剩余需要季:{need_season}")
if not need_season:
# 全部下载完成
break
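The whole-season matching above only accepts a torrent when every season it carries is still needed. A minimal sketch of that subset check (function name is illustrative, not from the codebase):

```python
def seasons_covered(torrent_seasons, needed_seasons):
    # Torrents without an explicit season default to season 1, mirroring the chain code
    torrent_seasons = torrent_seasons or [1]
    # Eligible only when every season the torrent carries is still needed
    return set(torrent_seasons).issubset(set(needed_seasons))
```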
# 电视剧季内的集匹配
logger.info(f"开始电视剧完整集匹配:{no_exists}")
if no_exists:
# TMDBID列表
need_tv_list = list(no_exists)
@@ -567,19 +591,23 @@ class DownloadChain(ChainBase):
# 为需要集的子集则下载
if torrent_episodes.issubset(set(need_episodes)):
# 下载
logger.info(f"开始下载 {meta.title} ...")
download_id = self.download_single(context,
save_path=save_path, channel=channel,
userid=userid, username=username)
if download_id:
# 下载成功
logger.info(f"{meta.title} 添加下载成功")
downloaded_list.append(context)
# 更新仍需集数
need_episodes = __update_episodes(_mid=need_mid,
_need=need_episodes,
_sea=need_season,
_current=torrent_episodes)
logger.info(f"{need_season} 剩余需要集:{need_episodes}")
# 仍然缺失的剧集,从整季中选择需要的集数文件下载,仅支持QB和TR
logger.info(f"开始电视剧多集拆包匹配:{no_exists}")
if no_exists:
# TMDBID列表
no_exists_list = list(no_exists)
@@ -625,15 +653,17 @@ class DownloadChain(ChainBase):
and len(meta.season_list) == 1 \
and meta.season_list[0] == need_season:
# 检查种子看是否有需要的集
logger.info(f"开始下载种子 {torrent.title} ...")
content, _, torrent_files = self.download_torrent(torrent)
if not content:
logger.info(f"{torrent.title} 种子下载失败!")
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法解析种子文件集数")
continue
# 种子全部集
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{torrent.site_name} - {meta.org_string} 解析文件集数:{torrent_episodes}")
logger.info(f"{torrent.site_name} - {meta.org_string} 解析种子文件集数:{torrent_episodes}")
# 选中的集
selected_episodes = set(torrent_episodes).intersection(set(need_episodes))
if not selected_episodes:
@@ -641,6 +671,7 @@ class DownloadChain(ChainBase):
continue
logger.info(f"{torrent.site_name} - {torrent.title} 选中集数:{selected_episodes}")
# 添加下载
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
@@ -653,6 +684,7 @@ class DownloadChain(ChainBase):
if not download_id:
continue
# 下载成功
logger.info(f"{torrent.title} 添加下载成功")
downloaded_list.append(context)
# 更新种子集数范围
begin_ep = min(torrent_episodes)
@@ -663,8 +695,10 @@ class DownloadChain(ChainBase):
_need=need_episodes,
_sea=need_season,
_current=selected_episodes)
logger.info(f"{need_season} 剩余需要集:{need_episodes}")
# 返回下载的资源,剩下没下完的
logger.info(f"成功下载种子数:{len(downloaded_list)},剩余未下载的剧集:{no_exists}")
return downloaded_list, no_exists
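The multi-episode split-pack path above intersects the episodes found in the torrent files with the episodes still missing, then shrinks the wanted list. A standalone sketch of that set arithmetic (names are illustrative):

```python
def select_episodes(torrent_episodes, need_episodes):
    # Pick only the files whose episodes are still missing
    selected = set(torrent_episodes) & set(need_episodes)
    # Whatever was not covered stays in the wanted list
    remaining = set(need_episodes) - selected
    return sorted(selected), sorted(remaining)
```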
def get_no_exists_info(self, meta: MetaBase,
@@ -875,4 +909,14 @@ class DownloadChain(ChainBase):
if not hash_str:
return
logger.warn(f"检测到下载源文件被删除,删除下载任务(不含文件):{hash_str}")
self.remove_torrents(hashs=[hash_str], delete_file=False)
# 先查询种子
torrents: List[schemas.TransferTorrent] = self.list_torrents(hashs=[hash_str])
if torrents:
self.remove_torrents(hashs=[hash_str], delete_file=False)
# 发出下载任务删除事件,如需处理辅种,可监听该事件
self.eventmanager.send_event(EventType.DownloadDeleted, {
"hash": hash_str,
"torrents": [torrent.dict() for torrent in torrents]
})
else:
logger.info(f"没有在下载器中查询到 {hash_str} 对应的下载任务")
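The hunk above first looks the torrent up, removes it without deleting files, then broadcasts a `DownloadDeleted` event so listeners (e.g. cross-seed plugins) can react. A toy stand-in for that publish/subscribe flow; `MiniEventManager` is illustrative, not the project's `EventManager`:

```python
class MiniEventManager:
    def __init__(self):
        self._listeners = {}

    def register(self, event_type, handler):
        # Allow multiple handlers per event type
        self._listeners.setdefault(event_type, []).append(handler)

    def send_event(self, event_type, data):
        # Broadcast to every handler registered for this event
        for handler in self._listeners.get(event_type, []):
            handler(data)

received = []
manager = MiniEventManager()
manager.register("DownloadDeleted", received.append)
manager.send_event("DownloadDeleted", {"hash": "abc123", "torrents": []})
```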


@@ -34,7 +34,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
if not mediainfo:
# 试使用辅助识别,如果有注册响应事件的话
# 试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(EventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{title} ...')
mediainfo = self.recognize_help(title=title, org_meta=metainfo)
@@ -143,7 +143,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta)
if not mediainfo:
# 试使用辅助识别,如果有注册响应事件的话
# 试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(EventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{file_path.name} ...')
mediainfo = self.recognize_help(title=path, org_meta=file_meta)


@@ -80,6 +80,8 @@ class MediaServerChain(ChainBase):
self.dboper.empty()
# 遍历媒体服务器
for mediaserver in mediaservers:
if not mediaserver:
continue
logger.info(f"开始同步媒体库 {mediaserver} 的数据 ...")
for library in self.librarys(mediaserver):
# 同步黑名单 跳过


@@ -53,12 +53,12 @@ class SearchChain(ChainBase):
}
}
results = self.process(mediainfo=mediainfo, area=area, no_exists=no_exists)
# 保存结果
# 保存结果
bytes_results = pickle.dumps(results)
self.systemconfig.set(SystemConfigKey.SearchResults, bytes_results)
return results
def search_by_title(self, title: str, page: int = 0, site: int = None) -> List[TorrentInfo]:
def search_by_title(self, title: str, page: int = 0, site: int = None) -> List[Context]:
"""
根据标题搜索资源,不识别不过滤,直接返回站点内容
:param title: 标题,为空时返回所有站点首页内容
@@ -70,7 +70,17 @@ class SearchChain(ChainBase):
else:
logger.info(f'开始浏览资源,站点:{site} ...')
# 搜索
return self.__search_all_sites(keywords=[title], sites=[site] if site else None, page=page) or []
torrents = self.__search_all_sites(keywords=[title], sites=[site] if site else None, page=page) or []
if not torrents:
logger.warn(f'{title} 未搜索到资源')
return []
# 组装上下文
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
torrent_info=torrent) for torrent in torrents]
# 保存结果
bytes_results = pickle.dumps(contexts)
self.systemconfig.set(SystemConfigKey.SearchResults, bytes_results)
return contexts
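`search_by_title` now persists its contexts the same way `process` does: pickle the list and store the bytes under the `SearchResults` key, so `last_search_results` can restore them later. A sketch of that round trip, with a plain dict standing in for `SystemConfigOper`:

```python
import pickle

store = {}  # stands in for the SystemConfigOper key-value store
contexts = [{"title": "Some.Show.S01", "site": "example"}]
# Serialize the context list and stash it under the SearchResults key
store["SearchResults"] = pickle.dumps(contexts)
# last_search_results would later unpickle the same bytes
restored = pickle.loads(store["SearchResults"])
```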
def last_search_results(self) -> List[Context]:
"""
@@ -102,12 +112,23 @@ class SearchChain(ChainBase):
:param filter_rule: 过滤规则,为空是使用默认过滤规则
:param area: 搜索范围title or imdbid
"""
def __do_filter(torrent_list: List[TorrentInfo]) -> List[TorrentInfo]:
"""
执行优先级过滤
"""
return self.filter_torrents(rule_string=priority_rule,
torrent_list=torrent_list,
season_episodes=season_episodes,
mediainfo=mediainfo) or []
# 豆瓣标题处理
if not mediainfo.tmdb_id:
meta = MetaInfo(title=mediainfo.title)
mediainfo.title = meta.name
mediainfo.season = meta.begin_season
logger.info(f'开始搜索资源,关键词:{keyword or mediainfo.title} ...')
# 补充媒体信息
if not mediainfo.names:
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
@@ -116,17 +137,19 @@ class SearchChain(ChainBase):
if not mediainfo:
logger.error(f'媒体信息识别失败!')
return []
# 缺失的季集
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if no_exists and no_exists.get(mediakey):
# 过滤剧集
season_episodes = {sea: info.episodes
for sea, info in no_exists[mediainfo.tmdb_id].items()}
for sea, info in no_exists[mediakey].items()}
elif mediainfo.season:
# 豆瓣只搜索当前季
season_episodes = {mediainfo.season: []}
else:
season_episodes = None
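The `mediakey` fix above keys the missing-season lookup by TMDB id with a Douban-id fallback, instead of always using `tmdb_id`. A simplified sketch of the resolution order (dict-based stand-in for `NotExistMediaInfo`; function name is illustrative):

```python
def season_episodes_for(no_exists, tmdb_id, douban_id, current_season=None):
    # The fix keys the missing-season map by TMDB id, falling back to Douban id
    mediakey = tmdb_id or douban_id
    if no_exists and no_exists.get(mediakey):
        return {sea: info["episodes"] for sea, info in no_exists[mediakey].items()}
    if current_season:
        # Douban subscriptions only search the current season
        return {current_season: []}
    return None
```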
# 搜索关键词
if keyword:
keywords = [keyword]
@@ -147,9 +170,11 @@ class SearchChain(ChainBase):
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 未搜索到资源')
return []
# 开始新进度
self.progress.start(ProgressKey.Search)
# 匹配的资源
# 开始匹配
_match_torrents = []
# 总数
_total = len(torrents)
@@ -177,17 +202,6 @@ class SearchChain(ChainBase):
torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != torrent_meta.org_string:
logger.info(f"种子名称应用识别词后发生改变:{torrent.title} => {torrent_meta.org_string}")
# 比对词条指定的tmdbid
if torrent_meta.tmdbid or torrent_meta.doubanid:
if torrent_meta.tmdbid and torrent_meta.tmdbid == mediainfo.tmdb_id:
logger.info(f'{mediainfo.title} 通过词表指定TMDBID匹配到资源{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
continue
if torrent_meta.doubanid and torrent_meta.doubanid == mediainfo.douban_id:
logger.info(f'{mediainfo.title} 通过词表指定豆瓣ID匹配到资源{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
continue
# 比对种子
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
@@ -202,25 +216,12 @@ class SearchChain(ChainBase):
key=ProgressKey.Search)
else:
_match_torrents = torrents
# 开始过滤
self.progress.update(value=98, text=f'开始过滤,总 {len(_match_torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
# 过滤种子
if priority_rule is None:
# 取搜索优先级规则
priority_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
if priority_rule:
logger.info(f'开始优先级规则/剧集过滤,当前规则:{priority_rule} ...')
result: List[TorrentInfo] = self.filter_torrents(rule_string=priority_rule,
torrent_list=_match_torrents,
season_episodes=season_episodes,
mediainfo=mediainfo)
if result is not None:
_match_torrents = result
if not _match_torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合优先级规则的资源')
return []
# 使用过滤规则再次过滤
# 开始过滤规则过滤
if _match_torrents:
logger.info(f'开始过滤规则过滤,当前规则:{filter_rule} ...')
_match_torrents = self.filter_torrents_by_rule(torrents=_match_torrents,
@@ -229,22 +230,43 @@ class SearchChain(ChainBase):
if not _match_torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合过滤规则的资源')
return []
logger.info(f"过滤规则过滤完成,剩余 {len(_match_torrents)} 个资源")
# 开始优先级规则/剧集过滤
if priority_rule is None:
# 取搜索优先级规则
priority_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
if priority_rule:
logger.info(f'开始优先级规则/剧集过滤,当前规则:{priority_rule} ...')
_match_torrents = __do_filter(_match_torrents)
if not _match_torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合优先级规则的资源')
return []
logger.info(f"优先级规则/剧集过滤完成,剩余 {len(_match_torrents)} 个资源")
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
media_info=mediainfo,
torrent_info=torrent) for torrent in _match_torrents]
logger.info(f"过滤完成,剩余 {len(contexts)} 个资源")
self.progress.update(value=99, text=f'过滤完成,剩余 {len(contexts)} 个资源', key=ProgressKey.Search)
# 排序
self.progress.update(value=100,
self.progress.update(value=99,
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
key=ProgressKey.Search)
contexts = self.torrenthelper.sort_torrents(contexts)
# 结束进度
self.progress.update(value=100,
text=f'搜索完成,共 {len(contexts)} 个资源',
key=ProgressKey.Search)
logger.info(f'搜索完成,共 {len(contexts)} 个资源')
self.progress.end(ProgressKey.Search)
# 返回
return contexts


@@ -52,7 +52,10 @@ class SiteChain(ChainBase):
"zhuque.in": self.__zhuque_test,
"m-team.io": self.__mteam_test,
"m-team.cc": self.__mteam_test,
"ptlsp.com": self.__ptlsp_test,
"ptlsp.com": self.__indexphp_test,
"1ptba.com": self.__indexphp_test,
"star-space.net": self.__indexphp_test,
"yemapt.org": self.__yema_test,
}
def is_special_site(self, domain: str) -> bool:
@@ -73,7 +76,7 @@ class SiteChain(ChainBase):
ua=user_agent,
cookies=site.cookie,
proxies=settings.PROXY if site.proxy else None,
timeout=15
timeout=site.timeout or 15
).get_res(url=site.url)
if res and res.status_code == 200:
csrf_token = re.search(r'<meta name="x-csrf-token" content="(.+?)">', res.text)
@@ -90,7 +93,7 @@ class SiteChain(ChainBase):
},
cookies=site.cookie,
proxies=settings.PROXY if site.proxy else None,
timeout=15
timeout=site.timeout or 15
).get_res(url=f"{site.url}api/user/getInfo")
if user_res and user_res.status_code == 200:
user_info = user_res.json()
@@ -114,14 +117,14 @@ class SiteChain(ChainBase):
res = RequestUtils(
headers=headers,
proxies=settings.PROXY if site.proxy else None,
timeout=15
timeout=site.timeout or 15
).post_res(url=url)
if res and res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("data"):
# 更新最后访问时间
res = RequestUtils(headers=headers,
timeout=60,
timeout=site.timeout or 15,
proxies=settings.PROXY if site.proxy else None,
referer=f"{site.url}index"
).post_res(url=urljoin(url, "api/member/updateLastBrowse"))
@@ -131,9 +134,33 @@ class SiteChain(ChainBase):
return True, f"连接成功,但更新状态失败"
return False, "鉴权已过期或无效"
def __ptlsp_test(self, site: Site) -> Tuple[bool, str]:
@staticmethod
def __yema_test(site: Site) -> Tuple[bool, str]:
"""
判断站点是否已经登陆:ptlsp
判断站点是否已经登陆:yemapt
"""
user_agent = site.ua or settings.USER_AGENT
url = f"{site.url}api/consumer/fetchSelfDetail"
headers = {
"User-Agent": user_agent,
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
}
res = RequestUtils(
headers=headers,
cookies=site.cookie,
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=url)
if res and res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("success"):
return True, "连接成功"
return False, "Cookie已过期"
def __indexphp_test(self, site: Site) -> Tuple[bool, str]:
"""
判断站点是否已经登陆:ptlsp/1ptba
"""
site.url = f"{site.url}index.php"
return self.__test(site)
@@ -148,7 +175,7 @@ class SiteChain(ChainBase):
:return:
"""
favicon_url = urljoin(url, "favicon.ico")
res = RequestUtils(cookies=cookie, timeout=60, ua=ua).get_res(url=url)
res = RequestUtils(cookies=cookie, timeout=30, ua=ua).get_res(url=url)
if res:
html_text = res.text
else:
@@ -160,7 +187,7 @@ class SiteChain(ChainBase):
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=20, ua=ua).get_res(url=favicon_url)
res = RequestUtils(cookies=cookie, timeout=15, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
@@ -188,7 +215,7 @@ class SiteChain(ChainBase):
if not cookies:
logger.error(f"CookieCloud同步失败{msg}")
if manual:
self.message.put(f"CookieCloud同步失败 {msg}")
self.message.put(msg, title="CookieCloud同步失败", role="system")
return False, msg
# 保存Cookie或新增站点
_update_count = 0
@@ -276,7 +303,7 @@ class SiteChain(ChainBase):
if _fail_count > 0:
ret_msg += f"{_fail_count}个站点添加失败,下次同步时将重试,也可以手动添加"
if manual:
self.message.put(f"CookieCloud同步成功, {ret_msg}")
self.message.put(ret_msg, title="CookieCloud同步成功", role="system")
logger.info(f"CookieCloud同步成功{ret_msg}")
return True, ret_msg


@@ -2,6 +2,7 @@ import json
import random
import time
from datetime import datetime
from json import JSONDecodeError
from typing import Dict, List, Optional, Union, Tuple
from app.chain import ChainBase
@@ -15,10 +16,12 @@ from app.core.event import eventmanager, Event, EventManager
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.models.subscribe import Subscribe
from app.db.site_oper import SiteOper
from app.db.subscribe_oper import SubscribeOper
from app.db.subscribehistory_oper import SubscribeHistoryOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo, Notification
@@ -36,11 +39,13 @@ class SubscribeChain(ChainBase):
self.searchchain = SearchChain()
self.subscribeoper = SubscribeOper()
self.subscribehistoryoper = SubscribeHistoryOper()
self.subscribehelper = SubscribeHelper()
self.torrentschain = TorrentsChain()
self.mediachain = MediaChain()
self.message = MessageHelper()
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
self.siteoper = SiteOper()
def add(self, title: str, year: str,
mtype: MediaType = None,
@@ -122,6 +127,9 @@ class SubscribeChain(ChainBase):
kwargs.update({
'lack_episode': kwargs.get('total_episode')
})
else:
# 避免season为0的问题
season = None
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 合并信息
@@ -136,7 +144,7 @@ class SubscribeChain(ChainBase):
'effect': self.__get_default_subscribe_config(mediainfo.type, "effect"),
'include': self.__get_default_subscribe_config(mediainfo.type, "include"),
'exclude': self.__get_default_subscribe_config(mediainfo.type, "exclude"),
'best_version': self.__get_default_subscribe_config(mediainfo.type, "best_version"),
'best_version': self.__get_default_subscribe_config(mediainfo.type, "best_version") if not kwargs.get("best_version") else kwargs.get("best_version"),
'search_imdbid': self.__get_default_subscribe_config(mediainfo.type, "search_imdbid"),
'sites': self.__get_default_subscribe_config(mediainfo.type, "sites") or None,
'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path"),
@@ -153,6 +161,7 @@ class SubscribeChain(ChainBase):
text=f"{err_msg}",
image=mediainfo.get_message_image(),
userid=userid))
return None, err_msg
elif message:
logger.info(f'{mediainfo.title_year} {metainfo.season} 添加订阅成功')
if username:
@@ -164,12 +173,28 @@ class SubscribeChain(ChainBase):
title=f"{mediainfo.title_year} {metainfo.season} 已添加订阅",
text=text,
image=mediainfo.get_message_image()))
# 发送事件
EventManager().send_event(EventType.SubscribeAdded, {
"subscribe_id": sid,
"username": username,
"mediainfo": mediainfo.to_dict(),
})
# 发送事件
EventManager().send_event(EventType.SubscribeAdded, {
"subscribe_id": sid,
"username": username,
"mediainfo": mediainfo.to_dict(),
})
# 统计订阅
self.subscribehelper.sub_reg_async({
"name": title,
"year": year,
"type": metainfo.type.value,
"tmdbid": mediainfo.tmdb_id,
"imdbid": mediainfo.imdb_id,
"tvdbid": mediainfo.tvdb_id,
"doubanid": mediainfo.douban_id,
"bangumiid": mediainfo.bangumi_id,
"season": metainfo.begin_season,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"vote": mediainfo.vote_average,
"description": mediainfo.overview
})
# 返回结果
return sid, ""
@@ -218,7 +243,11 @@ class SubscribeChain(ChainBase):
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
meta.type = MediaType(subscribe.type)
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
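The new `try`/`except ValueError` guard (repeated at each `MediaType(subscribe.type)` call site in this diff) prevents one subscription with a corrupted `type` string from aborting the whole loop. A self-contained sketch of that safe enum conversion, using a simplified stand-in for the project's `MediaType`:

```python
from enum import Enum


class MediaType(Enum):
    # Simplified stand-in for app.schemas.types.MediaType
    MOVIE = "电影"
    TV = "电视剧"


def safe_media_type(value):
    # Convert a stored string to MediaType, returning None instead of raising
    try:
        return MediaType(value)
    except ValueError:
        return None
```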
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
@@ -356,9 +385,9 @@ class SubscribeChain(ChainBase):
# 手动触发时发送系统消息
if manual:
if sid:
self.message.put(f'订阅 {subscribes[0].name} 搜索完成!')
self.message.put(f'{subscribes[0].name} 搜索完成!', title="订阅搜索", role="system")
else:
self.message.put('所有订阅搜索完成!')
self.message.put('所有订阅搜索完成!', title="订阅搜索", role="system")
def update_subscribe_priority(self, subscribe: Subscribe, meta: MetaInfo,
mediainfo: MediaInfo, downloads: List[Context]):
@@ -435,11 +464,29 @@ class SubscribeChain(ChainBase):
def get_sub_sites(self, subscribe: Subscribe) -> List[int]:
"""
获取订阅中涉及的站点清单
:param subscribe: 订阅信息对象
:return: 涉及的站点清单
"""
if subscribe.sites:
return json.loads(subscribe.sites)
# 默认站点
return self.systemconfig.get(SystemConfigKey.RssSites) or []
# 从系统配置获取默认订阅站点
default_sites = self.systemconfig.get(SystemConfigKey.RssSites) or []
# 如果订阅未指定站点信息,直接返回默认站点
if not subscribe.sites:
return default_sites
try:
# 尝试解析订阅中的站点数据
user_sites = json.loads(subscribe.sites)
# 计算 user_sites 和 default_sites 的交集
intersection_sites = [site for site in user_sites if site in default_sites]
# 如果交集与原始订阅不一致,更新数据库
if set(intersection_sites) != set(user_sites):
self.subscribeoper.update(subscribe.id, {
"sites": json.dumps(intersection_sites)
})
# 如果交集为空,返回默认站点
return intersection_sites if intersection_sites else default_sites
except JSONDecodeError:
# 如果 JSON 解析失败,返回默认站点
return default_sites
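The reworked `get_sub_sites` above intersects the subscription's own site list with the system defaults and falls back to the defaults when the stored JSON is missing, invalid, or yields an empty intersection. A sketch of that resolution logic (the real method also writes the intersection back to the database, omitted here):

```python
import json


def resolve_sub_sites(subscribe_sites_json, default_sites):
    # No per-subscription sites stored: use the system defaults
    if not subscribe_sites_json:
        return default_sites
    try:
        user_sites = json.loads(subscribe_sites_json)
    except json.JSONDecodeError:
        # Malformed JSON in the DB: fall back to the defaults
        return default_sites
    # Keep only sites that are still in the default (enabled) set
    intersection = [site for site in user_sites if site in default_sites]
    return intersection or default_sites
```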
def get_subscribed_sites(self) -> Optional[List[int]]:
"""
@@ -476,6 +523,7 @@ class SubscribeChain(ChainBase):
"tv_size": default_rule.get("tv_size"),
"movie_size": default_rule.get("movie_size"),
"min_seeders": default_rule.get("min_seeders"),
"min_seeders_time": default_rule.get("min_seeders_time"),
}
def match(self, torrents: Dict[str, List[Context]]):
@@ -485,6 +533,8 @@ class SubscribeChain(ChainBase):
if not torrents:
logger.warn('没有缓存资源,无法匹配订阅')
return
# 记录重新识别过的种子
_recognize_cached = []
# 所有订阅
subscribes = self.subscribeoper.list('R')
# 遍历订阅
@@ -495,7 +545,20 @@ class SubscribeChain(ChainBase):
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
meta.type = MediaType(subscribe.type)
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 订阅的站点域名列表
domains = []
if subscribe.sites:
try:
siteids = json.loads(subscribe.sites)
if siteids:
domains = self.siteoper.get_domains_by_ids(siteids)
except JSONDecodeError:
pass
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
@@ -564,15 +627,38 @@ class SubscribeChain(ChainBase):
# 遍历缓存种子
_match_context = []
for domain, contexts in torrents.items():
logger.info(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
if domains and domain not in domains:
continue
logger.debug(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
# 检查是否匹配
torrent_meta = context.meta_info
torrent_mediainfo = context.media_info
torrent_info = context.torrent_info
# 如果识别了媒体信息则比对TMDBID和类型
if torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id:
# 直接比对媒体信息
# 先判断是否有没识别的种子
if not torrent_mediainfo or (not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
_cache_key = f"{torrent_info.title}_{torrent_info.description}"
if _cache_key not in _recognize_cached:
_recognize_cached.append(_cache_key)
logger.info(f'{torrent_info.site_name} - {torrent_info.title} 订阅缓存为未识别状态,尝试重新识别...')
# 重新识别(不使用缓存)
torrent_mediainfo = self.recognize_media(meta=torrent_meta, cache=False)
if not torrent_mediainfo:
logger.warn(f'{torrent_info.site_name} - {torrent_info.title} 重新识别失败,尝试通过标题匹配...')
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent_info):
# 匹配成功
logger.info(f'{mediainfo.title_year} 通过标题匹配到资源:{torrent_info.site_name} - {torrent_info.title}')
# 更新缓存
torrent_mediainfo = mediainfo
context.media_info = mediainfo
else:
continue
# 直接比对媒体信息
if torrent_mediainfo and (torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id):
if torrent_mediainfo.type != mediainfo.type:
continue
if torrent_mediainfo.tmdb_id \
@@ -584,22 +670,8 @@ class SubscribeChain(ChainBase):
logger.info(
f'{mediainfo.title_year} 通过媒体信息ID匹配到资源:{torrent_info.site_name} - {torrent_info.title}')
else:
# 没有torrent_mediainfo媒体信息按标题匹配
manual_match = False
# 比对词条指定的tmdbid
if torrent_meta.tmdbid or torrent_meta.doubanid:
if torrent_meta.tmdbid and torrent_meta.tmdbid != mediainfo.tmdb_id:
continue
if torrent_meta.doubanid and torrent_meta.doubanid != mediainfo.douban_id:
continue
manual_match = True
if not manual_match:
# 没有指定tmdbid按标题匹配
if not self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent_info,
logerror=False):
continue
continue
# 优先级过滤规则
if subscribe.best_version:
priority_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
@@ -611,28 +683,28 @@ class SubscribeChain(ChainBase):
mediainfo=torrent_mediainfo)
if result is not None and not result:
# 不符合过滤规则
logger.info(f"{torrent_info.title} 不匹配当前过滤规则")
logger.debug(f"{torrent_info.title} 不匹配当前过滤规则")
continue
# 不在订阅站点范围的不处理
sub_sites = self.get_sub_sites(subscribe)
if sub_sites and torrent_info.site not in sub_sites:
logger.info(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
logger.debug(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
continue
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
# 有多季的不要
if len(torrent_meta.season_list) > 1:
logger.info(f'{torrent_info.title} 有多季,不处理')
logger.debug(f'{torrent_info.title} 有多季,不处理')
continue
# 比对季
if torrent_meta.begin_season:
if meta.begin_season != torrent_meta.begin_season:
logger.info(f'{torrent_info.title} 季不匹配')
logger.debug(f'{torrent_info.title} 季不匹配')
continue
elif meta.begin_season != 1:
logger.info(f'{torrent_info.title} 季不匹配')
logger.debug(f'{torrent_info.title} 季不匹配')
continue
# 非洗版
if not subscribe.best_version:
@@ -647,7 +719,7 @@ class SubscribeChain(ChainBase):
not set(no_exists_info.episodes).intersection(
set(torrent_meta.episode_list)
):
logger.info(
logger.debug(
f'{torrent_info.title} 对应剧集 {torrent_meta.episode_list} 未包含缺失的剧集'
)
continue
@@ -659,7 +731,7 @@ class SubscribeChain(ChainBase):
# 洗版时,非整季不要
if meta.type == MediaType.TV:
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
logger.debug(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 过滤规则
@@ -713,7 +785,11 @@ class SubscribeChain(ChainBase):
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
meta.type = MediaType(subscribe.type)
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
@@ -757,7 +833,10 @@ class SubscribeChain(ChainBase):
return
note = []
if subscribe.note:
note = json.loads(subscribe.note)
try:
note = json.loads(subscribe.note)
except JSONDecodeError:
note = []
for context in downloads:
meta = context.meta_info
mediainfo = context.media_info
@@ -788,7 +867,10 @@ class SubscribeChain(ChainBase):
return False
if not episodes:
return False
note = json.loads(subscribe.note)
try:
note = json.loads(subscribe.note)
except JSONDecodeError:
return False
if set(episodes).issubset(set(note)):
return True
return False
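The hardened note check above wraps `json.loads(subscribe.note)` so a malformed note no longer raises, and then tests whether the wanted episodes are already recorded. A compact sketch of both steps (function name is illustrative):

```python
import json


def note_contains_episodes(note_json, episodes):
    # Nothing to check means nothing is already downloaded
    if not episodes:
        return False
    try:
        note = json.loads(note_json)
    except (json.JSONDecodeError, TypeError):
        # Malformed or missing note counts as not containing anything
        return False
    return set(episodes).issubset(set(note))
```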
@@ -849,6 +931,11 @@ class SubscribeChain(ChainBase):
"subscribe_info": subscribe.to_dict(),
"mediainfo": mediainfo.to_dict(),
})
# 统计订阅
self.subscribehelper.sub_done_async({
"tmdbid": mediainfo.tmdb_id,
"doubanid": mediainfo.douban_id
})
def remote_list(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -898,6 +985,11 @@ class SubscribeChain(ChainBase):
return
# 删除订阅
self.subscribeoper.delete(subscribe_id)
# 统计订阅
self.subscribehelper.sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
# 重新发送消息
self.remote_list(channel, userid)
@@ -990,7 +1082,10 @@ class SubscribeChain(ChainBase):
for subscribe in self.subscribeoper.list():
if not subscribe.sites:
continue
sites = json.loads(subscribe.sites) or []
try:
sites = json.loads(subscribe.sites)
except JSONDecodeError:
sites = []
if site_id not in sites:
continue
sites.remove(site_id)


@@ -96,14 +96,14 @@ class SystemChain(ChainBase, metaclass=Singleton):
获取后端最新版本
"""
try:
with RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot/releases/latest") as version_res:
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
else:
return None
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
else:
return None
except Exception as err:
logger.error(f"获取后端最新版本失败:{str(err)}")
return None
@@ -114,14 +114,14 @@ class SystemChain(ChainBase, metaclass=Singleton):
获取前端最新版本
"""
try:
with RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest") as version_res:
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
else:
return None
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
else:
return None
except Exception as err:
logger.error(f"获取前端最新版本失败:{str(err)}")
return None
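The two hunks above drop the `with … as version_res` wrapper around `get_res` (the response is used as a plain return value, not a context manager) while keeping the same tag extraction. A sketch of pulling the version out of a GitHub `/releases/latest` payload, with the network call omitted:

```python
def latest_version(release_json):
    # Return the tag name from a /releases/latest payload, or None on failure
    if not release_json:
        return None
    return release_json.get("tag_name")
```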


@@ -153,12 +153,15 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 需要刷新的站点domain
domains = []
# 遍历站点缓存资源
for indexer in indexers:
# 未开启的站点不刷新
if sites and indexer.get("id") not in sites:
continue
domain = StringUtils.get_url_domain(indexer.get("domain"))
domains.append(domain)
if stype == "spider":
# 刷新首页种子
torrents: List[TorrentInfo] = self.browse(domain=domain)
@@ -219,7 +222,9 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
else:
self.save_cache(torrents_cache, self._rss_file)
# 返回
# 去除不在站点范围内的缓存种子
if sites and torrents_cache:
torrents_cache = {k: v for k, v in torrents_cache.items() if k in domains}
return torrents_cache
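The new post-filter above collects the domains actually refreshed and drops cached torrents from any other site before returning. A sketch of that dict filter (an empty domain list leaves the cache untouched, matching the `if sites and torrents_cache` guard):

```python
def filter_cache_by_domains(torrents_cache, domains):
    # Empty filter or empty cache: return the cache unchanged
    if not domains or not torrents_cache:
        return torrents_cache
    # Keep only entries for domains that were part of this refresh
    return {domain: torrents for domain, torrents in torrents_cache.items()
            if domain in domains}
```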
def __renew_rss_url(self, domain: str, site: dict):


@@ -16,7 +16,9 @@ from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.db.systemconfig_oper import SystemConfigOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.directory import DirectoryHelper
from app.helper.format import FormatParser
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat
@@ -41,6 +43,8 @@ class TransferChain(ChainBase):
self.mediachain = MediaChain()
self.tmdbchain = TmdbChain()
self.systemconfig = SystemConfigOper()
self.directoryhelper = DirectoryHelper()
self.messagehelper = MessageHelper()
def process(self) -> bool:
"""
@@ -63,7 +67,10 @@ class TransferChain(ChainBase):
downloadhis: DownloadHistory = self.downloadhis.get_by_hash(torrent.hash)
if downloadhis:
# 类型
mtype = MediaType(downloadhis.type)
try:
mtype = MediaType(downloadhis.type)
except ValueError:
mtype = MediaType.TV
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid,
@@ -86,7 +93,8 @@ class TransferChain(ChainBase):
mediainfo: MediaInfo = None, download_hash: str = None,
target: Path = None, transfer_type: str = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0, force: bool = False) -> Tuple[bool, str]:
min_filesize: int = 0, scrape: bool = None,
force: bool = False) -> Tuple[bool, str]:
"""
执行一个复杂目录的转移操作
:param path: 待转移目录或文件
@@ -98,6 +106,7 @@ class TransferChain(ChainBase):
:param season: 季
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param scrape: 是否刮削元数据
:param force: 是否强制转移
返回:成功标识,错误信息
"""
@@ -300,7 +309,8 @@ class TransferChain(ChainBase):
path=file_path,
transfer_type=transfer_type,
target=target,
episodes_info=episodes_info)
episodes_info=episodes_info,
scrape=scrape)
if not transferinfo:
logger.error("文件转移模块运行失败")
return False, "文件转移模块运行失败"
@@ -347,20 +357,32 @@ class TransferChain(ChainBase):
transfers[mkey].file_list_new.extend(transferinfo.file_list_new)
transfers[mkey].fail_list.extend(transferinfo.fail_list)
# 硬链接检查
temp_transfer_type = transfer_type
if transfer_type == "link":
if not SystemUtils.is_same_disk(file_path, transferinfo.target_path):
logger.warn(
f"{file_path}{transferinfo.target_path} 不在同一磁盘/存储空间/映射目录,未能硬链接,请检查存储空间占用和整理耗时,确认是否为复制")
self.messagehelper.put(
f"{file_path}{transferinfo.target_path} 不在同一磁盘/存储空间/映射目录,疑似硬链接失败,请检查是否为复制",
title="硬链接失败",
role="system")
temp_transfer_type = "copy"
# 新增转移成功历史记录
self.transferhis.add_success(
src_path=file_path,
mode=transfer_type,
mode=temp_transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 刮削单个文件
if settings.SCRAP_METADATA:
if transferinfo.need_scrape:
self.scrape_metadata(path=transferinfo.target_path,
mediainfo=file_mediainfo,
transfer_type=transfer_type,
transfer_type=temp_transfer_type,
metainfo=file_meta)
# 更新进度
processed_num += 1
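The hardlink check above downgrades the recorded transfer mode to `copy` when source and target are not on the same disk, since a hardlink cannot span filesystems. One common way to detect this, assuming `SystemUtils.is_same_disk` works along these lines (the actual implementation may differ):

```python
import os


def is_same_disk(path_a, path_b):
    # Two paths can only be hardlinked when they live on the same
    # device/filesystem, i.e. their st_dev values match
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev
```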
@@ -487,24 +509,6 @@ class TransferChain(ChainBase):
text=errmsg, userid=userid))
return
@staticmethod
def get_root_path(path: str, type_name: str, category: str) -> Optional[Path]:
"""
计算媒体库目录的根路径
"""
if not path or path == "None":
return None
index = -2
if type_name != '电影':
index = -3
if category:
index -= 1
if '/' in path:
retpath = '/'.join(path.split('/')[:index])
else:
retpath = '\\'.join(path.split('\\')[:index])
return Path(retpath)
def re_transfer(self, logid: int, mtype: MediaType = None,
mediaid: str = None) -> Tuple[bool, str]:
"""
@@ -522,7 +526,6 @@ class TransferChain(ChainBase):
src_path = Path(history.src)
if not src_path.exists():
return False, f"源目录不存在:{src_path}"
dest_path = self.get_root_path(path=history.dest, type_name=history.type, category=history.category)
# 查询媒体信息
if mtype and mediaid:
mediainfo = self.recognize_media(mtype=mtype, tmdbid=int(mediaid) if str(mediaid).isdigit() else None,
@@ -545,7 +548,6 @@ class TransferChain(ChainBase):
state, errmsg = self.do_transfer(path=src_path,
mediainfo=mediainfo,
download_hash=history.download_hash,
target=dest_path,
force=True)
if not state:
return False, errmsg
@@ -555,32 +557,36 @@ class TransferChain(ChainBase):
def manual_transfer(self, in_path: Path,
target: Path = None,
tmdbid: int = None,
doubanid: str = None,
mtype: MediaType = None,
season: int = None,
transfer_type: str = None,
epformat: EpisodeFormat = None,
min_filesize: int = 0,
scrape: bool = None,
force: bool = False) -> Tuple[bool, Union[str, list]]:
"""
手动转移,支持复杂条件,带进度显示
:param in_path: 源文件路径
:param target: 目标路径
:param tmdbid: TMDB ID
:param doubanid: 豆瓣ID
:param mtype: 媒体类型
:param season: 季度
:param transfer_type: 转移类型
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param scrape: 是否刮削元数据
:param force: 是否强制转移
"""
logger.info(f"手动转移:{in_path} ...")
if tmdbid:
if tmdbid or doubanid:
# 有输入TMDBID时单个识别
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, mtype=mtype)
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
if not mediainfo:
return False, f"媒体信息识别失败tmdbid: {tmdbid}, type: {mtype.value}"
return False, f"媒体信息识别失败,tmdbid:{tmdbid},doubanid:{doubanid},type:{mtype.value}"
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
self.progress.update(value=0,
@@ -595,6 +601,7 @@ class TransferChain(ChainBase):
season=season,
epformat=epformat,
min_filesize=min_filesize,
scrape=scrape,
force=force
)
if not state:
@@ -637,8 +644,7 @@ class TransferChain(ChainBase):
mtype=NotificationType.Organize,
title=msg_title, text=msg_str, image=mediainfo.get_message_image()))
@staticmethod
def delete_files(path: Path) -> Tuple[bool, str]:
def delete_files(self, path: Path) -> Tuple[bool, str]:
"""
删除转移后的文件以及空目录
:param path: 文件路径
@@ -668,26 +674,31 @@ class TransferChain(ChainBase):
# 判断当前媒体父路径下是否有媒体文件,如有则无需遍历父级
if not SystemUtils.exits_files(path.parent, settings.RMT_MEDIAEXT):
# 媒体库二级分类根路径
library_root_names = [
settings.LIBRARY_MOVIE_NAME or '电影',
settings.LIBRARY_TV_NAME or '电视剧',
settings.LIBRARY_ANIME_NAME or '动漫',
]
# 所有媒体库根目录的名称
library_roots = self.directoryhelper.get_library_dirs()
library_root_names = [Path(library_root.path).name for library_root in library_roots if library_root.path]
# 所有二级分类的名称
category_names = []
category_conf = self.media_category()
if category_conf:
category_names += list(category_conf.keys())
for cats in category_conf.values():
category_names += cats
# 判断父目录是否为空, 为空则删除
for parent_path in path.parents:
# 遍历父目录到媒体库二级分类根路径
if str(parent_path.name) in library_root_names:
if parent_path.name in library_root_names:
break
if parent_path.name in category_names:
continue
if str(parent_path.parent) != str(path.root):
# 父目录非根目录,才删除父目录
if not SystemUtils.exits_files(parent_path, settings.RMT_MEDIAEXT):
# 当前路径下没有媒体文件则删除
try:
shutil.rmtree(parent_path)
logger.warn(f"目录 {parent_path} 已删除")
except Exception as e:
logger.error(f"删除目录 {parent_path} 失败:{str(e)}")
return False, f"删除目录 {parent_path} 失败:{str(e)}"
logger.warn(f"目录 {parent_path} 已删除")
return True, ""
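The parent-directory cleanup above walks from the deleted file toward the library root, removing empty levels, skipping category folders, and stopping at any media-library root. A standalone sketch of that walk (names are illustrative, and the media-extension check is reduced to "directory is empty"):

```python
import shutil
from pathlib import Path

def delete_empty_parents(path: Path, stop_names: set, skip_names: set = frozenset()) -> None:
    """Walk up from a removed file, pruning empty parents until a library root is hit."""
    for parent in path.parents:
        if parent.name in stop_names:
            break                      # reached a media-library root: stop
        if parent.name in skip_names:
            continue                   # category level: keep walking, never delete it
        if str(parent.parent) == str(path.root):
            continue                   # never touch a top-level directory
        if not any(parent.iterdir()):  # empty: safe to remove
            shutil.rmtree(parent)
```

The diff's version derives `stop_names` from the configured library directories (via `DirectoryHelper.get_library_dirs`) instead of the now-deprecated `LIBRARY_*_NAME` settings.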


@@ -10,9 +10,11 @@ from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import Event as ManagerEvent
from app.core.event import eventmanager, EventManager
from app.core.plugin import PluginManager
from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.scheduler import Scheduler
@@ -50,6 +52,8 @@ class Command(metaclass=Singleton):
self.chain = CommandChian()
# 定时服务管理
self.scheduler = Scheduler()
# 消息管理器
self.messagehelper = MessageHelper()
# 线程管理器
self.threader = ThreadHelper()
# 内置命令
@@ -165,7 +169,8 @@ class Command(metaclass=Singleton):
}
)
# 广播注册命令菜单
self.chain.register_commands(commands=self.get_commands())
if not settings.DEV:
self.chain.register_commands(commands=self.get_commands())
# 消息处理线程
self._thread = Thread(target=self.__run)
# 启动事件处理线程
@@ -182,9 +187,9 @@ class Command(metaclass=Singleton):
if event:
logger.info(f"处理事件:{event.event_type} - {handlers}")
for handler in handlers:
names = handler.__qualname__.split(".")
[class_name, method_name] = names
try:
names = handler.__qualname__.split(".")
[class_name, method_name] = names
if class_name in self.pluginmanager.get_plugin_ids():
# 插件事件
self.threader.submit(
@@ -196,10 +201,15 @@ class Command(metaclass=Singleton):
# 检查全局变量中是否存在
if class_name not in globals():
# 导入模块除了插件和Command本身只有chain能响应事件
module = importlib.import_module(
f"app.chain.{class_name[:-5].lower()}"
)
class_obj = getattr(module, class_name)()
try:
module = importlib.import_module(
f"app.chain.{class_name[:-5].lower()}"
)
class_obj = getattr(module, class_name)()
except Exception as e:
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
continue
else:
# 通过类名创建类实例
class_obj = globals()[class_name]()
@@ -211,6 +221,19 @@ class Command(metaclass=Singleton):
)
except Exception as e:
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
self.messagehelper.put(title=f"{event.event_type} 事件处理出错",
message=f"{class_name}.{method_name}{str(e)}",
role="system")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "event",
"event_type": event.event_type,
"event_handle": f"{class_name}.{method_name}",
"error": str(e),
"traceback": traceback.format_exc()
}
)
def __run_command(self, command: Dict[str, any],
data_str: str = "",
@@ -266,9 +289,11 @@ class Command(metaclass=Singleton):
"""
停止事件处理线程
"""
logger.info("正在停止事件处理...")
self._event.set()
try:
self._thread.join()
logger.info("事件处理停止完成")
except Exception as e:
logger.error(f"停止事件处理线程出错:{str(e)} - {traceback.format_exc()}")
@@ -319,6 +344,9 @@ class Command(metaclass=Singleton):
logger.info(f"{command.get('description')} 执行完成")
except Exception as err:
logger.error(f"执行命令 {cmd} 出错:{str(err)} - {traceback.format_exc()}")
self.messagehelper.put(title=f"执行命令 {cmd} 出错",
message=str(err),
role="system")
@staticmethod
def send_plugin_event(etype: EventType, data: dict) -> None:


@@ -1,7 +1,8 @@
import secrets
import sys
import threading
from pathlib import Path
from typing import List, Optional
from typing import Optional
from pydantic import BaseSettings, validator
@@ -9,6 +10,9 @@ from app.utils.system import SystemUtils
class Settings(BaseSettings):
"""
系统配置类
"""
# 项目名称
PROJECT_NAME = "MoviePilot"
# API路径
@@ -33,6 +37,8 @@ class Settings(BaseSettings):
DEBUG: bool = False
# 是否开发模式
DEV: bool = False
# 是否开启插件热加载
PLUGIN_AUTO_RELOAD: bool = False
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 超级管理员
@@ -49,8 +55,6 @@ class Settings(BaseSettings):
RECOGNIZE_SOURCE: str = "themoviedb"
# 刮削来源 themoviedb/douban
SCRAP_SOURCE: str = "themoviedb"
# 刮削入库的媒体文件
SCRAP_METADATA: bool = True
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# TMDB图片地址
@@ -153,16 +157,6 @@ class Settings(BaseSettings):
TR_PASSWORD: Optional[str] = None
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载保存目录,容器内映射路径需要一致
DOWNLOAD_PATH: Optional[str] = None
# 电影下载保存目录,容器内映射路径需要一致
DOWNLOAD_MOVIE_PATH: Optional[str] = None
# 电视剧下载保存目录,容器内映射路径需要一致
DOWNLOAD_TV_PATH: Optional[str] = None
# 动漫下载保存目录,容器内映射路径需要一致
DOWNLOAD_ANIME_PATH: Optional[str] = None
# 下载目录二级分类
DOWNLOAD_CATEGORY: bool = False
# 下载站点字幕
DOWNLOAD_SUBTITLE: bool = True
# 媒体服务器 emby/jellyfin/plex多个媒体服务器,分割
@@ -191,6 +185,8 @@ class Settings(BaseSettings):
PLEX_TOKEN: Optional[str] = None
# 转移方式 link/copy/move/softlink
TRANSFER_TYPE: str = "copy"
# 是否同盘优先
TRANSFER_SAME_DISK: bool = True
# CookieCloud是否启动本地服务
COOKIECLOUD_ENABLE_LOCAL: Optional[bool] = False
# CookieCloud服务器地址
@@ -205,16 +201,6 @@ class Settings(BaseSettings):
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录,多个目录使用,分隔
LIBRARY_PATH: Optional[str] = None
# 电影媒体库目录名
LIBRARY_MOVIE_NAME: str = "电影"
# 电视剧媒体库目录名
LIBRARY_TV_NAME: str = "电视剧"
# 动漫媒体库目录名,不设置时使用电视剧目录
LIBRARY_ANIME_NAME: Optional[str] = None
# 二级分类
LIBRARY_CATEGORY: bool = True
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS = [16]
# 电影重命名格式
@@ -231,9 +217,11 @@ class Settings(BaseSettings):
# 大内存模式
BIG_MEMORY_MODE: bool = False
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins"
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins,https://github.com/thsrite/MoviePilot-Plugins,https://github.com/honue/MoviePilot-Plugins,https://github.com/InfinityPacer/MoviePilot-Plugins"
# Github token提高请求api限流阈值 ghp_****
GITHUB_TOKEN: Optional[str] = None
# Github代理服务器格式https://mirror.ghproxy.com/
GITHUB_PROXY: Optional[str] = ''
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 元数据识别缓存过期时间(小时)
@@ -242,6 +230,35 @@ class Settings(BaseSettings):
DOH_ENABLE: bool = True
# 搜索多个名称
SEARCH_MULTIPLE_NAME: bool = False
# 订阅数据共享
SUBSCRIBE_STATISTIC_SHARE: bool = True
# 插件安装数据共享
PLUGIN_STATISTIC_SHARE: bool = True
# 服务器地址,对应 https://github.com/jxxghp/MoviePilot-Server 项目
MP_SERVER_HOST: str = "https://movie-pilot.org"
# 【已弃用】刮削入库的媒体文件
SCRAP_METADATA: bool = True
# 【已弃用】下载保存目录,容器内映射路径需要一致
DOWNLOAD_PATH: Optional[str] = None
# 【已弃用】电影下载保存目录,容器内映射路径需要一致
DOWNLOAD_MOVIE_PATH: Optional[str] = None
# 【已弃用】电视剧下载保存目录,容器内映射路径需要一致
DOWNLOAD_TV_PATH: Optional[str] = None
# 【已弃用】动漫下载保存目录,容器内映射路径需要一致
DOWNLOAD_ANIME_PATH: Optional[str] = None
# 【已弃用】下载目录二级分类
DOWNLOAD_CATEGORY: bool = False
# 【已弃用】媒体库目录,多个目录使用,分隔
LIBRARY_PATH: Optional[str] = None
# 【已弃用】电影媒体库目录名
LIBRARY_MOVIE_NAME: str = "电影"
# 【已弃用】电视剧媒体库目录名
LIBRARY_TV_NAME: str = "电视剧"
# 【已弃用】动漫媒体库目录名,不设置时使用电视剧目录
LIBRARY_ANIME_NAME: Optional[str] = None
# 【已弃用】二级分类
LIBRARY_CATEGORY: bool = True
@validator("SUBSCRIBE_RSS_INTERVAL",
"COOKIECLOUD_INTERVAL",
@@ -326,48 +343,6 @@ class Settings(BaseSettings):
"server": self.PROXY_HOST
}
@property
def LIBRARY_PATHS(self) -> List[Path]:
if self.LIBRARY_PATH:
return [Path(path) for path in self.LIBRARY_PATH.split(",")]
return [self.CONFIG_PATH / "library"]
@property
def SAVE_PATH(self) -> Path:
"""
获取下载保存目录
"""
if self.DOWNLOAD_PATH:
return Path(self.DOWNLOAD_PATH)
return self.CONFIG_PATH / "downloads"
@property
def SAVE_MOVIE_PATH(self) -> Path:
"""
获取电影下载保存目录
"""
if self.DOWNLOAD_MOVIE_PATH:
return Path(self.DOWNLOAD_MOVIE_PATH)
return self.SAVE_PATH
@property
def SAVE_TV_PATH(self) -> Path:
"""
获取电视剧下载保存目录
"""
if self.DOWNLOAD_TV_PATH:
return Path(self.DOWNLOAD_TV_PATH)
return self.SAVE_PATH
@property
def SAVE_ANIME_PATH(self) -> Path:
"""
获取动漫下载保存目录
"""
if self.DOWNLOAD_ANIME_PATH:
return Path(self.DOWNLOAD_ANIME_PATH)
return self.SAVE_TV_PATH
@property
def GITHUB_HEADERS(self):
"""
@@ -386,7 +361,7 @@ class Settings(BaseSettings):
"""
if not self.DOWNLOADER:
return None
return self.DOWNLOADER.split(",")[0]
return next((d for d in settings.DOWNLOADER.split(",") if d), None)
@property
def DOWNLOADERS(self):
@@ -395,7 +370,7 @@ class Settings(BaseSettings):
"""
if not self.DOWNLOADER:
return []
return self.DOWNLOADER.split(",")
return [d for d in settings.DOWNLOADER.split(",") if d]
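Both properties above replace plain `split(",")` with a filter so empty segments (e.g. a trailing or leading comma in `DOWNLOADER`) are skipped. The behavior in isolation:

```python
from typing import List, Optional

def default_downloader(downloader: Optional[str]) -> Optional[str]:
    """First non-empty entry of a comma-separated downloader list."""
    if not downloader:
        return None
    return next((d for d in downloader.split(",") if d), None)

def downloaders(downloader: Optional[str]) -> List[str]:
    """All non-empty entries."""
    if not downloader:
        return []
    return [d for d in downloader.split(",") if d]
```

With the old `split(",")[0]`, a value like `",transmission"` would have yielded an empty string as the default downloader; the generator form skips it.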
def __init__(self, **kwargs):
super().__init__(**kwargs)
@@ -419,7 +394,31 @@ class Settings(BaseSettings):
case_sensitive = True
class GlobalVar(object):
"""
全局标识
"""
# 系统停止事件
STOP_EVENT: threading.Event = threading.Event()
def stop_system(self):
"""
停止系统
"""
self.STOP_EVENT.set()
def is_system_stopped(self):
"""
是否停止
"""
return self.STOP_EVENT.is_set()
# 实例化配置
settings = Settings(
_env_file=Settings().CONFIG_PATH / "app.env",
_env_file_encoding="utf-8"
)
# 全局标识
global_vars = GlobalVar()
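`GlobalVar` is a thin wrapper over `threading.Event`; long-running loops are expected to poll it so shutdown is prompt. A sketch of the intended usage (the worker is illustrative, not from the source):

```python
import threading

class GlobalVar:
    """Process-wide stop flag, mirroring the class above."""
    STOP_EVENT: threading.Event = threading.Event()

    def stop_system(self):
        self.STOP_EVENT.set()

    def is_system_stopped(self):
        return self.STOP_EVENT.is_set()

global_vars = GlobalVar()

def worker(results: list):
    # Wait on the event instead of sleeping blindly: set() wakes the loop immediately.
    while not global_vars.is_system_stopped():
        results.append("tick")
        global_vars.STOP_EVENT.wait(0.01)
```

`Event.wait(timeout)` returns as soon as the flag is set, so a stop request does not have to wait out a full sleep interval.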


@@ -1,5 +1,4 @@
import re
from pathlib import Path
from typing import Optional
from Pinyin2Hanzi import is_pinyin
@@ -27,7 +26,7 @@ class MetaVideo(MetaBase):
_source = ""
_effect = []
# 正则式区
_season_re = r"S(\d{2})|^S(\d{1,2})$|S(\d{1,2})E"
_season_re = r"S(\d{3})|^S(\d{1,3})$|S(\d{1,3})E"
_episode_re = r"EP?(\d{2,4})$|^EP?(\d{1,4})$|^S\d{1,2}EP?(\d{1,4})$|S\d{2}EP?(\d{2,4})"
_part_re = r"(^PART[0-9ABI]{0,2}$|^CD[0-9]{0,2}$|^DVD[0-9]{0,2}$|^DISK[0-9]{0,2}$|^DISC[0-9]{0,2}$)"
_roman_numerals = r"^(?=[MDCLXVI])M*(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
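The widened `_season_re` accepts three-digit season numbers. The difference is easiest to see side by side (patterns copied from the diff; the helper is illustrative — the real parser applies more context around the match):

```python
import re

OLD_SEASON_RE = r"S(\d{2})|^S(\d{1,2})$|S(\d{1,2})E"
NEW_SEASON_RE = r"S(\d{3})|^S(\d{1,3})$|S(\d{1,3})E"

def season_of(token: str, pattern: str):
    """Return the first captured season number for a filename token."""
    m = re.search(pattern, token, flags=re.IGNORECASE)
    if not m:
        return None
    return next((g for g in m.groups() if g), None)
```

With the old pattern, `S101` was truncated to season 10 by the two-digit branch; the new pattern captures all three digits while still handling `S01E02`.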
@@ -138,7 +137,7 @@ class MetaVideo(MetaBase):
# 处理part
if self.part and self.part.upper() == "PART":
self.part = None
# 没有中文标题时,尝试从描述中获取中文名
if not self.cn_name and self.en_name and self.subtitle:
if self.__is_pinyin(self.en_name):
# 英文名是拼音


@@ -6,6 +6,7 @@ import regex as re
from app.core.config import settings
from app.core.meta import MetaAnime, MetaVideo, MetaBase
from app.core.meta.words import WordsMatcher
from app.log import logger
from app.schemas.types import MediaType
@@ -37,9 +38,12 @@ def MetaInfo(title: str, subtitle: str = None) -> MetaBase:
meta.apply_words = apply_words or []
# 修正媒体信息
if metainfo.get('tmdbid'):
meta.tmdbid = metainfo['tmdbid']
try:
meta.tmdbid = int(metainfo['tmdbid'])
except ValueError as _:
logger.warn("tmdbid 必须是数字")
if metainfo.get('doubanid'):
meta.tmdbid = metainfo['doubanid']
meta.doubanid = metainfo['doubanid']
if metainfo.get('type'):
meta.type = metainfo['type']
if metainfo.get('begin_season'):


@@ -1,4 +1,5 @@
from typing import Generator, Optional, Tuple
import traceback
from typing import Generator, Optional, Tuple, Any
from app.core.config import settings
from app.helper.module import ModuleHelper
@@ -34,22 +35,31 @@ class ModuleManager(metaclass=Singleton):
for module in modules:
module_id = module.__name__
self._modules[module_id] = module
# 生成实例
_module = module()
# 初始化模块
if self.check_setting(_module.init_setting()):
# 通过模板开关控制加载
_module.init_module()
self._running_modules[module_id] = _module
logger.info(f"Module Loaded:{module_id}")
try:
# 生成实例
_module = module()
# 初始化模块
if self.check_setting(_module.init_setting()):
# 通过模板开关控制加载
_module.init_module()
self._running_modules[module_id] = _module
logger.info(f"Module Loaded:{module_id}")
except Exception as err:
logger.error(f"Load Module Error:{module_id},{str(err)} - {traceback.format_exc()}", exc_info=True)
def stop(self):
"""
停止所有模块
"""
for _, module in self._running_modules.items():
logger.info("正在停止所有模块...")
for module_id, module in self._running_modules.items():
if hasattr(module, "stop"):
module.stop()
try:
module.stop()
logger.info(f"Module Stopped:{module_id}")
except Exception as err:
logger.error(f"Stop Module Error:{module_id},{str(err)} - {traceback.format_exc()}", exc_info=True)
logger.info("模块停止完成")
def reload(self):
"""
@@ -87,7 +97,17 @@ class ModuleManager(metaclass=Singleton):
return True
return False
def get_modules(self, method: str) -> Generator:
def get_running_module(self, module_id: str) -> Any:
"""
根据模块id获取模块运行实例
"""
if not module_id:
return None
if not self._running_modules:
return None
return self._running_modules.get(module_id)
def get_running_modules(self, method: str) -> Generator:
"""
获取实现了同一方法的模块列表
"""
@@ -97,3 +117,19 @@ class ModuleManager(metaclass=Singleton):
if hasattr(module, method) \
and ObjectUtils.check_method(getattr(module, method)):
yield module
def get_module(self, module_id: str) -> Any:
"""
根据模块id获取模块
"""
if not module_id:
return None
if not self._modules:
return None
return self._modules.get(module_id)
def get_modules(self) -> dict:
"""
获取模块列表
"""
return self._modules


@@ -1,7 +1,14 @@
import concurrent
import concurrent.futures
import inspect
import os
import threading
import time
import traceback
from typing import List, Any, Dict, Tuple, Optional
from typing import List, Any, Dict, Tuple, Optional, Callable
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer
from app import schemas
from app.core.config import settings
@@ -18,6 +25,58 @@ from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class PluginMonitorHandler(FileSystemEventHandler):
# 计时器
__reload_timer = None
# 防抖时间间隔
__debounce_interval = 0.5
# 最近一次修改时间
__last_modified = 0
# 修改间隔
__timeout = 2
def on_modified(self, event):
"""
插件文件修改后重载
"""
if event.is_directory:
return
current_time = time.time()
if current_time - self.__last_modified < self.__timeout:
return
self.__last_modified = current_time
# 读取插件根目录下的__init__.py文件读取class XXXX(_PluginBase)的类名
try:
# 使用os.path和pathlib处理跨平台的路径问题
plugin_dir = event.src_path.split("plugins" + os.sep)[1].split(os.sep)[0]
init_file = settings.ROOT_PATH / "app" / "plugins" / plugin_dir / "__init__.py"
with open(init_file, "r", encoding="utf-8") as f:
lines = f.readlines()
pid = None
for line in lines:
if line.startswith("class") and "(_PluginBase)" in line:
pid = line.split("class ")[1].split("(_PluginBase)")[0]
if pid:
# 防抖处理,通过计时器延迟加载
if self.__reload_timer:
self.__reload_timer.cancel()
self.__reload_timer = threading.Timer(self.__debounce_interval, self.__reload_plugin, [pid])
self.__reload_timer.start()
except Exception as e:
logger.error(f"插件文件修改后重载出错:{str(e)}")
@staticmethod
def __reload_plugin(pid):
"""
重新加载插件
"""
try:
logger.info(f"插件 {pid} 文件修改,重新加载...")
PluginManager().reload_plugin(pid)
except Exception as e:
logger.error(f"插件文件修改后重载出错:{str(e)}")
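The monitor above layers a 2-second modification throttle with a `threading.Timer`-based debounce, so a burst of editor saves triggers one reload. The debounce half, isolated (class name is illustrative):

```python
import threading

class Debouncer:
    """Coalesce rapid calls: only the last call within `interval` seconds fires."""
    def __init__(self, interval: float, fn):
        self._interval = interval
        self._fn = fn
        self._timer = None

    def trigger(self, *args):
        if self._timer:
            self._timer.cancel()  # drop the previously scheduled run
        self._timer = threading.Timer(self._interval, self._fn, args)
        self._timer.start()
```

Each `trigger` cancels the pending timer and schedules a new one, so three rapid saves of a plugin file produce a single `reload_plugin` call with the last arguments.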
class PluginManager(metaclass=Singleton):
"""
插件管理器
@@ -30,11 +89,16 @@ class PluginManager(metaclass=Singleton):
_running_plugins: dict = {}
# 配置Key
_config_key: str = "plugin.%s"
# 监听器
_observer: Observer = None
def __init__(self):
self.siteshelper = SitesHelper()
self.pluginhelper = PluginHelper()
self.systemconfig = SystemConfigOper()
# 开发者模式监测插件修改
if settings.DEV or settings.PLUGIN_AUTO_RELOAD:
self.__start_monitor()
def init_config(self):
# 停止已有插件
@@ -47,11 +111,28 @@ class PluginManager(metaclass=Singleton):
启动加载插件
:param pid: 插件ID为空加载所有插件
"""
def check_module(module: Any):
"""
检查模块
"""
if not hasattr(module, 'init_plugin') or not hasattr(module, "plugin_name"):
return False
return True
# 扫描插件目录
plugins = ModuleHelper.load(
"app.plugins",
filter_func=lambda _, obj: hasattr(obj, 'init_plugin') and hasattr(obj, "plugin_name")
)
if pid:
# 加载指定插件
plugins = ModuleHelper.load_with_pre_filter(
"app.plugins",
filter_func=lambda name, obj: check_module(obj) and name == pid
)
else:
# 加载所有插件
plugins = ModuleHelper.load(
"app.plugins",
filter_func=lambda _, obj: check_module(obj)
)
# 已安装插件
installed_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 排序
@@ -105,6 +186,10 @@ class PluginManager(metaclass=Singleton):
:param pid: 插件ID为空停止所有插件
"""
# 停止插件
if pid:
logger.info(f"正在停止插件 {pid}...")
else:
logger.info("正在停止所有插件...")
for plugin_id, plugin in self._running_plugins.items():
if pid and plugin_id != pid:
continue
@@ -120,6 +205,28 @@ class PluginManager(metaclass=Singleton):
# 清空
self._plugins = {}
self._running_plugins = {}
logger.info("插件停止完成")
def __start_monitor(self):
"""
开发者模式下监测插件文件修改
"""
logger.info("开发者模式下开始监测插件文件修改...")
monitor_handler = PluginMonitorHandler()
self._observer = Observer()
self._observer.schedule(monitor_handler, str(settings.ROOT_PATH / "app" / "plugins"), recursive=True)
self._observer.start()
def stop_monitor(self):
"""
停止监测插件修改
"""
# 停止监测
if self._observer:
logger.info("正在停止插件文件修改监测...")
self._observer.stop()
self._observer.join()
logger.info("插件文件修改监测停止完成")
@staticmethod
def __stop_plugin(plugin: Any):
@@ -217,10 +324,11 @@ class PluginManager(metaclass=Singleton):
获取插件表单
:param pid: 插件ID
"""
if not self._running_plugins.get(pid):
plugin = self._running_plugins.get(pid)
if not plugin:
return [], {}
if hasattr(self._running_plugins[pid], "get_form"):
return self._running_plugins[pid].get_form() or ([], {})
if hasattr(plugin, "get_form"):
return plugin.get_form() or ([], {})
return [], {}
def get_plugin_page(self, pid: str) -> List[dict]:
@@ -228,12 +336,51 @@ class PluginManager(metaclass=Singleton):
获取插件页面
:param pid: 插件ID
"""
if not self._running_plugins.get(pid):
plugin = self._running_plugins.get(pid)
if not plugin:
return []
if hasattr(self._running_plugins[pid], "get_page"):
return self._running_plugins[pid].get_page() or []
if hasattr(plugin, "get_page"):
return plugin.get_page() or []
return []
def get_plugin_dashboard(self, pid: str, key: str, **kwargs) -> Optional[schemas.PluginDashboard]:
"""
获取插件仪表盘
:param pid: 插件ID
:param key: 仪表盘key
"""
def __get_params_count(func: Callable):
"""
获取函数的参数信息
"""
signature = inspect.signature(func)
return len(signature.parameters)
plugin = self._running_plugins.get(pid)
if not plugin:
return None
if hasattr(plugin, "get_dashboard"):
# 检查方法的参数个数
params_count = __get_params_count(plugin.get_dashboard)
if params_count > 1:
dashboard: Tuple = plugin.get_dashboard(key=key, **kwargs)
elif params_count > 0:
dashboard: Tuple = plugin.get_dashboard(**kwargs)
else:
dashboard: Tuple = plugin.get_dashboard()
if dashboard:
cols, attrs, elements = dashboard
return schemas.PluginDashboard(
id=pid,
name=plugin.plugin_name,
key=key or "",
cols=cols or {},
elements=elements,
attrs=attrs or {}
)
return None
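`get_plugin_dashboard` inspects the handler's parameter count so newer plugins can accept `key`/`**kwargs` while older ones keep a zero-argument `get_dashboard()`. The dispatch pattern on its own (function names are hypothetical):

```python
import inspect

def call_by_arity(func, key, **kwargs):
    """Pass only as many arguments as the target function declares."""
    params = len(inspect.signature(func).parameters)
    if params > 1:
        return func(key=key, **kwargs)
    elif params > 0:
        return func(**kwargs)
    return func()

def new_style(key=None, user=None):   # 2 params: receives the dashboard key
    return ("new", key)

def mid_style(user=None):             # 1 param: receives only extra kwargs
    return ("mid", user)

def legacy():                         # 0 params: called bare
    return ("legacy", None)
```

This keeps backward compatibility without requiring every published plugin to update its `get_dashboard` signature at once.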
def get_plugin_commands(self) -> List[Dict[str, Any]]:
"""
获取插件命令
@@ -254,7 +401,7 @@ class PluginManager(metaclass=Singleton):
logger.error(f"获取插件命令出错:{str(e)}")
return ret_commands
def get_plugin_apis(self) -> List[Dict[str, Any]]:
def get_plugin_apis(self, plugin_id: str = None) -> List[Dict[str, Any]]:
"""
获取插件API
[{
@@ -267,6 +414,8 @@ class PluginManager(metaclass=Singleton):
"""
ret_apis = []
for pid, plugin in self._running_plugins.items():
if plugin_id and pid != plugin_id:
continue
if hasattr(plugin, "get_api") \
and ObjectUtils.check_method(plugin.get_api):
try:
@@ -301,17 +450,48 @@ class PluginManager(metaclass=Singleton):
logger.error(f"获取插件 {pid} 服务出错:{str(e)}")
return ret_services
def get_plugin_dashboard_meta(self):
"""
获取所有插件仪表盘元信息
"""
dashboard_meta = []
for plugin_id, plugin in self._running_plugins.items():
if not hasattr(plugin, "get_dashboard") or not ObjectUtils.check_method(plugin.get_dashboard):
continue
try:
if not plugin.get_state():
continue
# 如果是多仪表盘实现
if hasattr(plugin, "get_dashboard_meta") and ObjectUtils.check_method(plugin.get_dashboard_meta):
meta = plugin.get_dashboard_meta()
if meta:
dashboard_meta.extend([{
"id": plugin_id,
"name": m.get("name"),
"key": m.get("key"),
} for m in meta if m])
else:
dashboard_meta.append({
"id": plugin_id,
"name": plugin.plugin_name,
"key": "",
})
except Exception as e:
logger.error(f"获取插件[{plugin_id}]仪表盘元数据出错:{str(e)}")
return dashboard_meta
def get_plugin_attr(self, pid: str, attr: str) -> Any:
"""
获取插件属性
:param pid: 插件ID
:param attr: 属性名
"""
if not self._running_plugins.get(pid):
plugin = self._running_plugins.get(pid)
if not plugin:
return None
if not hasattr(self._running_plugins[pid], attr):
if not hasattr(plugin, attr):
return None
return getattr(self._running_plugins[pid], attr)
return getattr(plugin, attr)
def run_plugin_method(self, pid: str, method: str, *args, **kwargs) -> Any:
"""
@@ -321,11 +501,12 @@ class PluginManager(metaclass=Singleton):
:param args: 参数
:param kwargs: 关键字参数
"""
if not self._running_plugins.get(pid):
plugin = self._running_plugins.get(pid)
if not plugin:
return None
if not hasattr(self._running_plugins[pid], method):
if not hasattr(plugin, method):
return None
return getattr(self._running_plugins[pid], method)(*args, **kwargs)
return getattr(plugin, method)(*args, **kwargs)
def get_plugin_ids(self) -> List[str]:
"""
@@ -438,24 +619,27 @@ class PluginManager(metaclass=Singleton):
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for m in settings.PLUGIN_MARKET.split(","):
if not m:
continue
futures.append(executor.submit(__get_plugin_info, m))
for future in concurrent.futures.as_completed(futures):
plugins = future.result()
if plugins:
all_plugins.extend(plugins)
# 去重
all_plugins = list({f"{p.id}{p.plugin_version}": p for p in all_plugins}.values())
# 所有插件按repo在设置中的顺序排序
all_plugins.sort(
key=lambda x: settings.PLUGIN_MARKET.split(",").index(x.repo_url) if x.repo_url else 0
)
# 按插件ID和版本号去重相同插件以前面的为准
result = []
_dup = []
# 相同ID的插件保留版本号最大版本
max_versions = {}
for p in all_plugins:
key = f"{p.id}v{p.plugin_version}"
if key not in _dup:
_dup.append(key)
result.append(p)
logger.info(f"共获取到 {len(result)}第三方插件")
if p.id not in max_versions or StringUtils.compare_version(p.plugin_version, max_versions[p.id]) > 0:
max_versions[p.id] = p.plugin_version
result = [p for p in all_plugins if
p.plugin_version == max_versions[p.id]]
logger.info(f"共获取到 {len(result)} 个线上插件")
return result
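The replaced dedup logic keeps, per plugin ID, only the entries carrying that ID's highest version across all configured repos (the earlier sort by repo order then decides which copy is installed first). A sketch with a naive comparator standing in for `StringUtils.compare_version`:

```python
def compare_version(v1: str, v2: str) -> int:
    """Naive dotted-version compare (stand-in for StringUtils.compare_version)."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    return (a > b) - (a < b)

def dedup_by_max_version(plugins: list) -> list:
    """plugins: (plugin_id, version) tuples, pre-sorted by repo priority."""
    max_versions = {}
    for pid, ver in plugins:
        if pid not in max_versions or compare_version(ver, max_versions[pid]) > 0:
            max_versions[pid] = ver
    # Keep every entry at its ID's max version; repo order breaks ties
    return [(pid, ver) for pid, ver in plugins if ver == max_versions[pid]]
```

Note that a numeric compare is essential here: the old key-based dedup treated `1.10` and `1.2` as unrelated strings, which is why outdated copies could shadow newer ones.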
def get_local_plugins(self) -> List[schemas.Plugin]:


@@ -7,4 +7,4 @@ from .subscribe import Subscribe
from .systemconfig import SystemConfig
from .transferhistory import TransferHistory
from .user import User
from .userconfig import UserConfig
from .userconfig import UserConfig


@@ -45,6 +45,8 @@ class Site(Base):
limit_count = Column(Integer, default=0)
# 流控间隔
limit_seconds = Column(Integer, default=0)
# 超时时间
timeout = Column(Integer, default=0)
# 是否启用
is_active = Column(Boolean(), default=True)
# 创建时间
@@ -67,6 +69,12 @@ class Site(Base):
result = db.query(Site).order_by(Site.pri).all()
return list(result)
@staticmethod
@db_query
def get_domains_by_ids(db: Session, ids: list):
result = db.query(Site.domain).filter(Site.id.in_(ids)).all()
return [r[0] for r in result]
@staticmethod
@db_update
def reset(db: Session):


@@ -63,6 +63,12 @@ class SiteOper(DbOper):
"""
return Site.get_by_domain(self._db, domain)
def get_domains_by_ids(self, ids: List[int]) -> List[str]:
"""
按ID获取站点域名
"""
return Site.get_domains_by_ids(self._db, ids)
def exists(self, domain: str) -> bool:
"""
判断站点是否存在

app/helper/directory.py Normal file

@@ -0,0 +1,161 @@
from pathlib import Path
from typing import List, Optional
from app import schemas
from app.core.config import settings
from app.core.context import MediaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey, MediaType
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class DirectoryHelper:
"""
下载目录/媒体库目录帮助类
"""
def __init__(self):
self.systemconfig = SystemConfigOper()
def get_download_dirs(self) -> List[schemas.MediaDirectory]:
"""
获取下载目录
"""
dir_conf: List[dict] = self.systemconfig.get(SystemConfigKey.DownloadDirectories)
if not dir_conf:
return []
return [schemas.MediaDirectory(**d) for d in dir_conf]
def get_library_dirs(self) -> List[schemas.MediaDirectory]:
"""
获取媒体库目录
"""
dir_conf: List[dict] = self.systemconfig.get(SystemConfigKey.LibraryDirectories)
if not dir_conf:
return []
return [schemas.MediaDirectory(**d) for d in dir_conf]
def get_download_dir(self, media: MediaInfo = None, to_path: Path = None) -> Optional[schemas.MediaDirectory]:
"""
根据媒体信息获取下载目录
:param media: 媒体信息
:param to_path: 目标目录
"""
# 处理类型
if media:
media_type = media.type.value
else:
media_type = MediaType.UNKNOWN.value
download_dirs = self.get_download_dirs()
# 按照配置顺序查找(保存后的数据已经排序)
for download_dir in download_dirs:
if not download_dir.path:
continue
download_path = Path(download_dir.path)
# 有目标目录,但目标目录与当前目录不相等时不要
if to_path and download_path != to_path:
continue
# 目录类型为全部的,符合条件
if not download_dir.media_type:
return download_dir
# 目录类型相等,目录类别为全部,符合条件
if download_dir.media_type == media_type and not download_dir.category:
return download_dir
# 目录类型相等,目录类别相等,符合条件
if download_dir.media_type == media_type and download_dir.category == media.category:
return download_dir
return None
def get_library_dir(self, media: MediaInfo = None, in_path: Path = None,
to_path: Path = None) -> Optional[schemas.MediaDirectory]:
"""
根据媒体信息获取媒体库目录,需判断是否同盘优先
:param media: 媒体信息
:param in_path: 源目录
:param to_path: 目标目录
"""
def __comman_parts(path1: Path, path2: Path) -> int:
"""
计算两个路径的公共路径长度
"""
parts1 = path1.parts
parts2 = path2.parts
root_flag = parts1[0] == '/' and parts2[0] == '/'
length = min(len(parts1), len(parts2))
for i in range(length):
if parts1[i] == '/' and parts2[i] == '/':
continue
if parts1[i] != parts2[i]:
return i - 1 if root_flag else i
return length - 1 if root_flag else length
# 处理类型
if media:
media_type = media.type.value
else:
media_type = MediaType.UNKNOWN.value
# 匹配的目录
matched_dirs = []
library_dirs = self.get_library_dirs()
# 按照配置顺序查找(保存后的数据已经排序)
for library_dir in library_dirs:
if not library_dir.path:
continue
# 有目标目录,但目标目录与当前目录不相等时不要
if to_path and Path(library_dir.path) != to_path:
continue
# 目录类型为全部的,符合条件
if not library_dir.media_type:
matched_dirs.append(library_dir)
# 目录类型相等,目录类别为全部,符合条件
if library_dir.media_type == media_type and not library_dir.category:
matched_dirs.append(library_dir)
# 目录类型相等,目录类别相等,符合条件
if library_dir.media_type == media_type and library_dir.category == media.category:
matched_dirs.append(library_dir)
# 未匹配到
if not matched_dirs:
return None
# 没有目录则创建
for matched_dir in matched_dirs:
matched_path = Path(matched_dir.path)
if not matched_path.exists():
matched_path.mkdir(parents=True, exist_ok=True)
# 只匹配到一项
if len(matched_dirs) == 1:
return matched_dirs[0]
# 有源路径,且开启同盘/同目录优先时
if in_path and settings.TRANSFER_SAME_DISK:
# 优先同根路径
max_length = 0
target_dirs = []
for matched_dir in matched_dirs:
try:
# 计算in_path和path的公共路径长度
relative_len = __comman_parts(in_path, Path(matched_dir.path))
if relative_len and relative_len >= max_length:
max_length = relative_len
target_dirs.append(matched_dir)
except Exception as e:
logger.debug(f"计算目标路径时出错:{str(e)}")
continue
if target_dirs:
matched_dirs = target_dirs
# 优先同盘
for matched_dir in matched_dirs:
matched_path = Path(matched_dir.path)
if SystemUtils.is_same_disk(matched_path, in_path):
return matched_dir
# 返回最优先的匹配
return matched_dirs[0]
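The same-disk-priority selection above scores each candidate library directory by how much leading path it shares with the source (`__comman_parts`). An equivalent standalone version of that scoring, using `PurePath` so it is OS-independent:

```python
from pathlib import PurePath, PurePosixPath

def common_parts(path1: PurePath, path2: PurePath) -> int:
    """Count leading path components two paths share (the root '/' not counted)."""
    parts1, parts2 = path1.parts, path2.parts
    root = bool(parts1 and parts2 and parts1[0] == "/" and parts2[0] == "/")
    n = min(len(parts1), len(parts2))
    for i in range(n):
        if parts1[i] != parts2[i]:
            return i - 1 if root else i
    return n - 1 if root else n
```

A source under `/mnt/media/downloads` then scores `/mnt/media/movies` higher than `/data/movies`, steering the transfer toward a target where a hard link (rather than a cross-device copy) is possible.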


@@ -10,34 +10,54 @@ class MessageHelper(metaclass=Singleton):
"""
消息队列管理器,包括系统消息和用户消息
"""
def __init__(self):
self.sys_queue = queue.Queue()
self.user_queue = queue.Queue()
def put(self, message: Any, role: str = "sys", note: Union[list, dict] = None):
def put(self, message: Any, role: str = "plugin", title: str = None, note: Union[list, dict] = None):
"""
存消息
:param message: 消息
:param role: 消息通道 sys/user
:param role: 消息通道,system:系统消息,plugin:插件消息,user:用户消息
:param title: 标题
:param note: 附件json
"""
if role == "sys":
self.sys_queue.put(message)
if role in ["system", "plugin"]:
# 没有标题时获取插件名称
if role == "plugin" and not title:
title = "插件通知"
# 系统通知,默认
self.sys_queue.put(json.dumps({
"type": role,
"title": title,
"text": message,
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
"note": note
}))
else:
if isinstance(message, str):
self.user_queue.put(message)
# 非系统的文本通知
self.user_queue.put(json.dumps({
"title": title,
"text": message,
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
"note": note
}))
elif hasattr(message, "to_dict"):
# 非系统的复杂结构通知,如媒体信息/种子列表等。
content = message.to_dict()
content['title'] = title
content['date'] = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
content['note'] = json.dumps(note) if note else None
content['note'] = note
self.user_queue.put(json.dumps(content))
def get(self, role: str = "sys") -> Optional[str]:
def get(self, role: str = "system") -> Optional[str]:
"""
取消息
:param role: 消息通道 sys/user
:param role: 消息通道,system:系统消息,plugin:插件消息,user:用户消息
"""
if role == "sys":
if role == "system":
if not self.sys_queue.empty():
return self.sys_queue.get(block=False)
else:
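The reworked `MessageHelper.put` serializes every message into a JSON envelope before queueing, so the frontend can render system and plugin notifications uniformly. The envelope shape in isolation (a trimmed, hypothetical version of the method above):

```python
import json
import queue
import time

def put_system_message(q: queue.Queue, text: str, role: str = "system",
                       title: str = None, note=None) -> None:
    """Wrap a message in the JSON envelope the system queue carries."""
    if role == "plugin" and not title:
        title = "插件通知"  # default title for plugin-originated messages
    q.put(json.dumps({
        "type": role,
        "title": title,
        "text": text,
        "date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
        "note": note,
    }))
```

Because `note` is now embedded directly (not double-encoded with `json.dumps`), consumers get it back as a structured value after a single `json.loads`.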


@@ -13,12 +13,12 @@ class ModuleHelper:
"""
@classmethod
def load(cls, package_path, filter_func=lambda name, obj: True):
def load(cls, package_path: str, filter_func=lambda name, obj: True):
"""
导入模块
导入模块
:param package_path: 父包名
:param filter_func: 子模块过滤函数入参为模块名和模块对象返回True则导入否则不导入
:return:
:return: 导入的模块对象列表
"""
submodules: list = []
@@ -40,6 +40,58 @@ class ModuleHelper:
return submodules
@classmethod
def load_with_pre_filter(cls, package_path: str, filter_func=lambda name, obj: True):
"""
导入子模块
:param package_path: 父包名
:param filter_func: 子模块过滤函数入参为模块名和模块对象返回True则导入否则不导入
:return: 导入的模块对象列表
"""
submodules: list = []
packages = importlib.import_module(package_path)
def reload_module_objects(target_module):
"""加载模块并返回对象"""
importlib.reload(target_module)
# reload后重新过滤已经重新加载后的模块中的对象
return [
obj for name, obj in target_module.__dict__.items()
if not name.startswith('_') and isinstance(obj, type) and filter_func(name, obj)
]
def reload_sub_modules(parent_module, parent_module_name):
"""重新加载一级子模块"""
for sub_importer, sub_module_name, sub_is_pkg in pkgutil.walk_packages(parent_module.__path__):
full_sub_module_name = f'{parent_module_name}.{sub_module_name}'
try:
full_sub_module = importlib.import_module(full_sub_module_name)
importlib.reload(full_sub_module)
except Exception as sub_err:
logger.debug(f'加载子模块 {full_sub_module_name} 失败:{str(sub_err)} - {traceback.format_exc()}')
# 遍历包中的所有子模块
for importer, package_name, is_pkg in pkgutil.iter_modules(packages.__path__):
if package_name.startswith('_'):
continue
full_package_name = f'{package_path}.{package_name}'
try:
module = importlib.import_module(full_package_name)
# 预检查模块中的对象
candidates = [(name, obj) for name, obj in module.__dict__.items() if
not name.startswith('_') and isinstance(obj, type)]
# 确定是否需要重新加载
if any(filter_func(name, obj) for name, obj in candidates):
# 如果子模块是包,重新加载其子模块
if is_pkg:
reload_sub_modules(module, full_package_name)
submodules.extend(reload_module_objects(module))
except Exception as err:
logger.debug(f'加载模块 {package_name} 失败:{str(err)} - {traceback.format_exc()}')
return submodules
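`load_with_pre_filter` walks the package with `pkgutil`, pre-checks each submodule's top-level classes against `filter_func`, and only reloads modules that contain a match. A stripped-down, stdlib-only sketch of the discovery step (without the reload logic; names hypothetical) could look like:

```python
import importlib
import pkgutil

def discover_classes(package_path: str, filter_func=lambda name, obj: True) -> list:
    """Collect classes from a package's direct submodules that pass filter_func."""
    found = []
    package = importlib.import_module(package_path)
    for _, module_name, _ in pkgutil.iter_modules(package.__path__):
        if module_name.startswith('_'):
            continue
        module = importlib.import_module(f'{package_path}.{module_name}')
        # Keep only public class objects accepted by the filter
        found.extend(
            obj for name, obj in vars(module).items()
            if not name.startswith('_') and isinstance(obj, type) and filter_func(name, obj)
        )
    return found
```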
@staticmethod
def dynamic_import_all_modules(base_path: Path, package_name: str):
"""


@@ -20,19 +20,20 @@ class PluginHelper(metaclass=Singleton):
插件市场管理,下载安装插件到本地
"""
_base_url = "https://raw.githubusercontent.com/%s/%s/main/"
_base_url = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/%s/%s/main/"
_install_reg = "https://movie-pilot.org/plugin/install/%s"
_install_reg = f"{settings.MP_SERVER_HOST}/plugin/install/%s"
_install_report = "https://movie-pilot.org/plugin/install"
_install_report = f"{settings.MP_SERVER_HOST}/plugin/install"
_install_statistic = "https://movie-pilot.org/plugin/statistic"
_install_statistic = f"{settings.MP_SERVER_HOST}/plugin/statistic"
def __init__(self):
self.systemconfig = SystemConfigOper()
if not self.systemconfig.get(SystemConfigKey.PluginInstallReport):
if self.install_report():
self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")
if settings.PLUGIN_STATISTIC_SHARE:
if not self.systemconfig.get(SystemConfigKey.PluginInstallReport):
if self.install_report():
self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")
@cached(cache=TTLCache(maxsize=1000, ttl=1800))
def get_plugins(self, repo_url: str) -> Dict[str, dict]:
@@ -80,6 +81,8 @@ class PluginHelper(metaclass=Singleton):
"""
获取插件安装统计
"""
if not settings.PLUGIN_STATISTIC_SHARE:
return {}
res = RequestUtils(timeout=10).get_res(self._install_statistic)
if res and res.status_code == 200:
return res.json()
@@ -89,6 +92,8 @@ class PluginHelper(metaclass=Singleton):
"""
安装插件统计
"""
if not settings.PLUGIN_STATISTIC_SHARE:
return False
if not pid:
return False
res = RequestUtils(timeout=5).get_res(self._install_reg % pid)
@@ -100,6 +105,8 @@ class PluginHelper(metaclass=Singleton):
"""
上报存量插件安装统计
"""
if not settings.PLUGIN_STATISTIC_SHARE:
return False
plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins)
if not plugins:
return False
@@ -150,9 +157,10 @@ class PluginHelper(metaclass=Singleton):
return False, "文件列表为空"
for item in _l:
if item.get("download_url"):
download_url = f"{settings.GITHUB_PROXY}{item.get('download_url')}"
# 下载插件文件
res = RequestUtils(proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS, timeout=60).get_res(item["download_url"])
headers=settings.GITHUB_HEADERS, timeout=60).get_res(download_url)
if not res:
return False, f"文件 {item.get('name')} 下载失败!"
elif res.status_code != 200:
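The `GITHUB_PROXY` changes in this hunk and the ones below all follow one convention: prepend a configurable mirror prefix to a GitHub download URL before fetching. A minimal helper capturing that convention (function name hypothetical) might be:

```python
def with_github_proxy(url: str, proxy: str = "") -> str:
    """Prefix a GitHub URL with an optional proxy/mirror base such as 'https://mirror.example/'."""
    if not url.startswith("https://"):
        # Only rewrite absolute HTTPS URLs; leave anything else untouched
        return url
    return f"{proxy}{url}" if proxy else url
```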


@@ -15,7 +15,7 @@ class ResourceHelper(metaclass=Singleton):
检测和更新资源包
"""
# 资源包的git仓库地址
_repo = "https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.json"
_repo = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.json"
_files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources"
_base_dir: Path = settings.ROOT_PATH
@@ -86,9 +86,11 @@ class ResourceHelper(metaclass=Singleton):
if not save_path:
continue
if item.get("download_url"):
logger.info(f"开始更新资源文件:{item.get('name')} ...")
download_url = f"{settings.GITHUB_PROXY}{item.get('download_url')}"
# 下载资源文件
res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
timeout=180).get_res(item["download_url"])
timeout=180).get_res(download_url)
if not res:
logger.error(f"文件 {item.get('name')} 下载失败!")
elif res.status_code != 200:

app/helper/subscribe.py Normal file

@@ -0,0 +1,122 @@
from threading import Thread
from typing import List
from cachetools import TTLCache, cached
from app.core.config import settings
from app.db.subscribe_oper import SubscribeOper
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
class SubscribeHelper(metaclass=Singleton):
"""
订阅数据统计
"""
_sub_reg = f"{settings.MP_SERVER_HOST}/subscribe/add"
_sub_done = f"{settings.MP_SERVER_HOST}/subscribe/done"
_sub_report = f"{settings.MP_SERVER_HOST}/subscribe/report"
_sub_statistic = f"{settings.MP_SERVER_HOST}/subscribe/statistic"
def __init__(self):
self.systemconfig = SystemConfigOper()
if settings.SUBSCRIBE_STATISTIC_SHARE:
if not self.systemconfig.get(SystemConfigKey.SubscribeReport):
if self.sub_report():
self.systemconfig.set(SystemConfigKey.SubscribeReport, "1")
@cached(cache=TTLCache(maxsize=20, ttl=1800))
def get_statistic(self, stype: str, page: int = 1, count: int = 30) -> List[dict]:
"""
获取订阅统计数据
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return []
res = RequestUtils(timeout=15).get_res(self._sub_statistic, params={
"stype": stype,
"page": page,
"count": count
})
if res and res.status_code == 200:
return res.json()
return []
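`get_statistic` is wrapped in cachetools' `@cached(cache=TTLCache(maxsize=20, ttl=1800))`, so repeated calls within 30 minutes return the cached payload instead of re-hitting the server. A stdlib-only sketch of the same idea (simplified: positional, hashable arguments only) would be:

```python
import functools
import time

def ttl_cached(ttl: float):
    """Memoize a function's results for ttl seconds (simplified TTLCache equivalent)."""
    def decorator(fn):
        cache = {}
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit and now - hit[0] < ttl:
                return hit[1]  # still fresh, skip the real call
            value = fn(*args)
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator
```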
def sub_reg(self, sub: dict) -> bool:
"""
新增订阅统计
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return False
res = RequestUtils(timeout=5, headers={
"Content-Type": "application/json"
}).post_res(self._sub_reg, json=sub)
if res and res.status_code == 200:
return True
return False
def sub_done(self, sub: dict) -> bool:
"""
完成订阅统计
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return False
res = RequestUtils(timeout=5, headers={
"Content-Type": "application/json"
}).post_res(self._sub_done, json=sub)
if res and res.status_code == 200:
return True
return False
def sub_reg_async(self, sub: dict) -> bool:
"""
异步新增订阅统计
"""
# 开新线程处理
Thread(target=self.sub_reg, args=(sub,)).start()
return True
def sub_done_async(self, sub: dict) -> bool:
"""
异步完成订阅统计
"""
# 开新线程处理
Thread(target=self.sub_done, args=(sub,)).start()
return True
def sub_report(self) -> bool:
"""
上报存量订阅统计
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return False
subscribes = SubscribeOper().list()
if not subscribes:
return True
res = RequestUtils(content_type="application/json",
timeout=10).post(self._sub_report,
json={
"subscribes": [
{
"name": sub.name,
"year": sub.year,
"type": sub.type,
"tmdbid": sub.tmdbid,
"imdbid": sub.imdbid,
"tvdbid": sub.tvdbid,
"doubanid": sub.doubanid,
"bangumiid": sub.bangumiid,
"season": sub.season,
"poster": sub.poster,
"backdrop": sub.backdrop,
"vote": sub.vote,
"description": sub.description
} for sub in subscribes
]
})
return True if res else False
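`sub_reg_async` and `sub_done_async` are fire-and-forget wrappers: the HTTP report runs on a new `Thread` so the caller never blocks on the network. The pattern in isolation (returning the thread here, unlike the original, so a caller may join; names hypothetical):

```python
from threading import Thread

def run_async(report_fn, payload) -> Thread:
    """Run a reporting callable on a background thread without blocking the caller."""
    t = Thread(target=report_fn, args=(payload,), daemon=True)
    t.start()
    return t
```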


@@ -353,8 +353,8 @@ class TorrentHelper(metaclass=Singleton):
min_seeders_time = filter_rule.get("min_seeders_time") or 0
if min_seeders_time:
# 发布时间与当前时间差(分钟)
pubdate_minutes = __get_pubminutes(min_seeders_time)
if pubdate_minutes > min_seeders_time:
pubdate_minutes = __get_pubminutes(torrent_info.pubdate)
if pubdate_minutes > int(min_seeders_time):
logger.info(f"{torrent_info.title} 发布时间大于 {min_seeders_time} 分钟,做种人数不足 {min_seeders}")
return False
else:
@@ -428,15 +428,23 @@ class TorrentHelper(metaclass=Singleton):
return True
@staticmethod
def match_torrent(mediainfo: MediaInfo, torrent_meta: MetaInfo,
torrent: TorrentInfo, logerror: bool = True) -> bool:
def match_torrent(mediainfo: MediaInfo, torrent_meta: MetaInfo, torrent: TorrentInfo) -> bool:
"""
检查种子是否匹配媒体信息
:param mediainfo: 需要匹配的媒体信息
:param torrent_meta: 种子识别信息
:param torrent: 种子信息
:param logerror: 是否记录错误日志
"""
# 比对词条指定的tmdbid
if torrent_meta.tmdbid or torrent_meta.doubanid:
if torrent_meta.tmdbid and torrent_meta.tmdbid == mediainfo.tmdb_id:
logger.info(
f'{mediainfo.title} 通过词表指定TMDBID匹配到资源{torrent.site_name} - {torrent.title}')
return True
if torrent_meta.doubanid and torrent_meta.doubanid == mediainfo.douban_id:
logger.info(
f'{mediainfo.title} 通过词表指定豆瓣ID匹配到资源{torrent.site_name} - {torrent.title}')
return True
# 要匹配的媒体标题、原标题
media_titles = {
StringUtils.clear_upper(mediainfo.title),
@@ -451,32 +459,28 @@ class TorrentHelper(metaclass=Singleton):
} - {""}
# 比对种子识别类型
if torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 种子标题类型为 {torrent_meta.type.value}'
f'不匹配 {mediainfo.type.value}')
logger.debug(f'{torrent.site_name} - {torrent.title} 种子标题类型为 {torrent_meta.type.value}'
f'不匹配 {mediainfo.type.value}')
return False
# 比对种子在站点中的类型
if torrent.category == MediaType.TV.value and mediainfo.type != MediaType.TV:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 种子在站点中归类为 {torrent.category}'
f'不匹配 {mediainfo.type.value}')
logger.debug(f'{torrent.site_name} - {torrent.title} 种子在站点中归类为 {torrent.category}'
f'不匹配 {mediainfo.type.value}')
return False
# 比对年份
if mediainfo.year:
if mediainfo.type == MediaType.TV:
# 剧集年份,每季的年份可能不同
# 剧集年份,每季的年份可能不同,没年份时不比较年份(很多剧集种子不带年份)
if torrent_meta.year and torrent_meta.year not in [year for year in
mediainfo.season_years.values()]:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配 {mediainfo.season_years}')
logger.debug(f'{torrent.site_name} - {torrent.title} 年份不匹配 {mediainfo.season_years}')
return False
else:
# 电影年份上下浮动1年
if torrent_meta.year not in [str(int(mediainfo.year) - 1),
mediainfo.year,
str(int(mediainfo.year) + 1)]:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配 {mediainfo.year}')
# 电影年份上下浮动1年,没年份时不通过
if not torrent_meta.year or torrent_meta.year not in [str(int(mediainfo.year) - 1),
mediainfo.year,
str(int(mediainfo.year) + 1)]:
logger.debug(f'{torrent.site_name} - {torrent.title} 年份不匹配 {mediainfo.year}')
return False
# 比对标题和原语种标题
if meta_names.intersection(media_titles):
@@ -489,21 +493,24 @@ class TorrentHelper(metaclass=Singleton):
return True
# 标题拆分
if torrent_meta.org_string:
titles = [StringUtils.clear_upper(t) for t in re.split(r'[\s/【】.\[\]\-]+',
torrent_meta.org_string) if t]
# 只拆分出标题中的非英文单词进行匹配,英文单词容易误匹配(带空格的多个单词组合除外)
titles = [StringUtils.clear_upper(t) for t in re.split(
r'[\s/【】.\[\]\-]+',
torrent_meta.org_string
) if not StringUtils.is_english_word(t)]
# 在标题中判断是否存在标题、原语种标题
if media_titles.intersection(titles):
logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
return True
# 在副标题中判断是否存在标题、原语种标题、别名、译名
# 在副标题中(非英文单词)判断是否存在标题、原语种标题、别名、译名
if torrent.description:
subtitles = {StringUtils.clear_upper(t) for t in re.split(r'[\s/|]+',
torrent.description) if t}
subtitles = {StringUtils.clear_upper(t) for t in re.split(
r'[\s/【】|]+',
torrent.description) if not StringUtils.is_english_word(t)}
if media_titles.intersection(subtitles) or media_names.intersection(subtitles):
logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
f'副标题:{torrent.description}')
return True
# 未匹配
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 标题不匹配,识别名称:{meta_names}')
logger.debug(f'{torrent.site_name} - {torrent.title} 标题不匹配,识别名称:{meta_names}')
return False
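The rewritten `match_torrent` tokenizes the torrent title and subtitle on common separators and intersects the tokens with the known media titles (now skipping standalone English words via `StringUtils.is_english_word` to avoid false positives). A simplified sketch of the split-and-intersect step, without that English-word filter, could be:

```python
import re

def title_tokens(text: str) -> set:
    """Split a torrent title on common separators and upper-case each token."""
    return {t.upper() for t in re.split(r'[\s/【】.\[\]\-]+', text) if t}

def titles_match(media_titles, torrent_title: str) -> bool:
    """True when any known media title appears among the torrent title's tokens."""
    wanted = {t.upper() for t in media_titles}
    return bool(wanted & title_tokens(torrent_title))
```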


@@ -98,7 +98,6 @@ class LoggerManager:
# 终端日志
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_formatter = CustomFormatter(f"%(leveltext)s%(message)s")
console_handler.setFormatter(console_formatter)
_logger.addHandler(console_handler)
@@ -109,7 +108,6 @@ class LoggerManager:
maxBytes=5 * 1024 * 1024,
backupCount=3,
encoding='utf-8')
file_handler.setLevel(logging.INFO)
file_formater = CustomFormatter(f"【%(levelname)s】%(asctime)s - %(message)s")
file_handler.setFormatter(file_formater)
_logger.addHandler(file_handler)
@@ -138,6 +136,7 @@ class LoggerManager:
if not _logger:
_logger = self.__setup_logger(logfile)
self._loggers[logfile] = _logger
# 调用logger的方法打印日志
if hasattr(_logger, method):
method = getattr(_logger, method)
method(f"{caller_name} - {msg}", *args, **kwargs)


@@ -1,7 +1,9 @@
import multiprocessing
import os
import signal
import sys
import threading
from types import FrameType
import uvicorn as uvicorn
from PIL import Image
@@ -16,7 +18,7 @@ if SystemUtils.is_frozen():
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
from app.core.config import settings
from app.core.config import settings, global_vars
from app.core.module import ModuleManager
from app.core.plugin import PluginManager
from app.db.init import init_db, update_db, init_super_user
@@ -149,7 +151,7 @@ def check_auth():
"""
if SitesHelper().auth_level < 2:
err_msg = "用户认证失败,站点相关功能将无法使用!"
MessageHelper().put(f"注意:{err_msg}")
MessageHelper().put(f"注意:{err_msg}", title="用户认证", role="system")
CommandChian().post_message(
Notification(
mtype=NotificationType.Manual,
@@ -159,6 +161,22 @@ def check_auth():
)
def singal_handle():
"""
监听停止信号
"""
def stop_event(signum: int, _: FrameType):
"""
SIGTERM信号处理
"""
print(f"接收到停止信号:{signum},正在停止系统...")
global_vars.stop_system()
# 设置信号处理程序
signal.signal(signal.SIGTERM, stop_event)
signal.signal(signal.SIGINT, stop_event)
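`singal_handle` registers one handler for both SIGTERM and SIGINT that flips a global stop flag (`global_vars.stop_system()`), letting worker loops exit cleanly instead of being killed mid-task. The mechanism in isolation (flag container hypothetical):

```python
import signal

stop_flags = {"stopped": False}

def _on_stop(signum, frame):
    """Record that a stop signal arrived so worker loops can exit cleanly."""
    stop_flags["stopped"] = True

# Register the same handler for both container-stop and Ctrl-C signals
signal.signal(signal.SIGTERM, _on_stop)
signal.signal(signal.SIGINT, _on_stop)
```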
@App.on_event("shutdown")
def shutdown_server():
"""
@@ -168,6 +186,7 @@ def shutdown_server():
ModuleManager().stop()
# 停止插件
PluginManager().stop()
PluginManager().stop_monitor()
# 停止事件消费
Command().stop()
# 停止虚拟显示
@@ -209,6 +228,8 @@ def start_module():
start_frontend()
# 检查认证状态
check_auth()
# 监听停止信号
singal_handle()
if __name__ == '__main__':


@@ -27,6 +27,14 @@ class _ModuleBase(metaclass=ABCMeta):
"""
pass
@staticmethod
@abstractmethod
def get_name() -> str:
"""
获取模块名称
"""
pass
@abstractmethod
def stop(self) -> None:
"""


@@ -23,16 +23,20 @@ class BangumiModule(_ModuleBase):
"""
测试模块连接性
"""
with RequestUtils().get_res("https://api.bgm.tv/") as ret:
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接Bangumi错误码{ret.status_code}"
ret = RequestUtils().get_res("https://api.bgm.tv/")
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接Bangumi错误码{ret.status_code}"
return False, "Bangumi网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
@staticmethod
def get_name() -> str:
return "Bangumi"
def recognize_media(self, bangumiid: int = None,
**kwargs) -> Optional[MediaInfo]:
"""
@@ -68,7 +72,9 @@ class BangumiModule(_ModuleBase):
return []
infos = self.bangumiapi.search(meta.name)
if infos:
return [MediaInfo(bangumi_info=info) for info in infos]
return [MediaInfo(bangumi_info=info) for info in infos
if meta.name.lower() in str(info.get("name")).lower()
or meta.name.lower() in str(info.get("name_cn")).lower()]
return []
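The Bangumi search change above filters raw API results down to entries whose `name` or `name_cn` contains the query, case-insensitively. The filter by itself (function name hypothetical):

```python
def filter_by_name(infos: list, query: str) -> list:
    """Keep search results whose name or name_cn contains the query, case-insensitively."""
    q = query.lower()
    return [info for info in infos
            if q in str(info.get("name")).lower()
            or q in str(info.get("name_cn")).lower()]
```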
def bangumi_info(self, bangumiid: int) -> Optional[dict]:


@@ -38,16 +38,20 @@ class DoubanModule(_ModuleBase):
"""
测试模块连接性
"""
with RequestUtils().get_res("https://movie.douban.com/") as ret:
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接豆瓣,错误码:{ret.status_code}"
ret = RequestUtils().get_res("https://movie.douban.com/")
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接豆瓣,错误码:{ret.status_code}"
return False, "豆瓣网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
@staticmethod
def get_name() -> str:
return "豆瓣"
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
doubanid: str = None,
@@ -468,7 +472,7 @@ class DoubanModule(_ModuleBase):
else:
infos = self.doubanapi.tv_recommend(start=(page - 1) * count, count=count,
sort=sort, tags=tags)
if infos:
if infos and infos.get("items"):
medias = [MediaInfo(douban_info=info) for info in infos.get("items")]
return [media for media in medias if media.poster_path
and "movie_large.jpg" not in media.poster_path
@@ -484,7 +488,7 @@ class DoubanModule(_ModuleBase):
"""
infos = self.doubanapi.movie_showing(start=(page - 1) * count,
count=count)
if infos:
if infos and infos.get("subject_collection_items"):
return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return []
@@ -504,7 +508,7 @@ class DoubanModule(_ModuleBase):
"""
infos = self.doubanapi.tv_global_best_weekly(start=(page - 1) * count,
count=count)
if infos:
if infos and infos.get("subject_collection_items"):
return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return []
@@ -514,7 +518,7 @@ class DoubanModule(_ModuleBase):
"""
infos = self.doubanapi.tv_animation(start=(page - 1) * count,
count=count)
if infos:
if infos and infos.get("subject_collection_items"):
return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return []
@@ -524,7 +528,7 @@ class DoubanModule(_ModuleBase):
"""
infos = self.doubanapi.movie_hot_gaia(start=(page - 1) * count,
count=count)
if infos:
if infos and infos.get("subject_collection_items"):
return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return []
@@ -534,7 +538,7 @@ class DoubanModule(_ModuleBase):
"""
infos = self.doubanapi.tv_hot(start=(page - 1) * count,
count=count)
if infos:
if infos and infos.get("subject_collection_items"):
return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return []
@@ -549,7 +553,7 @@ class DoubanModule(_ModuleBase):
if not meta.name:
return []
result = self.doubanapi.search(meta.name)
if not result:
if not result or not result.get("items"):
return []
# 返回数据
ret_medias = []
@@ -558,6 +562,8 @@ class DoubanModule(_ModuleBase):
continue
if item_obj.get("type_name") not in (MediaType.TV.value, MediaType.MOVIE.value):
continue
if meta.name not in item_obj.get("target", {}).get("title"):
continue
ret_medias.append(MediaInfo(douban_info=item_obj.get("target")))
# 将搜索词中的季写入标题中
if ret_medias and meta.begin_season:
@@ -612,7 +618,7 @@ class DoubanModule(_ModuleBase):
# 搜索
logger.info(f"开始使用名称 {name} 匹配豆瓣信息 ...")
result = self.doubanapi.search(f"{name} {year or ''}".strip())
if not result:
if not result or not result.get("items"):
logger.warn(f"未找到 {name} 的豆瓣信息")
return {}
# 触发rate limit
@@ -648,7 +654,7 @@ class DoubanModule(_ModuleBase):
"""
infos = self.doubanapi.movie_top250(start=(page - 1) * count,
count=count)
if infos:
if infos and infos.get("subject_collection_items"):
return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return []


@@ -195,13 +195,13 @@ class DoubanApi(metaclass=Singleton):
'_ts': ts,
'_sig': self.__sign(url=req_url, ts=ts)
})
with RequestUtils(
ua=choice(self._user_agents),
session=self._session
).get_res(url=req_url, params=params) as resp:
if resp is not None and resp.status_code == 400 and "rate_limit" in resp.text:
return resp.json()
return resp.json() if resp else {}
resp = RequestUtils(
ua=choice(self._user_agents),
session=self._session
).get_res(url=req_url, params=params)
if resp is not None and resp.status_code == 400 and "rate_limit" in resp.text:
return resp.json()
return resp.json() if resp else {}
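The rewritten `__get` drops the context-manager form and treats an HTTP 400 whose body mentions `rate_limit` as a parseable payload rather than a hard failure, so callers can detect throttling. The decision in isolation, with a stub response class for illustration only:

```python
import json

class FakeResponse:
    """Minimal stand-in for a requests.Response (illustration only)."""
    def __init__(self, status_code: int, text: str):
        self.status_code = status_code
        self.text = text
    def json(self):
        return json.loads(self.text)

def parse_api_response(resp) -> dict:
    """Return the JSON body, treating Douban's 400 rate_limit replies as valid payloads."""
    if resp is not None and resp.status_code == 400 and "rate_limit" in resp.text:
        return resp.json()
    return resp.json() if resp else {}
```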
@lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
def __post(self, url: str, **kwargs) -> dict:
@@ -228,7 +228,7 @@ class DoubanApi(metaclass=Singleton):
ua=settings.USER_AGENT,
session=self._session,
).post_res(url=req_url, data=params)
if resp.status_code == 400 and "rate_limit" in resp.text:
if resp is not None and resp.status_code == 400 and "rate_limit" in resp.text:
return resp.json()
return resp.json() if resp else {}


@@ -193,15 +193,15 @@ class DoubanScraper:
url = url.replace("/format/webp", "/format/jpg")
file_path.with_suffix(".jpg")
logger.info(f"正在下载{file_path.stem}图片:{url} ...")
with RequestUtils().get_res(url=url) as r:
if r:
if self._transfer_type in ['rclone_move', 'rclone_copy']:
self.__save_remove_file(file_path, r.content)
else:
file_path.write_bytes(r.content)
logger.info(f"图片已保存:{file_path}")
r = RequestUtils().get_res(url=url)
if r:
if self._transfer_type in ['rclone_move', 'rclone_copy']:
self.__save_remove_file(file_path, r.content)
else:
logger.info(f"{file_path.stem}图片下载失败,请检查网络连通性")
file_path.write_bytes(r.content)
logger.info(f"图片已保存:{file_path}")
else:
logger.info(f"{file_path.stem}图片下载失败,请检查网络连通性")
except Exception as err:
logger.error(f"{file_path.stem}图片下载失败:{str(err)}")


@@ -14,6 +14,10 @@ class EmbyModule(_ModuleBase):
def init_module(self) -> None:
self.emby = Emby()
@staticmethod
def get_name() -> str:
return "Emby"
def stop(self):
pass


@@ -56,12 +56,12 @@ class Emby:
return []
req_url = "%semby/Library/SelectableMediaFolders?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json()
else:
logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
return []
res = RequestUtils().get_res(req_url)
if res:
return res.json()
else:
logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/SelectableMediaFolders 出错:" + str(e))
return []
@@ -74,29 +74,29 @@ class Emby:
return []
req_url = "%semby/Library/VirtualFolders/Query?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
library_items = res.json().get("Items")
librarys = []
for library_item in library_items:
library_name = library_item.get('Name')
pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
library_paths = []
for path in pathInfos:
if path.get('NetworkPath'):
library_paths.append(path.get('NetworkPath'))
else:
library_paths.append(path.get('Path'))
res = RequestUtils().get_res(req_url)
if res:
library_items = res.json().get("Items")
librarys = []
for library_item in library_items:
library_name = library_item.get('Name')
pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
library_paths = []
for path in pathInfos:
if path.get('NetworkPath'):
library_paths.append(path.get('NetworkPath'))
else:
library_paths.append(path.get('Path'))
if library_name and library_paths:
librarys.append({
'Name': library_name,
'Path': library_paths
})
return librarys
else:
logger.error(f"Library/VirtualFolders/Query 未获取到返回数据")
return []
if library_name and library_paths:
librarys.append({
'Name': library_name,
'Path': library_paths
})
return librarys
else:
logger.error(f"Library/VirtualFolders/Query 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/VirtualFolders/Query 出错:" + str(e))
return []
@@ -113,12 +113,12 @@ class Emby:
user = self.user
req_url = f"{self._host}emby/Users/{user}/Views?api_key={self._apikey}"
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json().get("Items")
else:
logger.error(f"User/Views 未获取到返回数据")
return []
res = RequestUtils().get_res(req_url)
if res:
return res.json().get("Items")
else:
logger.error(f"User/Views 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接User/Views 出错:" + str(e))
return []
@@ -164,20 +164,20 @@ class Emby:
return None
req_url = "%sUsers?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
users = res.json()
# 先查询是否有与当前用户名称匹配的
if user_name:
for user in users:
if user.get("Name") == user_name:
return user.get("Id")
# 查询管理员
res = RequestUtils().get_res(req_url)
if res:
users = res.json()
# 先查询是否有与当前用户名称匹配的
if user_name:
for user in users:
if user.get("Policy", {}).get("IsAdministrator"):
if user.get("Name") == user_name:
return user.get("Id")
else:
logger.error(f"Users 未获取到返回数据")
# 查询管理员
for user in users:
if user.get("Policy", {}).get("IsAdministrator"):
return user.get("Id")
else:
logger.error(f"Users 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users出错" + str(e))
return None
@@ -227,11 +227,11 @@ class Emby:
return None
req_url = "%sSystem/Info?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json().get("Id")
else:
logger.error(f"System/Info 未获取到返回数据")
res = RequestUtils().get_res(req_url)
if res:
return res.json().get("Id")
else:
logger.error(f"System/Info 未获取到返回数据")
except Exception as e:
logger.error(f"连接System/Info出错" + str(e))
@@ -245,12 +245,12 @@ class Emby:
return 0
req_url = "%semby/Users/Query?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json().get("TotalRecordCount")
else:
logger.error(f"Users/Query 未获取到返回数据")
return 0
res = RequestUtils().get_res(req_url)
if res:
return res.json().get("TotalRecordCount")
else:
logger.error(f"Users/Query 未获取到返回数据")
return 0
except Exception as e:
logger.error(f"连接Users/Query出错" + str(e))
return 0
@@ -264,17 +264,17 @@ class Emby:
return schemas.Statistic()
req_url = "%semby/Items/Counts?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
result = res.json()
return schemas.Statistic(
movie_count=result.get("MovieCount") or 0,
tv_count=result.get("SeriesCount") or 0,
episode_count=result.get("EpisodeCount") or 0
)
else:
logger.error(f"Items/Counts 未获取到返回数据")
return schemas.Statistic()
res = RequestUtils().get_res(req_url)
if res:
result = res.json()
return schemas.Statistic(
movie_count=result.get("MovieCount") or 0,
tv_count=result.get("SeriesCount") or 0,
episode_count=result.get("EpisodeCount") or 0
)
else:
logger.error(f"Items/Counts 未获取到返回数据")
return schemas.Statistic()
except Exception as e:
logger.error(f"连接Items/Counts出错" + str(e))
return schemas.Statistic()
@@ -299,14 +299,14 @@ class Emby:
"&api_key=%s") % (
self._host, name, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
res_items = res.json().get("Items")
if res_items:
for res_item in res_items:
if res_item.get('Name') == name and (
not year or str(res_item.get('ProductionYear')) == str(year)):
return res_item.get('Id')
res = RequestUtils().get_res(req_url)
if res:
res_items = res.json().get("Items")
if res_items:
for res_item in res_items:
if res_item.get('Name') == name and (
not year or str(res_item.get('ProductionYear')) == str(year)):
return res_item.get('Id')
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
@@ -329,36 +329,36 @@ class Emby:
"&Recursive=true&SearchTerm=%s&Limit=10&IncludeSearchTypes=false&api_key=%s" % (
self._host, title, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
res_items = res.json().get("Items")
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem(
server="emby",
library=res_item.get("ParentId"),
item_id=res_item.get("Id"),
item_type=res_item.get("Type"),
title=res_item.get("Name"),
original_title=res_item.get("OriginalTitle"),
year=res_item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=res_item.get("ProviderIds", {}).get("Imdb"),
tvdbid=res_item.get("ProviderIds", {}).get("Tvdb"),
path=res_item.get("Path")
)
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(mediaserver_item)
continue
if (mediaserver_item.title == title
and (not year or str(mediaserver_item.year) == str(year))):
res = RequestUtils().get_res(req_url)
if res:
res_items = res.json().get("Items")
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem(
server="emby",
library=res_item.get("ParentId"),
item_id=res_item.get("Id"),
item_type=res_item.get("Type"),
title=res_item.get("Name"),
original_title=res_item.get("OriginalTitle"),
year=res_item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=res_item.get("ProviderIds", {}).get("Imdb"),
tvdbid=res_item.get("ProviderIds", {}).get("Tvdb"),
path=res_item.get("Path")
)
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(mediaserver_item)
return ret_movies
continue
if (mediaserver_item.title == title
and (not year or str(mediaserver_item.year) == str(year))):
ret_movies.append(mediaserver_item)
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
@@ -401,25 +401,25 @@ class Emby:
try:
req_url = "%semby/Shows/%s/Episodes?Season=%s&IsMissing=false&api_key=%s" % (
self._host, item_id, season, self._apikey)
with RequestUtils().get_res(req_url) as res_json:
if res_json:
tv_item = res_json.json()
res_items = tv_item.get("Items")
season_episodes = {}
for res_item in res_items:
season_index = res_item.get("ParentIndexNumber")
if not season_index:
continue
if season and season != season_index:
continue
episode_index = res_item.get("IndexNumber")
if not episode_index:
continue
if season_index not in season_episodes:
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
# 返回
return item_id, season_episodes
res_json = RequestUtils().get_res(req_url)
if res_json:
tv_item = res_json.json()
res_items = tv_item.get("Items")
season_episodes = {}
for res_item in res_items:
season_index = res_item.get("ParentIndexNumber")
if not season_index:
continue
if season and season != season_index:
continue
episode_index = res_item.get("IndexNumber")
if not episode_index:
continue
if season_index not in season_episodes:
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
# 返回
return item_id, season_episodes
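The episode loop above builds a season-to-episode-list mapping from the `Shows/{Id}/Episodes` payload, skipping entries with missing indices and optionally restricting to one season. The grouping step can be sketched with `dict.setdefault` (function name hypothetical):

```python
def group_episodes(items: list, season: int = None) -> dict:
    """Group episode items by ParentIndexNumber, optionally keeping one season only."""
    season_episodes = {}
    for item in items:
        season_index = item.get("ParentIndexNumber")
        episode_index = item.get("IndexNumber")
        if not season_index or not episode_index:
            continue  # skip specials/entries without proper indices
        if season and season != season_index:
            continue
        season_episodes.setdefault(season_index, []).append(episode_index)
    return season_episodes
```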
except Exception as e:
logger.error(f"连接Shows/Id/Episodes出错" + str(e))
return None, None
@@ -463,13 +463,13 @@ class Emby:
req_url = "%sItems/%s/Images/%s" % (self._playhost, item_id, image_type)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code != 404:
logger.info(f"影片图片链接:{res.url}")
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
res = RequestUtils().get_res(req_url)
if res and res.status_code != 404:
logger.info(f"影片图片链接:{res.url}")
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
except Exception as e:
logger.error(f"连接Items/Id/Images出错" + str(e))
return None
@@ -482,11 +482,11 @@ class Emby:
return False
req_url = "%semby/Items/%s/Refresh?Recursive=true&api_key=%s" % (self._host, item_id, self._apikey)
try:
with RequestUtils().post_res(req_url) as res:
if res:
return True
else:
logger.info(f"刷新媒体库对象 {item_id} 失败无法连接Emby")
res = RequestUtils().post_res(req_url)
if res:
return True
else:
logger.info(f"刷新媒体库对象 {item_id} 失败无法连接Emby")
except Exception as e:
logger.error(f"连接Items/Id/Refresh出错" + str(e))
return False
@@ -500,11 +500,11 @@ class Emby:
return False
req_url = "%semby/Library/Refresh?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().post_res(req_url) as res:
if res:
return True
else:
logger.info(f"刷新媒体库失败无法连接Emby")
res = RequestUtils().post_res(req_url)
if res:
return True
else:
logger.info(f"刷新媒体库失败无法连接Emby")
except Exception as e:
logger.error(f"连接Library/Refresh出错" + str(e))
return False
@@ -579,23 +579,23 @@ class Emby:
return None
req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self.user, itemid, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code == 200:
item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem(
server="emby",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem(
server="emby",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
except Exception as e:
logger.error(f"连接Items/Id出错:{str(e)}")
return None
@@ -610,17 +610,17 @@ class Emby:
yield None
req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code == 200:
results = res.json().get("Items") or []
for result in results:
if not result:
continue
if result.get("Type") in ["Movie", "Series"]:
yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"):
for item in self.get_items(parent=result.get('Id')):
yield item
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
results = res.json().get("Items") or []
for result in results:
if not result:
continue
if result.get("Type") in ["Movie", "Series"]:
yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"):
for item in self.get_items(parent=result.get('Id')):
yield item
except Exception as e:
logger.error(f"连接Users/Items出错:{str(e)}")
yield None
@@ -1032,52 +1032,52 @@ class Emby:
req_url = (f"{self._host}Users/{user}/Items/Resume?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
with RequestUtils().get_res(req_url) as res:
if res:
result = res.json().get("Items") or []
ret_resume = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_resume) == num:
break
if item.get("Type") not in ["Movie", "Episode"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item_type == MediaType.MOVIE.value:
title = item.get("Name")
subtitle = item.get("ProductionYear")
res = RequestUtils().get_res(req_url)
if res:
result = res.json().get("Items") or []
ret_resume = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_resume) == num:
break
if item.get("Type") not in ["Movie", "Episode"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item_type == MediaType.MOVIE.value:
title = item.get("Name")
subtitle = item.get("ProductionYear")
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
if item_type == MediaType.MOVIE.value:
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
if item_type == MediaType.MOVIE.value:
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
else:
image = self.__get_backdrop_url(item_id=item.get("SeriesId"),
image_tag=item.get("SeriesPrimaryImageTag"))
if not image:
image = self.__get_local_image_by_id(item.get("SeriesId"))
ret_resume.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=title,
subtitle=subtitle,
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
image = self.__get_local_image_by_id(item.get("Id"))
else:
image = self.__get_backdrop_url(item_id=item.get("SeriesId"),
image_tag=item.get("SeriesPrimaryImageTag"))
if not image:
image = self.__get_local_image_by_id(item.get("SeriesId"))
ret_resume.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=title,
subtitle=subtitle,
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Resume出错:{str(e)}")
return []
@@ -1095,35 +1095,35 @@ class Emby:
req_url = (f"{self._host}Users/{user}/Items/Latest?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
with RequestUtils().get_res(req_url) as res:
if res:
result = res.json() or []
ret_latest = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_latest) == num:
break
if item.get("Type") not in ["Movie", "Series"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=item.get("Name"),
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
))
return ret_latest
else:
logger.error(f"Users/Items/Latest 未获取到返回数据")
res = RequestUtils().get_res(req_url)
if res:
result = res.json() or []
ret_latest = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_latest) == num:
break
if item.get("Type") not in ["Movie", "Series"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=item.get("Name"),
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
))
return ret_latest
else:
logger.error(f"Users/Items/Latest 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Latest出错:{str(e)}")
return []
@@ -321,16 +321,20 @@ class FanartModule(_ModuleBase):
"""
测试模块连接性
"""
with RequestUtils().get_res("https://webservice.fanart.tv") as ret:
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接fanart错误码{ret.status_code}"
ret = RequestUtils().get_res("https://webservice.fanart.tv")
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接fanart错误码{ret.status_code}"
return False, "fanart网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "FANART_API_KEY", True
@staticmethod
def get_name() -> str:
return "Fanart"
def obtain_images(self, mediainfo: MediaInfo) -> Optional[MediaInfo]:
"""
获取图片


@@ -9,21 +9,34 @@ from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.helper.directory import DirectoryHelper
from app.helper.message import MessageHelper
from app.log import logger
from app.modules import _ModuleBase
from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode
from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode, MediaDirectory
from app.schemas.types import MediaType
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
lock = Lock()
class FileTransferModule(_ModuleBase):
"""
文件整理模块
"""
def __init__(self):
super().__init__()
self.directoryhelper = DirectoryHelper()
self.messagehelper = MessageHelper()
def init_module(self) -> None:
pass
@staticmethod
def get_name() -> str:
return "文件整理"
def stop(self):
pass
@@ -31,34 +44,40 @@ class FileTransferModule(_ModuleBase):
"""
测试模块连接性
"""
if not settings.DOWNLOAD_PATH:
return False, "下载目录未设置"
directoryhelper = DirectoryHelper()
# 检查下载目录
download_paths: List[str] = []
for path in [settings.DOWNLOAD_PATH,
settings.DOWNLOAD_MOVIE_PATH,
settings.DOWNLOAD_TV_PATH,
settings.DOWNLOAD_ANIME_PATH]:
download_paths = directoryhelper.get_download_dirs()
if not download_paths:
return False, "下载目录未设置"
for d_path in download_paths:
path = d_path.path
if not path:
continue
return False, f"下载目录 {d_path.name} 对应路径未设置"
download_path = Path(path)
if not download_path.exists():
return False, f"下载目录 {download_path} 不存在"
download_paths.append(path)
# 下载目录的设备ID
download_devids = [Path(path).stat().st_dev for path in download_paths]
return False, f"下载目录 {d_path.name} 对应路径 {path} 不存在"
# 检查媒体库目录
if not settings.LIBRARY_PATH:
library_paths = directoryhelper.get_library_dirs()
if not library_paths:
return False, "媒体库目录未设置"
# 比较媒体库目录的设备ID
for path in settings.LIBRARY_PATHS:
for l_path in library_paths:
path = l_path.path
if not path:
return False, f"媒体库目录 {l_path.name} 对应路径未设置"
library_path = Path(path)
if not library_path.exists():
return False, f"媒体库目录不存在:{library_path}"
if settings.DOWNLOADER_MONITOR and settings.TRANSFER_TYPE == "link":
if library_path.stat().st_dev not in download_devids:
return False, f"媒体库目录 {library_path} " \
f"与下载目录 {','.join(download_paths)} 不在同一设备,将无法硬链接"
return False, f"媒体库目录 {l_path.name} 对应的路径 {path} 不存在"
# 检查硬链接条件
if settings.DOWNLOADER_MONITOR and settings.TRANSFER_TYPE == "link":
for d_path in download_paths:
link_ok = False
for l_path in library_paths:
if SystemUtils.is_same_disk(Path(d_path.path), Path(l_path.path)):
link_ok = True
break
if not link_ok:
return False, f"媒体库目录中未找到" \
f"与下载目录 {d_path.path} 在同一磁盘/存储空间/映射路径的目录,将无法硬链接"
return True, ""
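The new hardlink check pairs every download directory with at least one library directory on the same device. Assuming `SystemUtils.is_same_disk` compares `st_dev` (the filesystem device ID), the core of the check can be sketched with the standard library alone (`hardlink_possible` is an illustrative name):

```python
from pathlib import Path
from typing import Iterable

def is_same_disk(a: Path, b: Path) -> bool:
    # Hard links cannot cross filesystems, so compare device IDs.
    return a.stat().st_dev == b.stat().st_dev

def hardlink_possible(download_dirs: Iterable[Path],
                      library_dirs: Iterable[Path]) -> bool:
    # Every download directory needs at least one library directory
    # on the same device, mirroring the nested loop in test() above.
    libs = list(library_dirs)
    return all(any(is_same_disk(d, l) for l in libs) for d in download_dirs)
```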
def init_setting(self) -> Tuple[str, Union[str, bool]]:
@@ -66,7 +85,8 @@ class FileTransferModule(_ModuleBase):
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None,
episodes_info: List[TmdbEpisode] = None) -> TransferInfo:
episodes_info: List[TmdbEpisode] = None,
scrape: bool = None) -> TransferInfo:
"""
文件转移
:param path: 文件路径
@@ -75,39 +95,49 @@ class FileTransferModule(_ModuleBase):
:param transfer_type: 转移方式
:param target: 目标路径
:param episodes_info: 当前季的全部集信息
:param scrape: 是否刮削元数据
:return: {path, target_path, message}
"""
# 获取目标路径
if not target:
# 未指定目的目录,根据源目录选择一个媒体库
target = self.get_target_path(in_path=path)
# 拼装媒体库一、二级子目录
target = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target)
else:
# 指定了目的目录
if target.is_file():
logger.error(f"转移目标路径 {target} 是一个文件")
return TransferInfo(success=False,
path=path,
message=f"{target} 不是有效目录")
# 只拼装二级子目录(不要一级目录)
target = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target, typename_dir=False)
if not target:
logger.error("未找到媒体库目录,无法转移文件")
# 目标路径不能是文件
if target and target.is_file():
logger.error(f"转移目标路径 {target} 是一个文件")
return TransferInfo(success=False,
path=path,
message="未找到媒体库目录")
message=f"{target} 不是有效目录")
# 获取目标路径
directoryhelper = DirectoryHelper()
if target:
dir_info = directoryhelper.get_library_dir(mediainfo, in_path=path, to_path=target)
else:
logger.info(f"获取转移目标路径:{target}")
dir_info = directoryhelper.get_library_dir(mediainfo, in_path=path)
if dir_info:
# 是否需要刮削
if scrape is None:
need_scrape = dir_info.scrape
else:
need_scrape = scrape
# 拼装媒体库一、二级子目录
target = self.__get_dest_dir(mediainfo=mediainfo, target_dir=dir_info)
elif target:
# 自定义目标路径
need_scrape = scrape or False
else:
# 未找到有效的媒体库目录
logger.error(
f"{mediainfo.type.value} {mediainfo.title_year} 未找到有效的媒体库目录,无法转移文件,源路径:{path}")
return TransferInfo(success=False,
path=path,
message="未找到有效的媒体库目录")
logger.info(f"获取转移目标路径:{target}")
# 转移
return self.transfer_media(in_path=path,
in_meta=meta,
mediainfo=mediainfo,
transfer_type=transfer_type,
target_dir=target,
episodes_info=episodes_info)
episodes_info=episodes_info,
need_scrape=need_scrape)
@staticmethod
def __transfer_command(file_item: Path, target_file: Path, transfer_type: str) -> int:
@@ -380,43 +410,24 @@ class FileTransferModule(_ModuleBase):
over_flag=over_flag)
@staticmethod
def __get_dest_dir(mediainfo: MediaInfo, target_dir: Path, typename_dir: bool = True) -> Path:
def __get_dest_dir(mediainfo: MediaInfo, target_dir: MediaDirectory) -> Path:
"""
根据设置拼装媒体库目录
:param mediainfo: 媒体信息
:target_dir: 媒体库根目录
:typename_dir: 是否加上类型目录
"""
if not target_dir:
return target_dir
if not target_dir.media_type and target_dir.auto_category:
# 一级自动分类
download_dir = Path(target_dir.path) / mediainfo.type.value
else:
download_dir = Path(target_dir.path)
if mediainfo.type == MediaType.MOVIE:
# 电影
if typename_dir:
# 目的目录加上类型和二级分类
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# 目的目录加上二级分类
target_dir = target_dir / mediainfo.category
if not target_dir.category and target_dir.auto_category and mediainfo.category:
# 二级自动分类
download_dir = download_dir / mediainfo.category
if mediainfo.type == MediaType.TV:
# 电视剧
if mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
if typename_dir:
target_dir = target_dir / (settings.LIBRARY_ANIME_NAME
or settings.LIBRARY_TV_NAME) / mediainfo.category
else:
target_dir = target_dir / mediainfo.category
else:
# 电视剧
if typename_dir:
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
target_dir = target_dir / mediainfo.category
return target_dir
return download_dir
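The rewritten `__get_dest_dir` now derives both directory levels from the `MediaDirectory` config instead of global settings. A simplified, dependency-free sketch of that layering (the flag names follow the diff; the helper itself is illustrative):

```python
from pathlib import Path
from typing import Optional

def build_dest_dir(base: str, type_name: str, category: Optional[str],
                   has_media_type: bool, has_category: bool,
                   auto_category: bool) -> Path:
    dest = Path(base)
    # First level: media-type folder, only when the directory is not
    # already bound to one type and auto-categorisation is enabled.
    if not has_media_type and auto_category:
        dest = dest / type_name
    # Second level: category folder, under the analogous condition.
    if not has_category and auto_category and category:
        dest = dest / category
    return dest
```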
def transfer_media(self,
in_path: Path,
@@ -424,7 +435,8 @@ class FileTransferModule(_ModuleBase):
mediainfo: MediaInfo,
transfer_type: str,
target_dir: Path,
episodes_info: List[TmdbEpisode] = None
episodes_info: List[TmdbEpisode] = None,
need_scrape: bool = False
) -> TransferInfo:
"""
识别并转移一个文件或者一个目录下的所有文件
@@ -434,6 +446,7 @@ class FileTransferModule(_ModuleBase):
:param target_dir: 媒体库根目录
:param transfer_type: 文件转移方式
:param episodes_info: 当前季的全部集信息
:param need_scrape: 是否需要刮削
:return: TransferInfo、错误信息
"""
# 检查目录路径
@@ -486,7 +499,8 @@ class FileTransferModule(_ModuleBase):
path=in_path,
target_path=new_path,
total_size=file_size,
is_bluray=bluray_flag)
is_bluray=bluray_flag,
need_scrape=need_scrape)
else:
# 转移单个文件
if mediainfo.type == MediaType.TV:
@@ -585,7 +599,8 @@ class FileTransferModule(_ModuleBase):
total_size=file_size,
is_bluray=False,
file_list=[str(in_path)],
file_list_new=[str(new_file)])
file_list_new=[str(new_file)],
need_scrape=need_scrape)
@staticmethod
def __get_naming_dict(meta: MetaBase, mediainfo: MediaInfo, file_ext: str = None,
@@ -685,96 +700,32 @@ class FileTransferModule(_ModuleBase):
else:
return Path(render_str)
@staticmethod
def get_library_path(path: Path):
"""
根据文件路径查询其所在的媒体库目录,查询不到的返回输入目录
"""
if not path:
return None
if not settings.LIBRARY_PATHS:
return path
# 目的路径,多路径以,分隔
dest_paths = settings.LIBRARY_PATHS
for libpath in dest_paths:
try:
if path.is_relative_to(libpath):
return libpath
except Exception as e:
logger.debug(f"计算媒体库路径时出错:{str(e)}")
continue
return path
@staticmethod
def get_target_path(in_path: Path = None) -> Optional[Path]:
"""
计算一个最好的目的目录:有in_path时找与in_path同路径的;没有in_path时顺序查找1个符合大小要求的;没有in_path和size时返回第1个
:param in_path: 源目录
"""
if not settings.LIBRARY_PATHS:
return None
# 目的路径,多路径以,分隔
dest_paths = settings.LIBRARY_PATHS
# 只有一个路径,直接返回
if len(dest_paths) == 1:
return dest_paths[0]
# 匹配有最长共同上级路径的目录
max_length = 0
target_path = None
if in_path:
for path in dest_paths:
try:
# 计算in_path和path的公共字符串长度
relative = StringUtils.find_common_prefix(str(in_path), str(path))
if len(str(path)) == len(relative):
# 目录完整匹配的,直接返回
return path
if len(relative) > max_length:
# 更新最大长度
max_length = len(relative)
target_path = path
except Exception as e:
logger.debug(f"计算目标路径时出错:{str(e)}")
continue
if target_path:
return target_path
# 顺序匹配第1个满足空间存储要求的目录
if in_path.exists():
file_size = in_path.stat().st_size
for path in dest_paths:
if SystemUtils.free_space(path) > file_size:
return path
# 默认返回第1个
return dest_paths[0]
def media_exists(self, mediainfo: MediaInfo, **kwargs) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在于本地文件系统,只支持标准媒体库结构
:param mediainfo: 识别的媒体信息
:return: 如不存在返回None存在时返回信息包括每季已存在所有集{type: movie/tv, seasons: {season: [episodes]}}
"""
if not settings.LIBRARY_PATHS:
return None
# 目的路径
dest_paths = settings.LIBRARY_PATHS
dest_paths = DirectoryHelper().get_library_dirs()
# 检查每一个媒体库目录
for dest_path in dest_paths:
# 媒体库路径
target_dir = self.get_target_path(dest_path)
if not target_dir:
continue
# 媒体分类路径
target_dir = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_dir)
target_dir = self.__get_dest_dir(mediainfo=mediainfo, target_dir=dest_path)
if not target_dir.exists():
continue
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 相对路径
# 获取相对路径(重命名路径)
meta = MetaInfo(mediainfo.title)
rel_path = self.get_rename_path(
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=meta,
mediainfo=mediainfo)
)
# 取相对路径的第1层目录
if rel_path.parts:
media_path = target_dir / rel_path.parts[0]
@@ -818,8 +769,6 @@ class FileTransferModule(_ModuleBase):
删除目录下的所有版本文件
:param path: 目录路径
"""
if not path.exists():
return False
# 识别文件中的季集信息
meta = MetaInfoPath(path)
season = meta.season
@@ -832,7 +781,7 @@ class FileTransferModule(_ModuleBase):
return False
# 删除文件
for media_file in media_files:
if media_file == path:
if str(media_file) == str(path):
continue
# 识别文件中的季集信息
filemeta = MetaInfoPath(media_file)
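The comparison above was changed from `media_file == path` to `str(media_file) == str(path)`. Worth noting why: `Path.__eq__` never equals a plain string, so when one side arrives as `str`, path equality silently fails and the file being replaced would not be skipped. A minimal sketch of the cast-both-sides fix:

```python
from pathlib import Path

def is_same_entry(a, b) -> bool:
    # Casting both sides tolerates mixed str/Path inputs, which
    # Path equality would silently treat as different objects.
    return str(a) == str(b)
```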


@@ -1,30 +1,42 @@
import threading
from pyparsing import Forward, Literal, Word, alphas, infixNotation, opAssoc, alphanums, Combine, nums, ParseResults
class RuleParser:
_lock = threading.Lock()
_thread_local = threading.local()
def __init__(self):
"""
定义语法规则
"""
# 表达式
expr: Forward = Forward()
# 原子
atom: Combine = Combine(Word(alphas, alphanums) | Word(nums) + Word(alphas, alphanums))
# 逻辑非操作符
operator_not: Literal = Literal('!').setParseAction(lambda: 'not')
# 逻辑操作符
operator_or: Literal = Literal('|').setParseAction(lambda: 'or')
# 逻辑操作符
operator_and: Literal = Literal('&').setParseAction(lambda: 'and')
# 定义表达式的语法规则
expr <<= operator_not + expr | operator_or | operator_and | atom | ('(' + expr + ')')
with self._lock:
if not hasattr(self._thread_local, 'initialized'):
# 表达式
expr: Forward = Forward()
# 原子
atom: Combine = Combine(Word(alphas, alphanums) | (Word(nums) + Word(alphas, alphanums)))
# 逻辑操作符
operator_not: Literal = Literal('!').setParseAction(lambda t: 'not')
# 逻辑操作符
operator_or: Literal = Literal('|').setParseAction(lambda t: 'or')
# 逻辑与操作符
operator_and: Literal = Literal('&').setParseAction(lambda t: 'and')
# 定义表达式的语法规则
expr <<= (operator_not + expr) | atom | ('(' + expr + ')')
# 运算符优先级
self.expr = infixNotation(expr,
[(operator_not, 1, opAssoc.RIGHT),
(operator_and, 2, opAssoc.LEFT),
(operator_or, 2, opAssoc.LEFT)])
# 运算符优先级
self.expr = infixNotation(expr,
[(operator_not, 1, opAssoc.RIGHT),
(operator_and, 2, opAssoc.LEFT),
(operator_or, 2, opAssoc.LEFT)])
self._thread_local.expr = self.expr
self._thread_local.initialized = True
else:
self.expr = self._thread_local.expr
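The new `RuleParser.__init__` builds the pyparsing grammar once per thread and reuses it. The caching pattern itself, with the grammar replaced by a placeholder object so the sketch stays dependency-free:

```python
import threading

class PerThreadCache:
    """Build an expensive object once per thread, then reuse it."""
    _lock = threading.Lock()
    _local = threading.local()

    def __init__(self):
        with self._lock:
            if not getattr(self._local, "initialized", False):
                # stand-in for the costly pyparsing grammar construction
                self._local.expr = object()
                self._local.initialized = True
        self.expr = self._local.expr
```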
def parse(self, expression: str) -> ParseResults:
"""
@@ -41,7 +53,9 @@ class RuleParser:
if __name__ == '__main__':
# 测试代码
expression_str = "!BLU & 4K & CN > !BLU & 1080P & CN > !BLU & 4K > !BLU & 1080P"
expression_str = """
SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & 4K & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & CNVOI & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & CNVOI & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > CNSUB & 4K & WEBDL & 60FPS & !DOLBY & !SDR & !3D > SPECSUB & CNVOI & 4K & WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & WEBDL & !DOLBY & !3D > SPECSUB & 4K & WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & WEBDL & !DOLBY & !3D > CNSUB & 4K & WEBDL & !DOLBY & !3D > SPECSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & !3D > CNSUB & CNVOI & 4K & !BLU & !WEBDL & !DOLBY & !3D > SPECSUB & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 4K & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 4K & !BLU & !WEBDL & !DOLBY & !SDR & !3D > CNSUB & 4K & !BLU & !WEBDL & !DOLBY & !SDR & !3D > 4K & !BLU & !REMUX & !DOLBY & HDR & !3D > 4K & !BLURAY & !REMUX & !DOLBY & !3D > SPECSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & HDR & !3D > 
SPECSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > CNSUB & 1080P & !BLU & !REMUX & !WEBDL & !DOLBY & !3D > SPECSUB & 1080P & !BLU & !WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & !BLU & !WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & !BLU & !WEBDL & !DOLBY & !3D > CNSUB & 1080P & !BLU & !WEBDL & !DOLBY & !3D > SPECSUB & 1080P & WEBDL & !DOLBY & HDR & !3D > CNSUB & 1080P & WEBDL & !DOLBY & HDR & !3D > SPECSUB & 1080P & WEBDL & !DOLBY & !3D > CNSUB & 1080P & WEBDL & !DOLBY & !3D > 1080P & !BLU & !REMUX & !DOLBY & HDR & !3D > 1080P & !BLU & !REMUX & !DOLBY & !3D
"""
for exp in expression_str.split('>'):
parsed_expr = RuleParser().parse(exp)
print(parsed_expr.as_list())
parsed_expr = RuleParser().parse(exp.strip())
print(parsed_expr.asList())


@@ -136,6 +136,10 @@ class FilterModule(_ModuleBase):
def init_module(self) -> None:
self.parser = RuleParser()
@staticmethod
def get_name() -> str:
return "过滤器"
def stop(self):
pass
@@ -169,6 +173,7 @@ class FilterModule(_ModuleBase):
continue
# 能命中优先级的才返回
if not self.__get_order(torrent, rule_string):
logger.debug(f"种子 {torrent.site_name} - {torrent.title} {torrent.description} 不匹配优先级规则")
continue
ret_torrents.append(torrent)
@@ -191,7 +196,7 @@ class FilterModule(_ModuleBase):
torrent_episodes = meta.episode_list
if not set(torrent_seasons).issubset(set(seasons)):
# 种子季不在过滤季中
logger.info(f"种子 {torrent.site_name} - {torrent.title} 包含季 {torrent_seasons} 不是需要的季 {seasons}")
logger.debug(f"种子 {torrent.site_name} - {torrent.title} 包含季 {torrent_seasons} 不是需要的季 {list(seasons)}")
return False
if not torrent_episodes:
# 整季按匹配处理
@@ -201,7 +206,7 @@ class FilterModule(_ModuleBase):
if need_episodes \
and not set(torrent_episodes).intersection(set(need_episodes)):
# 单季集没有交集的不要
logger.info(f"种子 {torrent.site_name} - {torrent.title} "
logger.debug(f"种子 {torrent.site_name} - {torrent.title} "
f"{torrent_episodes} 没有需要的集:{need_episodes}")
return False
return True
@@ -223,7 +228,7 @@ class FilterModule(_ModuleBase):
if self.__match_group(torrent, parsed_group.as_list()[0]):
# 出现匹配时中断
matched = True
logger.info(f"种子 {torrent.site_name} - {torrent.title} 优先级为 {100 - res_order + 1}")
logger.debug(f"种子 {torrent.site_name} - {torrent.title} 优先级为 {100 - res_order + 1}")
torrent.pri_order = res_order
break
# 优先级降低,继续匹配
@@ -333,7 +338,7 @@ class FilterModule(_ModuleBase):
info_values = [str(info_value).upper()]
# 过滤值转化为数组
if value.find(",") != -1:
values = [str(val).upper() for val in value.split(",")]
values = [str(val).upper() for val in value.split(",") if val]
else:
values = [str(value).upper()]
# 没有交集为不匹配


@@ -13,6 +13,7 @@ from app.modules.indexer.mtorrent import MTorrentSpider
from app.modules.indexer.spider import TorrentSpider
from app.modules.indexer.tnode import TNodeSpider
from app.modules.indexer.torrentleech import TorrentLeech
from app.modules.indexer.yema import YemaSpider
from app.schemas.types import MediaType
from app.utils.string import StringUtils
@@ -25,6 +26,10 @@ class IndexerModule(_ModuleBase):
def init_module(self) -> None:
pass
@staticmethod
def get_name() -> str:
return "站点索引"
def stop(self):
pass
@@ -107,6 +112,12 @@ class IndexerModule(_ModuleBase):
mtype=mtype,
page=page
)
elif site.get('parser') == "Yema":
error_flag, result = YemaSpider(site).search(
keyword=search_word,
mtype=mtype,
page=page
)
else:
error_flag, result = self.__spider_search(
search_word=search_word,


@@ -15,7 +15,7 @@ from app.utils.string import StringUtils
class MTorrentSpider:
"""
mTorrent API需要缓存ApiKey
mTorrent API
"""
_indexerid = None
_domain = None
@@ -27,6 +27,7 @@ class MTorrentSpider:
_searchurl = "%sapi/torrent/search"
_downloadurl = "%sapi/torrent/genDlToken"
_pageurl = "%sdetail/%s"
_timeout = 15
# 电影分类
_movie_category = ['401', '419', '420', '421', '439', '405', '404']
@@ -62,6 +63,7 @@ class MTorrentSpider:
self._ua = indexer.get('ua')
self._apikey = indexer.get('apikey')
self._token = indexer.get('token')
self._timeout = indexer.get('timeout') or 15
def search(self, keyword: str, mtype: MediaType = None, page: int = 0) -> Tuple[bool, List[dict]]:
"""
@@ -92,7 +94,7 @@ class MTorrentSpider:
},
proxies=self._proxy,
referer=f"{self._domain}browse",
timeout=15
timeout=self._timeout
).post_res(url=self._searchurl, json=params)
torrents = []
if res and res.status_code == 200:


@@ -63,8 +63,8 @@ class TorrentSpider:
torrents_info: dict = {}
# 种子列表
torrents_info_array: list = []
# 搜索超时, 默认: 30
_timeout = 30
# 搜索超时, 默认: 15
_timeout = 15
def __init__(self,
indexer: CommentedMap,
@@ -674,7 +674,9 @@ class TorrentSpider:
try:
args = filter_item.get("args")
if method_name == "re_search" and isinstance(args, list):
text = re.search(r"%s" % args[0], text).group(args[-1])
rematch = re.search(r"%s" % args[0], text)
if rematch:
text = rematch.group(args[-1])
elif method_name == "split" and isinstance(args, list):
text = text.split(r"%s" % args[0])[args[-1]]
elif method_name == "replace" and isinstance(args, list):
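The `re_search` fix above guards against `re.search` returning `None`, which previously raised `AttributeError` on `.group()`. The guarded form in isolation (`safe_re_search` is an illustrative name; it returns the input text unchanged when nothing matches):

```python
import re

def safe_re_search(pattern: str, text: str, group: int) -> str:
    # Only call .group() when a match object actually exists;
    # otherwise pass the original text through untouched.
    rematch = re.search(pattern, text)
    if rematch:
        return rematch.group(group)
    return text
```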


@@ -18,6 +18,7 @@ class TNodeSpider:
_ua = None
_token = None
_size = 100
_timeout = 15
_searchurl = "%sapi/torrent/advancedSearch"
_downloadurl = "%sapi/torrent/download/%s"
_pageurl = "%storrent/info/%s"
@@ -32,6 +33,7 @@ class TNodeSpider:
self._proxy = settings.PROXY
self._cookie = indexer.get('cookie')
self._ua = indexer.get('ua')
self._timeout = indexer.get('timeout') or 15
self.init_config()
def init_config(self):
@@ -43,7 +45,7 @@ class TNodeSpider:
res = RequestUtils(ua=self._ua,
cookies=self._cookie,
proxies=self._proxy,
timeout=15).get_res(url=self._domain)
timeout=self._timeout).get_res(url=self._domain)
if res and res.status_code == 200:
csrf_token = re.search(r'<meta name="x-csrf-token" content="(.+?)">', res.text)
if csrf_token:
@@ -77,7 +79,7 @@ class TNodeSpider:
},
cookies=self._cookie,
proxies=self._proxy,
timeout=15
timeout=self._timeout
).post_res(url=self._searchurl, json=params)
torrents = []
if res and res.status_code == 200:


@@ -17,11 +17,13 @@ class TorrentLeech:
_browseurl = "%storrents/browse/list/page/2%s"
_downloadurl = "%sdownload/%s/%s"
_pageurl = "%storrent/%s"
_timeout = 15
def __init__(self, indexer: CommentedMap):
self._indexer = indexer
if indexer.get('proxy'):
self._proxy = settings.PROXY
self._timeout = indexer.get('timeout') or 15
def search(self, keyword: str, page: int = 0) -> Tuple[bool, List[dict]]:
@@ -40,7 +42,7 @@ class TorrentLeech:
},
cookies=self._indexer.get('cookie'),
proxies=self._proxy,
timeout=15
timeout=self._timeout
).get_res(url)
torrents = []
if res and res.status_code == 200:

app/modules/indexer/yema.py (new file, 148 lines)

@@ -0,0 +1,148 @@
from typing import Tuple, List
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas import MediaType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
class YemaSpider:
"""
YemaPT API
"""
_indexerid = None
_domain = None
_name = ""
_proxy = None
_cookie = None
_ua = None
_size = 40
_searchurl = "%sapi/torrent/fetchCategoryOpenTorrentList"
_downloadurl = "%sapi/torrent/download?id=%s"
_pageurl = "%s#/torrent/detail/%s/"
_timeout = 15
# 分类
_movie_category = 4
_tv_category = 5
def __init__(self, indexer: CommentedMap):
self.systemconfig = SystemConfigOper()
if indexer:
self._indexerid = indexer.get('id')
self._domain = indexer.get('domain')
self._searchurl = self._searchurl % self._domain
self._name = indexer.get('name')
if indexer.get('proxy'):
self._proxy = settings.PROXY
self._cookie = indexer.get('cookie')
self._ua = indexer.get('ua')
self._timeout = indexer.get('timeout') or 15
def search(self, keyword: str, mtype: MediaType = None, page: int = 0) -> Tuple[bool, List[dict]]:
"""
搜索
"""
if not mtype:
categoryId = self._movie_category
elif mtype == MediaType.TV:
categoryId = self._tv_category
else:
categoryId = self._movie_category
params = {
"categoryId": categoryId,
"pageParam": {
"current": page + 1,
"pageSize": self._size,
"total": self._size
},
"sorter": {}
}
if keyword:
params.update({
"keyword": keyword,
})
res = RequestUtils(
headers={
"Content-Type": "application/json",
"User-Agent": f"{self._ua}",
"Accept": "application/json, text/plain, */*"
},
cookies=self._cookie,
proxies=self._proxy,
referer=f"{self._domain}",
timeout=self._timeout
).post_res(url=self._searchurl, json=params)
torrents = []
if res and res.status_code == 200:
results = res.json().get('data', []) or []
for result in results:
category_value = result.get('categoryId')
if category_value == self._tv_category:
category = MediaType.TV.value
elif category_value == self._movie_category:
category = MediaType.MOVIE.value
else:
category = MediaType.UNKNOWN.value
torrent = {
'title': result.get('showName'),
'description': result.get('shortDesc'),
'enclosure': self.__get_download_url(result.get('id')),
'pubdate': StringUtils.unify_datetime_str(result.get('gmtCreate')),
'size': result.get('fileSize'),
'seeders': result.get('seedNum'),
'peers': result.get('leechNum'),
'grabs': result.get('completedNum'),
'downloadvolumefactor': self.__get_downloadvolumefactor(result.get('downloadPromotion')),
'uploadvolumefactor': self.__get_uploadvolumefactor(result.get('uploadPromotion')),
'freedate': StringUtils.unify_datetime_str(result.get('downloadPromotionEndTime')),
'page_url': self._pageurl % (self._domain, result.get('id')),
'labels': [],
'category': category
}
torrents.append(torrent)
elif res is not None:
logger.warn(f"{self._name} 搜索失败,错误码:{res.status_code}")
return True, []
else:
logger.warn(f"{self._name} 搜索失败,无法连接 {self._domain}")
return True, []
return False, torrents
@staticmethod
def __get_downloadvolumefactor(discount: str) -> float:
"""
获取下载系数
"""
discount_dict = {
"free": 0,
"half": 0.5,
"none": 1
}
if discount:
return discount_dict.get(discount, 1)
return 1
@staticmethod
def __get_uploadvolumefactor(discount: str) -> float:
"""
获取上传系数
"""
discount_dict = {
"none": 1,
"one_half": 1.5,
"double_upload": 2
}
if discount:
return discount_dict.get(discount, 1)
return 1
def __get_download_url(self, torrent_id: str) -> str:
"""
获取下载链接
"""
return self._downloadurl % (self._domain, torrent_id)
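The new `YemaSpider` maps promotion labels to volume factors via `dict.get` with a safe fallback, so unknown labels cost full credit instead of raising. The download-side mapping as a standalone sketch:

```python
from typing import Optional

_DOWNLOAD_FACTORS = {"free": 0.0, "half": 0.5, "none": 1.0}

def download_volume_factor(discount: Optional[str]) -> float:
    # Unknown or missing promotion labels fall back to full cost (1.0).
    if discount:
        return _DOWNLOAD_FACTORS.get(discount, 1.0)
    return 1.0
```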


@@ -14,6 +14,10 @@ class JellyfinModule(_ModuleBase):
def init_module(self) -> None:
self.jellyfin = Jellyfin()
@staticmethod
def get_name() -> str:
return "Jellyfin"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "jellyfin"


@@ -52,12 +52,12 @@ class Jellyfin:
return []
req_url = "%sLibrary/SelectableMediaFolders?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json()
else:
logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
return []
res = RequestUtils().get_res(req_url)
if res:
return res.json()
else:
logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/SelectableMediaFolders 出错:" + str(e))
return []
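Every hunk in this file applies the same refactor: the response is assigned directly from `RequestUtils().get_res(...)` instead of being consumed inside a `with ... as res:` block. A minimal sketch of the resulting call shape, with a hypothetical stub standing in for `RequestUtils`:

```python
class FakeResponse:
    """Stand-in for the response object that RequestUtils returns."""
    def __init__(self, payload):
        self._payload = payload

    def json(self):
        return self._payload


def get_res(url):
    # Hypothetical stub for RequestUtils().get_res(); may return None on failure.
    return FakeResponse([{"Name": "Movies"}]) if url else None


def selectable_media_folders(url):
    # After the refactor: call directly, guard against a falsy response.
    res = get_res(url)
    if res:
        return res.json()
    return []


print(selectable_media_folders("http://jellyfin/Library/SelectableMediaFolders"))
```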
@@ -70,29 +70,29 @@ class Jellyfin:
return []
req_url = "%sLibrary/VirtualFolders?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
library_items = res.json()
librarys = []
for library_item in library_items:
library_name = library_item.get('Name')
pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
library_paths = []
for path in pathInfos:
if path.get('NetworkPath'):
library_paths.append(path.get('NetworkPath'))
else:
library_paths.append(path.get('Path'))
res = RequestUtils().get_res(req_url)
if res:
library_items = res.json()
librarys = []
for library_item in library_items:
library_name = library_item.get('Name')
pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
library_paths = []
for path in pathInfos:
if path.get('NetworkPath'):
library_paths.append(path.get('NetworkPath'))
else:
library_paths.append(path.get('Path'))
if library_name and library_paths:
librarys.append({
'Name': library_name,
'Path': library_paths
})
return librarys
else:
logger.error(f"Library/VirtualFolders 未获取到返回数据")
return []
if library_name and library_paths:
librarys.append({
'Name': library_name,
'Path': library_paths
})
return librarys
else:
logger.error(f"Library/VirtualFolders 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/VirtualFolders 出错:" + str(e))
return []
@@ -109,12 +109,12 @@ class Jellyfin:
user = self.user
req_url = f"{self._host}Users/{user}/Views?api_key={self._apikey}"
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json().get("Items")
else:
logger.error(f"Users/Views 未获取到返回数据")
return []
res = RequestUtils().get_res(req_url)
if res:
return res.json().get("Items")
else:
logger.error(f"Users/Views 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Users/Views 出错:" + str(e))
return []
@@ -163,12 +163,12 @@ class Jellyfin:
return 0
req_url = "%sUsers?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return len(res.json())
else:
logger.error(f"Users 未获取到返回数据")
return 0
res = RequestUtils().get_res(req_url)
if res:
return len(res.json())
else:
logger.error(f"Users 未获取到返回数据")
return 0
except Exception as e:
logger.error(f"连接Users出错" + str(e))
return 0
@@ -181,20 +181,20 @@ class Jellyfin:
return None
req_url = "%sUsers?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
users = res.json()
# 先查询是否有与当前用户名称匹配的
if user_name:
for user in users:
if user.get("Name") == user_name:
return user.get("Id")
# 查询管理员
res = RequestUtils().get_res(req_url)
if res:
users = res.json()
# 先查询是否有与当前用户名称匹配的
if user_name:
for user in users:
if user.get("Policy", {}).get("IsAdministrator"):
if user.get("Name") == user_name:
return user.get("Id")
else:
logger.error(f"Users 未获取到返回数据")
# 查询管理员
for user in users:
if user.get("Policy", {}).get("IsAdministrator"):
return user.get("Id")
else:
logger.error(f"Users 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users出错" + str(e))
return None
@@ -244,11 +244,11 @@ class Jellyfin:
return None
req_url = "%sSystem/Info?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json().get("Id")
else:
logger.error(f"System/Info 未获取到返回数据")
res = RequestUtils().get_res(req_url)
if res:
return res.json().get("Id")
else:
logger.error(f"System/Info 未获取到返回数据")
except Exception as e:
logger.error(f"连接System/Info出错" + str(e))
return None
@@ -262,17 +262,17 @@ class Jellyfin:
return schemas.Statistic()
req_url = "%sItems/Counts?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
result = res.json()
return schemas.Statistic(
movie_count=result.get("MovieCount") or 0,
tv_count=result.get("SeriesCount") or 0,
episode_count=result.get("EpisodeCount") or 0
)
else:
logger.error(f"Items/Counts 未获取到返回数据")
return schemas.Statistic()
res = RequestUtils().get_res(req_url)
if res:
result = res.json()
return schemas.Statistic(
movie_count=result.get("MovieCount") or 0,
tv_count=result.get("SeriesCount") or 0,
episode_count=result.get("EpisodeCount") or 0
)
else:
logger.error(f"Items/Counts 未获取到返回数据")
return schemas.Statistic()
except Exception as e:
logger.error(f"连接Items/Counts出错" + str(e))
return schemas.Statistic()
@@ -287,14 +287,14 @@ class Jellyfin:
"api_key=%s&searchTerm=%s&IncludeItemTypes=Series&Limit=10&Recursive=true") % (
self._host, self.user, self._apikey, name)
try:
with RequestUtils().get_res(req_url) as res:
if res:
res_items = res.json().get("Items")
if res_items:
for res_item in res_items:
if res_item.get('Name') == name and (
not year or str(res_item.get('ProductionYear')) == str(year)):
return res_item.get('Id')
res = RequestUtils().get_res(req_url)
if res:
res_items = res.json().get("Items")
if res_items:
for res_item in res_items:
if res_item.get('Name') == name and (
not year or str(res_item.get('ProductionYear')) == str(year)):
return res_item.get('Id')
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
@@ -317,36 +317,36 @@ class Jellyfin:
"api_key=%s&searchTerm=%s&IncludeItemTypes=Movie&Limit=10&Recursive=true") % (
self._host, self.user, self._apikey, title)
try:
with RequestUtils().get_res(req_url) as res:
if res:
res_items = res.json().get("Items")
if res_items:
ret_movies = []
for item in res_items:
item_tmdbid = item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem(
server="jellyfin",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(mediaserver_item)
continue
if mediaserver_item.title == title and (
not year or str(mediaserver_item.year) == str(year)):
res = RequestUtils().get_res(req_url)
if res:
res_items = res.json().get("Items")
if res_items:
ret_movies = []
for item in res_items:
item_tmdbid = item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem(
server="jellyfin",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(mediaserver_item)
return ret_movies
continue
if mediaserver_item.title == title and (
not year or str(mediaserver_item.year) == str(year)):
ret_movies.append(mediaserver_item)
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
return None
@@ -387,25 +387,25 @@ class Jellyfin:
try:
req_url = "%sShows/%s/Episodes?season=%s&&userId=%s&isMissing=false&api_key=%s" % (
self._host, item_id, season, self.user, self._apikey)
with RequestUtils().get_res(req_url) as res_json:
if res_json:
tv_info = res_json.json()
res_items = tv_info.get("Items")
# 返回的季集信息
season_episodes = {}
for res_item in res_items:
season_index = res_item.get("ParentIndexNumber")
if not season_index:
continue
if season and season != season_index:
continue
episode_index = res_item.get("IndexNumber")
if not episode_index:
continue
if not season_episodes.get(season_index):
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
return item_id, season_episodes
res_json = RequestUtils().get_res(req_url)
if res_json:
tv_info = res_json.json()
res_items = tv_info.get("Items")
# 返回的季集信息
season_episodes = {}
for res_item in res_items:
season_index = res_item.get("ParentIndexNumber")
if not season_index:
continue
if season and season != season_index:
continue
episode_index = res_item.get("IndexNumber")
if not episode_index:
continue
if not season_episodes.get(season_index):
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
return item_id, season_episodes
except Exception as e:
logger.error(f"连接Shows/Id/Episodes出错" + str(e))
return None, None
@@ -458,13 +458,13 @@ class Jellyfin:
_host = self._playhost
req_url = "%sItems/%s/Images/%s" % (_host, item_id, image_type)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code != 404:
logger.info(f"影片图片链接:{res.url}")
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
res = RequestUtils().get_res(req_url)
if res and res.status_code != 404:
logger.info(f"影片图片链接:{res.url}")
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
except Exception as e:
logger.error(f"连接Items/Id/Images出错" + str(e))
return None
@@ -479,12 +479,12 @@ class Jellyfin:
"""
req_url = "%sItems/%s/Ancestors?api_key=%s" % (self._host, item_id, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json()[index].get(key)
else:
logger.error(f"Items/Id/Ancestors 未获取到返回数据")
return None
res = RequestUtils().get_res(req_url)
if res:
return res.json()[index].get(key)
else:
logger.error(f"Items/Id/Ancestors 未获取到返回数据")
return None
except Exception as e:
logger.error(f"连接Items/Id/Ancestors出错" + str(e))
return None
@@ -497,11 +497,11 @@ class Jellyfin:
return False
req_url = "%sLibrary/Refresh?api_key=%s" % (self._host, self._apikey)
try:
with RequestUtils().post_res(req_url) as res:
if res:
return True
else:
logger.info(f"刷新媒体库失败无法连接Jellyfin")
res = RequestUtils().post_res(req_url)
if res:
return True
else:
logger.info(f"刷新媒体库失败无法连接Jellyfin")
except Exception as e:
logger.error(f"连接Library/Refresh出错" + str(e))
return False
@@ -589,6 +589,8 @@ class Jellyfin:
eventItem.item_id = message.get('ItemId')
eventItem.tmdb_id = message.get('Provider_tmdb')
eventItem.overview = message.get('Overview')
eventItem.item_favorite = message.get('Favorite')
eventItem.save_reason = message.get('SaveReason')
eventItem.device_name = message.get('DeviceName')
eventItem.user_name = message.get('NotificationUsername')
eventItem.client = message.get('ClientName')
@@ -611,6 +613,11 @@ class Jellyfin:
eventItem.item_name = "%s %s" % (
message.get('Name'), "(" + str(message.get('Year')) + ")")
playback_position_ticks = message.get('PlaybackPositionTicks')
runtime_ticks = message.get('RunTimeTicks')
if playback_position_ticks is not None and runtime_ticks is not None:
eventItem.percentage = playback_position_ticks / runtime_ticks * 100
# 获取消息图片
if eventItem.item_id:
# 根据返回的item_id去调用媒体服务器获取
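The guard added above only computes a playback percentage when both tick fields are present in the webhook payload. A standalone sketch of the same computation (field names follow the Jellyfin webhook message shown above; this version also treats a zero runtime as missing, to avoid division by zero):

```python
def playback_percentage(message: dict):
    # Position and runtime are reported in ticks (100-ns units);
    # only compute a percentage when both fields are usable.
    position = message.get('PlaybackPositionTicks')
    runtime = message.get('RunTimeTicks')
    if position is not None and runtime:
        return position / runtime * 100
    return None


print(playback_percentage({'PlaybackPositionTicks': 3000, 'RunTimeTicks': 12000}))  # 25.0
print(playback_percentage({'PlaybackPositionTicks': 3000}))                         # None
```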
@@ -632,23 +639,23 @@ class Jellyfin:
req_url = "%sUsers/%s/Items/%s?api_key=%s" % (
self._host, self.user, itemid, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code == 200:
item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem(
server="jellyfin",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem(
server="jellyfin",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
except Exception as e:
logger.error(f"连接Users/Items出错" + str(e))
return None
@@ -663,17 +670,17 @@ class Jellyfin:
yield None
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code == 200:
results = res.json().get("Items") or []
for result in results:
if not result:
continue
if result.get("Type") in ["Movie", "Series"]:
yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"):
for item in self.get_items(result.get("Id")):
yield item
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
results = res.json().get("Items") or []
for result in results:
if not result:
continue
if result.get("Type") in ["Movie", "Series"]:
yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"):
for item in self.get_items(result.get("Id")):
yield item
except Exception as e:
logger.error(f"连接Users/Items出错" + str(e))
yield None
@@ -761,50 +768,50 @@ class Jellyfin:
req_url = (f"{self._host}Users/{user}/Items/Resume?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
with RequestUtils().get_res(req_url) as res:
if res:
result = res.json().get("Items") or []
ret_resume = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_resume) == num:
break
if item.get("Type") not in ["Movie", "Episode"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
# 小部分剧集无[xxx-S01E01-thumb.jpg]图片
with RequestUtils().get_res(image) as image_res:
if not image_res or image_res.status_code == 404:
image = self.generate_image_link(item.get("Id"), "Backdrop", False)
if item_type == MediaType.MOVIE.value:
title = item.get("Name")
subtitle = item.get("ProductionYear")
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
ret_resume.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=title,
subtitle=subtitle,
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
res = RequestUtils().get_res(req_url)
if res:
result = res.json().get("Items") or []
ret_resume = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_resume) == num:
break
if item.get("Type") not in ["Movie", "Episode"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
# 小部分剧集无[xxx-S01E01-thumb.jpg]图片
image_res = RequestUtils().get_res(image)
if not image_res or image_res.status_code == 404:
image = self.generate_image_link(item.get("Id"), "Backdrop", False)
if item_type == MediaType.MOVIE.value:
title = item.get("Name")
subtitle = item.get("ProductionYear")
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
ret_resume.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=title,
subtitle=subtitle,
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Resume出错" + str(e))
return []
@@ -822,35 +829,35 @@ class Jellyfin:
req_url = (f"{self._host}Users/{user}/Items/Latest?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
with RequestUtils().get_res(req_url) as res:
if res:
result = res.json() or []
ret_latest = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_latest) == num:
break
if item.get("Type") not in ["Movie", "Series"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=item.get("Name"),
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
))
return ret_latest
else:
logger.error(f"Users/Items/Latest 未获取到返回数据")
res = RequestUtils().get_res(req_url)
if res:
result = res.json() or []
ret_latest = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_latest) == num:
break
if item.get("Type") not in ["Movie", "Series"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=item.get("Name"),
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
))
return ret_latest
else:
logger.error(f"Users/Items/Latest 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Latest出错" + str(e))
return []


@@ -14,6 +14,10 @@ class PlexModule(_ModuleBase):
def init_module(self) -> None:
self.plex = Plex()
@staticmethod
def get_name() -> str:
return "Plex"
def stop(self):
pass


@@ -4,7 +4,7 @@ from pathlib import Path
from typing import List, Optional, Dict, Tuple, Generator, Any
from urllib.parse import quote_plus
from plexapi import media, utils
from plexapi import media
from plexapi.server import PlexServer
from app import schemas
@@ -236,16 +236,19 @@ class Plex:
if item_id:
videos = self._plex.fetchItem(item_id)
else:
# 兼容年份为空的场景
kwargs = {"year": year} if year else {}
# 根据标题和年份模糊搜索,该结果不够准确
videos = self._plex.library.search(title=title,
year=year,
libtype="show")
libtype="show",
**kwargs)
if (not videos
and original_title
and str(original_title) != str(title)):
videos = self._plex.library.search(title=original_title,
year=year,
libtype="show")
libtype="show",
**kwargs)
if not videos:
return None, {}
if isinstance(videos, list):
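The `kwargs = {"year": year} if year else {}` pattern above omits the year filter entirely when no year is known, rather than passing `year=None` into `library.search`. Sketched with a hypothetical stand-in for the search call:

```python
def search(title, libtype, **filters):
    # Hypothetical stand-in for plexapi's library.search: every keyword
    # it receives becomes a filter, so passing year=None would filter on None.
    return {"title": title, "libtype": libtype, **filters}


def find_show(title, year=None):
    kwargs = {"year": year} if year else {}  # drop the filter when year is empty
    return search(title, libtype="show", **kwargs)


print(find_show("Foundation", 2021))  # includes a 'year' filter
print(find_show("Foundation"))        # no 'year' key at all
```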
@@ -314,7 +317,7 @@ class Plex:
# 否则一个一个刷新
for path, lib_key in result_dict.items():
logger.info(f"刷新媒体库:{lib_key} - {path}")
self._plex.query(f'/library/sections/{lib_key}/refresh?path={quote_plus(path)}')
self._plex.query(f'/library/sections/{lib_key}/refresh?path={quote_plus(str(Path(path).parent))}')
@staticmethod
def __find_librarie(path: Path, libraries: List[Any]) -> Tuple[str, str]:
@@ -375,26 +378,34 @@ class Plex:
@staticmethod
def __get_ids(guids: List[Any]) -> dict:
def parse_tmdb_id(value: str) -> (bool, int):
"""尝试将TMDB ID字符串转换为整数。如果成功返回(True, int),失败则返回(False, None)。"""
try:
int_value = int(value)
return True, int_value
except ValueError:
return False, None
guid_mapping = {
"imdb://": "imdb_id",
"tmdb://": "tmdb_id",
"tvdb://": "tvdb_id"
}
ids = {}
for prefix, varname in guid_mapping.items():
ids[varname] = None
ids = {varname: None for varname in guid_mapping.values()}
for guid in guids:
guid_id = guid['id'] if isinstance(guid, dict) else guid.id
for prefix, varname in guid_mapping.items():
if isinstance(guid, dict):
if guid['id'].startswith(prefix):
# 找到匹配的ID
ids[varname] = guid['id'][len(prefix):]
break
else:
if guid.id.startswith(prefix):
# 找到匹配的ID
ids[varname] = guid.id[len(prefix):]
break
if guid_id.startswith(prefix):
clean_id = guid_id[len(prefix):]
if varname == "tmdb_id":
# tmdb_id为intPlex可能存在脏数据特别处理tmdb_id
success, parsed_id = parse_tmdb_id(clean_id)
if success:
ids[varname] = parsed_id
else:
ids[varname] = clean_id
break
return ids
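The rewritten `__get_ids` above normalizes each guid to a string, strips the matching provider prefix, and special-cases `tmdb_id` because Plex occasionally stores non-numeric values there. A self-contained sketch of the same parsing (dirty TMDB values are kept as strings, mirroring the code above):

```python
def parse_tmdb_id(value: str):
    # Return (True, int) on success, (False, None) on dirty data.
    try:
        return True, int(value)
    except ValueError:
        return False, None


GUID_MAPPING = {"imdb://": "imdb_id", "tmdb://": "tmdb_id", "tvdb://": "tvdb_id"}


def get_ids(guids):
    # One slot per provider, defaulting to None.
    ids = {name: None for name in GUID_MAPPING.values()}
    for guid in guids:
        # Plex may hand back dicts or objects with an .id attribute.
        guid_id = guid['id'] if isinstance(guid, dict) else guid.id
        for prefix, name in GUID_MAPPING.items():
            if guid_id.startswith(prefix):
                clean_id = guid_id[len(prefix):]
                if name == "tmdb_id":
                    # TMDB ids should be ints; keep dirty values as-is.
                    ok, parsed = parse_tmdb_id(clean_id)
                    ids[name] = parsed if ok else clean_id
                else:
                    ids[name] = clean_id
                break
    return ids


print(get_ids([{'id': 'imdb://tt0111161'}, {'id': 'tmdb://278'}]))
```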
def get_items(self, parent: str) -> Generator:
@@ -409,25 +420,29 @@ class Plex:
section = self._plex.library.sectionByID(int(parent))
if section:
for item in section.all():
if not item:
try:
if not item:
continue
ids = self.__get_ids(item.guids)
path = None
if item.locations:
path = item.locations[0]
yield schemas.MediaServerItem(
server="plex",
library=item.librarySectionID,
item_id=item.key,
item_type=item.type,
title=item.title,
original_title=item.originalTitle,
year=item.year,
tmdbid=ids['tmdb_id'],
imdbid=ids['imdb_id'],
tvdbid=ids['tvdb_id'],
path=path,
)
except Exception as e:
logger.error(f"处理媒体项目时出错:{str(e)}, 跳过此项目。")
continue
ids = self.__get_ids(item.guids)
path = None
if item.locations:
path = item.locations[0]
yield schemas.MediaServerItem(
server="plex",
library=item.librarySectionID,
item_id=item.key,
item_type=item.type,
title=item.title,
original_title=item.originalTitle,
year=item.year,
tmdbid=ids['tmdb_id'],
imdbid=ids['imdb_id'],
tvdbid=ids['tvdb_id'],
path=path,
)
except Exception as err:
logger.error(f"获取媒体库列表出错:{str(err)}")
yield None
@@ -616,9 +631,8 @@ class Plex:
return []
# 媒体库白名单
allow_library = ",".join([lib.id for lib in self.get_librarys()])
query = {'contentDirectoryID': allow_library}
path = '/hubs/continueWatching/items' + utils.joinArgs(query)
items = self._plex.fetchItems(path, container_start=0, container_size=num)
params = {'contentDirectoryID': allow_library}
items = self._plex.fetchItems("/hubs/continueWatching/items", container_start=0, container_size=num, params=params)
ret_resume = []
for item in items:
item_type = MediaType.MOVIE.value if item.TYPE == "movie" else MediaType.TV.value
@@ -647,20 +661,65 @@ class Plex:
"""
if not self._plex:
return None
items = self._plex.fetchItems('/library/recentlyAdded', container_start=0, container_size=num)
# 请求参数(除黑名单)
allow_library = ",".join([lib.id for lib in self.get_librarys()])
params = {
"contentDirectoryID": allow_library,
"count": num,
"excludeContinueWatching": 1
}
ret_resume = []
for item in items:
item_type = MediaType.MOVIE.value if item.TYPE == "movie" else MediaType.TV.value
link = self.get_play_url(item.key)
title = item.title if item_type == MediaType.MOVIE.value else \
"%s%s" % (item.parentTitle, item.index)
image = item.posterUrl
ret_resume.append(schemas.MediaServerPlayItem(
id=item.key,
title=title,
subtitle=item.year,
type=item_type,
image=image,
link=link
))
sub_result = []
offset = 0
while True:
if len(ret_resume) >= num:
break
# 获取所有资料库
hubs = self._plex.fetchItems(
'/hubs/promoted',
container_start=offset,
container_size=num,
maxresults=num,
params=params
)
if len(hubs) == 0:
break
# 合并排序
for hub in hubs:
for item in hub.items:
sub_result.append(item)
sub_result.sort(key=lambda x: x.addedAt, reverse=True)
for item in sub_result:
if len(ret_resume) >= num:
break
item_type, title, image = "", "", ""
if item.TYPE == "movie":
item_type = MediaType.MOVIE.value
title = item.title
image = item.posterUrl
elif item.TYPE == "season":
item_type = MediaType.TV.value
title = "%s%s" % (item.parentTitle, item.index)
image = item.posterUrl
elif item.TYPE == "episode":
item_type = MediaType.TV.value
title = "%s%s季 第%s" % (item.grandparentTitle, item.parentIndex, item.index)
thumb = (item.parentThumb or item.grandparentThumb or '').lstrip('/')
image = (self._host + thumb + f"?X-Plex-Token={self._token}")
elif item.TYPE == "show":
item_type = MediaType.TV.value
title = "%s%s" % (item.title, item.seasonCount)
image = item.posterUrl
link = self.get_play_url(item.key)
ret_resume.append(schemas.MediaServerPlayItem(
id=item.key,
title=title,
subtitle=item.year,
type=item_type,
image=image,
link=link
))
offset += num
return ret_resume[:num]
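The new latest-items logic above pages through `/hubs/promoted`, flattens the hubs' items, sorts by `addedAt` descending, and caps the result at `num`. The pagination skeleton, decoupled from plexapi (`fetch_page` is a hypothetical stand-in for `self._plex.fetchItems`):

```python
def collect_latest(fetch_page, num):
    # Page through a hub endpoint until we have num items or the pages run dry.
    collected = []
    offset = 0
    while len(collected) < num:
        page = fetch_page(offset, num)  # hypothetical pager
        if not page:
            break
        collected.extend(page)
        offset += num
    # Newest first, capped at num entries.
    collected.sort(key=lambda item: item["addedAt"], reverse=True)
    return collected[:num]


data = [{"addedAt": t} for t in (3, 1, 4, 1, 5, 9, 2, 6)]
pager = lambda off, size: data[off:off + size]
print(collect_latest(pager, 3))
```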


@@ -23,6 +23,10 @@ class QbittorrentModule(_ModuleBase):
def init_module(self) -> None:
self.qbittorrent = Qbittorrent()
@staticmethod
def get_name() -> str:
return "Qbittorrent"
def stop(self):
pass
@@ -188,11 +192,12 @@ class QbittorrentModule(_ModuleBase):
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = settings.SAVE_PATH / torrent.get('name')
torrent_path = Path(torrent.get('save_path')) / torrent.get('name')
ret_torrents.append(TransferTorrent(
title=torrent.get('name'),
path=torrent_path,
hash=torrent.get('hash'),
size=torrent.get('total_size'),
tags=torrent.get('tags')
))
elif status == TorrentStatus.TRANSFER:
@@ -207,7 +212,7 @@ class QbittorrentModule(_ModuleBase):
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = settings.SAVE_PATH / torrent.get('name')
torrent_path = Path(torrent.get('save_path')) / torrent.get('name')
ret_torrents.append(TransferTorrent(
title=torrent.get('name'),
path=torrent_path,
@@ -238,7 +243,7 @@ class QbittorrentModule(_ModuleBase):
return None
return ret_torrents
def transfer_completed(self, hashs: Union[str, list], path: Path = None,
def transfer_completed(self, hashs: str, path: Path = None,
downloader: str = settings.DEFAULT_DOWNLOADER) -> None:
"""
转移完成后的处理


@@ -16,6 +16,10 @@ class SlackModule(_ModuleBase):
def init_module(self) -> None:
self.slack = Slack()
@staticmethod
def get_name() -> str:
return "Slack"
def stop(self):
self.slack.stop()


@@ -28,6 +28,10 @@ class SubtitleModule(_ModuleBase):
def init_module(self) -> None:
pass
@staticmethod
def get_name() -> str:
return "站点字幕"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass


@@ -13,6 +13,10 @@ class SynologyChatModule(_ModuleBase):
def init_module(self) -> None:
self.synologychat = SynologyChat()
@staticmethod
def get_name() -> str:
return "Synology Chat"
def stop(self):
pass


@@ -15,6 +15,10 @@ class TelegramModule(_ModuleBase):
def init_module(self) -> None:
self.telegram = Telegram()
@staticmethod
def get_name() -> str:
return "Telegram"
def stop(self):
self.telegram.stop()


@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Optional, List, Tuple, Union
from typing import Optional, List, Tuple, Union, Dict
import cn2an
@@ -39,6 +39,10 @@ class TheMovieDbModule(_ModuleBase):
self.category = CategoryHelper()
self.scraper = TmdbScraper(self.tmdb)
@staticmethod
def get_name() -> str:
return "TheMovieDb"
def stop(self):
self.cache.save()
self.tmdb.close()
@@ -221,6 +225,16 @@ class TheMovieDbModule(_ModuleBase):
"""
return self.tmdb.get_info(mtype=mtype, tmdbid=tmdbid)
def media_category(self) -> Optional[Dict[str, list]]:
"""
获取媒体分类
:return: 获取二级分类配置字典项,需包括电影、电视剧
"""
return {
MediaType.MOVIE.value: list(self.category.movie_categorys),
MediaType.TV.value: list(self.category.tv_categorys)
}
def search_medias(self, meta: MetaBase) -> Optional[List[MediaInfo]]:
"""
搜索媒体信息
@@ -531,7 +545,7 @@ class TheMovieDbModule(_ModuleBase):
detail = self.tmdb.get_person_detail(person_id=person_id)
if detail:
return schemas.MediaPerson(source="themoviedb", **detail)
return schemas.MediaPerson
return schemas.MediaPerson()
def tmdb_person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
"""


@@ -15,7 +15,6 @@ class CategoryHelper(metaclass=Singleton):
_categorys = {}
_movie_categorys = {}
_tv_categorys = {}
_anime_categorys = {}
def __init__(self):
self._category_path: Path = settings.CONFIG_PATH / "category.yaml"
@@ -25,9 +24,6 @@ class CategoryHelper(metaclass=Singleton):
"""
初始化
"""
# 二级分类策略关闭
if not settings.LIBRARY_CATEGORY:
return
try:
if not self._category_path.exists():
shutil.copy(settings.INNER_CONFIG_PATH / "category.yaml", self._category_path)
@@ -44,7 +40,6 @@ class CategoryHelper(metaclass=Singleton):
if self._categorys:
self._movie_categorys = self._categorys.get('movie')
self._tv_categorys = self._categorys.get('tv')
self._anime_categorys = self._categorys.get('anime')
logger.info(f"已加载二级分类策略 category.yaml")
@property
@@ -83,15 +78,6 @@ class CategoryHelper(metaclass=Singleton):
return []
return self._tv_categorys.keys()
@property
def anime_categorys(self) -> list:
"""
获取动漫分类清单
"""
if not self._anime_categorys:
return []
return self._anime_categorys.keys()
def get_movie_category(self, tmdb_info) -> str:
"""
判断电影的分类
@@ -106,10 +92,6 @@ class CategoryHelper(metaclass=Singleton):
:param tmdb_info: 识别的TMDB中的信息
:return: 二级分类的名称
"""
genre_ids = tmdb_info.get("genre_ids") or []
if self._anime_categorys and genre_ids \
and set(genre_ids).intersection(set(settings.ANIME_GENREIDS)):
return self.get_category(self._anime_categorys, tmdb_info)
return self.get_category(self._tv_categorys, tmdb_info)
@staticmethod
@@ -144,7 +126,7 @@ class CategoryHelper(metaclass=Singleton):
info_values = [str(info_value).upper()]
if value.find(",") != -1:
values = [str(val).upper() for val in value.split(",")]
values = [str(val).upper() for val in value.split(",") if val]
else:
values = [str(value).upper()]
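The one-line change above drops empty fragments produced by stray or trailing commas in a comma-separated filter value; sketched as a standalone helper:

```python
def split_values(value: str):
    # Split a comma-separated filter value, upper-case each piece,
    # and drop empty fragments left by stray commas.
    if "," in value:
        return [str(val).upper() for val in value.split(",") if val]
    return [str(value).upper()]


print(split_values("action,drama,"))  # ['ACTION', 'DRAMA']
print(split_values("comedy"))         # ['COMEDY']
```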


@@ -176,7 +176,7 @@ class TmdbApi:
return None
# TMDB搜索
info = {}
if mtype == MediaType.MOVIE:
if mtype != MediaType.TV:
year_range = [year]
if year:
year_range.append(str(int(year) + 1))
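The `year_range` built above widens a title search to the following year, tolerating off-by-one release dates between sources; a standalone sketch of the same construction:

```python
def build_year_range(year):
    # Search the given year plus the next one; a missing year
    # yields a single falsy entry, which upstream treats as "no filter".
    year_range = [year]
    if year:
        year_range.append(str(int(year) + 1))
    return year_range


print(build_year_range("2023"))  # ['2023', '2024']
print(build_year_range(None))    # [None]
```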
@@ -530,7 +530,7 @@ class TmdbApi:
tmdbid: int) -> dict:
"""
给定TMDB号查询一条媒体信息
:param mtype: 类型:电影、电视剧、动漫,为空时都查(此时用不上年份)
:param mtype: 类型:电影、电视剧,为空时都查(此时用不上年份)
:param tmdbid: TMDB的ID有tmdbid时优先使用tmdbid否则使用年份和标题
"""
@@ -759,10 +759,10 @@ class TmdbApi:
if not self.movie:
return {}
try:
logger.info("正在查询TMDB电影%s ..." % tmdbid)
logger.debug("正在查询TMDB电影%s ..." % tmdbid)
tmdbinfo = self.movie.details(tmdbid, append_to_response)
if tmdbinfo:
logger.info(f"{tmdbid} 查询结果:{tmdbinfo.get('title')}")
logger.debug(f"{tmdbid} 查询结果:{tmdbinfo.get('title')}")
return tmdbinfo or {}
except Exception as e:
print(str(e))
@@ -942,10 +942,10 @@ class TmdbApi:
if not self.tv:
return {}
try:
logger.info("正在查询TMDB电视剧%s ..." % tmdbid)
logger.debug("正在查询TMDB电视剧%s ..." % tmdbid)
tmdbinfo = self.tv.details(tv_id=tmdbid, append_to_response=append_to_response)
if tmdbinfo:
logger.info(f"{tmdbid} 查询结果:{tmdbinfo.get('name')}")
logger.debug(f"{tmdbid} 查询结果:{tmdbinfo.get('name')}")
return tmdbinfo or {}
except Exception as e:
print(str(e))
@@ -1018,7 +1018,7 @@ class TmdbApi:
if not self.season:
return {}
try:
logger.info("正在查询TMDB电视剧%s,季:%s ..." % (tmdbid, season))
logger.debug("正在查询TMDB电视剧%s,季:%s ..." % (tmdbid, season))
tmdbinfo = self.season.details(tv_id=tmdbid, season_num=season)
return tmdbinfo or {}
except Exception as e:
@@ -1035,7 +1035,7 @@ class TmdbApi:
if not self.episode:
return {}
try:
logger.info("正在查询TMDB集详情%s,季:%s,集:%s ..." % (tmdbid, season, episode))
logger.debug("正在查询TMDB集详情%s,季:%s,集:%s ..." % (tmdbid, season, episode))
tmdbinfo = self.episode.details(tv_id=tmdbid, season_num=season, episode_num=episode)
return tmdbinfo or {}
except Exception as e:
@@ -1051,8 +1051,9 @@ class TmdbApi:
if not self.discover:
return []
try:
logger.info(f"正在发现电影:{kwargs}...")
tmdbinfo = self.discover.discover_movies(kwargs)
logger.debug(f"正在发现电影:{kwargs}...")
params_tuple = tuple(kwargs.items())
tmdbinfo = self.discover.discover_movies(params_tuple)
if tmdbinfo:
for info in tmdbinfo:
info['media_type'] = MediaType.MOVIE
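Passing `tuple(kwargs.items())` instead of the dict makes the discover parameters hashable, which is what the `unhashable type: 'dict'` fix in this changeset addresses: a hashable argument can serve as a cache key. A minimal demonstration with `functools.lru_cache` (the `discover` function here is a stand-in, not the TMDB API):

```python
from functools import lru_cache


@lru_cache(maxsize=32)
def discover(params_tuple):
    # params_tuple is hashable, so lru_cache can key on it.
    return dict(params_tuple)


kwargs = {"with_genres": "16", "page": 1}

try:
    discover(kwargs)  # a dict cannot be hashed
except TypeError as e:
    print(e)          # unhashable type: 'dict'

print(discover(tuple(kwargs.items())))
```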
@@ -1070,7 +1071,7 @@ class TmdbApi:
if not self.discover:
return []
try:
logger.info(f"正在发现电视剧:{kwargs}...")
logger.debug(f"正在发现电视剧:{kwargs}...")
tmdbinfo = self.discover.discover_tv_shows(kwargs)
if tmdbinfo:
for info in tmdbinfo:
@@ -1087,7 +1088,7 @@ class TmdbApi:
if not self.movie:
return {}
try:
logger.info(f"正在获取电影图片:{tmdbid}...")
logger.debug(f"正在获取电影图片:{tmdbid}...")
return self.movie.images(movie_id=tmdbid) or {}
except Exception as e:
print(str(e))
@@ -1100,7 +1101,7 @@ class TmdbApi:
if not self.tv:
return {}
try:
logger.info(f"正在获取电视剧图片:{tmdbid}...")
logger.debug(f"正在获取电视剧图片:{tmdbid}...")
return self.tv.images(tv_id=tmdbid) or {}
except Exception as e:
print(str(e))
@@ -1113,7 +1114,7 @@ class TmdbApi:
if not self.movie:
return []
try:
logger.info(f"正在获取相似电影:{tmdbid}...")
logger.debug(f"正在获取相似电影:{tmdbid}...")
return self.movie.similar(movie_id=tmdbid) or []
except Exception as e:
print(str(e))
@@ -1126,7 +1127,7 @@ class TmdbApi:
if not self.tv:
return []
try:
logger.info(f"正在获取相似电视剧:{tmdbid}...")
logger.debug(f"正在获取相似电视剧:{tmdbid}...")
return self.tv.similar(tv_id=tmdbid) or []
except Exception as e:
print(str(e))
@@ -1139,7 +1140,7 @@ class TmdbApi:
if not self.movie:
return []
try:
logger.info(f"正在获取推荐电影:{tmdbid}...")
logger.debug(f"正在获取推荐电影:{tmdbid}...")
return self.movie.recommendations(movie_id=tmdbid) or []
except Exception as e:
print(str(e))
@@ -1152,7 +1153,7 @@ class TmdbApi:
if not self.tv:
return []
try:
logger.info(f"正在获取推荐电视剧:{tmdbid}...")
logger.debug(f"正在获取推荐电视剧:{tmdbid}...")
return self.tv.recommendations(tv_id=tmdbid) or []
except Exception as e:
print(str(e))
@@ -1165,7 +1166,7 @@ class TmdbApi:
if not self.movie:
return []
try:
logger.info(f"正在获取电影演职人员:{tmdbid}...")
logger.debug(f"正在获取电影演职人员:{tmdbid}...")
info = self.movie.credits(movie_id=tmdbid) or {}
cast = info.get('cast') or []
if cast:
@@ -1182,7 +1183,7 @@ class TmdbApi:
if not self.tv:
return []
try:
logger.info(f"正在获取电视剧演职人员:{tmdbid}...")
logger.debug(f"正在获取电视剧演职人员:{tmdbid}...")
info = self.tv.credits(tv_id=tmdbid) or {}
cast = info.get('cast') or []
if cast:
@@ -1219,7 +1220,7 @@ class TmdbApi:
if not self.person:
return {}
try:
logger.info(f"正在获取人物详情:{person_id}...")
logger.debug(f"正在获取人物详情:{person_id}...")
return self.person.details(person_id=person_id) or {}
except Exception as e:
print(str(e))
@@ -1232,7 +1233,7 @@ class TmdbApi:
if not self.person:
return []
try:
logger.info(f"正在获取人物参演作品:{person_id}...")
logger.debug(f"正在获取人物参演作品:{person_id}...")
movies = self.person.movie_credits(person_id=person_id) or {}
tvs = self.person.tv_credits(person_id=person_id) or {}
cast = (movies.get('cast') or []) + (tvs.get('cast') or [])
@@ -1262,7 +1263,7 @@ class TmdbApi:
return {}
episode_years = {}
for episode_group in episode_groups:
logger.info(f"正在获取剧集组年份:{episode_group.get('id')}...")
logger.debug(f"正在获取剧集组年份:{episode_group.get('id')}...")
if episode_group.get('type') != 6:
# 只处理剧集部分
continue


@@ -1,4 +1,5 @@
from ..tmdb import TMDb
from cachetools import cached, TTLCache
try:
from urllib import urlencode
@@ -12,14 +13,17 @@ class Discover(TMDb):
"tv": "/discover/tv"
}
def discover_movies(self, params):
@cached(cache=TTLCache(maxsize=1, ttl=43200))
def discover_movies(self, params_tuple):
"""
Discover movies by different types of data like average rating, number of votes, genres and certifications.
:param params: dict
:return:
"""
return self._request_obj(self._urls["movies"], urlencode(params), key="results")
params = dict(params_tuple)
return self._request_obj(self._urls["movies"], urlencode(params), key="results", call_cached=False)
@cached(cache=TTLCache(maxsize=1, ttl=43200))
def discover_tv_shows(self, params):
"""
Discover TV shows by different types of data like average rating, number of votes, genres,
@@ -27,4 +31,4 @@ class Discover(TMDb):
:param params: dict
:return:
"""
return self._request_obj(self._urls["tv"], urlencode(params), key="results")
return self._request_obj(self._urls["tv"], urlencode(params), key="results", call_cached=False)
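The `discover_movies` change above is the fix for "unhashable type: 'dict'": `@cached(cache=TTLCache(...))` builds a cache key from the call arguments, so a `dict` of query params cannot be passed directly — the caller freezes it with `tuple(kwargs.items())` and the function rebuilds the dict inside. A minimal sketch of the same hashability constraint using the stdlib `functools.lru_cache` (the repo uses cachetools' `TTLCache`, but the key-building requirement is identical):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def discover(params_tuple):
    # Rebuild the dict inside the cached function, as the patched
    # discover_movies does with dict(params_tuple).
    params = dict(params_tuple)
    return f"query: {sorted(params.items())}"

params = {"sort_by": "popularity.desc", "page": 1}
try:
    # Passing the dict directly is what raised "unhashable type: 'dict'".
    discover(params)  # type: ignore[arg-type]
except TypeError as e:
    print(e)

# The fix: freeze the dict into a hashable tuple of items first.
result = discover(tuple(params.items()))
```

Note that `discover_tv_shows` in the same hunk is cached but still takes a plain `params` dict, so it relies on its caller passing something hashable.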


@@ -1,14 +1,21 @@
from cachetools import cached, TTLCache
from ..tmdb import TMDb
class Trending(TMDb):
_urls = {"trending": "/trending/%s/%s"}
@cached(cache=TTLCache(maxsize=1, ttl=43200))
def _trending(self, media_type="all", time_window="day", page=1):
"""
Get trending, TTLCache 12 hours
"""
return self._request_obj(
self._urls["trending"] % (media_type, time_window),
params="page=%s" % page,
key="results",
call_cached=False
)
def all_day(self, page=1):


@@ -141,7 +141,7 @@ class TMDb(object):
def cached_request(self, method, url, data, json,
_ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
缓存请求,时间默认1天,None不缓存
缓存请求
"""
return self.request(method, url, data, json)


@@ -16,6 +16,10 @@ class TheTvDbModule(_ModuleBase):
select_first=True,
proxies=settings.PROXY)
@staticmethod
def get_name() -> str:
return "TheTvDb"
def stop(self):
self.tvdb.close()


@@ -23,6 +23,10 @@ class TransmissionModule(_ModuleBase):
def init_module(self) -> None:
self.transmission = Transmission()
@staticmethod
def get_name() -> str:
return "Transmission"
def stop(self):
pass
@@ -181,6 +185,7 @@ class TransmissionModule(_ModuleBase):
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
size=torrent.total_size,
tags=",".join(torrent.labels or [])
))
elif status == TorrentStatus.TRANSFER:
@@ -226,7 +231,7 @@ class TransmissionModule(_ModuleBase):
return None
return ret_torrents
def transfer_completed(self, hashs: Union[str, list], path: Path = None,
def transfer_completed(self, hashs: str, path: Path = None,
downloader: str = settings.DEFAULT_DOWNLOADER) -> None:
"""
转移完成后的处理
@@ -237,7 +242,14 @@ class TransmissionModule(_ModuleBase):
"""
if downloader != "transmission":
return None
self.transmission.set_torrent_tag(ids=hashs, tags=['已整理'])
# 获取原标签
org_tags = self.transmission.get_torrent_tags(ids=hashs)
# 种子打上已整理标签
if org_tags:
tags = org_tags + ['已整理']
else:
tags = ['已整理']
self.transmission.set_torrent_tag(ids=hashs, tags=tags)
# 移动模式删除种子
if settings.TRANSFER_TYPE == "move":
if self.remove_torrents(hashs):


@@ -2,7 +2,7 @@ from typing import Optional, Union, Tuple, List, Dict
import transmission_rpc
from transmission_rpc import Client, Torrent, File
from transmission_rpc.session import SessionStats
from transmission_rpc.session import SessionStats, Session
from app.core.config import settings
from app.log import logger
@@ -130,21 +130,38 @@ class Transmission:
logger.error(f"获取正在下载的种子列表出错:{str(err)}")
return None
def set_torrent_tag(self, ids: str, tags: list) -> bool:
def set_torrent_tag(self, ids: str, tags: list, org_tags: list = None) -> bool:
"""
设置种子标签
设置种子标签,注意TR默认会覆盖原有标签,如需追加需传入原有标签
"""
if not self.trc:
return False
if not ids or not tags:
return False
try:
self.trc.change_torrent(labels=tags, ids=ids)
self.trc.change_torrent(labels=list(set((org_tags or []) + tags)), ids=ids)
return True
except Exception as err:
logger.error(f"设置种子标签出错:{str(err)}")
return False
def get_torrent_tags(self, ids: str) -> List[str]:
"""
获取所有种子标签
"""
if not self.trc:
return []
try:
torrent = self.trc.get_torrents(ids=ids, arguments=self._trarg)
if torrent:
labels = [str(tag).strip()
for tag in torrent.labels] if hasattr(torrent, "labels") else []
return labels
except Exception as err:
logger.error(f"获取种子标签出错:{str(err)}")
return []
return []
def add_torrent(self, content: Union[str, bytes],
is_paused: bool = False,
download_dir: str = None,
@@ -397,15 +414,15 @@ class Transmission:
logger.error(f"修改tracker出错:{str(err)}")
return False
def get_session(self) -> Dict[str, Union[int, bool, str]]:
def get_session(self) -> Optional[Session]:
"""
获取Transmission当前的会话信息和配置设置
:return dict or False
:return dict
"""
if not self.trc:
return False
return None
try:
return self.trc.get_session()
except Exception as err:
logger.error(f"获取session出错:{str(err)}")
return False
return None
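The `transfer_completed` change works around `transmission_rpc`'s `change_torrent(labels=...)` replacing the whole label list: the module now reads the existing tags with `get_torrent_tags` and passes them back alongside `已整理`. The merge itself is pure Python and can be sketched without a running Transmission (the real code uses `list(set(...))`, which is unordered; `sorted` is used here only to make the result deterministic):

```python
def merge_labels(org_tags, new_tags):
    """Merge existing torrent labels with new ones, dropping duplicates.
    Mirrors list(set((org_tags or []) + tags)) in set_torrent_tag."""
    return sorted(set((org_tags or []) + new_tags))

# A torrent already carrying a site label keeps it after tagging.
labels = merge_labels(["MTeam"], ["已整理"])
```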


@@ -15,6 +15,10 @@ class VoceChatModule(_ModuleBase):
def init_module(self) -> None:
self.vocechat = VoceChat()
@staticmethod
def get_name() -> str:
return "VoceChat"
def stop(self):
pass


@@ -17,6 +17,10 @@ class WechatModule(_ModuleBase):
def init_module(self) -> None:
self.wechat = WeChat()
@staticmethod
def get_name() -> str:
return "微信"
def stop(self):
pass


@@ -71,18 +71,18 @@ class WeChat:
return None
try:
token_url = self._token_url % (self._corpid, self._appsecret)
with RequestUtils().get_res(token_url) as res:
if res:
ret_json = res.json()
if ret_json.get('errcode') == 0:
self._access_token = ret_json.get('access_token')
self._expires_in = ret_json.get('expires_in')
self._access_token_time = datetime.now()
elif res is not None:
logger.error(f"获取微信access_token失败,错误码:{res.status_code},错误原因:{res.reason}")
else:
logger.error(f"获取微信access_token失败,未获取到返回信息")
raise Exception("获取微信access_token失败,网络连接失败")
res = RequestUtils().get_res(token_url)
if res:
ret_json = res.json()
if ret_json.get('errcode') == 0:
self._access_token = ret_json.get('access_token')
self._expires_in = ret_json.get('expires_in')
self._access_token_time = datetime.now()
elif res is not None:
logger.error(f"获取微信access_token失败,错误码:{res.status_code},错误原因:{res.reason}")
else:
logger.error(f"获取微信access_token失败,未获取到返回信息")
raise Exception("获取微信access_token失败,网络连接失败")
except Exception as e:
logger.error(f"获取微信access_token失败,错误信息:{str(e)}")
return None
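The refactor above also removes the `with ... as res:` block, so a `None` response from `get_res` no longer triggers entering a context manager on `None`. Around the token itself, the client keeps `_access_token`, `_expires_in` (seconds) and `_access_token_time`; the validity check is not shown in this hunk, so the following is only a sketch of how such bookkeeping is typically consulted (`TokenCache` and `valid` are illustrative names, not the repo's API):

```python
from datetime import datetime, timedelta

class TokenCache:
    """Sketch of access-token bookkeeping: token, lifetime in seconds,
    and the timestamp it was fetched at."""
    def __init__(self):
        self._access_token = None
        self._expires_in = 0
        self._access_token_time = None

    def store(self, token, expires_in, now=None):
        self._access_token = token
        self._expires_in = expires_in
        self._access_token_time = now or datetime.now()

    def valid(self, now=None):
        # No token yet, or the lifetime has elapsed -> refetch needed.
        if not self._access_token:
            return False
        now = now or datetime.now()
        return (now - self._access_token_time) < timedelta(seconds=self._expires_in)

cache = TokenCache()
cache.store("TOKEN", expires_in=7200, now=datetime(2024, 6, 1, 12, 0, 0))
```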


@@ -1,6 +1,6 @@
from abc import ABCMeta, abstractmethod
from pathlib import Path
from typing import Any, List, Dict, Tuple
from typing import Any, List, Dict, Tuple, Optional
from app.chain import ChainBase
from app.core.config import settings
@@ -55,6 +55,13 @@ class _PluginBase(metaclass=ABCMeta):
"""
pass
@abstractmethod
def get_state(self) -> bool:
"""
获取插件运行状态
"""
pass
@staticmethod
@abstractmethod
def get_command() -> List[Dict[str, Any]]:
@@ -84,19 +91,6 @@ class _PluginBase(metaclass=ABCMeta):
"""
pass
def get_service(self) -> List[Dict[str, Any]]:
"""
注册插件公共服务
[{
"id": "服务ID",
"name": "服务名称",
"trigger": "触发器cron/interval/date/CronTrigger.from_crontab()",
"func": self.xxx,
"kwargs": {} # 定时器参数
}]
"""
pass
@abstractmethod
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
"""
@@ -113,10 +107,52 @@ class _PluginBase(metaclass=ABCMeta):
"""
pass
@abstractmethod
def get_state(self) -> bool:
def get_service(self) -> List[Dict[str, Any]]:
"""
获取插件运行状态
注册插件公共服务
[{
"id": "服务ID",
"name": "服务名称",
"trigger": "触发器cron/interval/date/CronTrigger.from_crontab()",
"func": self.xxx,
"kwargs": {} # 定时器参数
}]
"""
pass
def get_dashboard(self, key: str, **kwargs) -> Optional[Tuple[Dict[str, Any], Dict[str, Any], List[dict]]]:
"""
获取插件仪表盘页面,需要返回:1、仪表板col配置字典;2、全局配置(自动刷新等);3、仪表板页面元素配置json(含数据)
1、col配置参考:
{
"cols": 12, "md": 6
}
2、全局配置参考:
{
"refresh": 10, // 自动刷新时间,单位秒
"border": True, // 是否显示边框默认True为False时取消组件边框和边距由插件自行控制
"title": "组件标题", // 组件标题,如有将显示该标题,否则显示插件名称
"subtitle": "组件子标题", // 组件子标题,缺省时不展示子标题
}
3、页面配置使用Vuetify组件拼装,参考:https://vuetifyjs.com/
kwargs参数可获取的值:1、user_agent:浏览器UA
:param key: 仪表盘key,根据指定的key返回相应的仪表盘数据,缺省时返回一个固定的仪表盘数据(兼容旧版)
"""
pass
def get_dashboard_meta(self) -> Optional[List[Dict[str, str]]]:
"""
获取插件仪表盘元信息
返回示例:
[{
"key": "dashboard1", // 仪表盘的key在当前插件范围唯一
"name": "仪表盘1" // 仪表盘的名称
}, {
"key": "dashboard2",
"name": "仪表盘2"
}]
"""
pass
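The two new hooks, `get_dashboard` and `get_dashboard_meta`, together let one plugin expose several dashboard widgets keyed by name. A minimal sketch of a plugin implementing them per the docstrings above (standalone class, not derived from the real `_PluginBase`; names and content are illustrative):

```python
from typing import Any, Dict, List, Optional, Tuple

class DemoPlugin:
    """Illustrative only: implements the dashboard hooks described in the
    _PluginBase docstrings without depending on the real base class."""

    plugin_name = "Demo"

    def get_dashboard_meta(self) -> Optional[List[Dict[str, str]]]:
        # One entry per dashboard; key must be unique within the plugin.
        return [{"key": "dashboard1", "name": "仪表盘1"}]

    def get_dashboard(self, key: str, **kwargs
                      ) -> Optional[Tuple[Dict[str, Any], Dict[str, Any], List[dict]]]:
        if key != "dashboard1":
            return None
        cols = {"cols": 12, "md": 6}                      # 1. col layout
        attrs = {"refresh": 10, "border": True,
                 "title": "组件标题"}                      # 2. global config
        elements = [{"component": "VCard",
                     "text": "hello"}]                     # 3. Vuetify elements
        return cols, attrs, elements

meta = DemoPlugin().get_dashboard_meta()
cols, attrs, elements = DemoPlugin().get_dashboard("dashboard1")
```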


@@ -18,10 +18,12 @@ from app.chain.tmdb import TmdbChain
from app.chain.torrents import TorrentsChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import EventManager
from app.core.plugin import PluginManager
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.types import EventType
from app.utils.singleton import Singleton
from app.utils.timer import TimerUtils
@@ -75,8 +77,11 @@ class Scheduler(metaclass=Singleton):
# 最大重试次数
__max_try__ = 30
if self._auth_count > __max_try__:
SchedulerChain().messagehelper.put(title=f"用户认证失败",
message="用户认证失败次数过多,将不再尝试认证!",
role="system")
return
logger.info("用户未认证,正在试重新认证...")
logger.info("用户未认证,正在试重新认证...")
status, msg = SitesHelper().check_user()
if status:
self._auth_count = 0
@@ -92,23 +97,27 @@ class Scheduler(metaclass=Singleton):
self._auth_count += 1
logger.error(f"用户认证失败:{msg},共失败 {self._auth_count} 次")
if self._auth_count >= __max_try__:
logger.error("用户认证失败次数过多,将不再试认证!")
logger.error("用户认证失败次数过多,将不再试认证!")
# 各服务的运行状态
self._jobs = {
"cookiecloud": {
"name": "同步CookieCloud站点",
"func": SiteChain().sync_cookies,
"running": False,
},
"mediaserver_sync": {
"name": "同步媒体服务器",
"func": MediaServerChain().sync,
"running": False,
},
"subscribe_tmdb": {
"name": "订阅元数据更新",
"func": SubscribeChain().check,
"running": False,
},
"subscribe_search": {
"name": "订阅搜索补全",
"func": SubscribeChain().search,
"running": False,
"kwargs": {
@@ -116,6 +125,7 @@ class Scheduler(metaclass=Singleton):
}
},
"new_subscribe_search": {
"name": "新增订阅搜索",
"func": SubscribeChain().search,
"running": False,
"kwargs": {
@@ -123,16 +133,34 @@ class Scheduler(metaclass=Singleton):
}
},
"subscribe_refresh": {
"name": "订阅刷新",
"func": SubscribeChain().refresh,
"running": False,
},
"transfer": {
"name": "下载文件整理",
"func": TransferChain().process,
"running": False,
},
"clear_cache": {
"name": "缓存清理",
"func": clear_cache,
"running": False,
},
"user_auth": {
"name": "用户认证检查",
"func": user_auth,
"running": False,
},
"scheduler_job": {
"name": "公共定时服务",
"func": SchedulerChain().scheduler_job,
"running": False,
},
"random_wallpager": {
"name": "壁纸缓存",
"func": TmdbChain().get_random_wallpager,
"running": False,
}
}
@@ -263,17 +291,27 @@ class Scheduler(metaclass=Singleton):
# 后台刷新TMDB壁纸
self._scheduler.add_job(
TmdbChain().get_random_wallpager,
self.start,
"interval",
id="random_wallpager",
name="壁纸缓存",
minutes=30,
next_run_time=datetime.now(pytz.timezone(settings.TZ)) + timedelta(seconds=3)
next_run_time=datetime.now(pytz.timezone(settings.TZ)) + timedelta(seconds=3),
kwargs={
'job_id': 'random_wallpager'
}
)
# 公共定时服务
self._scheduler.add_job(
SchedulerChain().scheduler_job,
self.start,
"interval",
minutes=10
id="scheduler_job",
name="公共定时服务",
minutes=10,
kwargs={
'job_id': 'scheduler_job'
}
)
# 缓存清理服务每隔24小时
@@ -290,9 +328,14 @@ class Scheduler(metaclass=Singleton):
# 定时检查用户认证每隔10分钟
self._scheduler.add_job(
user_auth,
self.start,
"interval",
minutes=10
id="user_auth",
name="用户认证检查",
minutes=10,
kwargs={
'job_id': 'user_auth'
}
)
# 注册插件公共服务
@@ -314,8 +357,9 @@ class Scheduler(metaclass=Singleton):
job = self._jobs.get(job_id)
if not job:
return
job_name = job.get("name")
if job.get("running"):
logger.warning(f"定时任务 {job_id} 正在运行 ...")
logger.warning(f"定时任务 {job_id} - {job_name} 正在运行 ...")
return
self._jobs[job_id]["running"] = True
# 开始运行
@@ -324,7 +368,20 @@ class Scheduler(metaclass=Singleton):
kwargs = job.get("kwargs") or {}
job["func"](*args, **kwargs)
except Exception as e:
logger.error(f"定时任务 {job_id} 执行失败:{str(e)} - {traceback.format_exc()}")
logger.error(f"定时任务 {job_name} 执行失败:{str(e)} - {traceback.format_exc()}")
SchedulerChain().messagehelper.put(title=f"{job_name} 执行失败",
message=str(e),
role="system")
EventManager().send_event(
EventType.SystemError,
{
"type": "scheduler",
"scheduler_id": job_id,
"scheduler_name": job_name,
"error": str(e),
"traceback": traceback.format_exc()
}
)
# 运行结束
with self._lock:
try:
@@ -375,6 +432,9 @@ class Scheduler(metaclass=Singleton):
logger.info(f"注册插件{plugin_name}服务:{service['name']} - {service['trigger']}")
except Exception as e:
logger.error(f"注册插件{plugin_name}服务失败:{str(e)} - {service}")
SchedulerChain().messagehelper.put(title=f"插件 {plugin_name} 服务注册失败",
message=str(e),
role="system")
def remove_plugin_job(self, pid: str):
"""
@@ -396,6 +456,9 @@ class Scheduler(metaclass=Singleton):
logger.info(f"移除插件服务({plugin_name}){service.get('name')}")
except Exception as e:
logger.error(f"移除插件服务失败:{str(e)} - {job_id}: {service}")
SchedulerChain().messagehelper.put(title=f"插件 {plugin_name} 服务移除失败",
message=str(e),
role="system")
def list(self) -> List[schemas.ScheduleInfo]:
"""
@@ -453,10 +516,12 @@ class Scheduler(metaclass=Singleton):
"""
try:
if self._scheduler:
logger.info("正在停止定时任务...")
self._event.set()
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
logger.info("定时任务停止完成")
except Exception as e:
logger.error(f"停止定时任务失败::{str(e)} - {traceback.format_exc()}")
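The scheduler changes above give every job a `name` and a `running` guard, and on failure both post a system message and emit an `EventType.SystemError` event with the job id, name, error and traceback. A stripped-down sketch of that run wrapper, with a plain list standing in for `EventManager`/`messagehelper` (so it runs without the app):

```python
import traceback

def run_job(jobs, job_id, events):
    """Sketch of Scheduler.start's guard: skip if already running, record
    failures as event dicts, always clear the running flag."""
    job = jobs.get(job_id)
    if not job or job.get("running"):
        return
    job["running"] = True
    try:
        job["func"](*job.get("args", ()), **(job.get("kwargs") or {}))
    except Exception as e:
        # Mirrors the payload sent with EventType.SystemError.
        events.append({
            "type": "scheduler",
            "scheduler_id": job_id,
            "scheduler_name": job.get("name"),
            "error": str(e),
            "traceback": traceback.format_exc(),
        })
    finally:
        job["running"] = False

events = []
jobs = {"demo": {"name": "演示任务", "func": lambda: 1 / 0, "running": False}}
run_job(jobs, "demo", events)
```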


@@ -14,3 +14,4 @@ from .message import *
from .tmdb import *
from .transfer import *
from .file import *
from .filetransfer import *


@@ -18,3 +18,5 @@ class FileItem(BaseModel):
size: Optional[int] = None
# 修改时间
modify_time: Optional[float] = None
# 子节点
children: Optional[list] = []


@@ -0,0 +1,25 @@
from typing import Optional
from pydantic import BaseModel
class MediaDirectory(BaseModel):
"""
下载目录/媒体库目录
"""
# 类型 download/library
type: Optional[str] = None
# 别名
name: Optional[str] = None
# 路径
path: Optional[str] = None
# 媒体类型 电影/电视剧
media_type: Optional[str] = None
# 媒体类别 动画电影/国产剧
category: Optional[str] = None
# 刮削媒体信息
scrape: Optional[bool] = False
# 自动二级分类,未指定类别时自动分类
auto_category: Optional[bool] = False
# 优先级
priority: Optional[int] = 0
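The `MediaDirectory` schema carries both a `path` and a `priority`, and the v1.9.3 notes mention same-path entries taking precedence in directory matching. The exact matching rule lives elsewhere in the app, so the following is only a plausible sketch of priority-ordered matching over such entries (plain dicts instead of the pydantic model, POSIX paths assumed):

```python
from pathlib import PurePosixPath

def match_dir(directories, target):
    """Pick the entry whose path contains target, preferring an exact
    same-path entry, then the lowest priority value. Illustrative only."""
    target = PurePosixPath(target)
    candidates = [
        d for d in directories
        if target == PurePosixPath(d["path"])
        or PurePosixPath(d["path"]) in target.parents
    ]
    # Same path sorts first (False < True), then by priority.
    candidates.sort(key=lambda d: (PurePosixPath(d["path"]) != target,
                                   d["priority"]))
    return candidates[0] if candidates else None

dirs = [
    {"name": "电影", "path": "/media/movies", "priority": 1},
    {"name": "默认", "path": "/media", "priority": 0},
]
best = match_dir(dirs, "/media/movies")
```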


@@ -1,4 +1,4 @@
from typing import Optional
from typing import Optional, List
from pydantic import BaseModel
@@ -46,3 +46,20 @@ class Plugin(BaseModel):
history: Optional[dict] = {}
# 添加时间,值越小表示越靠后发布
add_time: Optional[int] = 0
class PluginDashboard(Plugin):
"""
插件仪表盘
"""
id: Optional[str] = None
# 名称
name: Optional[str] = None
# 仪表板key
key: Optional[str] = None
# 全局配置
attrs: Optional[dict] = {}
# col列数
cols: Optional[dict] = {}
# 页面元素
elements: Optional[List[dict]] = []


@@ -34,6 +34,8 @@ class Site(BaseModel):
public: Optional[int] = 0
# 备注
note: Optional[str] = None
# 超时时间
timeout: Optional[int] = 0
# 流控单位周期
limit_interval: Optional[int] = None
# 流控次数


@@ -53,7 +53,7 @@ class Subscribe(BaseModel):
# 订阅用户
username: Optional[str] = None
# 订阅站点
sites: Optional[List[int]] = None
sites: Optional[List[int]] = []
# 是否洗版
best_version: Optional[int] = 0
# 当前优先级


@@ -12,6 +12,7 @@ class TransferTorrent(BaseModel):
path: Optional[Path] = None
hash: Optional[str] = None
tags: Optional[str] = None
size: Optional[int] = 0
userid: Optional[str] = None
@@ -59,6 +60,8 @@ class TransferInfo(BaseModel):
fail_list: Optional[list] = []
# 错误信息
message: Optional[str] = None
# 是否需要刮削
need_scrape: Optional[bool] = False
def to_dict(self):
"""


@@ -32,6 +32,8 @@ class EventType(Enum):
HistoryDeleted = "history.deleted"
# 删除下载源文件
DownloadFileDeleted = "downloadfile.deleted"
# 删除下载任务
DownloadDeleted = "download.deleted"
# 收到用户外来消息
UserMessage = "user.message"
# 收到Webhook消息
@@ -46,6 +48,8 @@ class EventType(Enum):
SubscribeAdded = "subscribe.added"
# 订阅已完成
SubscribeComplete = "subscribe.complete"
# 系统错误
SystemError = "system.error"
# 系统配置Key字典
@@ -82,6 +86,14 @@ class SystemConfigKey(Enum):
TransferExcludeWords = "TransferExcludeWords"
# 插件安装统计
PluginInstallReport = "PluginInstallReport"
# 订阅统计
SubscribeReport = "SubscribeReport"
# 用户自定义CSS
UserCustomCSS = "UserCustomCSS"
# 下载目录定义
DownloadDirectories = "DownloadDirectories"
# 媒体库目录定义
LibraryDirectories = "LibraryDirectories"
# 处理进度Key字典
@@ -112,6 +124,8 @@ class NotificationType(Enum):
MediaServer = "媒体服务器通知"
# 处理失败需要人工干预
Manual = "手动处理通知"
# 插件消息
Plugin = "插件消息"
class MessageChannel(Enum):


@@ -58,6 +58,7 @@ class RequestUtils:
verify=False,
headers=self._headers,
proxies=self._proxies,
cookies=self._cookies,
timeout=self._timeout,
json=json,
stream=False)
@@ -67,6 +68,7 @@ class RequestUtils:
verify=False,
headers=self._headers,
proxies=self._proxies,
cookies=self._cookies,
timeout=self._timeout,
json=json,
stream=False)
@@ -80,6 +82,7 @@ class RequestUtils:
verify=False,
headers=self._headers,
proxies=self._proxies,
cookies=self._cookies,
timeout=self._timeout,
params=params)
else:
@@ -87,6 +90,7 @@ class RequestUtils:
verify=False,
headers=self._headers,
proxies=self._proxies,
cookies=self._cookies,
timeout=self._timeout,
params=params)
return str(r.content, 'utf-8')


@@ -35,7 +35,25 @@ class ObjectUtils:
"""
检查函数是否已实现
"""
return func.__code__.co_code not in [b'd\x01S\x00', b'\x97\x00d\x00S\x00']
try:
source = inspect.getsource(func)
in_comment = False
for line in source.split('\n'):
line = line.strip()
if not line:
continue
if line.startswith('"""') or line.startswith("'''"):
in_comment = not in_comment
continue
if not in_comment and not (line.startswith('#')
or line == "pass"
or line.startswith('@')
or line.startswith('def ')):
return True
except Exception as err:
print(str(err))
return func.__code__.co_code not in [b'd\x01S\x00', b'\x97\x00d\x00S\x00']
return False
@staticmethod
def check_signature(func: FunctionType, *args) -> bool:


@@ -137,6 +137,13 @@ class StringUtils:
return False
return True
@staticmethod
def is_english_word(word: str) -> bool:
"""
判断是否为英文单词,有空格时返回False
"""
return word.isalpha()
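One caveat on the implementation above: `str.isalpha()` is Unicode-aware, so it returns True for any letters, including CJK characters — `"中文".isalpha()` is True. If strictly English letters were wanted, pairing it with `isascii()` (Python 3.7+) would be stricter; this is only an illustration of the difference, not a change to the repo's behavior:

```python
def is_ascii_english_word(word: str) -> bool:
    # str.isalpha() alone accepts any Unicode letter, e.g. "中文".isalpha()
    # is True, so require ASCII as well.
    return word.isascii() and word.isalpha()
```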
@staticmethod
def str_int(text: str) -> int:
"""


@@ -465,3 +465,14 @@ class SystemUtils:
except Exception as err:
print(str(err))
return False, f"重启时发生错误:{str(err)}"
@staticmethod
def is_same_disk(src: Path, dest: Path) -> bool:
"""
判断两个路径是否在同一磁盘
"""
if not src.exists() or not dest.exists():
return False
if os.name == "nt":
return src.drive == dest.drive
return os.stat(src).st_dev == os.stat(dest).st_dev
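On POSIX, `os.stat(...).st_dev` identifies the device a file resides on, so equal values mean the same filesystem — which is what decides whether a cheap rename is possible instead of a copy. A self-contained check of the same logic against a temporary directory (the helper body mirrors the new `SystemUtils.is_same_disk` so the demo can run standalone):

```python
import os
import tempfile
from pathlib import Path

def is_same_disk(src: Path, dest: Path) -> bool:
    """Same logic as the new SystemUtils helper: drive letters on Windows,
    st_dev comparison elsewhere; missing paths compare as different."""
    if not src.exists() or not dest.exists():
        return False
    if os.name == "nt":
        return src.drive == dest.drive
    return os.stat(src).st_dev == os.stat(dest).st_dev

with tempfile.TemporaryDirectory() as tmp:
    a = Path(tmp) / "a"
    b = Path(tmp) / "b"
    a.mkdir()
    b.mkdir()
    same = is_same_disk(a, b)          # siblings share a filesystem
    missing = is_same_disk(a, Path(tmp) / "nope")
```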
