Compare commits

...

348 Commits

Author SHA1 Message Date
jxxghp
f9f58fc559 v1.8.7
- Added a new member to the authentication sites: DiscFan
- Plugins can now display widgets on the dashboard; the site statistics and site brushing plugins already support this, the rest await adaptation by their developers. Plugin development guide: https://github.com/jxxghp/MoviePilot-Plugins
- Dashboard widgets can be reordered by drag and drop
- Adjusted the display style of the subscriber count on popular subscriptions
2024-05-09 20:06:19 +08:00
jxxghp
f59b5b6d27 fix plugin dashboard 2024-05-09 19:12:43 +08:00
jxxghp
30b3ad4a99 fix plugin dashboard 2024-05-09 19:12:12 +08:00
jxxghp
dfb9ce7520 fix tmdb match api 2024-05-09 18:49:58 +08:00
jxxghp
6c365f552e Merge pull request #2038 from Devinaille/fix_subtitle_match 2024-05-09 11:40:03 +08:00
tianyf
a81ee7d89a Fix titles failing to match when the subtitle contains 【】 brackets 2024-05-09 11:18:00 +08:00
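The bracket-stripping fix described in the commit above can be sketched as a small helper that removes fullwidth 【】 segments before title matching; `clean_subtitle` is a hypothetical name for illustration, not the project's actual function:

```python
import re

def clean_subtitle(subtitle: str) -> str:
    """Drop fullwidth-bracketed segments such as release-group tags
    so the remaining text can be matched against the media title."""
    return re.sub(r"【[^】]*】", "", subtitle).strip()

print(clean_subtitle("【官方】某剧 第二季"))  # prints 某剧 第二季
```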
jxxghp
5c9039e6d0 feat: plugin dashboard API 2024-05-08 20:58:28 +08:00
jxxghp
cce2e13e21 fix README.md 2024-05-08 16:46:35 +08:00
jxxghp
0da87abc71 Merge remote-tracking branch 'origin/main' 2024-05-08 15:54:56 +08:00
jxxghp
6a2eecc744 fix #2016 2024-05-08 15:54:50 +08:00
jxxghp
c049e13c1c Merge pull request #2032 from z3shan33/main 2024-05-08 13:04:13 +08:00
z3shan33
7ec49ce076 fix #2031 2024-05-08 11:09:36 +08:00
jxxghp
5be2fc35b5 Merge pull request #2026 from zhu0823/main 2024-05-07 18:31:12 +08:00
zhu0823
0b84312559 fix: issue when parentThumb is None for some episodes 2024-05-07 18:28:53 +08:00
jxxghp
8bb43b52bc - Improve popular-subscription statistics; popular subscriptions can show subscriber counts 2024-05-07 16:19:05 +08:00
jxxghp
bd348f118c fix subscription statistics cleanup 2024-05-07 16:01:52 +08:00
jxxghp
4a3a3483d0 fix api 2024-05-07 12:31:24 +08:00
jxxghp
fd6314f19f fix 2024-05-06 22:48:24 +08:00
jxxghp
17a9f3a626 Update version.py 2024-05-06 19:08:49 +08:00
jxxghp
75c898e6eb Merge pull request #2018 from zhu0823/main 2024-05-06 18:37:00 +08:00
zhu0823
089d4785aa fix: cover URL of recently added Plex episodes 2024-05-06 18:35:10 +08:00
jxxghp
4d48295f72 fix bug 2024-05-06 16:46:03 +08:00
jxxghp
ed119b7beb fix data-sharing switch 2024-05-06 12:37:51 +08:00
jxxghp
90d5a8b0c9 add data-sharing switch 2024-05-06 11:54:32 +08:00
jxxghp
dd5c0de7b1 feat: subscription statistics 2024-05-06 11:19:41 +08:00
jxxghp
73bdca282c fix api desc 2024-05-05 13:51:49 +08:00
jxxghp
360a54581f add api 2024-05-05 13:43:14 +08:00
jxxghp
1fc7587cbb fix overly broad Douban & Bangumi searches 2024-05-05 12:14:58 +08:00
jxxghp
dcd46f1627 fix #2001 2024-05-05 11:54:42 +08:00
jxxghp
d8644a20c0 fix #2011 2024-05-05 11:22:23 +08:00
jxxghp
23b47f98c1 Merge pull request #2009 from DDS-Derek/main 2024-05-05 09:55:00 +08:00
DDSRem
347c91fa0b bump: ca-certificates version to bookworm 2024-05-04 19:14:30 +08:00
jxxghp
ac961b37b4 Merge pull request #2004 from thsrite/main 2024-05-03 13:52:45 +08:00
thsrite
068c49a79a fix effective time of the minimum seeder count 2024-05-03 13:09:08 +08:00
jxxghp
e7174b402c Merge pull request #1988 from zhu0823/main 2024-04-30 19:20:33 +08:00
zhu0823
d21267090a Sync upstream API updates 2024-04-30 19:14:29 +08:00
zhu0823
51dc2c33a0 feat: blacklist filter for Plex recently added 2024-04-30 19:14:08 +08:00
jxxghp
8aef488ab6 Merge pull request #1986 from Sxnan/tmdbid-type 2024-04-30 16:03:44 +08:00
sxnan
0cbf45f9b9 Cast tmdbid to int type after getting metainfo from title 2024-04-30 15:55:10 +08:00
jxxghp
c0ae32d654 - Fix wrong media type in Douban data-source searches 2024-04-30 13:33:28 +08:00
jxxghp
ff1b0e02d6 fix special-episode recognition 2024-04-30 11:49:03 +08:00
jxxghp
76a8b02fe5 fix wrong entry type in Douban search 2024-04-30 11:17:03 +08:00
jxxghp
43f594393c feat: add a sort field for plugins 2024-04-30 08:34:37 +08:00
jxxghp
008e11d63f fix Douban person avatar quality 2024-04-30 07:15:21 +08:00
jxxghp
9dd610f245 Update site.py 2024-04-29 21:46:55 +08:00
jxxghp
c5d087aad6 Merge pull request #1979 from InfinityPacer/main 2024-04-29 20:43:14 +08:00
jxxghp
576c5741f9 v1.8.5
- Aggregated media-info search across multiple data sources (TheMovieDb, Douban, Bangumi); the source scope and order can be adjusted under `Settings` - `Search`
- Plugins support tags, improving the plugin experience
- Adapted to M-Team's latest architecture changes; the site token and request headers must be maintained manually in site management
2024-04-29 20:39:20 +08:00
InfinityPacer
51387c31c4 fix plugin_order 2024-04-29 20:39:01 +08:00
jxxghp
c2a40876e2 Adapt to the new m-team authentication mechanism 2024-04-29 20:19:46 +08:00
jxxghp
c06bdf0491 fix exception 2024-04-28 18:10:04 +08:00
jxxghp
f726130c31 feat: plugins read labels 2024-04-28 13:35:47 +08:00
jxxghp
4033ffeb15 fix sorting of aggregated results 2024-04-28 12:03:48 +08:00
jxxghp
f81af8e9fb Merge remote-tracking branch 'origin/main' 2024-04-28 10:41:05 +08:00
jxxghp
e3f9260299 fix README.md 2024-04-28 10:40:59 +08:00
jxxghp
c80ccaf74b Merge pull request #1976 from thsrite/main 2024-04-28 10:26:05 +08:00
thsrite
0e60c976be fix: the default filter rule honors a publish-time grace period for the minimum seeder count, so newly published torrents are not filtered out 2024-04-28 10:21:42 +08:00
jxxghp
805c7d2701 fix torrent filter log 2024-04-28 09:19:09 +08:00
jxxghp
4499f001dd fix bug 2024-04-28 09:03:16 +08:00
jxxghp
71c6a3718b feat: media search aggregation switch 2024-04-28 08:55:07 +08:00
jxxghp
6404f9d45c Update tmdb.py 2024-04-27 22:39:50 +08:00
jxxghp
ce357540eb Update douban.py 2024-04-27 22:38:47 +08:00
jxxghp
e56cfd6ad4 fix douban apis 2024-04-27 22:29:17 +08:00
jxxghp
25e5f7a9f6 fix tmdb apis 2024-04-27 21:56:27 +08:00
jxxghp
6d69ac42e5 fix bangumi apis 2024-04-27 21:36:42 +08:00
jxxghp
6a71bed821 fix douban api 2024-04-27 21:17:32 +08:00
jxxghp
1718758d1c fix Douban image quality 2024-04-27 17:16:19 +08:00
jxxghp
7a37078e90 feat: media info aggregation 2024-04-27 15:20:46 +08:00
jxxghp
26b5ad6a44 Merge pull request #1969 from lightolly/dev/20240428_1
feat:add person api
2024-04-27 13:00:46 +08:00
olly
fa884c9608 feat:add person api 2024-04-27 11:52:23 +08:00
jxxghp
6927b5fbd3 Merge pull request #1968 from lightolly/dev/20240428 2024-04-27 10:24:35 +08:00
olly
59fca63d4a feat:add api 2024-04-27 10:12:40 +08:00
jxxghp
7489d6a912 v1.8.4
- The search box supports searching for actors; fixed actor filmographies showing only movies
- Plugins can bind events and call API endpoints from their data pages
- If user authentication fails at startup, the backend retries intermittently for a while
- `Settings` - `About` now shows the frontend version number
- History records support sorting
- Improved the dialog experience on small screens
- Jellyfin page navigation now supports the `latest` branch version
- Fixed subscription history occasionally failing to be recorded
2024-04-27 09:09:09 +08:00
jxxghp
b437fd6021 fix bug 2024-04-27 00:30:17 +08:00
jxxghp
c303ab0765 fix api 2024-04-26 20:29:04 +08:00
jxxghp
9daff87f2f fix api 2024-04-26 19:41:17 +08:00
jxxghp
f20b1bcfe9 feat: person search API 2024-04-26 17:47:45 +08:00
jxxghp
2f71e401be fix 2024-04-26 17:11:24 +08:00
jxxghp
0840e0bcbc Merge pull request #1967 from thsrite/main
fix current version and frontend version check on restart
2024-04-26 17:05:12 +08:00
thsrite
933af7485c fix function name 2024-04-26 17:00:40 +08:00
thsrite
baddaabd73 fix 2024-04-26 16:59:19 +08:00
thsrite
8028866cee fix 2024-04-26 16:58:30 +08:00
thsrite
242894cec2 fix current version and frontend version check on restart 2024-04-26 16:54:25 +08:00
jxxghp
967ad3a507 Merge pull request #1966 from thsrite/main 2024-04-26 14:44:16 +08:00
thsrite
2dbe049a91 fix querying transfer history after a given time 2024-04-26 14:41:52 +08:00
jxxghp
c5afc65cbd fix #1955 retry intermittently when user authentication fails at startup 2024-04-26 11:08:25 +08:00
jxxghp
e35bacecd5 fix #1942 2024-04-26 10:42:24 +08:00
jxxghp
d84c86b0f6 fix plugin reset 2024-04-25 16:48:37 +08:00
jxxghp
73ae09b041 fix bug 2024-04-25 10:31:11 +08:00
jxxghp
a11318390d feat: read the frontend version number 2024-04-25 10:18:38 +08:00
jxxghp
1714990e2e feat: read the frontend version number 2024-04-25 10:18:14 +08:00
jxxghp
44cd5f52e0 fix #1945 2024-04-24 10:25:45 +08:00
jxxghp
59b9dc354e fix: consolidate duplicate code 2024-04-24 08:18:11 +08:00
jxxghp
591969015f v1.8.3
- Dashboard settings are persisted per user and can also be configured per browser
- Fixed duplicated file extensions and recognition words being applied when file organizing uses the `original_name` placeholder
- Improved adaptation for a few sites
- Fixed empty plugins showing in the plugin market
2024-04-23 17:53:32 +08:00
jxxghp
6118e235c3 Merge pull request #1943 from thsrite/main 2024-04-23 17:17:28 +08:00
thsrite
228b1a11d0 fix 2024-04-23 16:18:30 +08:00
thsrite
c8a1e59310 fix plugin 403 msg 2024-04-23 16:16:39 +08:00
jxxghp
b0f7a11328 fix naming 2024-04-23 11:22:43 +08:00
jxxghp
b753e50580 fix api order 2024-04-23 10:06:19 +08:00
jxxghp
3002bf4dd2 fix 2024-04-23 10:02:15 +08:00
jxxghp
0cbe8f5cdc Merge pull request #1920 from hotlcc/develop-20240417-用户配置
Add user-configuration capabilities and APIs
2024-04-23 09:57:55 +08:00
jxxghp
1a03d19469 Merge remote-tracking branch 'origin/main' 2024-04-23 09:56:28 +08:00
jxxghp
b7c1106744 fix #1940 2024-04-23 09:56:21 +08:00
Allen
d6c6c999fc Improve user-level configuration capabilities 2024-04-23 09:51:18 +08:00
jxxghp
408703d4a3 Merge pull request #1938 from thsrite/main 2024-04-22 18:14:34 +08:00
thsrite
40a612c327 fix 2024-04-22 15:57:34 +08:00
thsrite
e519fc484b fix fetching the subscription list of a given user 2024-04-22 15:56:50 +08:00
thsrite
e430a3e88b fix fetching the subscription list of a given tmdb_id 2024-04-22 15:49:57 +08:00
jxxghp
316f61bf69 fix #1922 2024-04-22 09:54:51 +08:00
jxxghp
750c4441db fix #1930 2024-04-22 09:31:35 +08:00
jxxghp
441cee4ee5 v1.8.2
- Added a subscription history feature
- Sites can display connection status
- Added support for the 花梨月下 site

Note: clear the browser cache after updating
2024-04-19 19:57:42 +08:00
jxxghp
ebf2f53ae1 fix api 2024-04-19 19:51:01 +08:00
jxxghp
4e7000efbb Merge pull request #1925 from Aodi/main
fix reverse-proxy image display; change the url to a query parameter to avoid double slashes
2024-04-19 19:37:29 +08:00
jxxghp
0679a32659 fix 2024-04-19 12:31:38 +08:00
jxxghp
148984ad0e Merge pull request #1923 from thsrite/main 2024-04-19 11:50:31 +08:00
thsrite
dd8804ef3e fix 2024-04-19 11:49:03 +08:00
aodi
fb0018dda6 fix reverse-proxy image display; change the url to a query parameter to avoid double slashes 2024-04-19 11:24:37 +08:00
thsrite
c14e529c91 fix #1921 2024-04-19 10:26:34 +08:00
jxxghp
f6222122c0 feat: subscription history and API 2024-04-18 21:00:57 +08:00
jxxghp
3a18267ec0 feat: subscription history and API 2024-04-18 15:48:46 +08:00
Allen
ae60040120 fix bug 2024-04-18 15:19:46 +08:00
jxxghp
b04bc74550 fix public site flag 2024-04-18 15:10:06 +08:00
Allen
666d6eb048 Add user-configuration capabilities and APIs 2024-04-18 12:33:35 +08:00
jxxghp
73a3a8cf94 fix #1914 2024-04-18 11:20:03 +08:00
jxxghp
6d66c5b577 Update site.py 2024-04-17 15:02:03 +08:00
jxxghp
c3ffe38d4d feat: site usage statistics 2024-04-17 12:42:32 +08:00
jxxghp
5108dbbeb5 fix plugin install 2024-04-17 09:46:02 +08:00
jxxghp
cbf56bd9b7 fix module log 2024-04-16 18:22:02 +08:00
jxxghp
67965b09a6 Merge remote-tracking branch 'origin/main' 2024-04-16 18:21:04 +08:00
jxxghp
a2678d5815 fix #1882 wrong season assignment! 2024-04-16 18:20:54 +08:00
jxxghp
36b25e6a08 Merge pull request #1908 from zhu0823/main 2024-04-16 17:49:33 +08:00
zhu0823
c98c8c8836 feat: blacklist filter for Plex media-count statistics 2024-04-16 17:42:25 +08:00
jxxghp
423b7cf340 Merge pull request #1907 from zhu0823/main 2024-04-16 16:54:43 +08:00
zhu0823
02acc8bc35 feat: blacklist filter for Plex continue watching 2024-04-16 16:38:16 +08:00
jxxghp
664b42f050 fix original_name 2024-04-16 10:22:49 +08:00
jxxghp
ca491891dc - Fix history record issue 2024-04-16 10:06:14 +08:00
jxxghp
89e3d16f27 Merge pull request #1897 from Aodi/main 2024-04-15 18:29:02 +08:00
aodi
a02ea64068 fix images failing to load behind reverse proxies that disable encoded slashes 2024-04-15 16:03:44 +08:00
jxxghp
0f0ace5ddc feat: fuzzy matching of history records by directory 2024-04-15 14:10:45 +08:00
jxxghp
04d94f3bdd fix torrents match 2024-04-15 13:26:25 +08:00
jxxghp
7d45b68b4f fix scheduler 2024-04-15 13:17:20 +08:00
jxxghp
ccb47c0120 v1.8.1
- Improved file recognition
- Improved history performance; directory filtering supported
- Plugin updates can show the change log
2024-04-14 14:04:21 +08:00
jxxghp
6939bff790 fix #1882 2024-04-14 13:47:12 +08:00
jxxghp
8cd0dd4198 Merge pull request #1882 from WangEdward/main
fix: metainfo for manual transfer
2024-04-14 13:19:46 +08:00
jxxghp
d6d1f6519a Merge remote-tracking branch 'origin/main' 2024-04-14 13:15:01 +08:00
jxxghp
906325710b fix #1876 2024-04-14 13:14:41 +08:00
jxxghp
05bafeaedf Merge pull request #1888 from thsrite/main 2024-04-14 12:31:36 +08:00
thsrite
babad5a098 fix plugins loading multiple times 2024-04-14 11:54:38 +08:00
jxxghp
fe07602a35 fix hint for distinguishing newly added sites 2024-04-13 18:56:07 +08:00
jxxghp
492533dcdb rollback #1884 2024-04-13 18:38:29 +08:00
jxxghp
45b044cd6b Merge pull request #1884 from thsrite/main 2024-04-13 18:03:34 +08:00
thsrite
fc65cc3619 fix: lock the singleton to prevent multiple inits when the init method takes too long 2024-04-13 17:33:08 +08:00
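The locking pattern this commit names, preventing a slow `__init__` from running more than once across threads, is the classic double-checked singleton; this generic Python sketch is an illustration, not MoviePilot's actual base class:

```python
import threading

class Singleton:
    """One shared instance; the lock stops a second thread from
    starting construction while a slow first init is still running."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:           # fast path, lock-free
            with cls._lock:                 # serialize creation
                if cls._instance is None:   # re-check under the lock
                    cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # guard so an expensive init only ever runs once
        with self._lock:
            if getattr(self, "_initialized", False):
                return
            self._initialized = True
            # ... expensive setup would go here ...

first, second = Singleton(), Singleton()
print(first is second)  # prints True
```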
jxxghp
c6e069331c Merge pull request #1883 from thsrite/main 2024-04-13 17:09:10 +08:00
thsrite
6a8a946ec8 fix PluginHelper().install already counting the install 2024-04-13 17:05:12 +08:00
Edward
d96e4561e2 fix: metainfo for manual transfer 2024-04-12 14:20:41 +00:00
Edward
172bc23b2a fix: empty season 2024-04-12 14:12:15 +00:00
jxxghp
98baf922d6 fix resource exception 2024-04-12 21:38:53 +08:00
jxxghp
9a7cdc1e74 fix #1858 2024-04-12 12:45:16 +08:00
jxxghp
4e22293cda fix recognition of multi-level file paths 2024-04-12 12:04:42 +08:00
jxxghp
f17890b6ce fix word-table matching for a specified media ID 2024-04-12 08:24:20 +08:00
jxxghp
66af2de416 fix #1864 2024-04-11 19:48:19 +08:00
jxxghp
17e1e6b49b fix log 2024-04-11 12:46:29 +08:00
jxxghp
e501154ad4 v1.8.0
- Search and subscriptions support specifying a season; enter: xxxx 第x季
- The plugin market can show plugin change logs (to be supplied by plugin authors)
- Improved media info recognition
- Improved resource search matching for anime and pinyin titles
- Improved UI performance
- Fixed the default filter rule not taking effect during manual search
- Added a real-time transfer API for downloaded files: under QB Settings -> Run external program on torrent completion, enter curl "http://localhost:3000/api/v1/transfer/now?token=moviepilot" so that in downloader-monitor mode files are organized into the library immediately after download instead of waiting for polling (adjust host, port and token as needed; curl can be replaced with wget).

Note: if search misbehaves, clear the browser cache.
2024-04-11 08:21:29 +08:00
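The transfer-now endpoint from the release note above can also be invoked from a script instead of curl; a minimal sketch assuming the default host, port and token from the note (`transfer_now_url` is a hypothetical helper, not part of MoviePilot):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def transfer_now_url(host: str = "localhost", port: int = 3000,
                     token: str = "moviepilot") -> str:
    """Build the real-time transfer API URL described in the release note."""
    query = urlencode({"token": token})
    return f"http://{host}:{port}/api/v1/transfer/now?{query}"

# urlopen(transfer_now_url())  # uncomment on a host running MoviePilot
print(transfer_now_url())  # prints http://localhost:3000/api/v1/transfer/now?token=moviepilot
```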
jxxghp
c73cf1d7e2 v1.8.0
- Search and subscriptions support specifying a season; enter: xxxx 第x季
- The plugin market can show plugin change logs (to be supplied by plugin authors)
- Improved media info recognition
- Improved resource search matching for anime and pinyin titles
- Improved UI performance
- Fixed the default filter rule not taking effect during manual search
- Added a real-time transfer API for downloaded files: under QB Settings -> Run external program on torrent completion, enter curl "http://localhost:3000/api/v1/transfer/now?token=moviepilot" so that in downloader-monitor mode files are organized into the library immediately after download instead of waiting for polling (adjust host, port and token as needed; curl can be replaced with wget).

Note: if search misbehaves, clear the browser cache.
2024-04-11 08:02:24 +08:00
jxxghp
a3603f79c8 fix requests 2024-04-10 22:16:10 +08:00
jxxghp
294b4a6bf9 fix torrents match 2024-04-10 20:05:43 +08:00
jxxghp
f365d93316 fix torrents match 2024-04-10 20:02:02 +08:00
jxxghp
facd20ba3c fix bangumi 2024-04-10 19:04:59 +08:00
jxxghp
d0e596c93c feat: plugin update history 2024-04-10 16:44:08 +08:00
jxxghp
e20ec4ddf5 fix bug 2024-04-10 15:05:32 +08:00
jxxghp
ba0a1cb1bd fix #1738 search and subscriptions support specifying a season 2024-04-10 14:51:34 +08:00
jxxghp
17438f8c5c fix log 2024-04-10 13:41:11 +08:00
jxxghp
e0c2ae0f0c fix log 2024-04-10 13:20:33 +08:00
jxxghp
9ebb211589 fix meta cases 2024-04-10 12:22:32 +08:00
jxxghp
8a0350c566 fix mtype 2024-04-10 11:50:14 +08:00
jxxghp
765d37fd6a fix meta 2024-04-10 11:44:14 +08:00
jxxghp
b3d57b868e fix: do not process spaces in custom recognition words 2024-04-10 07:09:31 +08:00
jxxghp
18e7099848 fix: do not process spaces in custom recognition words 2024-04-10 07:07:17 +08:00
jxxghp
27cb968a18 fix #1846 2024-04-09 18:47:25 +08:00
jxxghp
45bf84d448 fix #1849 2024-04-09 18:43:24 +08:00
jxxghp
85300b0931 more log 2024-04-09 13:36:13 +08:00
jxxghp
ac87c778f4 fix anime match 2024-04-09 13:20:28 +08:00
jxxghp
1ed511034c fix search match 2024-04-09 07:09:54 +08:00
jxxghp
ca7f121a21 Merge pull request #1847 from hotlcc/develop-修复PTLSP站点测试 2024-04-08 14:05:50 +08:00
Allen
c8e73e17d3 Fix the ptlsp test issue 2024-04-08 04:26:49 +00:00
Allen
3bfc87f1cc Fix the ptlsp site test issue 2024-04-08 03:16:07 +00:00
jxxghp
e0e76bf3fe fix 2024-04-07 16:32:26 +08:00
jxxghp
6a3e3f1562 feat: match Chinese and English names in turn 2024-04-07 16:20:33 +08:00
jxxghp
59330657b2 add nano 2024-04-07 14:56:32 +08:00
jxxghp
927d510619 add an API to run downloader file organizing immediately 2024-04-07 14:45:59 +08:00
jxxghp
80a390ac6c feat: when the torrent name is pinyin, extract the Chinese name from the subtitle for recognition 2024-04-07 14:25:12 +08:00
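The idea behind this commit, falling back to a Chinese title pulled from the torrent subtitle when the name itself is romanized pinyin, might be sketched as taking the longest CJK run; `chinese_name_from_subtitle` is a hypothetical helper and the longest-run heuristic is an assumption, not the project's exact logic:

```python
import re
from typing import Optional

def chinese_name_from_subtitle(subtitle: str) -> Optional[str]:
    """Return the longest run of CJK characters in the subtitle,
    as a stand-in title when the torrent name is pinyin."""
    runs = re.findall(r"[\u4e00-\u9fa5]+", subtitle)
    return max(runs, key=len) if runs else None

print(chinese_name_from_subtitle("Kong Zhong Can Ting | 空中餐厅 第01集"))  # prints 空中餐厅
```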
jxxghp
cae563ce53 test: looser matching rules 2024-04-06 21:07:00 +08:00
jxxghp
0495936ef8 v1.7.9
- Subscriptions support preset subscription rules
- Added quick plugin search; improved the responsiveness of plugin install and uninstall
- Improved the performance and usability of file management and history
- Fixed M-Team (馒头) torrent download failures

Tips:
1. If the frontend behaves oddly, clear the browser cache first
2. Set priority tiers sensibly; too many tiers combined with many search results will noticeably increase search time
2024-04-06 17:27:48 +08:00
jxxghp
34d27fe85b fix #1818 2024-04-06 11:47:22 +08:00
jxxghp
0e2c4d74d6 feat: improve plugin reload 2024-04-05 23:20:51 +08:00
jxxghp
bd137de042 Merge pull request #1833 from honue/main 2024-04-05 22:47:46 +08:00
honue
4a2688b52f fix #1744 2024-04-05 22:22:41 +08:00
jxxghp
36acb1daaa Merge pull request #1832 from cddjr/fix_ua 2024-04-05 12:01:47 +08:00
景大侠
a0c3b6b26b fix: when a site's User-Agent is not set, access it with the system-configured UA 2024-04-05 11:40:52 +08:00
jxxghp
7c93432505 Merge pull request #1815 from z3shan33/main 2024-04-02 09:26:19 +08:00
z3shan33
2760f25992 fix #1792 2024-04-02 09:24:43 +08:00
jxxghp
d199c47666 fix #1804 2024-04-02 08:22:12 +08:00
jxxghp
a6550a21ef Merge pull request #1804 from thsrite/main 2024-04-01 19:11:29 +08:00
thsrite
26a321f119 feat: set default subscription rules 2024-04-01 13:29:22 +08:00
jxxghp
7e8f7be905 v1.7.8
- Users can enable two-factor authentication for the admin console login, improving security
- Most forms in the admin console now show hint text
- Plugin dependencies are reinstalled on restart, avoiding failed dependency installs for online plugins (requires re-pulling the image)
- Improved the framework's tolerance of plugin errors; the plugin market sorts plugins by download popularity
2024-03-31 19:38:20 +08:00
jxxghp
600b6144e4 fix #1783 directory completeness matching 2024-03-31 08:17:51 +08:00
jxxghp
dfb11420e5 Merge pull request #1789 from DDS-Derek/main 2024-03-30 17:44:22 +08:00
DDSRem
584c8a2d94 feat: install the plug-in pip extension in advance 2024-03-30 17:41:04 +08:00
jxxghp
536bd9268a feat: add subscription-related events 2024-03-30 08:04:52 +08:00
jxxghp
5ee41b87a2 fix login api 2024-03-29 11:13:57 +08:00
jxxghp
89b2fe10fe Merge pull request #1774 from jeblove/main 2024-03-28 21:33:16 +08:00
jeblove
c180e50164 feat: add a session method for fetching tr's session and configuration info 2024-03-28 21:24:16 +08:00
jxxghp
8f7b08afae fix #1763 2024-03-28 17:04:44 +08:00
jxxghp
72de8a2192 Merge pull request #1772 from z3shan33/main
feat #1763
2024-03-28 16:57:55 +08:00
zss
40d99f1dd5 feat #1763 2024-03-28 16:39:34 +08:00
jxxghp
ff07841dd6 roll back site test 2024-03-28 13:20:48 +08:00
jxxghp
828fc08362 Merge pull request #1766 from cddjr/1761--bug 2024-03-28 06:48:22 +08:00
景大侠
3fd043bb9b fix #1761 2024-03-28 02:09:47 +08:00
jxxghp
f51c4ebed7 fix bug 2024-03-27 20:46:06 +08:00
jxxghp
9b917cd4c2 Update requirements.txt 2024-03-27 19:50:22 +08:00
jxxghp
91eac50ab9 v1.7.7
- Multi-alias search (`SEARCH_MULTIPLE_NAME`) now defaults to off; improved search handling when sites are unreachable, speeding up searches
- Fixed residual site settings (e.g. in subscriptions) after a site is deleted or reset
- M-Team (馒头) site statistics now use the ApiKey
- Improved the cast display for the Bangumi daily broadcast
- Plugins can show download/install counts
2024-03-27 17:01:33 +08:00
jxxghp
f6468ad327 fix scraper 2024-03-27 16:01:20 +08:00
jxxghp
fb6c3a9f36 fix site test 2024-03-27 15:45:27 +08:00
jxxghp
eb751bb581 fix site test 2024-03-27 15:35:01 +08:00
jxxghp
f9069bf19b fix #1758 2024-03-27 12:22:15 +08:00
jxxghp
ef0c88a3b6 fix torrent deduplication 2024-03-27 11:37:51 +08:00
jxxghp
f1f8ccb5d6 feat:plugins statistics 2024-03-27 08:24:06 +08:00
jxxghp
2df113ad38 fix SiteDeleted 2024-03-27 07:09:00 +08:00
jxxghp
fa03232321 Merge pull request #1759 from cddjr/fix_remove_site 2024-03-27 06:24:18 +08:00
景大侠
04f50284c6 fix deleting a site leaving numeric IDs in subscriptions' site lists 2024-03-27 00:54:58 +08:00
jxxghp
9fc950c2ed Merge pull request #1751 from z3shan33/main 2024-03-26 16:41:59 +08:00
zss
9c1aeb933e fix fetching voice-actor role info via characters in bangumi 2024-03-26 16:11:03 +08:00
jxxghp
1cee20134a fix plugin dedup & sorting 2024-03-26 09:30:05 +08:00
jxxghp
0ca5f5bd89 fix timeout 2024-03-25 23:06:30 +08:00
jxxghp
25e0c25bc6 fix timeout 2024-03-25 23:01:50 +08:00
jxxghp
3f8453f054 fix 2024-03-25 20:14:24 +08:00
jxxghp
cf259af2d1 feat: plugin install statistics 2024-03-25 18:02:57 +08:00
jxxghp
0b70f74553 fix site test 2024-03-24 21:33:41 +08:00
jxxghp
f0bc5d737b - Bug fixes 2024-03-24 15:45:20 +08:00
jxxghp
181d87f68e fix mtorrent 2024-03-24 15:31:00 +08:00
jxxghp
e37ac4da6a v1.7.6
- M-Team (馒头) search now uses an ApiKey: first create an access token under Console -> Laboratory; after the site cookie is maintained manually, the ApiKey is fetched and cached automatically. If the ApiKey changes, manually trigger a site edit to clear the cache.
- Search results for multiple aliases are now merged so searches are not incomplete

Note: apart from search and download, M-Team sign-in, statistics and brushing features still use the cookie; assess the risk yourself.
2024-03-24 14:01:20 +08:00
jxxghp
bd7ca7fa60 feat:m-team x-api-key 2024-03-24 13:38:36 +08:00
jxxghp
96de772119 fix mtorrent 2024-03-24 10:20:12 +08:00
jxxghp
72b6556c62 add SEARCH_MULTIPLE_NAME 2024-03-24 08:26:59 +08:00
jxxghp
e4bb182668 feat: search more results 2024-03-24 08:13:08 +08:00
jxxghp
595d097235 v1.7.5
- Authentication sites add 青蛙 🐸; 蝴蝶 🦋 supports IPv4 domains; adapted to the new M-Team UI
- Faster plugin market loading
- Plugin logs are shown in reverse order
2024-03-23 19:01:09 +08:00
jxxghp
9b53aad34f fix mtorrent 2024-03-23 13:46:06 +08:00
jxxghp
e92a2e1ff1 Merge pull request #1728 from developer-wlj/wlj0323 2024-03-23 13:38:33 +08:00
mayun110
764359c3e8 fix 2024-03-23 13:18:36 +08:00
mayun110
abd1a51863 fix: labels by mTorrent 2024-03-23 12:26:49 +08:00
jxxghp
2f05f8dc4d fix mtorrent 2024-03-23 09:50:03 +08:00
jxxghp
23c678e71e fix mtorrent 2024-03-23 09:42:11 +08:00
jxxghp
ef67b76453 fix showing the username in download messages 2024-03-22 13:26:07 +08:00
jxxghp
c4e7870f7b Merge pull request #1726 from sundxfansky/main 2024-03-22 06:53:18 +08:00
jxxghp
9cef50436a Merge pull request #1725 from Vincwnt/main 2024-03-22 06:51:40 +08:00
sundxfansky
a15aded0a0 No need to add the time 2024-03-22 04:40:33 +08:00
chenyuan
8ac40dc205 fix: bulk message push failing when deleted users exist 2024-03-21 22:27:01 +08:00
jxxghp
92a5b3d227 feat: multithreaded loading of online plugins 2024-03-21 21:30:26 +08:00
jxxghp
761f1e7a4b feat: multithreaded loading of online plugins 2024-03-21 21:27:54 +08:00
jxxghp
ad0731e1ec Update README.md 2024-03-21 18:27:36 +08:00
jxxghp
a451f12d86 add qingwa 2024-03-21 16:55:57 +08:00
jxxghp
dcde619e77 Plugin logs in reverse order & add a Windows installer guide 2024-03-21 16:28:16 +08:00
jxxghp
92769b27f1 v1.7.4
- Added `Bangumi daily broadcast` to recommendations
- Domains such as `api.themoviedb.org` are now resolved via DOH to avoid DNS poisoning and improve connectivity (controlled by the `DOH_ENABLE` variable, default on)
- Site browsing supports click-to-download
- Faster rendering of some data-heavy pages
2024-03-19 17:27:52 +08:00
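DOH resolvers commonly return answers as JSON; parsing such a response to pick out A records might look like the sketch below. The field layout follows the common DoH JSON format, and the sample payload is made up for illustration, not a real MoviePilot response:

```python
import json

# A DoH JSON answer in the shape returned by common resolvers;
# the values in this sample are fabricated.
sample = json.loads("""
{"Status": 0,
 "Answer": [
   {"name": "api.themoviedb.org.", "type": 5, "data": "tmdb.example-cdn.net."},
   {"name": "tmdb.example-cdn.net.", "type": 1, "data": "203.0.113.10"}
 ]}
""")

def extract_ips(answer: dict) -> list:
    """Keep only A records (type 1); CNAMEs (type 5) are skipped."""
    return [r["data"] for r in answer.get("Answer", []) if r["type"] == 1]

print(extract_ips(sample))  # prints ['203.0.113.10']
```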
jxxghp
fa83168b92 feat: add a DOH switch 2024-03-19 12:26:04 +08:00
jxxghp
f96295de3a add download api 2024-03-18 23:27:54 +08:00
jxxghp
6cecb3c6a6 fix bug 2024-03-18 20:02:03 +08:00
jxxghp
b6486035c4 add Bangumi 2024-03-18 19:02:34 +08:00
jxxghp
f7c1d28c0f remove cloudflared 2024-03-18 08:23:43 +08:00
jxxghp
47c2ae1c08 fix doh domains 2024-03-18 07:19:56 +08:00
jxxghp
c03f24dcf5 Update doh.py 2024-03-17 23:40:19 +08:00
jxxghp
6e2f5762b4 add doh 2024-03-17 23:30:50 +08:00
jxxghp
75330a08cc add doh 2024-03-17 23:25:04 +08:00
jxxghp
3f17e371c3 add doh 2024-03-17 23:15:21 +08:00
jxxghp
a820341ec7 rollback cloudflared 2024-03-17 22:32:15 +08:00
jxxghp
c1f04f5631 Merge pull request #1697 from DDS-Derek/main 2024-03-17 19:06:31 +08:00
DDSRem
a121e45b94 fix: container resolv cannot be modified 2024-03-17 18:43:54 +08:00
DDSRem
885ee976b2 feat: better cloudflared install 2024-03-17 18:15:29 +08:00
jxxghp
e6229beb94 add cloudflared 2024-03-17 16:52:13 +08:00
jxxghp
f2a40e1ec3 fix themoviedb seasons not showing 2024-03-17 15:59:21 +08:00
jxxghp
5f80aa5b7c - Fix Douban subscription and local CookieCloud service issues 2024-03-17 15:12:24 +08:00
jxxghp
14ff1e9af6 fix resource 2024-03-17 15:09:10 +08:00
jxxghp
49ab5ac709 - Fix Douban subscription and local CookieCloud service issues 2024-03-17 13:43:11 +08:00
jxxghp
74c7a1927b fix cookiecloud 2024-03-17 13:42:01 +08:00
jxxghp
cbd704373c try fix cookiecloud 2024-03-17 12:57:38 +08:00
jxxghp
a05724f664 fix automatic correction of site URL format 2024-03-17 12:21:32 +08:00
jxxghp
97d0fc046a fix Douban subscription bug 2024-03-17 11:27:54 +08:00
jxxghp
6248e34400 fix v1.7.3 2024-03-17 10:00:59 +08:00
jxxghp
a442dab85b fix nginx.conf 2024-03-17 09:51:04 +08:00
jxxghp
d4514edba6 v1.7.3
- `Shortcuts` adds a message center
- Built-in local CookieCloud server support: cookie data is encrypted and saved in the user config directory; enable it under `Settings` - `Sites`
- Improved recommendation detail pages; Douban recommendation details show the Douban data source directly
- Fixed `蜜柑` being unsearchable
2024-03-17 09:09:21 +08:00
jxxghp
0c581565ad Update message.py 2024-03-16 22:21:12 +08:00
jxxghp
350def0a6f Update message.py 2024-03-16 22:20:14 +08:00
jxxghp
5b3027c0a7 fix reload 2024-03-16 21:06:52 +08:00
jxxghp
e4b90ca8f7 fix #1694 2024-03-16 20:40:02 +08:00
jxxghp
d917b00055 Merge pull request #1694 from lingjiameng/main
CookieCloud settings support live updates
2024-03-16 20:36:05 +08:00
s0mE
cc94c6c367 Merge branch 'jxxghp:main' into main 2024-03-16 19:24:25 +08:00
ljmeng
6410051e3a CookieCloud settings support live loading 2024-03-16 19:23:06 +08:00
jxxghp
aaa1b80edf fix resource-pack update bug 2024-03-16 18:38:25 +08:00
jxxghp
f345d94009 fix README.md 2024-03-16 18:28:09 +08:00
jxxghp
550fe26d76 Merge pull request #1693 from lingjiameng/main
Integrate the CookieCloud server side
2024-03-16 17:52:49 +08:00
jxxghp
7ad498b3a3 fix 2024-03-16 17:06:24 +08:00
jxxghp
20eb0b4635 fix message 2024-03-16 16:29:14 +08:00
ljmeng
747dc3fafe Disable the local CookieCloud service by default 2024-03-16 15:40:10 +08:00
s0mE
4708fbb3cb Merge branch 'jxxghp:main' into main 2024-03-16 15:36:20 +08:00
ljmeng
6ba40edeb4 Merge branch 'main' of github.com:lingjiameng/MoviePilot 2024-03-16 15:35:02 +08:00
ljmeng
79cb28faf9 Disable the local cookiecloud service by default in the config 2024-03-16 15:34:46 +08:00
jxxghp
9acf05f334 fix #1691 2024-03-16 15:31:04 +08:00
jxxghp
d0af1bf075 Merge pull request #1691 from hoey94/main 2024-03-16 13:53:10 +08:00
hoey94
f8a95cec4a fix: TR remote-control plugin speed-limit issue 104 2024-03-16 12:37:21 +08:00
jxxghp
3cd672fa8d fix 2024-03-16 08:40:36 +08:00
jxxghp
fe03638552 fix api 2024-03-16 08:39:57 +08:00
ljmeng
1ae220c654 Integrate the CookieCloud server side 2024-03-16 04:48:34 +08:00
jxxghp
75c7e71ee6 Merge pull request #1689 from hoey94/main 2024-03-15 19:14:26 +08:00
hoey94
4619158b99 fix: speed-limit switch bug 104 2024-03-15 18:23:44 +08:00
jxxghp
3f88907ba9 fix bug 2024-03-15 18:17:04 +08:00
jxxghp
ae6440bd0a Merge pull request #1683 from lingjiameng/main 2024-03-15 07:55:01 +08:00
s0mE
261f5fc0c6 Merge branch 'jxxghp:main' into main 2024-03-14 23:26:58 +08:00
jxxghp
a5d044d535 fix message 2024-03-14 20:36:15 +08:00
jxxghp
6e607ca89f fix: improve recommendation navigation
feat: persist messages to the database
2024-03-14 19:44:15 +08:00
jxxghp
06e4b9ad83 Merge remote-tracking branch 'origin/main' 2024-03-14 19:15:22 +08:00
jxxghp
c755dc9b85 fix: improve recommendation navigation
feat: persist messages to the database
2024-03-14 19:15:13 +08:00
jxxghp
209451d5f9 Merge pull request #1678 from HankunYu/main 2024-03-14 06:57:31 +08:00
HankunYu
60b2d30f42 Update README.md
Add a note on using a reverse proxy, fixing logs taking extremely long (over ten minutes) to load behind an https reverse proxy.
2024-03-13 18:54:55 +00:00
ljmeng
399d26929d CookieCloud now decrypts locally for better security 2024-03-14 02:35:22 +08:00
jxxghp
f50c2e59a9 fix #1674 2024-03-13 14:54:37 +08:00
jxxghp
1cd768b3d0 v1.7.2
- Site indexing adds `蟹黄堡`; fixed indexing issues for `蝴蝶` and `蜜柑`
- To counter the mass removal of Chinese titles on themoviedb, Singapore (zh-sg) Chinese titles are additionally used for search and scraping
- The metadata recognition cache time can be configured (`META_CACHE_EXPIRE`, in hours)
- Fixed the secondary anime category under the original tv category not working when no anime categorization strategy is set
- Improved the plugin upgrade experience
2024-03-13 08:21:59 +08:00
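The `META_CACHE_EXPIRE` behavior named in this release, cache entries valid for a configurable number of hours, can be illustrated with a tiny TTL cache. This is a sketch, not MoviePilot's implementation; the 3-day default is taken from the variable's description:

```python
import time

class MetaCache:
    """Tiny TTL cache mirroring the META_CACHE_EXPIRE idea:
    entries older than `expire_hours` are treated as misses."""

    def __init__(self, expire_hours: float = 72):  # assumed 3-day default
        self.expire = expire_hours * 3600
        self._data = {}

    def set(self, key, value, now=None):
        # `now` is injectable so expiry is easy to demonstrate
        self._data[key] = ((now if now is not None else time.time()), value)

    def get(self, key, now=None):
        entry = self._data.get(key)
        if entry is None:
            return None
        ts, value = entry
        if (now if now is not None else time.time()) - ts > self.expire:
            del self._data[key]  # expired: evict and miss
            return None
        return value
```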
jxxghp
abc26b65ed fix #1645 support the 蝴蝶 torrent link format 2024-03-12 17:01:41 +08:00
jxxghp
dc1a41da90 fix: reduce unnecessary checks 2024-03-12 13:48:37 +08:00
jxxghp
a95dac1b32 fix directory detection 2024-03-12 13:36:33 +08:00
jxxghp
18d9620687 #1653 add the Singapore title to search terms; when the main title is not Chinese, consider using the Chinese Singapore title 2024-03-12 11:55:47 +08:00
jxxghp
8808dcee52 fix 1659 2024-03-12 11:16:10 +08:00
jxxghp
17adc4deab Merge pull request #1662 from thsrite/main 2024-03-11 16:36:19 +08:00
thsrite
9351489166 fix: media info recognized without consulting the cache should still refresh the cache 2024-03-11 16:34:53 +08:00
jxxghp
e2148cb77f fix 2024-03-11 16:28:36 +08:00
jxxghp
e322204094 Merge pull request #1661 from jeblove/main 2024-03-11 16:25:05 +08:00
jxxghp
0fa884157a Support configuring the meta cache time 2024-03-11 16:23:07 +08:00
jeblove
96468213fe Merge branch 'main' of https://github.com/jeblove/MoviePilot 2024-03-11 16:17:36 +08:00
jeblove
d044a9db00 fix continue-watching episode images 2024-03-11 16:17:10 +08:00
jxxghp
d5f5e0d526 Merge pull request #1660 from thsrite/main 2024-03-11 15:59:20 +08:00
thsrite
14a3bb8fc2 add db queries for subscriptions and download history by type and time (plugin methods) 2024-03-11 15:56:19 +08:00
jxxghp
5921d43ae8 fix #1655 2024-03-11 12:34:19 +08:00
jxxghp
635061c054 Merge pull request #1654 from jeblove/main 2024-03-11 11:32:27 +08:00
jeblove
3c8c6e5375 fix syntax issue 2024-03-11 11:27:24 +08:00
jeblove
dd063bb16b fix WeChat push image for played episodes 2024-03-11 01:57:57 +08:00
jeblove
750711611b fix syntax issue 2024-03-11 00:15:55 +08:00
jxxghp
d3983c51c2 Merge pull request #1652 from jeblove/main 2024-03-10 18:39:16 +08:00
jeblove
b9dec73773 fix syntax issue 2024-03-10 18:10:09 +08:00
jeblove
b310367d25 fix WeChat playback push image issue 2024-03-10 17:50:01 +08:00
jxxghp
55beea87fd Merge pull request #1649 from thsrite/main 2024-03-10 11:24:57 +08:00
thsrite
4510382f74 fix tv anime category not taking effect 2024-03-10 09:27:48 +08:00
138 changed files with 5916 additions and 1967 deletions

.gitignore (vendored, 3 changes)

@@ -10,7 +10,10 @@ app/helper/*.pyd
 app/helper/*.bin
 app/plugins/**
 !app/plugins/__init__.py
+config/cookies/**
 config/user.db
 config/sites/**
 *.pyc
 *.log
+.vscode
+venv

Dockerfile

@@ -33,6 +33,7 @@ RUN apt-get update -y \
     fuse3 \
     rsync \
     ffmpeg \
+    nano \
     && \
     if [ "$(uname -m)" = "x86_64" ]; \
     then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
@@ -40,6 +41,10 @@ RUN apt-get update -y \
     then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
     fi \
     && curl https://rclone.org/install.sh | bash \
+    && echo "deb http://deb.debian.org/debian bookworm main" >> /etc/apt/sources.list \
+    && apt-get update -y \
+    && apt-get install -y --only-upgrade ca-certificates \
+    && sed -i '/deb http:\/\/deb\.debian\.org\/debian bookworm main/d' /etc/apt/sources.list \
     && apt-get autoremove -y \
     && apt-get clean -y \
     && rm -rf \

README.md

@@ -21,7 +21,11 @@
 ### 2. **Install a CookieCloud server (optional)**
-MoviePilot ships with a public CookieCloud server. To self-host, see the [CookieCloud](https://github.com/easychen/CookieCloud) project; for the docker image click [here](https://hub.docker.com/r/easychen/cookiecloud).
+CookieCloud quickly syncs site data saved in your browser to MoviePilot. The following service modes are supported:
+- Public CookieCloud remote server: the default server address is https://movie-pilot.org/cookiecloud
+- Built-in local cookie service: turning on `Enable local CookieCloud server` under `Settings` - `Sites` starts the built-in CookieCloud service at `http://localhost:${NGINX_PORT}/cookiecloud/`; cookie data is stored encrypted in the `cookies` file under the config directory
+- Self-hosted CookieCloud server: see the [CookieCloud](https://github.com/easychen/CookieCloud) project; for the docker image click [here](https://hub.docker.com/r/easychen/cookiecloud)
 **Disclaimer:** This project does not collect sensitive user data. Cookie syncing is provided by the CookieCloud project, not by this project. Technically, CookieCloud uses end-to-end encryption: as long as you do not leak your `user KEY` and `end-to-end encryption password`, no third party (including the server operator) can steal any user information. If you are not comfortable with this, do not use the public service or this project; any information leak after use is unrelated to this project!
@@ -43,7 +47,8 @@ MoviePilot needs to be paired with a downloader and a media server.
 - Windows
-Download [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases); double-click to run, the config directory is generated automatically; then visit http://localhost:3000
+1. Standalone executable: download [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases); double-click to run, the config directory is generated automatically; then visit http://localhost:3000
+2. Installer version: [Windows-MoviePilot](https://github.com/developer-wlj/Windows-MoviePilot)
 - Synology package
@@ -77,7 +82,7 @@ MoviePilot needs to be paired with a downloader and a media server.
 - **❗AUTH_SITE:** authentication site (site features unlock only after authentication succeeds); multiple sites can be configured, separated by `,`, e.g. `iyuu,hhclub`; authentication is attempted in order until one site succeeds.
 After configuring `AUTH_SITE`, set that site's authentication parameters per the table below.
-Auth resource `v1.1.4` supports: `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba`/`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`
+Auth resource `v1.2.8+` supports: `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba`/`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`/`qingwa`/`discfan`
 | Site | Parameters |
 |:------------:|:-----------------------------------------------------:|
@@ -97,6 +102,8 @@ MoviePilot needs to be paired with a downloader and a media server.
 | ptvicomo | `PTVICOMO_UID`: user ID<br/>`PTVICOMO_PASSKEY`: passkey |
 | agsvpt | `AGSVPT_UID`: user ID<br/>`AGSVPT_PASSKEY`: passkey |
 | hdkyl | `HDKYL_UID`: user ID<br/>`HDKYL_PASSKEY`: passkey |
+| qingwa | `QINGWA_UID`: user ID<br/>`QINGWA_PASSKEY`: passkey |
+| discfan | `DISCFAN_UID`: user ID<br/>`DISCFAN_PASSKEY`: passkey |
 ### 2. **Environment variables / config file**
@@ -106,6 +113,8 @@ MoviePilot needs to be paired with a downloader and a media server.
 - **❗SUPERUSER:** superuser name, default `admin`; log into the admin console with this user after installation. **Note: changing this value after the first start has no effect unless the database file is deleted!**
 - **❗API_TOKEN:** API token, default `moviepilot`; it must be appended as `?token=` in media-server webhook and WeChat callback URLs; change it to a complex string
 - **BIG_MEMORY_MODE:** big memory mode, default `false`; when on, more items are cached, using more memory but responding faster
+- **DOH_ENABLE:** DNS over HTTPS switch, `true`/`false`, default `true`; when on, DOH resolves domains such as api.themoviedb.org to reduce DNS poisoning and improve connectivity
+- **META_CACHE_EXPIRE:** metadata recognition cache expiry in hours (numeric); unset or 0 uses the system default (7 days in big-memory mode, otherwise 3 days); raising it reduces themoviedb requests
 - **GITHUB_TOKEN:** Github token that raises the rate limit for auto-update and plugin-install requests to the Github API, format: ghp_****
 - **DEV:** developer mode, `true`/`false`, default `false`; when on, all scheduled tasks are paused
 - **AUTO_UPDATE_RESOURCE:** auto-detect and update resource packs (site indexes, authentication, etc.) at startup, `true`/`false`, default `true`; requires Github connectivity; Docker image only
@@ -113,17 +122,19 @@ MoviePilot需要配套下载器和媒体服务器配合使用。
- **TMDB_API_DOMAIN** TMDB API地址默认`api.themoviedb.org`,也可配置为`api.tmdb.org`、`tmdb.movie-pilot.org` 或其它中转代理服务地址,能连通即可 - **TMDB_API_DOMAIN** TMDB API地址默认`api.themoviedb.org`,也可配置为`api.tmdb.org`、`tmdb.movie-pilot.org` 或其它中转代理服务地址,能连通即可
- **TMDB_IMAGE_DOMAIN** TMDB图片地址默认`image.tmdb.org`可配置为其它中转代理以加速TMDB图片显示`static-mdb.v.geilijiasu.com` - **TMDB_IMAGE_DOMAIN** TMDB图片地址默认`image.tmdb.org`可配置为其它中转代理以加速TMDB图片显示`static-mdb.v.geilijiasu.com`
- **WALLPAPER** 登录首页电影海报,`tmdb`/`bing`,默认`tmdb` - **WALLPAPER** 登录首页电影海报,`tmdb`/`bing`,默认`tmdb`
- **RECOGNIZE_SOURCE** 媒体信息识别来源,`themoviedb`/`douban`,默认`themoviedb`,使用`douban`时不支持二级分类 - **RECOGNIZE_SOURCE** 媒体信息识别来源,`themoviedb`/`douban`,默认`themoviedb`,使用`douban`时不支持二级分类,且受豆瓣控流限制
- **FANART_ENABLE** Fanart开关`true`/`false`,默认`true`,关闭后刮削的图片类型会大幅减少 - **FANART_ENABLE** Fanart开关`true`/`false`,默认`true`,关闭后刮削的图片类型会大幅减少
- **SCRAP_SOURCE** 刮削元数据及图片使用的数据源,`themoviedb`/`douban`,默认`themoviedb` - **SCRAP_SOURCE** 刮削元数据及图片使用的数据源,`themoviedb`/`douban`,默认`themoviedb`
- **SCRAP_FOLLOW_TMDB** 新增已入库媒体是否跟随TMDB信息变化`true`/`false`,默认`true`,为`false`时即使TMDB信息变化了也会仍然按历史记录中已入库的信息进行刮削 - **SCRAP_FOLLOW_TMDB** 新增已入库媒体是否跟随TMDB信息变化`true`/`false`,默认`true`,为`false`时即使TMDB信息变化了也会仍然按历史记录中已入库的信息进行刮削
---
- **AUTO_DOWNLOAD_USER** User IDs (from the notification channel) for which remote interactive search auto-picks the best resource to download; separate multiple IDs with `,`, set to `all` for every user; when unset, users must pick a resource manually or reply `0` before the best resource is picked automatically
- **DOWNLOAD_SUBTITLE** Download site subtitles, `true`/`false`, default `true`
- **SEARCH_MULTIPLE_NAME** Whether to search with multiple names, `true`/`false`, default `false`; when enabled, every name is searched and results are more complete but searches take longer; when disabled, searching stops as soon as one name returns results or all names have been tried
- **SUBSCRIBE_STATISTIC_SHARE** Whether to share subscription data anonymously, used to compute and display popular subscriptions, `true`/`false`, default `true`
- **PLUGIN_STATISTIC_SHARE** Whether to share plugin installation statistics anonymously, used to count and display plugin download/install numbers, `true`/`false`, default `true`
---
- **OCR_HOST** OCR server address, format: `http(s)://ip:port`, used to recognize site captchas for automatic login, Cookie retrieval, etc.; when unset, the built-in server `https://movie-pilot.org` is used; you can self-host with [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr).
---
- **MOVIE_RENAME_FORMAT** Movie renaming format, based on jinja2 syntax
Fields supported by `MOVIE_RENAME_FORMAT`:
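Since the rename format is jinja2-based, each `{{placeholder}}` is substituted with a recognized media field. A minimal standalone sketch of that substitution (the template string and field names here are illustrative assumptions, not MoviePilot's defaults):

```python
import re

# Hypothetical rename template in jinja2-style placeholder syntax
RENAME_FORMAT = "{{title}} ({{year}})/{{title}} ({{year}}) - {{videoFormat}}"

def render_name(template: str, fields: dict) -> str:
    # Replace each {{name}} with its field value, mimicking what a jinja2
    # Template.render() call does for simple variable substitution
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(fields.get(m.group(1), "")), template)

print(render_name(RENAME_FORMAT, {"title": "Inception", "year": 2010, "videoFormat": "1080p"}))
# Inception (2010)/Inception (2010) - 1080p
```

The real implementation supports full jinja2 expressions (conditionals, filters), so this regex version only covers the plain-variable case.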
@@ -193,6 +204,7 @@ MoviePilot requires a companion downloader and media server.
- User authentication info must be set via environment variables and authentication must succeed before site-related features can be used; while unauthenticated, site-related plugins are hidden as well.
### 3. **File Organization**
- By default, finished downloads are organized into the library and scraped automatically by monitoring the downloader; the `下载器监控` (downloader monitoring) switch must be enabled in the backend, and only tasks added through MoviePilot are processed.
- Downloader monitoring polls every 5 minutes by default; with qbittorrent, enter `curl "http://localhost:3000/api/v1/transfer/now?token=moviepilot"` under `QB设置`->`下载完成时运行外部程序` (QB Settings -> Run external program on torrent completion) to organize immediately after a download finishes instead of waiting for the next poll (adjust address, port, and token as needed; wget also works in place of curl).
- Use plugins such as `目录监控` (directory monitoring) for more flexible automatic organization.
### 4. **Notifications & Interaction**
- Remote management and subscription downloads are supported via `微信`/`Telegram`/`Slack`/`SynologyChat`/`VoceChat` and other channels; 微信/Telegram get an automatically added operation menu (微信 limits the number of menu entries, so some may not show).
@@ -218,6 +230,14 @@ location / {
    proxy_set_header X-Forwarded-Proto $scheme;
}
```
- When the reverse proxy uses ssl, enable `http2`, otherwise log loading may be very slow or unavailable. Using `Nginx` as an example:
```nginx configuration
server {
    listen 443 ssl;
    http2 on;
    # ...
}
```
- A newly created WeCom (企业微信) app needs a proxy with a fixed public IP to receive messages; add the following to the proxy:
```nginx configuration
location /cgi-bin/gettoken {

View File

@@ -1,7 +1,8 @@
from fastapi import APIRouter

from app.api.endpoints import login, user, site, message, webhook, subscribe, \
    media, douban, search, plugin, tmdb, history, system, download, dashboard, \
    filebrowser, transfer, mediaserver, bangumi

api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -22,3 +23,5 @@ api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboar
api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])
api_router.include_router(mediaserver.router, prefix="/mediaserver", tags=["mediaserver"])
api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])

View File

@@ -0,0 +1,86 @@
from typing import List, Any

from fastapi import APIRouter, Depends

from app import schemas
from app.chain.bangumi import BangumiChain
from app.core.context import MediaInfo
from app.core.security import verify_token

router = APIRouter()


@router.get("/calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def calendar(page: int = 1,
             count: int = 30,
             _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Browse the Bangumi daily broadcast schedule
    """
    medias = BangumiChain().calendar()
    if medias:
        return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
    return []


@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.MediaPerson])
def bangumi_credits(bangumiid: int,
                    page: int = 1,
                    count: int = 20,
                    _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query the Bangumi cast & crew list
    """
    persons = BangumiChain().bangumi_credits(bangumiid)
    if persons:
        return persons[(page - 1) * count: page * count]
    return []


@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
                      page: int = 1,
                      count: int = 20,
                      _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query Bangumi recommendations
    """
    medias = BangumiChain().bangumi_recommend(bangumiid)
    if medias:
        return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
    return []


@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def bangumi_person(person_id: int,
                   _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query person details by person ID
    """
    return BangumiChain().person_detail(person_id=person_id)


@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def bangumi_person_credits(person_id: int,
                           page: int = 1,
                           _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query a person's credited works by person ID
    """
    medias = BangumiChain().person_credits(person_id=person_id)
    if medias:
        return [media.to_dict() for media in medias[(page - 1) * 20: page * 20]]
    return []


@router.get("/{bangumiid}", summary="查询Bangumi详情", response_model=schemas.MediaInfo)
def bangumi_info(bangumiid: int,
                 _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query Bangumi details
    """
    info = BangumiChain().bangumi_info(bangumiid)
    if info:
        return MediaInfo(bangumi_info=info).to_dict()
    else:
        return schemas.MediaInfo()
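All of the list endpoints above page their results with the same 1-based slice pattern; isolated as a plain function (an editorial sketch, not code from the repository):

```python
def paginate(items: list, page: int = 1, count: int = 30) -> list:
    # Same slice the endpoints use: 1-based page number, `count` items per page;
    # a slice past the end of the list safely yields a short or empty page
    return items[(page - 1) * count: page * count]

medias = list(range(65))
print(paginate(medias, page=3, count=30))  # [60, 61, 62, 63, 64]
```

Because out-of-range slices return `[]` rather than raising, no explicit bounds checking is needed in the endpoints.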

View File

@@ -13,7 +13,7 @@ from app.utils.http import RequestUtils
router = APIRouter()


@router.get("/img", summary="豆瓣图片代理")
def douban_img(imgurl: str) -> Any:
    """
    Douban image proxy
@@ -28,6 +28,28 @@ def douban_img(imgurl: str) -> Any:
    return None


@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def douban_person(person_id: int,
                  _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query person details by person ID
    """
    return DoubanChain().person_detail(person_id=person_id)


@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def douban_person_credits(person_id: int,
                          page: int = 1,
                          _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query a person's credited works by person ID
    """
    medias = DoubanChain().person_credits(person_id=person_id, page=page)
    if medias:
        return [media.to_dict() for media in medias]
    return []
@router.get("/showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def movie_showing(page: int = 1,
                  count: int = 30,
@@ -36,10 +58,9 @@ def movie_showing(page: int = 1,
    Browse Douban movies now in theaters
    """
    movies = DoubanChain().movie_showing(page=page, count=count)
    if movies:
        return [media.to_dict() for media in movies]
    return []
@router.get("/movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
@@ -53,13 +74,9 @@ def douban_movies(sort: str = "R",
    """
    movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
                                           sort=sort, tags=tags, page=page, count=count)
    if movies:
        return [media.to_dict() for media in movies]
    return []
@router.get("/tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
@@ -73,14 +90,9 @@ def douban_tvs(sort: str = "R",
    """
    tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
                                        sort=sort, tags=tags, page=page, count=count)
    if tvs:
        return [media.to_dict() for media in tvs]
    return []
@router.get("/movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
@@ -91,7 +103,9 @@ def movie_top250(page: int = 1,
    Browse Douban Top 250 movies
    """
    movies = DoubanChain().movie_top250(page=page, count=count)
    if movies:
        return [media.to_dict() for media in movies]
    return []
@router.get("/tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
@@ -102,7 +116,9 @@ def tv_weekly_chinese(page: int = 1,
    Weekly Chinese TV chart
    """
    tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
    if tvs:
        return [media.to_dict() for media in tvs]
    return []
@router.get("/tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
@@ -113,7 +129,9 @@ def tv_weekly_global(page: int = 1,
    Weekly global TV chart
    """
    tvs = DoubanChain().tv_weekly_global(page=page, count=count)
    if tvs:
        return [media.to_dict() for media in tvs]
    return []
@router.get("/tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
@@ -124,7 +142,9 @@ def tv_animation(page: int = 1,
    Popular animated series
    """
    tvs = DoubanChain().tv_animation(page=page, count=count)
    if tvs:
        return [media.to_dict() for media in tvs]
    return []
@router.get("/movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
@@ -135,7 +155,9 @@ def movie_hot(page: int = 1,
    Popular movies
    """
    movies = DoubanChain().movie_hot(page=page, count=count)
    if movies:
        return [media.to_dict() for media in movies]
    return []
@router.get("/tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
@@ -146,28 +168,25 @@ def tv_hot(page: int = 1,
    Popular TV series
    """
    tvs = DoubanChain().tv_hot(page=page, count=count)
    if tvs:
        return [media.to_dict() for media in tvs]
    return []


@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.MediaPerson])
def douban_credits(doubanid: str,
                   type_name: str,
                   page: int = 1,
                   _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query the cast by Douban ID, type_name: 电影/电视剧
    """
    mediatype = MediaType(type_name)
    if mediatype == MediaType.MOVIE:
        return DoubanChain().movie_credits(doubanid=doubanid)
    elif mediatype == MediaType.TV:
        return DoubanChain().tv_credits(doubanid=doubanid)
    return []
@router.get("/recommend/{doubanid}/{type_name}", summary="豆瓣推荐电影/电视剧", response_model=List[schemas.MediaInfo])
@@ -179,15 +198,14 @@ def douban_recommend(doubanid: str,
    """
    mediatype = MediaType(type_name)
    if mediatype == MediaType.MOVIE:
        medias = DoubanChain().movie_recommend(doubanid=doubanid)
    elif mediatype == MediaType.TV:
        medias = DoubanChain().tv_recommend(doubanid=doubanid)
    else:
        return []
    if medias:
        return [media.to_dict() for media in medias]
    return []
@router.get("/{doubanid}", summary="查询豆瓣详情", response_model=schemas.MediaInfo)

View File

@@ -4,6 +4,7 @@ from fastapi import APIRouter, Depends
from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
@@ -14,7 +15,7 @@ router = APIRouter()
@router.get("/", summary="正在下载", response_model=List[schemas.DownloadingTorrent])
def read(
        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query currently downloading tasks
@@ -22,14 +23,13 @@ def read_downloading(
    return DownloadChain().downloading()


@router.post("/", summary="添加下载(含媒体信息)", response_model=schemas.Response)
def download(
        media_in: schemas.MediaInfo,
        torrent_in: schemas.TorrentInfo,
        current_user: User = Depends(get_current_active_user)) -> Any:
    """
    Add a download task (with media info)
    """
    # Metadata
    metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
@@ -45,14 +45,42 @@ def add_downloading(
        media_info=mediainfo,
        torrent_info=torrentinfo
    )
    did = DownloadChain().download_single(context=context, username=current_user.name)
    return schemas.Response(success=True if did else False, data={
        "download_id": did
    })


@router.post("/add", summary="添加下载(不含媒体信息)", response_model=schemas.Response)
def add(
        torrent_in: schemas.TorrentInfo,
        current_user: User = Depends(get_current_active_user)) -> Any:
    """
    Add a download task (without media info)
    """
    # Metadata
    metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
    # Media info
    mediainfo = MediaChain().recognize_media(meta=metainfo)
    if not mediainfo:
        return schemas.Response(success=False, message="无法识别媒体信息")
    # Torrent info
    torrentinfo = TorrentInfo()
    torrentinfo.from_dict(torrent_in.dict())
    # Context
    context = Context(
        meta_info=metainfo,
        media_info=mediainfo,
        torrent_info=torrentinfo
    )
    did = DownloadChain().download_single(context=context, username=current_user.name)
    return schemas.Response(success=True if did else False, data={
        "download_id": did
    })
@router.get("/start/{hashString}", summary="开始任务", response_model=schemas.Response)
def start(
        hashString: str,
        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
@@ -63,7 +91,7 @@ def start_downloading(
@router.get("/stop/{hashString}", summary="暂停任务", response_model=schemas.Response)
def stop(
        hashString: str,
        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
@@ -74,7 +102,7 @@ def stop_downloading(
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def info(
        hashString: str,
        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """

View File

@@ -9,8 +9,10 @@ from app.chain.transfer import TransferChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.db.userauth import get_current_active_superuser
from app.schemas.types import EventType

router = APIRouter()
@@ -103,3 +105,13 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
    # Delete the record
    TransferHistory.delete(db, history_in.id)
    return schemas.Response(success=True)


@router.get("/empty/transfer", summary="清空转移历史记录", response_model=schemas.Response)
def delete_transfer_history(db: Session = Depends(get_db),
                            _: User = Depends(get_current_active_superuser)) -> Any:
    """
    Clear all transfer history records
    """
    TransferHistory.truncate(db)
    return schemas.Response(success=True)

View File

@@ -1,7 +1,7 @@
from datetime import timedelta
from typing import Any

from fastapi import APIRouter, Depends, HTTPException, Form
from fastapi.security import OAuth2PasswordRequestForm
from sqlalchemy.orm import Session
@@ -21,40 +21,41 @@ router = APIRouter()
@router.post("/access-token", summary="获取token", response_model=schemas.Token)
async def login_access_token(
        db: Session = Depends(get_db),
        form_data: OAuth2PasswordRequestForm = Depends(),
        otp_password: str = Form(None)
) -> Any:
    """
    Get an authentication Token
    """
    # Check the local database
    success, user = User.authenticate(
        db=db,
        name=form_data.username,
        password=form_data.password,
        otp_password=otp_password
    )
    if not success:
        # Authentication failed
        if not user:
            # User not found locally; request auxiliary authentication
            logger.warn(f"登录用户 {form_data.username} 本地不存在,尝试辅助认证 ...")
            token = UserChain().user_authenticate(form_data.username, form_data.password)
            if not token:
                logger.warn(f"用户 {form_data.username} 登录失败!")
                raise HTTPException(status_code=401, detail="用户名、密码、二次校验码不正确")
            else:
                logger.info(f"用户 {form_data.username} 辅助认证成功,用户信息: {token},以普通用户登录...")
                # Add the user to the user table
                logger.info(f"创建用户: {form_data.username}")
                user = User(name=form_data.username, is_active=True,
                            is_superuser=False, hashed_password=get_password_hash(token))
                user.create(db)
        else:
            # The user exists but authentication failed
            logger.warn(f"用户 {user.name} 登录失败!")
            raise HTTPException(status_code=401, detail="用户名、密码或二次校验码不正确")
    elif user and not user.is_active:
        raise HTTPException(status_code=403, detail="用户未启用")
    logger.info(f"用户 {user.name} 登录成功!")
    return schemas.Token(
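Stripped of FastAPI and database details, the branch structure above reduces to: local authentication yields a `(success, user)` pair, and only a missing local user triggers auxiliary authentication. A simplified sketch of that decision (function and outcome names are hypothetical, not from the repository):

```python
def decide_login(success, user, aux_token):
    # Mirrors the branch order of login_access_token above (simplified)
    if not success:
        if user is None:
            # No local user: fall back to auxiliary authentication
            return "create_local_user" if aux_token else "reject"
        # Local user exists but the password/OTP check failed
        return "reject"
    if not getattr(user, "is_active", True):
        return "disabled"  # maps to 403 in the real endpoint
    return "accept"

class ActiveUser:
    is_active = True

print(decide_login(False, None, "tok"))       # create_local_user
print(decide_login(True, ActiveUser(), None)) # accept
```

Note that an existing-but-mismatched local user is rejected outright; auxiliary authentication is only consulted when the username is unknown locally.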

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import List, Any, Union

from fastapi import APIRouter, Depends
@@ -63,18 +63,38 @@ def recognize_file2(path: str,
    return recognize_file(path)


@router.get("/search", summary="搜索媒体/人物信息", response_model=List[dict])
def search(title: str,
           type: str = "media",
           page: int = 1,
           count: int = 8,
           _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Fuzzy-search media/person info lists; media: media info, person: person info
    """

    def __get_source(obj: Union[dict, schemas.MediaPerson]):
        """
        Get the source attribute of the object
        """
        if isinstance(obj, dict):
            return obj.get("source")
        return obj.source

    result = []
    if type == "media":
        _, medias = MediaChain().search(title=title)
        if medias:
            result = [media.to_dict() for media in medias]
    else:
        result = MediaChain().search_persons(name=title)
    if result:
        # Sort the results according to the configured source order
        setting_order = settings.SEARCH_SOURCE.split(',') or []
        sort_order = {}
        for index, source in enumerate(setting_order):
            sort_order[source] = index
        result = sorted(result, key=lambda x: sort_order.get(__get_source(x), 4))
    return result[(page - 1) * count:page * count]
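The final sort above ranks mixed dict/object results by a comma-separated source preference; the same idea in isolation (the preference string here is an assumption standing in for `settings.SEARCH_SOURCE`, whose value is not shown in this diff):

```python
SEARCH_SOURCE = "themoviedb,douban,bangumi"  # assumed preference order

def sort_by_source(results, setting: str):
    # Map each source name to its rank; unknown sources sort last
    order = {src: i for i, src in enumerate(setting.split(","))}

    def source_of(obj):
        # Results can be plain dicts or objects exposing a .source attribute
        return obj.get("source") if isinstance(obj, dict) else obj.source

    return sorted(results, key=lambda x: order.get(source_of(x), len(order)))

mixed = [{"source": "douban"}, {"source": "themoviedb"}, {"source": "bangumi"}]
print([m["source"] for m in sort_by_source(mixed, SEARCH_SOURCE)])
# ['themoviedb', 'douban', 'bangumi']
```

Because `sorted` is stable, results from the same source keep their original relative order.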
@router.get("/scrape", summary="刮削媒体信息", response_model=schemas.Response)
@@ -106,28 +126,17 @@ def media_info(mediaid: str, type_name: str,
    Query themoviedb or Douban media info by media ID, type_name: 电影/电视剧
    """
    mtype = MediaType(type_name)
    tmdbid, doubanid, bangumiid = None, None, None
    if mediaid.startswith("tmdb:"):
        tmdbid = int(mediaid[5:])
    elif mediaid.startswith("douban:"):
        doubanid = mediaid[7:]
    elif mediaid.startswith("bangumi:"):
        bangumiid = int(mediaid[8:])
    if not tmdbid and not doubanid and not bangumiid:
        return schemas.MediaInfo()
    # Recognize the media
    mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, mtype=mtype)
    if mediainfo:
        MediaChain().obtain_images(mediainfo)
        return mediainfo.to_dict()

View File

@@ -1,13 +1,13 @@
from typing import Any, List, Dict

from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app import schemas
from app.chain.download import DownloadChain
from app.chain.mediaserver import MediaServerChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
@@ -37,14 +37,14 @@ def play_item(itemid: str) -> schemas.Response:
    })


@router.get("/exists", summary="查询本地是否存在(数据库)", response_model=schemas.Response)
def exists_local(title: str = None,
                 year: int = None,
                 mtype: str = None,
                 tmdbid: int = None,
                 season: int = None,
                 db: Session = Depends(get_db),
                 _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Check whether the media exists locally
    """
@@ -61,35 +61,35 @@ def exists(title: str = None,
        ret_info = {
            "id": exist.item_id
        }
    """
    else:
        # Whether it exists on the media server
        mediainfo = MediaInfo()
        mediainfo.from_dict({
            "title": meta.name,
            "year": year or meta.year,
            "type": mtype or meta.type,
            "tmdb_id": tmdbid,
            "season": season
        })
        exist: schemas.ExistMediaInfo = MediaServerChain().media_exists(
            mediainfo=mediainfo
        )
        if exist:
            ret_info = {
                "id": exist.itemid
            }
    """
    return schemas.Response(success=True if exist else False, data={
        "item": ret_info
    })
@router.post("/notexists", summary="查询缺失媒体信息", response_model=List[schemas.NotExistMediaInfo]) @router.post("/exists_remote", summary="查询已存在的剧集信息(媒体服务器)", response_model=Dict[int, list])
def exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体信息查询媒体库已存在的剧集信息
"""
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
existsinfo: schemas.ExistMediaInfo = MediaServerChain().media_exists(mediainfo=mediainfo)
if not existsinfo:
return []
if media_in.season:
return {
media_in.season: existsinfo.seasons.get(media_in.season) or []
}
return existsinfo.seasons
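The season handling in `exists` above is worth isolating: when a specific season is requested, only that season's episode list is returned (empty if absent); otherwise the whole map comes back. A standalone sketch (the helper name is illustrative):

```python
def filter_seasons(seasons: dict, season=None):
    # Requesting one season returns {season: episodes-or-[]}; otherwise the
    # full {season: episodes} mapping is returned unchanged
    if season:
        return {season: seasons.get(season) or []}
    return seasons

print(filter_seasons({1: [1, 2], 2: [1]}, season=2))  # {2: [1]}
print(filter_seasons({1: [1, 2]}, season=3))          # {3: []}
```

The `or []` guard also normalizes a stored `None` episode list to an empty list, so callers always receive a list value.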

@router.post("/notexists", summary="查询媒体库缺失信息(媒体服务器)", response_model=List[schemas.NotExistMediaInfo])
def not_exists(media_in: schemas.MediaInfo,
               _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query missing movies/episodes for the given media info
    """
    # Media info
    meta = MetaInfo(title=media_in.title)
@@ -101,18 +101,13 @@ def not_exists(media_in: schemas.MediaInfo,
        meta.type = MediaType.TV
    if media_in.year:
        meta.year = media_in.year
    # Convert to a media info object
    mediainfo = MediaInfo()
    mediainfo.from_dict(media_in.dict())
    exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
    mediakey = mediainfo.tmdb_id or mediainfo.douban_id
    if mediainfo.type == MediaType.MOVIE:
        # Movies: return an empty list when the movie already exists, otherwise a list with one empty object
        return [] if exist_flag else [NotExistMediaInfo()]
    elif no_exists and no_exists.get(mediakey):
        # TV: return the missing episodes

View File

@@ -2,17 +2,22 @@ from typing import Union, Any, List
from fastapi import APIRouter, BackgroundTasks, Depends
from fastapi import Request
from sqlalchemy.orm import Session
from starlette.responses import PlainTextResponse

from app import schemas
from app.chain.message import MessageChain
from app.core.config import settings
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.message import Message
from app.db.systemconfig_oper import SystemConfigOper
from app.db.userauth import get_current_active_superuser
from app.log import logger
from app.modules.wechat.WXBizMsgCrypt3 import WXBizMsgCrypt
from app.schemas import NotificationSwitch
from app.schemas.types import SystemConfigKey, NotificationType, MessageChannel

router = APIRouter()
@@ -36,6 +41,39 @@ async def user_message(background_tasks: BackgroundTasks, request: Request):
    return schemas.Response(success=True)


@router.post("/web", summary="接收WEB消息", response_model=schemas.Response)
def web_message(text: str, current_user: User = Depends(get_current_active_superuser)):
    """
    Handle a WEB message
    """
    MessageChain().handle_message(
        channel=MessageChannel.Web,
        userid=current_user.name,
        username=current_user.name,
        text=text
    )
    return schemas.Response(success=True)


@router.get("/web", summary="获取WEB消息", response_model=List[dict])
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
                    db: Session = Depends(get_db),
                    page: int = 1,
                    count: int = 20):
    """
    Get the WEB message list
    """
    ret_messages = []
    messages = Message.list_by_page(db, page=page, count=count)
    for message in messages:
        try:
            ret_messages.append(message.to_dict())
        except Exception as e:
            logger.error(f"获取WEB消息列表失败: {str(e)}")
            continue
    return ret_messages
def wechat_verify(echostr: str, msg_signature: str, def wechat_verify(echostr: str, msg_signature: str,
timestamp: Union[str, int], nonce: str) -> Any: timestamp: Union[str, int], nonce: str) -> Any:
""" """
@@ -103,7 +141,7 @@ def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def set_switchs(switchs: List[NotificationSwitch], def set_switchs(switchs: List[NotificationSwitch],
_: schemas.TokenPayload = Depends(verify_token)) -> Any: _: schemas.TokenPayload = Depends(verify_token)) -> Any:
""" """
查询通知消息渠道开关 设置通知消息渠道开关
""" """
switch_list = [] switch_list = []
for switch in switchs: for switch in switchs:
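
The new `GET /web` endpoint above pages through stored messages with `Message.list_by_page(db, page=page, count=count)`. A minimal sketch of the 1-based offset/limit pagination that call implies (the real helper queries the database; this list-based stand-in is an assumption for illustration):

```python
def list_by_page(items, page: int = 1, count: int = 20):
    # 1-based page index mapped to the slice [offset, offset + count)
    offset = (page - 1) * count
    return items[offset:offset + count]
```

Requesting a page past the end simply yields a short or empty list, which matches the endpoint returning `[]` when there is nothing left.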

View File

@@ -14,16 +14,16 @@ router = APIRouter()
 @router.get("/", summary="All plugins", response_model=List[schemas.Plugin])
-def all_plugins(_: schemas.TokenPayload = Depends(verify_token), state: str = "all") -> Any:
+def all_plugins(_: schemas.TokenPayload = Depends(verify_token), state: str = "all") -> List[schemas.Plugin]:
     """
     List all plugins, local and online; plugin state: installed, market, all
     """
     # Local plugins
     local_plugins = PluginManager().get_local_plugins()
     # Installed plugins
-    installed_plugins = [plugin for plugin in local_plugins if plugin.get("installed")]
+    installed_plugins = [plugin for plugin in local_plugins if plugin.installed]
     # Local plugins that are not installed
-    not_installed_plugins = [plugin for plugin in local_plugins if not plugin.get("installed")]
+    not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
     if state == "installed":
         return installed_plugins
@@ -39,17 +39,17 @@ def all_plugins(_: schemas.TokenPayload = Depends(verify_token), state: str = "a
     # Plugin market listing
     market_plugins = []
     # IDs of installed plugins
-    _installed_ids = [plugin["id"] for plugin in installed_plugins]
+    _installed_ids = [plugin.id for plugin in installed_plugins]
     # Online plugins that are not installed, or that have an update
     for plugin in online_plugins:
-        if plugin["id"] not in _installed_ids:
+        if plugin.id not in _installed_ids:
             market_plugins.append(plugin)
-        elif plugin.get("has_update"):
+        elif plugin.has_update:
             market_plugins.append(plugin)
     # Local plugins that are not installed and not in the online listing
-    _plugin_ids = [plugin["id"] for plugin in market_plugins]
+    _plugin_ids = [plugin.id for plugin in market_plugins]
     for plugin in not_installed_plugins:
-        if plugin["id"] not in _plugin_ids:
+        if plugin.id not in _plugin_ids:
             market_plugins.append(plugin)
     # Return the plugin listing
     if state == "market":
@@ -67,6 +67,14 @@ def installed(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
     return SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []

+
+@router.get("/statistic", summary="Plugin install statistics", response_model=dict)
+def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    Plugin install statistics
+    """
+    return PluginHelper().get_statistic()
+
+
 @router.get("/install/{plugin_id}", summary="Install plugin", response_model=schemas.Response)
 def install(plugin_id: str,
             repo_url: str = "",
@@ -89,8 +97,8 @@ def install(plugin_id: str,
         install_plugins.append(plugin_id)
     # Save settings
     SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
-    # Reload the plugin manager
-    PluginManager().init_config()
+    # Reload the plugin into memory
+    PluginManager().reload_plugin(plugin_id)
     # Register plugin services
     Scheduler().update_plugin_job(plugin_id)
     return schemas.Response(success=True)
@@ -117,6 +125,22 @@ def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token))
     return PluginManager().get_plugin_page(plugin_id)

+
+@router.get("/dashboards", summary="List plugins that provide a dashboard")
+def dashboard_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
+    """
+    Get all plugin dashboards
+    """
+    return PluginManager().get_dashboard_plugins()
+
+
+@router.get("/dashboard/{plugin_id}", summary="Get a plugin's dashboard configuration")
+def plugin_dashboard(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
+    """
+    Get a plugin dashboard by plugin ID
+    """
+    return PluginManager().get_plugin_dashboard(plugin_id)
+
+
 @router.get("/reset/{plugin_id}", summary="Reset plugin configuration", response_model=schemas.Response)
 def reset_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
@@ -125,7 +149,10 @@ def reset_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)
     # Delete the configuration
     PluginManager().delete_plugin_config(plugin_id)
     # Re-initialize the plugin
-    PluginManager().reload_plugin(plugin_id, {})
+    PluginManager().init_plugin(plugin_id, {
+        "enabled": False,
+        "enable": False
+    })
     # Register plugin services
     Scheduler().update_plugin_job(plugin_id)
     return schemas.Response(success=True)
@@ -148,7 +175,7 @@ def set_plugin_config(plugin_id: str, conf: dict,
     # Save the configuration
     PluginManager().save_plugin_config(plugin_id, conf)
     # Re-initialize the plugin
-    PluginManager().reload_plugin(plugin_id, conf)
+    PluginManager().init_plugin(plugin_id, conf)
     # Register plugin services
     Scheduler().update_plugin_job(plugin_id)
     return schemas.Response(success=True)
@@ -168,8 +195,8 @@ def uninstall_plugin(plugin_id: str,
             break
     # Save
     SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
-    # Reload the plugin manager
-    PluginManager().init_config()
+    # Remove the plugin
+    PluginManager().remove_plugin(plugin_id)
     # Remove plugin services
     Scheduler().remove_plugin_job(plugin_id)
     return schemas.Response(success=True)
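
The `all_plugins` hunks above switch from dict access to attribute access while merging installed, online, and not-installed local plugins into the market listing. The merge rule can be sketched standalone (the `Plugin` dataclass here is a hypothetical stand-in for `schemas.Plugin`):

```python
from dataclasses import dataclass

@dataclass
class Plugin:
    id: str
    installed: bool = False
    has_update: bool = False

def market_plugins(local_plugins, online_plugins):
    installed_ids = {p.id for p in local_plugins if p.installed}
    # Online plugins that are not installed, or installed but with an update
    market = [p for p in online_plugins
              if p.id not in installed_ids or p.has_update]
    market_ids = {p.id for p in market}
    # Local plugins that are neither installed nor already listed online
    market += [p for p in local_plugins
               if not p.installed and p.id not in market_ids]
    return market
```

Online entries take precedence, so a local-only copy of a plugin never shadows its market listing.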

View File

@@ -21,17 +21,19 @@ async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
     return [torrent.to_dict() for torrent in torrents]

-@router.get("/media/{mediaid}", summary="Exact resource search", response_model=List[schemas.Context])
+@router.get("/media/{mediaid}", summary="Exact resource search", response_model=schemas.Response)
 def search_by_id(mediaid: str,
                  mtype: str = None,
                  area: str = "title",
+                 season: str = None,
                  _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
-    Exact search of site resources by TMDBID/Douban ID: tmdb:/douban:/
+    Exact search of site resources by TMDBID/Douban ID: tmdb:/douban:/bangumi:
     """
+    torrents = []
     if mtype:
         mtype = MediaType(mtype)
+    if season:
+        season = int(season)
     if mediaid.startswith("tmdb:"):
         tmdbid = int(mediaid.replace("tmdb:", ""))
         if settings.RECOGNIZE_SOURCE == "douban":
@@ -39,9 +41,11 @@ def search_by_id(mediaid: str,
             doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
             if doubaninfo:
                 torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
-                                                      mtype=mtype, area=area)
+                                                      mtype=mtype, area=area, season=season)
+            else:
+                return schemas.Response(success=False, message="Failed to recognize Douban media info")
         else:
-            torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=mtype, area=area)
+            torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=mtype, area=area, season=season)
     elif mediaid.startswith("douban:"):
         doubanid = mediaid.replace("douban:", "")
         if settings.RECOGNIZE_SOURCE == "themoviedb":
@@ -49,12 +53,36 @@ def search_by_id(mediaid: str,
             tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
             if tmdbinfo:
                 torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
-                                                      mtype=mtype, area=area)
+                                                      mtype=mtype, area=area, season=season)
+            else:
+                return schemas.Response(success=False, message="Failed to recognize TMDB media info")
         else:
-            torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area)
+            torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area, season=season)
+    elif mediaid.startswith("bangumi:"):
+        bangumiid = int(mediaid.replace("bangumi:", ""))
+        if settings.RECOGNIZE_SOURCE == "themoviedb":
+            # Resolve the TMDBID from the Bangumi ID
+            tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
+            if tmdbinfo:
+                torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
+                                                      mtype=mtype, area=area, season=season)
+            else:
+                return schemas.Response(success=False, message="Failed to recognize TMDB media info")
+        else:
+            # Resolve the Douban ID from the Bangumi ID
+            doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
+            if doubaninfo:
+                torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
+                                                      mtype=mtype, area=area, season=season)
+            else:
+                return schemas.Response(success=False, message="Failed to recognize Douban media info")
     else:
-        return []
-    return [torrent.to_dict() for torrent in torrents]
+        return schemas.Response(success=False, message="Unknown media ID")
+    if not torrents:
+        return schemas.Response(success=False, message="No resources found")
+    else:
+        return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])

 @router.get("/title", summary="Fuzzy resource search", response_model=List[schemas.TorrentInfo])
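
The `search_by_id` rewrite above dispatches on the `tmdb:`/`douban:`/`bangumi:` prefix of `mediaid`. A standalone sketch of that dispatch (the helper name `parse_mediaid` is hypothetical; TMDB and Bangumi IDs are cast to `int` as in the hunk, while Douban IDs stay strings):

```python
def parse_mediaid(mediaid: str):
    # Returns (source, id) for "tmdb:123" / "douban:456" / "bangumi:789",
    # or (None, None) for an unknown prefix
    for prefix in ("tmdb", "douban", "bangumi"):
        tag = f"{prefix}:"
        if mediaid.startswith(tag):
            value = mediaid[len(tag):]
            return prefix, value if prefix == "douban" else int(value)
    return None, None
```

The unknown-prefix case maps to the endpoint's new `success=False, message="Unknown media ID"` response.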

View File

@@ -10,9 +10,12 @@ from app.chain.torrents import TorrentsChain
 from app.core.event import EventManager
 from app.core.security import verify_token
 from app.db import get_db
+from app.db.models import User
 from app.db.models.site import Site
 from app.db.models.siteicon import SiteIcon
+from app.db.models.sitestatistic import SiteStatistic
 from app.db.systemconfig_oper import SystemConfigOper
+from app.db.userauth import get_current_active_superuser
 from app.helper.sites import SitesHelper
 from app.scheduler import Scheduler
 from app.schemas.types import SystemConfigKey, EventType
@@ -42,20 +45,26 @@ def add_site(
     """
     if not site_in.url:
         return schemas.Response(success=False, message="Site URL cannot be empty")
+    if SitesHelper().auth_level < 2:
+        return schemas.Response(success=False, message="User is not authenticated and cannot use site features!")
     domain = StringUtils.get_url_domain(site_in.url)
     site_info = SitesHelper().get_indexer(domain)
     if not site_info:
-        return schemas.Response(success=False, message="This site is not supported, or the user is not authenticated")
+        return schemas.Response(success=False, message="This site is not supported; please check that the site domain is correct")
     if Site.get_by_domain(db, domain):
         return schemas.Response(success=False, message=f"Site {domain} already exists")
     # Save site info
     site_in.domain = domain
+    # Normalize the URL format
+    _scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
+    site_in.url = f"{_scheme}://{_netloc}/"
     site_in.name = site_info.get("name")
     site_in.id = None
+    site_in.public = 1 if site_info.get("public") else 0
     site = Site(**site_in.dict())
     site.create(db)
-    # Notify to cache the site icon
-    EventManager().send_event(EventType.CacheSiteIcon, {
+    # Notify that the site was updated
+    EventManager().send_event(EventType.SiteUpdated, {
         "domain": domain
     })
     return schemas.Response(success=True)
@@ -74,9 +83,12 @@ def update_site(
     site = Site.get(db, site_in.id)
     if not site:
         return schemas.Response(success=False, message="Site does not exist")
+    # Normalize the URL format
+    _scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
+    site_in.url = f"{_scheme}://{_netloc}/"
     site.update(db, site_in.dict())
-    # Notify to cache the site icon
-    EventManager().send_event(EventType.CacheSiteIcon, {
+    # Notify that the site was updated
+    EventManager().send_event(EventType.SiteUpdated, {
         "domain": site_in.domain
     })
     return schemas.Response(success=True)
@@ -86,7 +98,7 @@ def update_site(
 def delete_site(
     site_id: int,
     db: Session = Depends(get_db),
-    _: schemas.TokenPayload = Depends(verify_token)
+    _: User = Depends(get_current_active_superuser)
 ) -> Any:
     """
     Delete a site
@@ -112,7 +124,7 @@ def cookie_cloud_sync(background_tasks: BackgroundTasks,
 @router.get("/reset", summary="Reset sites", response_model=schemas.Response)
 def reset(db: Session = Depends(get_db),
-          _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+          _: User = Depends(get_current_active_superuser)) -> Any:
     """
     Clear all site data and re-sync site info from CookieCloud
     """
@@ -124,7 +136,7 @@ def reset(db: Session = Depends(get_db),
     # Plugin site deletion
     EventManager().send_event(EventType.SiteDeleted,
                               {
-                                  "site_id": None
+                                  "site_id": "*"
                               })
     return schemas.Response(success=True, message="Sites have been reset!")
@@ -231,6 +243,22 @@ def read_site_by_domain(
     return site

+
+@router.get("/statistic/{site_url}", summary="Site statistics", response_model=schemas.SiteStatistic)
+def read_site_by_domain(
+        site_url: str,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)
+) -> Any:
+    """
+    Get site statistics by domain
+    """
+    domain = StringUtils.get_url_domain(site_url)
+    sitestatistic = SiteStatistic.get_by_domain(db, domain)
+    if sitestatistic:
+        return sitestatistic
+    return schemas.SiteStatistic(domain=domain)
+
+
 @router.get("/rss", summary="All RSS subscription sites", response_model=List[schemas.Site])
 def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
     """

View File
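
Both `add_site` and `update_site` in the hunks above gain a "normalize the URL format" step built on `StringUtils.get_url_netloc`. An equivalent stdlib sketch of that normalization (assuming the helper returns the scheme and network location of the URL):

```python
from urllib.parse import urlparse

def normalize_site_url(url: str) -> str:
    # Keep only scheme://netloc/ and drop any path, query, and fragment
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}/"
```

Storing every site URL in this canonical form means later lookups by domain do not depend on how the user happened to type the address.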

@@ -1,18 +1,22 @@
 import json
 from typing import List, Any

+import cn2an
 from fastapi import APIRouter, Request, BackgroundTasks, Depends, HTTPException, Header
 from sqlalchemy.orm import Session

 from app import schemas
 from app.chain.subscribe import SubscribeChain
 from app.core.config import settings
+from app.core.context import MediaInfo
 from app.core.metainfo import MetaInfo
 from app.core.security import verify_token, verify_uri_token
 from app.db import get_db
 from app.db.models.subscribe import Subscribe
+from app.db.models.subscribehistory import SubscribeHistory
 from app.db.models.user import User
 from app.db.userauth import get_current_active_user
+from app.helper.subscribe import SubscribeHelper
 from app.scheduler import Scheduler
 from app.schemas.types import MediaType
@@ -65,7 +69,7 @@ def create_subscribe(
     else:
         mtype = None
     # Douban title handling
-    if subscribe_in.doubanid:
+    if subscribe_in.doubanid or subscribe_in.bangumiid:
         meta = MetaInfo(subscribe_in.name)
         subscribe_in.name = meta.name
         subscribe_in.season = meta.begin_season
@@ -80,6 +84,7 @@ def create_subscribe(
                                             tmdbid=subscribe_in.tmdbid,
                                             season=subscribe_in.season,
                                             doubanid=subscribe_in.doubanid,
+                                            bangumiid=subscribe_in.bangumiid,
                                             username=current_user.name,
                                             best_version=subscribe_in.best_version,
                                             save_path=subscribe_in.save_path,
@@ -131,9 +136,10 @@ def subscribe_mediaid(
         db: Session = Depends(get_db),
         _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
-    Query a subscription by TMDBID or Douban ID: tmdb:/douban:
+    Query a subscription by TMDBID/Douban ID/Bangumi ID: tmdb:/douban:
     """
     result = None
+    title_check = False
     if mediaid.startswith("tmdb:"):
         tmdbid = mediaid[5:]
         if not tmdbid or not str(tmdbid).isdigit():
@@ -144,14 +150,21 @@ def subscribe_mediaid(
         if not doubanid:
             return Subscribe()
         result = Subscribe.get_by_doubanid(db, doubanid)
+        # If the Douban ID lookup finds nothing, fall back to a title search;
+        # note this can also return results that merely share the same name
         if not result and title:
-            meta = MetaInfo(title)
-            if season:
-                meta.begin_season = season
-            result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
+            title_check = True
+    elif mediaid.startswith("bangumi:"):
+        bangumiid = mediaid[8:]
+        if not bangumiid or not str(bangumiid).isdigit():
+            return Subscribe()
+        result = Subscribe.get_by_bangumiid(db, int(bangumiid))
+        if not result and title:
+            title_check = True
+    # Check the subscription by name
+    if title_check and title:
+        meta = MetaInfo(title)
+        if season:
+            meta.begin_season = season
+        result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
     if result and result.sites:
         result.sites = json.loads(result.sites)
@@ -188,9 +201,11 @@ def search_subscribes(
     background_tasks.add_task(
         Scheduler().start,
         job_id="subscribe_search",
-        sid=None,
-        state='R',
-        manual=True
+        **{
+            "sid": None,
+            "state": 'R',
+            "manual": True
+        }
     )
     return schemas.Response(success=True)
@@ -206,29 +221,15 @@ def search_subscribe(
     background_tasks.add_task(
         Scheduler().start,
         job_id="subscribe_search",
-        sid=subscribe_id,
-        state=None,
-        manual=True
+        **{
+            "sid": subscribe_id,
+            "state": None,
+            "manual": True
+        }
     )
     return schemas.Response(success=True)

-
-@router.get("/{subscribe_id}", summary="Subscription details", response_model=schemas.Subscribe)
-def read_subscribe(
-        subscribe_id: int,
-        db: Session = Depends(get_db),
-        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
-    """
-    Query subscription details by subscription ID
-    """
-    if not subscribe_id:
-        return Subscribe()
-    subscribe = Subscribe.get(db, subscribe_id)
-    if subscribe and subscribe.sites:
-        subscribe.sites = json.loads(subscribe.sites)
-    return subscribe
-
-
 @router.delete("/media/{mediaid}", summary="Delete subscription", response_model=schemas.Response)
 def delete_subscribe_by_mediaid(
         mediaid: str,
@@ -253,19 +254,6 @@ def delete_subscribe_by_mediaid(
     return schemas.Response(success=True)

-
-@router.delete("/{subscribe_id}", summary="Delete subscription", response_model=schemas.Response)
-def delete_subscribe(
-        subscribe_id: int,
-        db: Session = Depends(get_db),
-        _: schemas.TokenPayload = Depends(verify_token)
-) -> Any:
-    """
-    Delete a subscription
-    """
-    Subscribe.delete(db, subscribe_id)
-    return schemas.Response(success=True)
-
-
 @router.post("/seerr", summary="OverSeerr/JellySeerr notification subscription", response_model=schemas.Response)
 async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
                           authorization: str = Header(None)) -> Any:
@@ -317,3 +305,114 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
                                               username=user_name)
     return schemas.Response(success=True)
+
+
+@router.get("/history/{mtype}", summary="Query subscription history", response_model=List[schemas.Subscribe])
+def read_subscribe(
+        mtype: str,
+        page: int = 1,
+        count: int = 30,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    Query movie/TV subscription history
+    """
+    historys = SubscribeHistory.list_by_type(db, mtype=mtype, page=page, count=count)
+    for history in historys:
+        if history and history.sites:
+            history.sites = json.loads(history.sites)
+    return historys
+
+
+@router.delete("/history/{history_id}", summary="Delete subscription history", response_model=schemas.Response)
+def delete_subscribe(
+        history_id: int,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)
+) -> Any:
+    """
+    Delete a subscription history record
+    """
+    SubscribeHistory.delete(db, history_id)
+    return schemas.Response(success=True)
+
+
+@router.get("/popular", summary="Popular subscriptions (based on shared user data)", response_model=List[schemas.MediaInfo])
+def popular_subscribes(
+        stype: str,
+        page: int = 1,
+        count: int = 30,
+        min_sub: int = None,
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    Query popular subscriptions
+    """
+    subscribes = SubscribeHelper().get_statistic(stype=stype, page=page, count=count)
+    if subscribes:
+        ret_medias = []
+        for sub in subscribes:
+            # Number of subscribers
+            count = sub.get("count")
+            if min_sub and count < min_sub:
+                continue
+            media = MediaInfo()
+            media.type = MediaType(sub.get("type"))
+            media.tmdb_id = sub.get("tmdbid")
+            # Build the title
+            title = sub.get("name")
+            season = sub.get("season")
+            if season and int(season) > 1 and media.tmdb_id:
+                # Convert the season number to a Chinese numeral
+                season_str = cn2an.an2cn(season, "low")
+                title = f"{title}{season_str}"
+            media.title = title
+            media.year = sub.get("year")
+            media.douban_id = sub.get("doubanid")
+            media.bangumi_id = sub.get("bangumiid")
+            media.tvdb_id = sub.get("tvdbid")
+            media.imdb_id = sub.get("imdbid")
+            media.season = sub.get("season")
+            media.overview = sub.get("description")
+            media.vote_average = sub.get("vote")
+            media.poster_path = sub.get("poster")
+            media.backdrop_path = sub.get("backdrop")
+            media.popularity = count
+            ret_medias.append(media)
+        return [media.to_dict() for media in ret_medias]
+    return []
+
+
+@router.get("/{subscribe_id}", summary="Subscription details", response_model=schemas.Subscribe)
+def read_subscribe(
+        subscribe_id: int,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    Query subscription details by subscription ID
+    """
+    if not subscribe_id:
+        return Subscribe()
+    subscribe = Subscribe.get(db, subscribe_id)
+    if subscribe and subscribe.sites:
+        subscribe.sites = json.loads(subscribe.sites)
+    return subscribe
+
+
+@router.delete("/{subscribe_id}", summary="Delete subscription", response_model=schemas.Response)
+def delete_subscribe(
+        subscribe_id: int,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)
+) -> Any:
+    """
+    Delete a subscription
+    """
+    subscribe = Subscribe.get(db, subscribe_id)
+    if subscribe:
+        subscribe.delete(db, subscribe_id)
+        # Record subscription statistics
+        SubscribeHelper().sub_done_async({
+            "tmdbid": subscribe.tmdbid,
+            "doubanid": subscribe.doubanid
+        })
+    return schemas.Response(success=True)
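
`popular_subscribes` above appends the season to the title as a Chinese numeral via `cn2an.an2cn(season, "low")`. A dependency-free sketch of that step (the lookup table is an assumption covering seasons up to 10 only; the real library handles arbitrary numbers):

```python
# Hypothetical stand-in for cn2an.an2cn(..., "low"), seasons 0-10 only
_CN_DIGITS = ["零", "一", "二", "三", "四", "五", "六", "七", "八", "九", "十"]

def popular_title(name: str, season, has_tmdbid: bool = True) -> str:
    # Seasons beyond the first are appended as a Chinese numeral,
    # mirroring the title handling in the hunk above
    if season and int(season) > 1 and has_tmdbid:
        return f"{name}{_CN_DIGITS[int(season)]}"
    return name
```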

View File

@@ -10,10 +10,13 @@ from fastapi.responses import StreamingResponse
 from app import schemas
 from app.chain.search import SearchChain
+from app.chain.system import SystemChain
 from app.core.config import settings
 from app.core.module import ModuleManager
 from app.core.security import verify_token
+from app.db.models import User
 from app.db.systemconfig_oper import SystemConfigOper
+from app.db.userauth import get_current_active_superuser
 from app.helper.message import MessageHelper
 from app.helper.progress import ProgressHelper
 from app.helper.sites import SitesHelper
@@ -26,7 +29,7 @@ from version import APP_VERSION
 router = APIRouter()

-@router.get("/img/{imgurl:path}/{proxy}", summary="Image proxy")
+@router.get("/img/{proxy}", summary="Image proxy")
 def get_img(imgurl: str, proxy: bool = False) -> Any:
     """
     Proxy an image (through the proxy server)
@@ -43,7 +46,7 @@ def get_img(imgurl: str, proxy: bool = False) -> Any:
 @router.get("/env", summary="Query system environment variables", response_model=schemas.Response)
-def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
+def get_env_setting(_: User = Depends(get_current_active_superuser)):
     """
     Query system environment variables, including the current version
     """
@@ -54,6 +57,7 @@ def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
         "VERSION": APP_VERSION,
         "AUTH_VERSION": SitesHelper().auth_version,
         "INDEXER_VERSION": SitesHelper().indexer_version,
+        "FRONTEND_VERSION": SystemChain().get_frontend_version()
     })
     return schemas.Response(success=True,
                             data=info)
@@ -61,7 +65,7 @@ def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
 @router.post("/env", summary="Update system environment variables", response_model=schemas.Response)
 def set_env_setting(env: dict,
-                    _: schemas.TokenPayload = Depends(verify_token)):
+                    _: User = Depends(get_current_active_superuser)):
     """
     Update system environment variables
     """
@@ -104,7 +108,7 @@ def get_progress(process_type: str, token: str):
 @router.get("/setting/{key}", summary="Query a system setting", response_model=schemas.Response)
 def get_setting(key: str,
-                _: schemas.TokenPayload = Depends(verify_token)):
+                _: User = Depends(get_current_active_superuser)):
     """
     Query a system setting
     """
@@ -119,7 +123,7 @@ def get_setting(key: str,
 @router.post("/setting/{key}", summary="Update a system setting", response_model=schemas.Response)
 def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
-                _: schemas.TokenPayload = Depends(verify_token)):
+                _: User = Depends(get_current_active_superuser)):
     """
     Update a system setting
     """
@@ -138,7 +142,7 @@ def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
 @router.get("/message", summary="Real-time messages")
-def get_message(token: str):
+def get_message(token: str, role: str = "sys"):
     """
     Stream system messages in real time (SSE)
     """
@@ -152,7 +156,7 @@ def get_message(token: str):
     def event_generator():
         while True:
-            detail = message.get()
+            detail = message.get(role)
             yield 'data: %s\n\n' % (detail or '')
             time.sleep(3)
@@ -191,6 +195,8 @@ def get_logging(token: str, length: int = 50, logfile: str = "moviepilot.log"):
             return Response(content="Log file does not exist!", media_type="text/plain")
         with open(log_path, 'r', encoding='utf-8') as file:
             text = file.read()
+        # Output in reverse order
+        text = '\n'.join(text.split('\n')[::-1])
         return Response(content=text, media_type="text/plain")
     else:
         # Return an SSE stream response
@@ -290,7 +296,7 @@ def moduletest(moduleid: str, _: schemas.TokenPayload = Depends(verify_token)):
 @router.get("/restart", summary="Restart the system", response_model=schemas.Response)
-def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
+def restart_system(_: User = Depends(get_current_active_superuser)):
     """
     Restart the system
     """
@@ -302,7 +308,7 @@ def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
 @router.get("/reload", summary="Reload modules", response_model=schemas.Response)
-def reload_module(_: schemas.TokenPayload = Depends(verify_token)):
+def reload_module(_: User = Depends(get_current_active_superuser)):
     """
     Reload modules
     """
@@ -313,7 +319,7 @@ def reload_module(_: schemas.TokenPayload = Depends(verify_token)):
 @router.get("/runscheduler", summary="Run a service", response_model=schemas.Response)
 def execute_command(jobid: str,
-                    _: schemas.TokenPayload = Depends(verify_token)):
+                    _: User = Depends(get_current_active_superuser)):
     """
     Execute the command
     """
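
The `get_logging` change above reverses the log file so the newest lines come first when the full file is returned. That one-liner, isolated:

```python
def newest_first(text: str) -> str:
    # Split on newlines, reverse the line order, and rejoin
    return '\n'.join(text.split('\n')[::-1])
```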

View File

@@ -4,7 +4,6 @@ from fastapi import APIRouter, Depends
 from app import schemas
 from app.chain.tmdb import TmdbChain
-from app.core.context import MediaInfo
 from app.core.security import verify_token
 from app.schemas.types import MediaType
@@ -17,10 +16,9 @@ def tmdb_seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -
     根据TMDBID查询themoviedb所有季信息
     """
     seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
-    if not seasons_info:
-        return []
-    else:
-        return seasons_info
+    if seasons_info:
+        return seasons_info
+    return []
 
 @router.get("/similar/{tmdbid}/{type_name}", summary="类似电影/电视剧", response_model=List[schemas.MediaInfo])
@@ -32,15 +30,14 @@ def tmdb_similar(tmdbid: int,
     """
     mediatype = MediaType(type_name)
     if mediatype == MediaType.MOVIE:
-        tmdbinfos = TmdbChain().movie_similar(tmdbid=tmdbid)
+        medias = TmdbChain().movie_similar(tmdbid=tmdbid)
     elif mediatype == MediaType.TV:
-        tmdbinfos = TmdbChain().tv_similar(tmdbid=tmdbid)
+        medias = TmdbChain().tv_similar(tmdbid=tmdbid)
     else:
         return []
-    if not tmdbinfos:
-        return []
-    else:
-        return [MediaInfo(tmdb_info=tmdbinfo).to_dict() for tmdbinfo in tmdbinfos]
+    if medias:
+        return [media.to_dict() for media in medias]
+    return []
 
 @router.get("/recommend/{tmdbid}/{type_name}", summary="推荐电影/电视剧", response_model=List[schemas.MediaInfo])
@@ -52,18 +49,17 @@ def tmdb_recommend(tmdbid: int,
     """
     mediatype = MediaType(type_name)
     if mediatype == MediaType.MOVIE:
-        tmdbinfos = TmdbChain().movie_recommend(tmdbid=tmdbid)
+        medias = TmdbChain().movie_recommend(tmdbid=tmdbid)
     elif mediatype == MediaType.TV:
-        tmdbinfos = TmdbChain().tv_recommend(tmdbid=tmdbid)
+        medias = TmdbChain().tv_recommend(tmdbid=tmdbid)
     else:
         return []
-    if not tmdbinfos:
-        return []
-    else:
-        return [MediaInfo(tmdb_info=tmdbinfo).to_dict() for tmdbinfo in tmdbinfos]
+    if medias:
+        return [media.to_dict() for media in medias]
+    return []
 
-@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.TmdbPerson])
+@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.MediaPerson])
 def tmdb_credits(tmdbid: int,
                  type_name: str,
                  page: int = 1,
@@ -73,28 +69,21 @@ def tmdb_credits(tmdbid: int,
     """
     mediatype = MediaType(type_name)
     if mediatype == MediaType.MOVIE:
-        tmdbinfos = TmdbChain().movie_credits(tmdbid=tmdbid, page=page)
+        persons = TmdbChain().movie_credits(tmdbid=tmdbid, page=page)
     elif mediatype == MediaType.TV:
-        tmdbinfos = TmdbChain().tv_credits(tmdbid=tmdbid, page=page)
+        persons = TmdbChain().tv_credits(tmdbid=tmdbid, page=page)
     else:
         return []
-    if not tmdbinfos:
-        return []
-    else:
-        return [schemas.TmdbPerson(**tmdbinfo) for tmdbinfo in tmdbinfos]
+    return persons or []
 
-@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.TmdbPerson)
+@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
 def tmdb_person(person_id: int,
                 _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
     根据人物ID查询人物详情
     """
-    tmdbinfo = TmdbChain().person_detail(person_id=person_id)
-    if not tmdbinfo:
-        return schemas.TmdbPerson()
-    else:
-        return schemas.TmdbPerson(**tmdbinfo)
+    return TmdbChain().person_detail(person_id=person_id)
 
 @router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
@@ -104,11 +93,10 @@ def tmdb_person_credits(person_id: int,
     """
     根据人物ID查询人物参演作品
     """
-    tmdbinfo = TmdbChain().person_credits(person_id=person_id, page=page)
-    if not tmdbinfo:
-        return []
-    else:
-        return [MediaInfo(tmdb_info=tmdbinfo).to_dict() for tmdbinfo in tmdbinfo]
+    medias = TmdbChain().person_credits(person_id=person_id, page=page)
+    if medias:
+        return [media.to_dict() for media in medias]
+    return []
 
 @router.get("/movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
@@ -127,7 +115,7 @@ def tmdb_movies(sort_by: str = "popularity.desc",
                              page=page)
     if not movies:
         return []
-    return [MediaInfo(tmdb_info=movie).to_dict() for movie in movies]
+    return [movie.to_dict() for movie in movies]
 
 @router.get("/tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
@@ -146,7 +134,7 @@ def tmdb_tvs(sort_by: str = "popularity.desc",
                          page=page)
     if not tvs:
         return []
-    return [MediaInfo(tmdb_info=tv).to_dict() for tv in tvs]
+    return [tv.to_dict() for tv in tvs]
 
 @router.get("/trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
@@ -158,7 +146,7 @@ def tmdb_trending(page: int = 1,
     infos = TmdbChain().tmdb_trending(page=page)
     if not infos:
         return []
-    return [MediaInfo(tmdb_info=info).to_dict() for info in infos]
+    return [info.to_dict() for info in infos]
 
 @router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
@@ -167,8 +155,4 @@ def tmdb_season_episodes(tmdbid: int, season: int,
     """
     根据TMDBID查询某季的所有信信息
     """
-    episodes_info = TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)
-    if not episodes_info:
-        return []
-    else:
-        return episodes_info
+    return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)
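The refactor above repeatedly collapses `if not x: return [] / else: return x` into an early return plus `x or []`. A minimal, repo-independent sketch showing why the two forms agree for `None` and empty results:

```python
from typing import List, Optional


def normalize(result: Optional[List[str]]) -> List[str]:
    # `result or []` maps both None and [] to [], matching the
    # longer if/else form the diff removes.
    return result or []


assert normalize(None) == []
assert normalize([]) == []
assert normalize(["a"]) == ["a"]
```

The shorthand only works because an empty list and `None` should be treated the same here; it would be wrong if `[]` and `None` needed to be distinguished.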


@@ -6,7 +6,7 @@ from sqlalchemy.orm import Session
 from app import schemas
 from app.chain.transfer import TransferChain
-from app.core.security import verify_token
+from app.core.security import verify_token, verify_uri_token
 from app.db import get_db
 from app.db.models.transferhistory import TransferHistory
 from app.schemas import MediaType
@@ -19,6 +19,7 @@ def manual_transfer(path: str = None,
                     logid: int = None,
                     target: str = None,
                     tmdbid: int = None,
+                    doubanid: str = None,
                     type_name: str = None,
                     season: int = None,
                     transfer_type: str = None,
@@ -36,6 +37,7 @@ def manual_transfer(path: str = None,
     :param target: 目标路径
     :param type_name: 媒体类型、电影/电视剧
     :param tmdbid: tmdbid
+    :param doubanid: 豆瓣ID
     :param season: 剧集季号
     :param transfer_type: 转移类型move/copy 等
     :param episode_format: 剧集识别格式
@@ -91,6 +93,7 @@ def manual_transfer(path: str = None,
         in_path=in_path,
         target=target,
         tmdbid=tmdbid,
+        doubanid=doubanid,
         mtype=mtype,
         season=season,
         transfer_type=transfer_type,
@@ -105,3 +108,12 @@ def manual_transfer(path: str = None,
         return schemas.Response(success=False, message=errormsg)
     # 成功
     return schemas.Response(success=True)
+
+
+@router.get("/now", summary="立即执行下载器文件整理", response_model=schemas.Response)
+def now(_: str = Depends(verify_uri_token)) -> Any:
+    """
+    立即执行下载器文件整理 API_TOKEN认证?token=xxx
+    """
+    TransferChain().process()
+    return schemas.Response(success=True)
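The new `/now` endpoint above authenticates via `verify_uri_token`, i.e. an API token passed as a `?token=xxx` query parameter instead of a header. The repo's actual dependency is not shown in this diff; the helper below is a hypothetical, framework-free sketch of that style of check (`check_uri_token` and the sample URL are illustrative names only):

```python
import hmac
from urllib.parse import parse_qs, urlparse


def check_uri_token(url: str, api_token: str) -> bool:
    # Extract ?token=... from the request URL and compare it to the
    # configured API token with a constant-time comparison.
    query = parse_qs(urlparse(url).query)
    supplied = (query.get("token") or [""])[0]
    return hmac.compare_digest(supplied, api_token)


assert check_uri_token("/api/v1/transfer/now?token=abc123", "abc123")
assert not check_uri_token("/api/v1/transfer/now", "abc123")
```

Query-string tokens are convenient for cron/webhook callers that cannot set headers, but they do end up in access logs, which is a trade-off to keep in mind.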


@@ -1,6 +1,6 @@
 import base64
 import re
-from typing import Any, List
+from typing import Any, List, Union
 
 from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
 from sqlalchemy.orm import Session
@@ -10,14 +10,16 @@ from app.core.security import get_password_hash
 from app.db import get_db
 from app.db.models.user import User
 from app.db.userauth import get_current_active_superuser, get_current_active_user
+from app.db.userconfig_oper import UserConfigOper
+from app.utils.otp import OtpUtils
 
 router = APIRouter()
 
 @router.get("/", summary="所有用户", response_model=List[schemas.User])
 def read_users(
         db: Session = Depends(get_db),
         current_user: User = Depends(get_current_active_superuser),
 ) -> Any:
     """
     查询用户列表
@@ -28,10 +30,10 @@ def read_users(
 @router.post("/", summary="新增用户", response_model=schemas.Response)
 def create_user(
         *,
         db: Session = Depends(get_db),
         user_in: schemas.UserCreate,
         current_user: User = Depends(get_current_active_superuser),
 ) -> Any:
     """
     新增用户
@@ -50,10 +52,10 @@ def create_user(
 @router.put("/", summary="更新用户", response_model=schemas.Response)
 def update_user(
         *,
         db: Session = Depends(get_db),
         user_in: schemas.UserCreate,
         _: User = Depends(get_current_active_superuser),
 ) -> Any:
     """
     更新用户
@@ -63,7 +65,8 @@ def update_user(
     # 正则表达式匹配密码包含字母、数字、特殊字符中的至少两项
     pattern = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'
     if not re.match(pattern, user_info.get("password")):
-        return schemas.Response(success=False, message="密码需要同时包含字母、数字、特殊字符中的至少两项且长度大于6位")
+        return schemas.Response(success=False,
+                                message="密码需要同时包含字母、数字、特殊字符中的至少两项且长度大于6位")
     user_info["hashed_password"] = get_password_hash(user_info["password"])
     user_info.pop("password")
     user = User.get_by_name(db, name=user_info["name"])
@@ -75,7 +78,7 @@ def update_user(
 @router.get("/current", summary="当前登录用户信息", response_model=schemas.User)
 def read_current_user(
         current_user: User = Depends(get_current_active_user)
 ) -> Any:
     """
     当前登录用户信息
@@ -84,8 +87,8 @@ def read_current_user(
 @router.post("/avatar/{user_id}", summary="上传用户头像", response_model=schemas.Response)
-async def upload_avatar(user_id: int, db: Session = Depends(get_db),
-                        file: UploadFile = File(...)):
+async def upload_avatar(user_id: int, db: Session = Depends(get_db), file: UploadFile = File(...),
+                        _: User = Depends(get_current_active_user)):
     """
     上传用户头像
     """
@@ -101,12 +104,73 @@ async def upload_avatar(user_id: int, db: Session = Depends(get_db),
     return schemas.Response(success=True, message=file.filename)
 
+@router.post('/otp/generate', summary='生成otp验证uri', response_model=schemas.Response)
+def otp_generate(
+        current_user: User = Depends(get_current_active_user)
+) -> Any:
+    secret, uri = OtpUtils.generate_secret_key(current_user.name)
+    return schemas.Response(success=secret != "", data={'secret': secret, 'uri': uri})
+
+@router.post('/otp/judge', summary='判断otp验证是否通过', response_model=schemas.Response)
+def otp_judge(
+        data: dict,
+        db: Session = Depends(get_db),
+        current_user: User = Depends(get_current_active_user)
+) -> Any:
+    uri = data.get("uri")
+    otp_password = data.get("otpPassword")
+    if not OtpUtils.is_legal(uri, otp_password):
+        return schemas.Response(success=False, message="验证码错误")
+    current_user.update_otp_by_name(db, current_user.name, True, OtpUtils.get_secret(uri))
+    return schemas.Response(success=True)
+
+@router.post('/otp/disable', summary='关闭当前用户的otp验证', response_model=schemas.Response)
+def otp_disable(
+        db: Session = Depends(get_db),
+        current_user: User = Depends(get_current_active_user)
+) -> Any:
+    current_user.update_otp_by_name(db, current_user.name, False, "")
+    return schemas.Response(success=True)
+
+@router.get('/otp/{userid}', summary='判断当前用户是否开启otp验证', response_model=schemas.Response)
+def otp_enable(userid: str, db: Session = Depends(get_db)) -> Any:
+    user: User = User.get_by_name(db, userid)
+    if not user:
+        return schemas.Response(success=False, message="用户不存在")
+    return schemas.Response(success=user.is_otp)
+
+@router.get("/config/{key}", summary="查询用户配置", response_model=schemas.Response)
+def get_config(key: str,
+               current_user: User = Depends(get_current_active_user)):
+    """
+    查询用户配置
+    """
+    value = UserConfigOper().get(username=current_user.name, key=key)
+    return schemas.Response(success=True, data={
+        "value": value
+    })
+
+@router.post("/config/{key}", summary="更新用户配置", response_model=schemas.Response)
+def set_config(key: str, value: Union[list, dict, bool, int, str] = None,
+               current_user: User = Depends(get_current_active_user)):
+    """
+    更新用户配置
+    """
+    UserConfigOper().set(username=current_user.name, key=key, value=value)
+    return schemas.Response(success=True)
 
 @router.delete("/{user_name}", summary="删除用户", response_model=schemas.Response)
 def delete_user(
         *,
         db: Session = Depends(get_db),
         user_name: str,
         current_user: User = Depends(get_current_active_superuser),
 ) -> Any:
     """
     删除用户
@@ -120,9 +184,9 @@ def delete_user(
 @router.get("/{user_id}", summary="用户详情", response_model=schemas.User)
 def read_user_by_id(
         user_id: int,
         current_user: User = Depends(get_current_active_user),
         db: Session = Depends(get_db),
 ) -> Any:
     """
     查询用户详情
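The new `/otp/*` endpoints above delegate to `OtpUtils`, whose implementation is not part of this diff. Standard OTP verification of this kind follows RFC 6238 (TOTP); the self-contained sketch below implements the algorithm and checks it against the RFC's SHA-1 test vector, as an illustration of what such a utility typically does:

```python
import hmac
import struct
from hashlib import sha1


def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: run HOTP (RFC 4226) over the number of time steps
    # elapsed since the Unix epoch.
    counter = for_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick the offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector (SHA-1, T=59s), truncated to 6 digits
assert totp(b"12345678901234567890", 59) == "287082"
```

A verifier compares the user-supplied code against `totp(secret, time.time())`, usually allowing one step of clock drift either way.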

app/api/servcookie.py (new file, 137 lines)

@@ -0,0 +1,137 @@
+import gzip
+import json
+from hashlib import md5
+from typing import Annotated, Callable
+from typing import Any, Dict, Optional
+
+from fastapi import APIRouter, Depends, HTTPException, Path, Request, Response
+from fastapi.responses import PlainTextResponse
+from fastapi.routing import APIRoute
+
+from app import schemas
+from app.core.config import settings
+from app.log import logger
+from app.utils.common import decrypt
+
+
+class GzipRequest(Request):
+
+    async def body(self) -> bytes:
+        if not hasattr(self, "_body"):
+            body = await super().body()
+            if "gzip" in self.headers.getlist("Content-Encoding"):
+                body = gzip.decompress(body)
+            self._body = body
+        return self._body
+
+
+class GzipRoute(APIRoute):
+
+    def get_route_handler(self) -> Callable:
+        original_route_handler = super().get_route_handler()
+
+        async def custom_route_handler(request: Request) -> Response:
+            request = GzipRequest(request.scope, request.receive)
+            return await original_route_handler(request)
+
+        return custom_route_handler
+
+
+async def verify_server_enabled():
+    """
+    校验CookieCloud服务路由是否打开
+    """
+    if not settings.COOKIECLOUD_ENABLE_LOCAL:
+        raise HTTPException(status_code=400, detail="本地CookieCloud服务器未启用")
+    return True
+
+
+cookie_router = APIRouter(route_class=GzipRoute,
+                          tags=['servcookie'],
+                          dependencies=[Depends(verify_server_enabled)])
+
+
+@cookie_router.get("/", response_class=PlainTextResponse)
+def get_root():
+    return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
+
+
+@cookie_router.post("/", response_class=PlainTextResponse)
+def post_root():
+    return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
+
+
+@cookie_router.post("/update")
+async def update_cookie(req: schemas.CookieData):
+    """
+    上传Cookie数据
+    """
+    file_path = settings.COOKIE_PATH / f"{req.uuid}.json"
+    content = json.dumps({"encrypted": req.encrypted})
+    with open(file_path, encoding="utf-8", mode="w") as file:
+        file.write(content)
+    with open(file_path, encoding="utf-8", mode="r") as file:
+        read_content = file.read()
+    if read_content == content:
+        return {"action": "done"}
+    else:
+        return {"action": "error"}
+
+
+def load_encrypt_data(uuid: str) -> Dict[str, Any]:
+    """
+    加载本地加密原始数据
+    """
+    file_path = settings.COOKIE_PATH / f"{uuid}.json"
+    # 检查文件是否存在
+    if not file_path.exists():
+        raise HTTPException(status_code=404, detail="Item not found")
+    # 读取文件
+    with open(file_path, encoding="utf-8", mode="r") as file:
+        read_content = file.read()
+    data = json.loads(read_content.encode("utf-8"))
+    return data
+
+
+def get_decrypted_cookie_data(uuid: str, password: str,
+                              encrypted: str) -> Optional[Dict[str, Any]]:
+    """
+    加载本地加密数据并解密为Cookie
+    """
+    key_md5 = md5()
+    key_md5.update((uuid + '-' + password).encode('utf-8'))
+    aes_key = (key_md5.hexdigest()[:16]).encode('utf-8')
+    if encrypted:
+        try:
+            decrypted_data = decrypt(encrypted, aes_key).decode('utf-8')
+            decrypted_data = json.loads(decrypted_data)
+            if 'cookie_data' in decrypted_data:
+                return decrypted_data
+        except Exception as e:
+            logger.error(f"解密Cookie数据失败{str(e)}")
+        return None
+    else:
+        return None
+
+
+@cookie_router.get("/get/{uuid}")
+async def get_cookie(
+        uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")]):
+    """
+    GET 下载加密数据
+    """
+    return load_encrypt_data(uuid)
+
+
+@cookie_router.post("/get/{uuid}")
+async def post_cookie(
+        uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")],
+        request: schemas.CookiePassword):
+    """
+    POST 下载加密数据
+    """
+    data = load_encrypt_data(uuid)
+    return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])
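`get_decrypted_cookie_data` above derives its AES key the CookieCloud way: the MD5 hex digest of `"<uuid>-<password>"`, truncated to its first 16 hex characters and encoded as bytes. Isolated as a sketch (`derive_cookiecloud_key` is an illustrative name, not a function in the repo):

```python
from hashlib import md5


def derive_cookiecloud_key(uuid: str, password: str) -> bytes:
    # Key = first 16 hex chars of md5("<uuid>-<password>"), as bytes,
    # mirroring the derivation in get_decrypted_cookie_data above.
    return md5(f"{uuid}-{password}".encode("utf-8")).hexdigest()[:16].encode("utf-8")


key = derive_cookiecloud_key("abcde", "secret")
assert len(key) == 16
assert key == derive_cookiecloud_key("abcde", "secret")  # deterministic
```

The 16-byte result matches AES-128's key size; because the derivation is a single unsalted MD5, the scheme's security rests entirely on the strength of the password, which is worth knowing when deploying the local CookieCloud server.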


@@ -15,9 +15,11 @@ from app.core.context import MediaInfo, TorrentInfo
 from app.core.event import EventManager
 from app.core.meta import MetaBase
 from app.core.module import ModuleManager
+from app.db.message_oper import MessageOper
+from app.helper.message import MessageHelper
 from app.log import logger
 from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
-    WebhookEventInfo, TmdbEpisode
+    WebhookEventInfo, TmdbEpisode, MediaPerson
 from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType
 from app.utils.object import ObjectUtils
@@ -33,6 +35,8 @@ class ChainBase(metaclass=ABCMeta):
         """
         self.modulemanager = ModuleManager()
         self.eventmanager = EventManager()
+        self.messageoper = MessageOper()
+        self.messagehelper = MessageHelper()
 
     @staticmethod
     def load_cache(filename: str) -> Any:
@@ -115,6 +119,7 @@ class ChainBase(metaclass=ABCMeta):
                         mtype: MediaType = None,
                         tmdbid: int = None,
                         doubanid: str = None,
+                        bangumiid: int = None,
                         cache: bool = True) -> Optional[MediaInfo]:
         """
         识别媒体信息
@@ -122,6 +127,7 @@ class ChainBase(metaclass=ABCMeta):
         :param mtype: 识别的媒体类型与tmdbid配套
         :param tmdbid: tmdbid
         :param doubanid: 豆瓣ID
+        :param bangumiid: BangumiID
         :param cache: 是否使用缓存
         :return: 识别的媒体信息,包括剧集信息
         """
@@ -132,8 +138,12 @@ class ChainBase(metaclass=ABCMeta):
             tmdbid = meta.tmdbid
         if not doubanid and hasattr(meta, "doubanid"):
             doubanid = meta.doubanid
+        # 有tmdbid时不使用其它ID
+        if tmdbid:
+            doubanid = None
+            bangumiid = None
         return self.run_module("recognize_media", meta=meta, mtype=mtype,
-                               tmdbid=tmdbid, doubanid=doubanid, cache=cache)
+                               tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, cache=cache)
 
     def match_doubaninfo(self, name: str, imdbid: str = None,
                          mtype: MediaType = None, year: str = None, season: int = None) -> Optional[dict]:
@@ -210,6 +220,14 @@ class ChainBase(metaclass=ABCMeta):
         """
         return self.run_module("tmdb_info", tmdbid=tmdbid, mtype=mtype)
 
+    def bangumi_info(self, bangumiid: int) -> Optional[dict]:
+        """
+        获取Bangumi信息
+        :param bangumiid: int
+        :return: Bangumi信息
+        """
+        return self.run_module("bangumi_info", bangumiid=bangumiid)
+
     def message_parser(self, body: Any, form: Any,
                        args: Any) -> Optional[CommingMessage]:
         """
@@ -242,6 +260,13 @@ class ChainBase(metaclass=ABCMeta):
         """
         return self.run_module("search_medias", meta=meta)
 
+    def search_persons(self, name: str) -> Optional[List[MediaPerson]]:
+        """
+        搜索人物信息
+        :param name: 人物名称
+        """
+        return self.run_module("search_persons", name=name)
+
     def search_torrents(self, site: CommentedMap,
                         keywords: List[str],
                         mtype: MediaType = None,
@@ -403,6 +428,10 @@ class ChainBase(metaclass=ABCMeta):
         :param message: 消息体
         :return: 成功或失败
         """
+        logger.info(f"发送消息channel={message.channel}"
+                    f"title={message.title}, "
+                    f"text={message.text}"
+                    f"userid={message.userid}")
         # 发送事件
         self.eventmanager.send_event(etype=EventType.NoticeMessage,
                                      data={
@@ -413,10 +442,13 @@ class ChainBase(metaclass=ABCMeta):
                                          "image": message.image,
                                          "userid": message.userid,
                                      })
-        logger.info(f"发送消息channel={message.channel}"
-                    f"title={message.title}, "
-                    f"text={message.text}"
-                    f"userid={message.userid}")
+        # 保存消息
+        self.messagehelper.put(message, role="user")
+        self.messageoper.add(channel=message.channel, mtype=message.mtype,
+                             title=message.title, text=message.text,
+                             image=message.image, link=message.link,
+                             userid=message.userid, action=1)
+        # 发送
         self.run_module("post_message", message=message)
 
     def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:
@@ -426,6 +458,13 @@ class ChainBase(metaclass=ABCMeta):
         :param medias: 媒体列表
         :return: 成功或失败
         """
+        note_list = [media.to_dict() for media in medias]
+        self.messagehelper.put(message, role="user", note=note_list)
+        self.messageoper.add(channel=message.channel, mtype=message.mtype,
+                             title=message.title, text=message.text,
+                             image=message.image, link=message.link,
+                             userid=message.userid, action=1,
+                             note=note_list)
         return self.run_module("post_medias_message", message=message, medias=medias)
 
     def post_torrents_message(self, message: Notification, torrents: List[Context]) -> Optional[bool]:
@@ -435,20 +474,28 @@ class ChainBase(metaclass=ABCMeta):
         :param torrents: 种子列表
         :return: 成功或失败
         """
+        note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
+        self.messagehelper.put(message, role="user", note=note_list)
+        self.messageoper.add(channel=message.channel, mtype=message.mtype,
+                             title=message.title, text=message.text,
+                             image=message.image, link=message.link,
+                             userid=message.userid, action=1,
+                             note=note_list)
         return self.run_module("post_torrents_message", message=message, torrents=torrents)
 
     def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
-                        force_nfo: bool = False, force_img: bool = False) -> None:
+                        metainfo: MetaBase = None, force_nfo: bool = False, force_img: bool = False) -> None:
         """
         刮削元数据
         :param path: 媒体文件路径
        :param mediainfo: 识别的媒体信息
+        :param metainfo: 源文件的识别元数据
         :param transfer_type: 转移模式
         :param force_nfo: 强制刮削nfo
         :param force_img: 强制刮削图片
         :return: 成功或失败
         """
-        self.run_module("scrape_metadata", path=path, mediainfo=mediainfo,
-                        transfer_type=transfer_type, force_nfo=force_nfo, force_img=force_img)
+        self.run_module("scrape_metadata", path=path, mediainfo=mediainfo, metainfo=metainfo,
+                        transfer_type=transfer_type, force_nfo=force_nfo, force_img=force_img)
 
     def register_commands(self, commands: Dict[str, dict]) -> None:
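The `recognize_media` change above adds a precedence rule: when a TMDB id is present, the Douban and Bangumi ids are discarded so the providers cannot conflict. The rule in isolation, as a small sketch (`resolve_ids` is an illustrative helper, not a function in the repo):

```python
from typing import Optional, Tuple

Ids = Tuple[Optional[int], Optional[str], Optional[int]]


def resolve_ids(tmdbid: Optional[int], doubanid: Optional[str],
                bangumiid: Optional[int]) -> Ids:
    # Mirrors the new recognize_media rule: a TMDB id wins outright,
    # and the other provider ids are dropped.
    if tmdbid:
        return tmdbid, None, None
    return tmdbid, doubanid, bangumiid


assert resolve_ids(123, "456", 789) == (123, None, None)
assert resolve_ids(None, "456", None) == (None, "456", None)
```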

app/chain/bangumi.py (new file, 54 lines)

@@ -0,0 +1,54 @@
+from typing import Optional, List
+
+from app import schemas
+from app.chain import ChainBase
+from app.core.context import MediaInfo
+from app.utils.singleton import Singleton
+
+
+class BangumiChain(ChainBase, metaclass=Singleton):
+    """
+    Bangumi处理链单例运行
+    """
+
+    def calendar(self) -> Optional[List[MediaInfo]]:
+        """
+        获取Bangumi每日放送
+        """
+        return self.run_module("bangumi_calendar")
+
+    def bangumi_info(self, bangumiid: int) -> Optional[dict]:
+        """
+        获取Bangumi信息
+        :param bangumiid: BangumiID
+        :return: Bangumi信息
+        """
+        return self.run_module("bangumi_info", bangumiid=bangumiid)
+
+    def bangumi_credits(self, bangumiid: int) -> List[schemas.MediaPerson]:
+        """
+        根据BangumiID查询电影演职员表
+        :param bangumiid: BangumiID
+        """
+        return self.run_module("bangumi_credits", bangumiid=bangumiid)
+
+    def bangumi_recommend(self, bangumiid: int) -> Optional[List[MediaInfo]]:
+        """
+        根据BangumiID查询推荐电影
+        :param bangumiid: BangumiID
+        """
+        return self.run_module("bangumi_recommend", bangumiid=bangumiid)
+
+    def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
+        """
+        根据人物ID查询Bangumi人物详情
+        :param person_id: 人物ID
+        """
+        return self.run_module("bangumi_person_detail", person_id=person_id)
+
+    def person_credits(self, person_id: int) -> Optional[List[MediaInfo]]:
+        """
+        根据人物ID查询人物参演作品
+        :param person_id: 人物ID
+        """
+        return self.run_module("bangumi_person_credits", person_id=person_id)
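`BangumiChain` above is declared with `metaclass=Singleton` so every caller shares one chain instance. A common way to implement such a metaclass looks like this (a sketch; the repo's `app.utils.singleton.Singleton` may differ in detail, e.g. thread safety):

```python
class Singleton(type):
    # Caches one instance per class; later calls return the cached object.
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Chain(metaclass=Singleton):
    pass


assert Chain() is Chain()  # both calls yield the same instance
```

Using a metaclass keeps the singleton behavior out of `__init__`, so subclasses like `BangumiChain` get it for free simply by opting in.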


@@ -1,7 +1,8 @@
 from typing import Optional, List
 
+from app import schemas
 from app.chain import ChainBase
-from app.core.config import settings
+from app.core.context import MediaInfo
 from app.schemas import MediaType
 from app.utils.singleton import Singleton
@@ -11,7 +12,22 @@ class DoubanChain(ChainBase, metaclass=Singleton):
     豆瓣处理链,单例运行
     """
 
+    def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
+        """
+        根据人物ID查询豆瓣人物详情
+        :param person_id: 人物ID
+        """
+        return self.run_module("douban_person_detail", person_id=person_id)
+
+    def person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
+        """
+        根据人物ID查询人物参演作品
+        :param person_id: 人物ID
+        :param page: 页码
+        """
+        return self.run_module("douban_person_credits", person_id=person_id, page=page)
+
-    def movie_top250(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def movie_top250(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取豆瓣电影TOP250
         :param page: 页码
@@ -19,26 +35,26 @@ class DoubanChain(ChainBase, metaclass=Singleton):
         """
         return self.run_module("movie_top250", page=page, count=count)
 
-    def movie_showing(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def movie_showing(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取正在上映的电影
         """
         return self.run_module("movie_showing", page=page, count=count)
 
-    def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取本周中国剧集榜
         """
         return self.run_module("tv_weekly_chinese", page=page, count=count)
 
-    def tv_weekly_global(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def tv_weekly_global(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取本周全球剧集榜
         """
         return self.run_module("tv_weekly_global", page=page, count=count)
 
     def douban_discover(self, mtype: MediaType, sort: str, tags: str,
-                        page: int = 0, count: int = 30) -> Optional[List[dict]]:
+                        page: int = 0, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         发现豆瓣电影、剧集
         :param mtype: 媒体类型
@@ -51,52 +67,46 @@ class DoubanChain(ChainBase, metaclass=Singleton):
         return self.run_module("douban_discover", mtype=mtype, sort=sort, tags=tags,
                                page=page, count=count)
 
-    def tv_animation(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def tv_animation(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取动画剧集
         """
         return self.run_module("tv_animation", page=page, count=count)
 
-    def movie_hot(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def movie_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取热门电影
         """
-        if settings.RECOGNIZE_SOURCE != "douban":
-            return None
         return self.run_module("movie_hot", page=page, count=count)
 
-    def tv_hot(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
+    def tv_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
         """
         获取热门剧集
         """
-        if settings.RECOGNIZE_SOURCE != "douban":
-            return None
         return self.run_module("tv_hot", page=page, count=count)
 
-    def movie_credits(self, doubanid: str, page: int = 1) -> List[dict]:
+    def movie_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
         """
         根据TMDBID查询电影演职人员
         :param doubanid: 豆瓣ID
-        :param page: 页码
         """
-        return self.run_module("douban_movie_credits", doubanid=doubanid, page=page)
+        return self.run_module("douban_movie_credits", doubanid=doubanid)
 
-    def tv_credits(self, doubanid: str, page: int = 1) -> List[dict]:
+    def tv_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
         """
         根据TMDBID查询电视剧演职人员
         :param doubanid: 豆瓣ID
-        :param page: 页码
         """
-        return self.run_module("douban_tv_credits", doubanid=doubanid, page=page)
+        return self.run_module("douban_tv_credits", doubanid=doubanid)
 
-    def movie_recommend(self, doubanid: str) -> List[dict]:
+    def movie_recommend(self, doubanid: str) -> List[MediaInfo]:
         """
         根据豆瓣ID查询推荐电影
         :param doubanid: 豆瓣ID
         """
         return self.run_module("douban_movie_recommend", doubanid=doubanid)
 
-    def tv_recommend(self, doubanid: str) -> List[dict]:
+    def tv_recommend(self, doubanid: str) -> List[MediaInfo]:
         """
         根据豆瓣ID查询推荐电视剧
         :param doubanid: 豆瓣ID

View File

@@ -34,14 +34,19 @@ class DownloadChain(ChainBase):
         self.mediaserver = MediaServerOper()

     def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
-                              channel: MessageChannel = None,
-                              userid: str = None):
+                              channel: MessageChannel = None, userid: str = None, username: str = None):
         """
         Send the "download added" message
+        :param meta: metadata
+        :param mediainfo: media info
+        :param torrent: torrent info
+        :param channel: notification channel
+        :param userid: user ID; when set, the message is sent only to that user
+        :param username: download user name shown in the notification
         """
         msg_text = ""
-        if userid:
-            msg_text = f"用户:{userid}"
+        if username:
+            msg_text = f"用户:{username}"
         if torrent.site_name:
             msg_text = f"{msg_text}\n站点:{torrent.site_name}"
         if meta.resource_term:
@@ -73,6 +78,7 @@ class DownloadChain(ChainBase):
         self.post_message(Notification(
             channel=channel,
             mtype=NotificationType.Download,
+            userid=userid,
             title=f"{mediainfo.title_year} "
                   f"{meta.season_episode} 开始下载",
             text=msg_text,
@@ -103,17 +109,27 @@ class DownloadChain(ChainBase):
             # Decode the parameters
             req_str = base64.b64decode(base64_str.encode('utf-8')).decode('utf-8')
             req_params: Dict[str, dict] = json.loads(req_str)
+            # Whether to use the cookie
+            if not req_params.get('cookie'):
+                cookie = None
+            # Request headers
+            if req_params.get('header'):
+                headers = req_params.get('header')
+            else:
+                headers = None
             if req_params.get('method') == 'get':
                 # GET request
                 res = RequestUtils(
                     ua=ua,
-                    cookies=cookie
+                    cookies=cookie,
+                    headers=headers
                 ).get_res(url, params=req_params.get('params'))
             else:
                 # POST request
                 res = RequestUtils(
                     ua=ua,
-                    cookies=cookie
+                    cookies=cookie,
+                    headers=headers
                 ).post_res(url, params=req_params.get('params'))
             if not res:
                 return None
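The hunk above keeps the existing scheme of passing request parameters as a base64-encoded JSON blob (now extended with optional `cookie`/`header` fields). A standalone sketch of that encode/decode round-trip — the helper names here are illustrative, not part of MoviePilot:

```python
import base64
import json


def encode_params(params: dict) -> str:
    # Serialize to JSON, then base64-encode (UTF-8 in both directions)
    return base64.b64encode(json.dumps(params).encode("utf-8")).decode("utf-8")


def decode_params(base64_str: str) -> dict:
    # Reverse of encode_params: base64-decode, then parse the JSON
    return json.loads(base64.b64decode(base64_str.encode("utf-8")).decode("utf-8"))


payload = {"method": "get", "params": {"id": "42"}, "cookie": False}
token = encode_params(payload)
assert decode_params(token) == payload
```

Because the payload is JSON, only JSON-serializable values survive the round-trip; the decoded dict is then inspected for `method`, `params`, `cookie`, and `header` keys as the hunk shows.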
@@ -134,12 +150,15 @@ class DownloadChain(ChainBase):
             return None, "", []
         if torrent.enclosure.startswith("magnet:"):
             return torrent.enclosure, "", []
+        # Cookie
+        site_cookie = torrent.site_cookie
         if torrent.enclosure.startswith("["):
             # The download URL must be decoded first
             torrent_url = __get_redict_url(url=torrent.enclosure,
                                            ua=torrent.site_ua,
-                                           cookie=torrent.site_cookie)
+                                           cookie=site_cookie)
+            # Do not send the cookie when downloading via a decoded URL, otherwise M-Team errors out
+            site_cookie = None
         else:
             torrent_url = torrent.enclosure
         if not torrent_url:
@@ -148,7 +167,7 @@ class DownloadChain(ChainBase):
         # Download the torrent file
         torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
             url=torrent_url,
-            cookie=torrent.site_cookie,
+            cookie=site_cookie,
             ua=torrent.site_ua,
             proxy=torrent.site_proxy)
@@ -302,18 +321,20 @@ class DownloadChain(ChainBase):
             self.downloadhis.add_files(files_to_add)
             # Send the message (broadcasts carry no channel or userid)
-            self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent)
+            self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, username=username)
             # Post-download handling
             self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
             # Broadcast the event
             self.eventmanager.send_event(EventType.DownloadAdded, {
                 "hash": _hash,
-                "context": context
+                "context": context,
+                "username": username
             })
         else:
             # Download failed
             logger.error(f"{_media.title_year} 添加下载任务失败:"
                          f"{_torrent.title} - {_torrent.enclosure}{error_msg}")
+            # Send only to the matching channel and user
             self.post_message(Notification(
                 channel=channel,
                 mtype=NotificationType.Manual,

View File

@@ -156,9 +156,9 @@ class MediaChain(ChainBase, metaclass=Singleton):
         # Return the context
         return Context(meta_info=file_meta, media_info=mediainfo)

-    def search(self, title: str) -> Tuple[MetaBase, List[MediaInfo]]:
+    def search(self, title: str) -> Tuple[Optional[MetaBase], List[MediaInfo]]:
         """
-        Search for media info
+        Search for media/person info
         :param title: search text
         :return: recognized metadata, list of media info
         """
@@ -195,14 +195,11 @@ class MediaChain(ChainBase, metaclass=Singleton):
         doubaninfo = self.douban_info(doubanid=doubanid, mtype=mtype)
         if doubaninfo:
             # Prefer matching by the original title
-            season_meta = None
             if doubaninfo.get("original_title"):
-                meta = MetaInfo(title=doubaninfo.get("original_title"))
-                season_meta = MetaInfo(title=doubaninfo.get("title"))
-                # Merge the season
-                meta.begin_season = season_meta.begin_season
-            else:
                 meta = MetaInfo(title=doubaninfo.get("title"))
+                meta_org = MetaInfo(title=doubaninfo.get("original_title"))
+            else:
+                meta_org = meta = MetaInfo(title=doubaninfo.get("title"))
             # Year
             if doubaninfo.get("year"):
                 meta.year = doubaninfo.get("year")
@@ -211,24 +208,53 @@ class MediaChain(ChainBase, metaclass=Singleton):
                 meta.type = doubaninfo.get('media_type')
             else:
                 meta.type = MediaType.MOVIE if doubaninfo.get("type") == "movie" else MediaType.TV
-            # Recognize TMDB media info using the original title
-            tmdbinfo = self.match_tmdbinfo(
-                name=meta.name,
-                year=meta.year,
-                mtype=mtype or meta.type,
-                season=meta.begin_season
-            )
-            if not tmdbinfo:
-                if season_meta and season_meta.name != meta.name:
-                    # Recognize media info using the main title
-                    tmdbinfo = self.match_tmdbinfo(
-                        name=season_meta.name,
-                        year=meta.year,
-                        mtype=mtype or meta.type,
-                        season=meta.begin_season
-                    )
+            # Match TMDB info
+            meta_names = list(dict.fromkeys([k for k in [meta_org.name,
+                                                         meta.cn_name,
+                                                         meta.en_name] if k]))
+            for name in meta_names:
+                tmdbinfo = self.match_tmdbinfo(
+                    name=name,
+                    year=meta.year,
+                    mtype=mtype or meta.type,
+                    season=meta.begin_season
+                )
+                if tmdbinfo:
+                    break
             return tmdbinfo

+    def get_tmdbinfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
+        """
+        Get TMDB info by Bangumi ID
+        """
+        bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
+        if bangumiinfo:
+            # Prefer matching by the original title
+            if bangumiinfo.get("name_cn"):
+                meta = MetaInfo(title=bangumiinfo.get("name"))
+                meta_cn = MetaInfo(title=bangumiinfo.get("name_cn"))
+            else:
+                meta_cn = meta = MetaInfo(title=bangumiinfo.get("name"))
+            # Year
+            release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
+            if release_date:
+                year = release_date[:4]
+            else:
+                year = None
+            # Recognize TMDB media info
+            meta_names = list(dict.fromkeys([k for k in [meta_cn.name,
+                                                         meta.name] if k]))
+            for name in meta_names:
+                tmdbinfo = self.match_tmdbinfo(
+                    name=name,
+                    year=year,
+                    mtype=MediaType.TV,
+                    season=meta.begin_season
+                )
+                if tmdbinfo:
+                    return tmdbinfo
+        return None
+
     def get_doubaninfo_by_tmdbid(self, tmdbid: int,
                                  mtype: MediaType = None, season: int = None) -> Optional[dict]:
         """
@@ -261,3 +287,29 @@ class MediaChain(ChainBase, metaclass=Singleton):
                 imdbid=imdbid
             )
         return None
+
+    def get_doubaninfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
+        """
+        Get Douban info by Bangumi ID
+        """
+        bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
+        if bangumiinfo:
+            # Prefer matching by the Chinese title
+            if bangumiinfo.get("name_cn"):
+                meta = MetaInfo(title=bangumiinfo.get("name_cn"))
+            else:
+                meta = MetaInfo(title=bangumiinfo.get("name"))
+            # Year
+            release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
+            if release_date:
+                year = release_date[:4]
+            else:
+                year = None
+            # Recognize Douban media info by name
+            return self.match_doubaninfo(
+                name=meta.name,
+                year=year,
+                mtype=MediaType.TV,
+                season=meta.begin_season
+            )
+        return None

View File

@@ -12,9 +12,11 @@ from app.core.config import settings
 from app.core.context import MediaInfo, Context
 from app.core.event import EventManager
 from app.core.meta import MetaBase
+from app.db.message_oper import MessageOper
+from app.helper.message import MessageHelper
 from app.helper.torrent import TorrentHelper
 from app.log import logger
-from app.schemas import Notification, NotExistMediaInfo
+from app.schemas import Notification, NotExistMediaInfo, CommingMessage
 from app.schemas.types import EventType, MessageChannel, MediaType
 from app.utils.string import StringUtils
@@ -43,6 +45,8 @@ class MessageChain(ChainBase):
         self.mediachain = MediaChain()
         self.eventmanager = EventManager()
         self.torrenthelper = TorrentHelper()
+        self.messagehelper = MessageHelper()
+        self.messageoper = MessageOper()

     def __get_noexits_info(
             self,
@@ -100,10 +104,8 @@ class MessageChain(ChainBase):
     def process(self, body: Any, form: Any, args: Any) -> None:
         """
-        Recognize the message content and execute the action
+        Call the modules to parse the message content
         """
-        # Declare globals
-        global _current_page, _current_meta, _current_media
         # Get the message content
         info = self.message_parser(body=body, form=form, args=args)
         if not info:
@@ -113,7 +115,7 @@ class MessageChain(ChainBase):
         # User ID
         userid = info.userid
         # User name
-        username = info.username
+        username = info.username or userid
         if not userid:
             logger.debug(f'未识别到用户ID{body}{form}{args}')
             return
@@ -122,10 +124,34 @@ class MessageChain(ChainBase):
         if not text:
             logger.debug(f'未识别到消息内容::{body}{form}{args}')
             return
+        # Handle the message
+        self.handle_message(channel=channel, userid=userid, username=username, text=text)
+
+    def handle_message(self, channel: MessageChannel, userid: Union[str, int], username: str, text: str) -> None:
+        """
+        Recognize the message content and execute the action
+        """
+        # Declare globals
+        global _current_page, _current_meta, _current_media
         # Load the cache
         user_cache: Dict[str, dict] = self.load_cache(self._cache_file) or {}
         # Handle the message
         logger.info(f'收到用户消息内容,用户:{userid},内容:{text}')
+        # Persist the message
+        self.messagehelper.put(
+            CommingMessage(
+                userid=userid,
+                username=username,
+                channel=channel,
+                text=text
+            ), role="user")
+        self.messageoper.add(
+            channel=channel,
+            userid=username or userid,
+            text=text,
+            action=0
+        )
+        # Handle the message
         if text.startswith('/'):
             # Run the command
             self.eventmanager.send_event(
@@ -166,8 +192,8 @@ class MessageChain(ChainBase):
                     # Already in the media library
                     self.post_message(
                         Notification(channel=channel,
                                      title=f"{_current_media.title_year}"
-                                           f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 XXX 或 下载 XXX",
+                                           f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 名称 或 下载 名称】",
                                      userid=userid))
                     return
                 elif exist_flag:
@@ -248,8 +274,8 @@ class MessageChain(ChainBase):
                 if exist_flag:
                     self.post_message(Notification(
                         channel=channel,
                         title=f"{mediainfo.title_year}"
                               f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX",
                         userid=userid))
                     return
                 else:

View File

@@ -1,5 +1,4 @@
 import pickle
-import re
 import traceback
 from concurrent.futures import ThreadPoolExecutor, as_completed
 from datetime import datetime
@@ -9,6 +8,7 @@ from typing import List, Optional
 from app.chain import ChainBase
 from app.core.context import Context
 from app.core.context import MediaInfo, TorrentInfo
+from app.core.event import eventmanager, Event
 from app.core.metainfo import MetaInfo
 from app.db.systemconfig_oper import SystemConfigOper
 from app.helper.progress import ProgressHelper
@@ -16,8 +16,7 @@ from app.helper.sites import SitesHelper
 from app.helper.torrent import TorrentHelper
 from app.log import logger
 from app.schemas import NotExistMediaInfo
-from app.schemas.types import MediaType, ProgressKey, SystemConfigKey
-from app.utils.string import StringUtils
+from app.schemas.types import MediaType, ProgressKey, SystemConfigKey, EventType

 class SearchChain(ChainBase):
@@ -33,19 +32,27 @@ class SearchChain(ChainBase):
         self.torrenthelper = TorrentHelper()

     def search_by_id(self, tmdbid: int = None, doubanid: str = None,
-                     mtype: MediaType = None, area: str = "title") -> List[Context]:
+                     mtype: MediaType = None, area: str = "title", season: int = None) -> List[Context]:
         """
         Search resources by TMDB ID / Douban ID; exact match, without filtering out locally existing resources
         :param tmdbid: TMDB ID
         :param doubanid: Douban ID
         :param mtype: media type, movie or TV
         :param area: search scope, title or imdbid
+        :param season: season number
         """
         mediainfo = self.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
         if not mediainfo:
             logger.error(f'{tmdbid} 媒体信息识别失败!')
             return []
-        results = self.process(mediainfo=mediainfo, area=area)
+        no_exists = None
+        if season:
+            no_exists = {
+                tmdbid or doubanid: {
+                    season: NotExistMediaInfo(episodes=[])
+                }
+            }
+        results = self.process(mediainfo=mediainfo, area=area, no_exists=no_exists)
         # Save the results
         bytes_results = pickle.dumps(results)
         self.systemconfig.set(SystemConfigKey.SearchResults, bytes_results)
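`search_by_id` persists the search results by pickling the list of context objects into the system config under `SystemConfigKey.SearchResults`. The pattern, with a plain dict standing in for MoviePilot's `SystemConfigOper` store, looks like this:

```python
import pickle

# Stand-in for SystemConfigOper: any key-value store that accepts bytes works
_store: dict = {}


def save_results(key: str, results: list) -> None:
    # pickle.dumps serializes arbitrary Python objects (contexts, dataclasses, ...)
    _store[key] = pickle.dumps(results)


def load_results(key: str) -> list:
    data = _store.get(key)
    # pickle.loads restores the original objects; missing key -> empty list
    return pickle.loads(data) if data else []


save_results("SearchResults", [{"site": "demo", "title": "Example.S01"}])
assert load_results("SearchResults")[0]["site"] == "demo"
```

Pickle keeps the full object graph across requests, but it is Python-version-sensitive and must never be fed untrusted bytes; here both ends of the round-trip live in the same application.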
@@ -124,7 +131,12 @@ class SearchChain(ChainBase):
         if keyword:
             keywords = [keyword]
         else:
-            keywords = list({mediainfo.title, mediainfo.original_title, mediainfo.en_title} - {None})
+            # Dedupe and drop empties while keeping the order
+            keywords = list(dict.fromkeys([k for k in [mediainfo.title,
+                                                       mediainfo.original_title,
+                                                       mediainfo.en_title,
+                                                       mediainfo.sg_title] if k]))
         # Run the search
         torrents: List[TorrentInfo] = self.__search_all_sites(
             mediainfo=mediainfo,
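The replaced set expression `list({...} - {None})` dropped `None` entries but also scrambled the keyword order, since sets are unordered. `dict.fromkeys` deduplicates while preserving first-seen order (guaranteed for `dict` since Python 3.7), so the preferred title is always searched first:

```python
# Candidate titles may repeat and contain empty entries
titles = ["Dune", "沙丘", None, "Dune", "Dune: Part One"]

# The comprehension drops empty values; dict keys keep insertion order,
# so this dedups without reordering the search priority
keywords = list(dict.fromkeys([t for t in titles if t]))
assert keywords == ["Dune", "沙丘", "Dune: Part One"]
```

The same idiom appears in the MediaChain hunk above for `meta_names`, for the same reason: the match loop tries candidates in priority order and stops at the first hit.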
@@ -145,13 +157,15 @@ class SearchChain(ChainBase):
         _count = 0
         if mediainfo:
             # The English title should be among the aliases/original title, no extra match needed
-            logger.info(f"标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
+            logger.info(f"开始匹配结果 标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
             self.progress.update(value=0, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
             for torrent in torrents:
                 _count += 1
                 self.progress.update(value=(_count / _total) * 96,
                                      text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...',
                                      key=ProgressKey.Search)
+                if not torrent.title:
+                    continue
                 # Compare IMDB IDs
                 if torrent.imdbid \
                         and mediainfo.imdb_id \
@@ -161,59 +175,27 @@ class SearchChain(ChainBase):
                     continue
                 # Recognize
                 torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
-                # Compare the recognized torrent type
-                if torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV:
-                    logger.warn(f'{torrent.site_name} - {torrent.title} 种子标题类型为 {torrent_meta.type.value}'
-                                f'需要是 {mediainfo.type.value},不匹配')
-                    continue
-                # Compare the torrent's category on the site
-                if torrent.category == MediaType.TV.value and mediainfo.type != MediaType.TV:
-                    logger.warn(f'{torrent.site_name} - {torrent.title} 种子在站点中归类为 {torrent.category}'
-                                f'需要是 {mediainfo.type.value},不匹配')
-                    continue
-                # Compare years
-                if mediainfo.year:
-                    if mediainfo.type == MediaType.TV:
-                        # TV series: each season may have a different year
-                        if torrent_meta.year and torrent_meta.year not in [year for year in
-                                                                           mediainfo.season_years.values()]:
-                            logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
-                            continue
-                    else:
-                        # Movies: allow a one-year tolerance either way
-                        if torrent_meta.year not in [str(int(mediainfo.year) - 1),
-                                                     mediainfo.year,
-                                                     str(int(mediainfo.year) + 1)]:
-                            logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
-                            continue
-                # Compare the title and original-language title
-                meta_name = StringUtils.clear_upper(torrent_meta.name)
-                if meta_name in [
-                    StringUtils.clear_upper(mediainfo.title),
-                    StringUtils.clear_upper(mediainfo.original_title)
-                ]:
-                    logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
-                    _match_torrents.append(torrent)
-                    continue
-                # Check whether the title or original-language title appears in the subtitle
-                if torrent.description:
-                    subtitle = re.split(r'[\s/|]+', torrent.description)
-                    if (StringUtils.is_chinese(mediainfo.title)
-                        and str(mediainfo.title) in subtitle) \
-                            or (StringUtils.is_chinese(mediainfo.original_title)
-                                and str(mediainfo.original_title) in subtitle):
-                        logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
-                                    f'副标题:{torrent.description}')
-                        _match_torrents.append(torrent)
-                        continue
-                # Compare aliases and translated titles
-                for name in mediainfo.names:
-                    if StringUtils.clear_upper(name) == meta_name:
-                        logger.info(f'{mediainfo.title} 通过别名或译名匹配到资源:{torrent.site_name} - {torrent.title}')
-                        _match_torrents.append(torrent)
-                        break
-                else:
-                    logger.warn(f'{torrent.site_name} - {torrent.title} 标题不匹配')
+                if torrent.title != torrent_meta.org_string:
+                    logger.info(f"种子名称应用识别词后发生改变:{torrent.title} => {torrent_meta.org_string}")
+                # Compare against the TMDB ID pinned by the custom word table
+                if torrent_meta.tmdbid or torrent_meta.doubanid:
+                    if torrent_meta.tmdbid and torrent_meta.tmdbid == mediainfo.tmdb_id:
+                        logger.info(f'{mediainfo.title} 通过词表指定TMDBID匹配到资源{torrent.site_name} - {torrent.title}')
+                        _match_torrents.append(torrent)
+                        continue
+                    if torrent_meta.doubanid and torrent_meta.doubanid == mediainfo.douban_id:
+                        logger.info(f'{mediainfo.title} 通过词表指定豆瓣ID匹配到资源{torrent.site_name} - {torrent.title}')
+                        _match_torrents.append(torrent)
+                        continue
+                # Compare the torrent
+                if self.torrenthelper.match_torrent(mediainfo=mediainfo,
+                                                    torrent_meta=torrent_meta,
+                                                    torrent=torrent):
+                    # Matched
+                    _match_torrents.append(torrent)
+                    continue
             # Matching finished
             logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
             self.progress.update(value=97,
                                  text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
@@ -228,7 +210,7 @@ class SearchChain(ChainBase):
         # Get the search priority rules
         priority_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
         if priority_rule:
-            logger.info(f'开始优先级规则过滤,当前规则:{priority_rule} ...')
+            logger.info(f'开始优先级规则/剧集过滤,当前规则:{priority_rule} ...')
             result: List[TorrentInfo] = self.filter_torrents(rule_string=priority_rule,
                                                              torrent_list=_match_torrents,
                                                              season_episodes=season_episodes,
@@ -239,14 +221,14 @@ class SearchChain(ChainBase):
                 logger.warn(f'{keyword or mediainfo.title} 没有符合优先级规则的资源')
                 return []
         # Filter again with the filter rules
-        if filter_rule:
+        if _match_torrents:
             logger.info(f'开始过滤规则过滤,当前规则:{filter_rule} ...')
             _match_torrents = self.filter_torrents_by_rule(torrents=_match_torrents,
                                                            mediainfo=mediainfo,
                                                            filter_rule=filter_rule)
             if not _match_torrents:
                 logger.warn(f'{keyword or mediainfo.title} 没有符合过滤规则的资源')
                 return []
         # Strip redundant data from mediainfo
         mediainfo.clear()
         # Assemble the contexts
@@ -254,8 +236,8 @@ class SearchChain(ChainBase):
                             media_info=mediainfo,
                             torrent_info=torrent) for torrent in _match_torrents]
-        logger.info(f"过滤完成,剩余 {_total} 个资源")
-        self.progress.update(value=99, text=f'过滤完成,剩余 {_total} 个资源', key=ProgressKey.Search)
+        logger.info(f"过滤完成,剩余 {len(contexts)} 个资源")
+        self.progress.update(value=99, text=f'过滤完成,剩余 {len(contexts)} 个资源', key=ProgressKey.Search)
         # Sort
         self.progress.update(value=100,
                              text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
@@ -379,3 +361,24 @@ class SearchChain(ChainBase):
             ),
             torrents
         ))
+
+    @eventmanager.register(EventType.SiteDeleted)
+    def remove_site(self, event: Event):
+        """
+        Remove settings related to a deleted site from the search sites
+        """
+        if not event:
+            return
+        event_data = event.event_data or {}
+        site_id = event_data.get("site_id")
+        if not site_id:
+            return
+        if site_id == "*":
+            # Clear all search sites
+            SystemConfigOper().set(SystemConfigKey.IndexerSites, [])
+            return
+        # Remove from the selected search sites
+        selected_sites = SystemConfigOper().get(SystemConfigKey.IndexerSites) or []
+        if site_id in selected_sites:
+            selected_sites.remove(site_id)
+            SystemConfigOper().set(SystemConfigKey.IndexerSites, selected_sites)
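The new `remove_site` handler relies on `eventmanager.register` decorating a method so it fires whenever `EventType.SiteDeleted` is broadcast. A minimal sketch of that publish/subscribe shape — MoviePilot's real `EventManager` adds threading, priorities and more; these names are illustrative only:

```python
from collections import defaultdict
from typing import Callable

# event type -> list of registered handlers
_handlers: "defaultdict[str, list]" = defaultdict(list)


def register(event_type: str) -> Callable:
    # Decorator factory: remembers the function under the given event type
    def decorator(func: Callable) -> Callable:
        _handlers[event_type].append(func)
        return func
    return decorator


def send_event(event_type: str, data: dict) -> None:
    # Deliver the payload to every handler registered for this event
    for handler in _handlers[event_type]:
        handler(data)


removed = []


@register("SiteDeleted")
def remove_site(event_data: dict) -> None:
    site_id = event_data.get("site_id")
    if site_id:
        removed.append(site_id)


send_event("SiteDeleted", {"site_id": 3})
assert removed == [3]
```

The decorator registers at import time, which is why handlers like `remove_site` and `clear_site_data` need no explicit wiring: broadcasting the event is enough.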

View File

@@ -1,5 +1,6 @@
 import base64
 import re
+from datetime import datetime
 from typing import Tuple, Optional
 from typing import Union
 from urllib.parse import urljoin
@@ -12,6 +13,8 @@ from app.core.event import eventmanager, Event, EventManager
 from app.db.models.site import Site
 from app.db.site_oper import SiteOper
 from app.db.siteicon_oper import SiteIconOper
+from app.db.systemconfig_oper import SystemConfigOper
+from app.db.sitestatistic_oper import SiteStatisticOper
 from app.helper.browser import PlaywrightHelper
 from app.helper.cloudflare import under_challenge
 from app.helper.cookie import CookieHelper
@@ -40,18 +43,24 @@ class SiteChain(ChainBase):
         self.rsshelper = RssHelper()
         self.cookiehelper = CookieHelper()
         self.message = MessageHelper()
-        self.cookiecloud = CookieCloudHelper(
-            server=settings.COOKIECLOUD_HOST,
-            key=settings.COOKIECLOUD_KEY,
-            password=settings.COOKIECLOUD_PASSWORD
-        )
+        self.cookiecloud = CookieCloudHelper()
+        self.systemconfig = SystemConfigOper()
+        self.sitestatistic = SiteStatisticOper()
         # Login checks for special sites
         self.special_site_test = {
             "zhuque.in": self.__zhuque_test,
-            # "m-team.io": self.__mteam_test,
+            "m-team.io": self.__mteam_test,
+            "m-team.cc": self.__mteam_test,
+            "ptlsp.com": self.__ptlsp_test,
         }

+    def is_special_site(self, domain: str) -> bool:
+        """
+        Check whether a domain is a special site
+        """
+        return domain in self.special_site_test
+
     @staticmethod
     def __zhuque_test(site: Site) -> Tuple[bool, str]:
         """
@@ -59,8 +68,9 @@ class SiteChain(ChainBase):
         """
         # Get the token
         token = None
+        user_agent = site.ua or settings.USER_AGENT
         res = RequestUtils(
-            ua=site.ua,
+            ua=user_agent,
             cookies=site.cookie,
             proxies=settings.PROXY if site.proxy else None,
             timeout=15
@@ -76,7 +86,7 @@ class SiteChain(ChainBase):
             headers={
                 'X-CSRF-TOKEN': token,
                 "Content-Type": "application/json; charset=utf-8",
-                "User-Agent": f"{site.ua}"
+                "User-Agent": f"{user_agent}"
             },
             cookies=site.cookie,
             proxies=settings.PROXY if site.proxy else None,
@@ -93,18 +103,40 @@ class SiteChain(ChainBase):
         """
         Check whether the site is logged in (m-team)
         """
+        user_agent = site.ua or settings.USER_AGENT
         url = f"{site.url}api/member/profile"
+        headers = {
+            "Content-Type": "application/json",
+            "User-Agent": user_agent,
+            "Accept": "application/json, text/plain, */*",
+            "Authorization": site.token
+        }
         res = RequestUtils(
-            ua=site.ua,
-            cookies=site.cookie,
+            headers=headers,
             proxies=settings.PROXY if site.proxy else None,
             timeout=15
         ).post_res(url=url)
         if res and res.status_code == 200:
             user_info = res.json()
             if user_info and user_info.get("data"):
-                return True, "连接成功"
-        return False, "Cookie已失效"
+                # Update the last-visited time
+                res = RequestUtils(headers=headers,
+                                   timeout=60,
+                                   proxies=settings.PROXY if site.proxy else None,
+                                   referer=f"{site.url}index"
+                                   ).post_res(url=urljoin(url, "api/member/updateLastBrowse"))
+                if res:
+                    return True, "连接成功"
+                else:
+                    return True, "连接成功,但更新状态失败"
+        return False, "鉴权已过期或无效"
+
+    def __ptlsp_test(self, site: Site) -> Tuple[bool, str]:
+        """
+        Check whether the site is logged in (ptlsp)
+        """
+        site.url = f"{site.url}index.php"
+        return self.__test(site)
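`__mteam_test` builds the follow-up URL with `urljoin`, whose resolution rules are easy to misread: a reference without a leading slash is resolved against the base URL's last *directory*, not appended to the full path:

```python
from urllib.parse import urljoin

base = "https://example.org/api/member/profile"

# No leading slash: resolved against the last directory ("/api/member/"),
# replacing the final segment ("profile")
assert urljoin(base, "updateLastBrowse") == "https://example.org/api/member/updateLastBrowse"

# Leading slash: replaces the entire path
assert urljoin(base, "/api/member/updateLastBrowse") == "https://example.org/api/member/updateLastBrowse"
```

When the reference itself contains path segments (as in the hunk above, `"api/member/updateLastBrowse"` against a base already ending in `/api/member/profile`), those segments are appended under the base's directory, so it is worth checking the resolved URL against the endpoint the site actually expects.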
     @staticmethod
     def __parse_favicon(url: str, cookie: str, ua: str) -> Tuple[str, Optional[str]]:
@@ -179,7 +211,7 @@ class SiteChain(ChainBase):
         rss_url, errmsg = self.rsshelper.get_rss_link(
             url=site_info.url,
             cookie=cookie,
-            ua=settings.USER_AGENT,
+            ua=site_info.ua or settings.USER_AGENT,
             proxy=True if site_info.proxy else False
         )
         if rss_url:
@@ -234,9 +266,9 @@ class SiteChain(ChainBase):
                               public=1 if indexer.get("public") else 0)
                 _add_count += 1
-            # Notify to cache the site icon
+            # Notify that the site was updated
             if indexer:
-                EventManager().send_event(EventType.CacheSiteIcon, {
+                EventManager().send_event(EventType.SiteUpdated, {
                     "domain": domain,
                 })
         # Done
@@ -248,7 +280,7 @@ class SiteChain(ChainBase):
         logger.info(f"CookieCloud同步成功{ret_msg}")
         return True, ret_msg

-    @eventmanager.register(EventType.CacheSiteIcon)
+    @eventmanager.register(EventType.SiteUpdated)
     def cache_site_icon(self, event: Event):
         """
         Cache the site icon
@@ -290,6 +322,27 @@ class SiteChain(ChainBase):
         else:
             logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")

+    @eventmanager.register(EventType.SiteUpdated)
+    def clear_site_data(self, event: Event):
+        """
+        Clear site data
+        """
+        if not event:
+            return
+        event_data = event.event_data or {}
+        # Primary domain
+        domain = event_data.get("domain")
+        if not domain:
+            return
+        # Get the host part of the primary domain
+        domain_host = StringUtils.get_url_host(domain)
+        # Find config keys starting with "site.<domain_host>" and delete them
+        site_keys = self.systemconfig.all().keys()
+        for key in site_keys:
+            if key.startswith(f"site.{domain_host}"):
+                logger.info(f"清理站点配置:{key}")
+                self.systemconfig.delete(key)
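`clear_site_data` sweeps per-site settings by matching a key-prefix namespace, `site.<host>`. The same pattern against a plain dict — a stand-in for `SystemConfigOper`, with the key names invented for illustration:

```python
def clear_site_config(config: dict, domain_host: str) -> dict:
    # Drop every key namespaced under "site.<host>"; iterate over a
    # snapshot of the keys so deleting during the loop is safe
    for key in list(config.keys()):
        if key.startswith(f"site.{domain_host}"):
            del config[key]
    return config


cfg = {"site.mteam.cookie": "x", "site.mteam.ua": "y", "site.other.cookie": "z"}
clear_site_config(cfg, "mteam")
assert cfg == {"site.other.cookie": "z"}
```

Note that a bare prefix check also matches longer hosts (`site.mteamfoo...`); appending a trailing separator to the prefix would make the namespace boundary exact if that ever matters.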
def test(self, url: str) -> Tuple[bool, str]: def test(self, url: str) -> Tuple[bool, str]:
""" """
测试站点是否可用 测试站点是否可用
@@ -302,53 +355,70 @@ class SiteChain(ChainBase):
if not site_info: if not site_info:
return False, f"站点【{url}】不存在" return False, f"站点【{url}】不存在"
# 特殊站点测试 # 模拟登录
if self.special_site_test.get(domain): try:
return self.special_site_test[domain](site_info) # 开始记时
start_time = datetime.now()
# 特殊站点测试
if self.special_site_test.get(domain):
state, message = self.special_site_test[domain](site_info)
else:
# 通用站点测试
state, message = self.__test(site_info)
# 统计
seconds = (datetime.now() - start_time).seconds
if state:
self.sitestatistic.success(domain=domain, seconds=seconds)
else:
self.sitestatistic.fail(domain)
return state, message
except Exception as e:
return False, f"{str(e)}"
# 通用站点测试 @staticmethod
def __test(site_info: Site) -> Tuple[bool, str]:
"""
通用站点测试
"""
site_url = site_info.url site_url = site_info.url
site_cookie = site_info.cookie site_cookie = site_info.cookie
ua = site_info.ua ua = site_info.ua or settings.USER_AGENT
render = site_info.render render = site_info.render
public = site_info.public public = site_info.public
proxies = settings.PROXY if site_info.proxy else None proxies = settings.PROXY if site_info.proxy else None
proxy_server = settings.PROXY_SERVER if site_info.proxy else None proxy_server = settings.PROXY_SERVER if site_info.proxy else None
# visit the site
if render:
    page_source = PlaywrightHelper().get_page_source(url=site_url,
                                                     cookies=site_cookie,
                                                     ua=ua,
                                                     proxies=proxy_server)
    if not public and not SiteUtils.is_logged_in(page_source):
        if under_challenge(page_source):
            return False, "无法通过Cloudflare"
        return False, "仿真登录失败,Cookie已失效"
else:
    res = RequestUtils(cookies=site_cookie,
                       ua=ua,
                       proxies=proxies
                       ).get_res(url=site_url)
    # check login state
    if res and res.status_code in [200, 500, 403]:
        if not public and not SiteUtils.is_logged_in(res.text):
            if under_challenge(res.text):
                msg = "站点被Cloudflare防护,请打开站点浏览器仿真"
            elif res.status_code == 200:
                msg = "Cookie已失效"
            else:
                msg = f"状态码:{res.status_code}"
            return False, msg
        elif public and res.status_code != 200:
            return False, f"状态码:{res.status_code}"
    elif res is not None:
        return False, f"状态码:{res.status_code}"
    else:
        return False, "无法打开网站"
return True, "连接成功"
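The status-code decision table above (render path aside) can be exercised in isolation. This is an illustrative sketch, not MoviePilot code — the helper name and the `(ok, message)` tuple convention are assumptions:

```python
def classify_test_result(status_code, public: bool, logged_in: bool, challenged: bool = False):
    """Mirror the site-test branches: return (ok, message)."""
    if status_code is None:
        # no response at all
        return False, "无法打开网站"
    if status_code in (200, 500, 403):
        if not public and not logged_in:
            if challenged:
                return False, "站点被Cloudflare防护,请打开站点浏览器仿真"
            if status_code == 200:
                return False, "Cookie已失效"
            return False, f"状态码:{status_code}"
        if public and status_code != 200:
            return False, f"状态码:{status_code}"
        return True, "连接成功"
    # any other status code is a failure
    return False, f"状态码:{status_code}"
```

Note that 500 and 403 are deliberately treated as "reachable": some private trackers answer logged-in checks even on those codes.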
def remote_list(self, channel: MessageChannel, userid: Union[str, int] = None):

View File

@@ -1,6 +1,5 @@
import json
import random
import time
from datetime import datetime
from typing import Dict, List, Optional, Union, Tuple
@@ -12,17 +11,19 @@ from app.chain.search import SearchChain
from app.chain.torrents import TorrentsChain
from app.core.config import settings
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.event import eventmanager, Event, EventManager
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.models.subscribe import Subscribe
from app.db.subscribe_oper import SubscribeOper
from app.db.subscribehistory_oper import SubscribeHistoryOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo, Notification
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType, EventType
class SubscribeChain(ChainBase):
@@ -35,6 +36,8 @@ class SubscribeChain(ChainBase):
self.downloadchain = DownloadChain()
self.searchchain = SearchChain()
self.subscribeoper = SubscribeOper()
self.subscribehistoryoper = SubscribeHistoryOper()
self.subscribehelper = SubscribeHelper()
self.torrentschain = TorrentsChain()
self.mediachain = MediaChain()
self.message = MessageHelper()
@@ -45,6 +48,7 @@ class SubscribeChain(ChainBase):
mtype: MediaType = None,
tmdbid: int = None,
doubanid: str = None,
bangumiid: int = None,
season: int = None,
channel: MessageChannel = None,
userid: str = None,
@@ -100,6 +104,7 @@ class SubscribeChain(ChainBase):
mediainfo = self.recognize_media(mtype=mediainfo.type,
                                 tmdbid=mediainfo.tmdb_id,
                                 doubanid=mediainfo.douban_id,
                                 bangumiid=mediainfo.bangumi_id,
                                 cache=False)
if not mediainfo:
    logger.error(f"媒体信息识别失败!")
@@ -119,13 +124,29 @@ class SubscribeChain(ChainBase):
    kwargs.update({
        'lack_episode': kwargs.get('total_episode')
    })
else:
    # avoid season being 0
    season = None
# refresh media images
self.obtain_images(mediainfo=mediainfo)
# merge in external IDs
if doubanid:
    mediainfo.douban_id = doubanid
if bangumiid:
    mediainfo.bangumi_id = bangumiid
# add the subscription, filling defaults from the per-type subscribe config
kwargs.update({
    'quality': self.__get_default_subscribe_config(mediainfo.type, "quality"),
    'resolution': self.__get_default_subscribe_config(mediainfo.type, "resolution"),
    'effect': self.__get_default_subscribe_config(mediainfo.type, "effect"),
    'include': self.__get_default_subscribe_config(mediainfo.type, "include"),
    'exclude': self.__get_default_subscribe_config(mediainfo.type, "exclude"),
    'best_version': self.__get_default_subscribe_config(mediainfo.type, "best_version"),
    'search_imdbid': self.__get_default_subscribe_config(mediainfo.type, "search_imdbid"),
    'sites': self.__get_default_subscribe_config(mediainfo.type, "sites") or None,
    'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path"),
})
sid, err_msg = self.subscribeoper.add(mediainfo=mediainfo, season=season, username=username, **kwargs)
if not sid:
    logger.error(f'{mediainfo.title_year} {err_msg}')
    if not exist_ok and message:
@@ -137,10 +158,11 @@ class SubscribeChain(ChainBase):
                text=f"{err_msg}",
                image=mediainfo.get_message_image(),
                userid=userid))
    return None, err_msg
elif message:
    logger.info(f'{mediainfo.title_year} {metainfo.season} 添加订阅成功')
    if username:
        text = f"评分:{mediainfo.vote_average},来自用户:{username}"
    else:
        text = f"评分:{mediainfo.vote_average}"
    # broadcast to all channels
@@ -148,6 +170,28 @@ class SubscribeChain(ChainBase):
                  title=f"{mediainfo.title_year} {metainfo.season} 已添加订阅",
                  text=text,
                  image=mediainfo.get_message_image()))
# fire the SubscribeAdded event
EventManager().send_event(EventType.SubscribeAdded, {
    "subscribe_id": sid,
    "username": username,
    "mediainfo": mediainfo.to_dict(),
})
# report subscription statistics
self.subscribehelper.sub_reg_async({
    "name": title,
    "year": year,
    "type": metainfo.type.value,
    "tmdbid": mediainfo.tmdb_id,
    "imdbid": mediainfo.imdb_id,
    "tvdbid": mediainfo.tvdb_id,
    "doubanid": mediainfo.douban_id,
    "bangumiid": mediainfo.bangumi_id,
    "season": metainfo.begin_season,
    "poster": mediainfo.get_poster_image(),
    "backdrop": mediainfo.get_backdrop_image(),
    "vote": mediainfo.vote_average,
    "description": mediainfo.overview
})
# return the new subscription id
return sid, ""
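The default-filling step above pulls per-type defaults into the subscription via `kwargs.update`; note that `dict.update` overwrites any caller-supplied values for those keys. A minimal sketch of the pattern (function and key names are illustrative, not MoviePilot API):

```python
# keys that get a per-media-type default, mirroring the diff
DEFAULT_KEYS = ("quality", "resolution", "effect", "include", "exclude",
                "best_version", "search_imdbid", "sites", "save_path")

def fill_subscribe_defaults(kwargs: dict, get_default) -> dict:
    """Copy each default into kwargs, mirroring the kwargs.update call above."""
    kwargs.update({key: get_default(key) for key in DEFAULT_KEYS})
    return kwargs
```

A `setdefault`-based variant would instead preserve values the caller already passed — a design choice worth noting when reading this hunk.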
@@ -350,12 +394,8 @@ class SubscribeChain(ChainBase):
# priority of the resources downloaded this round
priority = max([item.torrent_info.pri_order for item in downloads])
if priority == 100:
    # best-version upgrade finished
    self.__finish_subscribe(subscribe=subscribe, meta=meta, mediainfo=mediainfo, bestversion=True)
else:
    # still upgrading; raise the stored resource priority
    logger.info(f'{mediainfo.title_year} 正在洗版,更新资源优先级为 {priority}')
@@ -379,13 +419,8 @@ class SubscribeChain(ChainBase):
if ((no_lefts and meta.type == MediaType.TV)
        or (downloads and meta.type == MediaType.MOVIE)
        or force):
    # complete the subscription
    self.__finish_subscribe(subscribe=subscribe, meta=meta, mediainfo=mediainfo)
elif downloads and meta.type == MediaType.TV:
    # TV: record the downloaded episodes
    self.__update_subscribe_note(subscribe=subscribe, downloads=downloads)
@@ -557,9 +592,9 @@ class SubscribeChain(ChainBase):
torrent_meta = context.meta_info
torrent_mediainfo = context.media_info
torrent_info = context.torrent_info
# when media info was recognized, compare TMDB ID and type
if torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id:
    # compare media info directly
    if torrent_mediainfo.type != mediainfo.type:
        continue
    if torrent_mediainfo.tmdb_id \
@@ -568,56 +603,25 @@ class SubscribeChain(ChainBase):
    if torrent_mediainfo.douban_id \
            and torrent_mediainfo.douban_id != mediainfo.douban_id:
        continue
    logger.info(
        f'{mediainfo.title_year} 通过媒体信息ID匹配到资源:{torrent_info.site_name} - {torrent_info.title}')
else:
    # no recognized media info for the torrent; match by title
    manual_match = False
    # compare the tmdbid specified by custom identification words
    if torrent_meta.tmdbid or torrent_meta.doubanid:
        if torrent_meta.tmdbid and torrent_meta.tmdbid != mediainfo.tmdb_id:
            continue
        if torrent_meta.doubanid and torrent_meta.doubanid != mediainfo.douban_id:
            continue
        manual_match = True
    if not manual_match:
        # no tmdbid specified; match by title
        if not self.torrenthelper.match_torrent(mediainfo=mediainfo,
                                                torrent_meta=torrent_meta,
                                                torrent=torrent_info,
                                                logerror=False):
            continue
# priority filter rules
if subscribe.best_version:
    priority_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
@@ -843,6 +847,36 @@ class SubscribeChain(ChainBase):
    "lack_episode": lack_episode
})
def __finish_subscribe(self, subscribe: Subscribe, mediainfo: MediaInfo,
                       meta: MetaBase, bestversion: bool = False):
    """
    Complete a subscription
    """
    msgstr = "订阅"
    if bestversion:
        msgstr = "洗版"
    logger.info(f'{mediainfo.title_year} 完成{msgstr}')
    # add a subscription-history record
    self.subscribehistoryoper.add(**subscribe.to_dict())
    # delete the subscription
    self.subscribeoper.delete(subscribe.id)
    # send notification
    self.post_message(Notification(mtype=NotificationType.Subscribe,
                                   title=f'{mediainfo.title_year} {meta.season} 已完成{msgstr}',
                                   image=mediainfo.get_message_image()))
    # fire the SubscribeComplete event
    EventManager().send_event(EventType.SubscribeComplete, {
        "subscribe_id": subscribe.id,
        "subscribe_info": subscribe.to_dict(),
        "mediainfo": mediainfo.to_dict(),
    })
    # report completion statistics
    self.subscribehelper.sub_done_async({
        "tmdbid": mediainfo.tmdb_id,
        "doubanid": mediainfo.douban_id
    })
def remote_list(self, channel: MessageChannel, userid: Union[str, int] = None):
    """
    Query subscriptions and send them as a message
@@ -891,6 +925,11 @@ class SubscribeChain(ChainBase):
    return
# delete the subscription
self.subscribeoper.delete(subscribe_id)
# count it as done for statistics
self.subscribehelper.sub_done_async({
    "tmdbid": subscribe.tmdbid,
    "doubanid": subscribe.doubanid
})
# refresh the remote list message
self.remote_list(channel, userid)
@@ -952,3 +991,62 @@ class SubscribeChain(ChainBase):
    start_episode=start_episode
)
return no_exists
@eventmanager.register(EventType.SiteDeleted)
def remove_site(self, event: Event):
    """
    Remove site-related settings from subscriptions
    """
    if not event:
        return
    event_data = event.event_data or {}
    site_id = event_data.get("site_id")
    if not site_id:
        return
    if site_id == "*":
        # all sites were reset
        SystemConfigOper().set(SystemConfigKey.RssSites, [])
        for subscribe in self.subscribeoper.list():
            if not subscribe.sites:
                continue
            self.subscribeoper.update(subscribe.id, {
                "sites": ""
            })
        return
    # remove from the selected RSS sites
    selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
    if site_id in selected_sites:
        selected_sites.remove(site_id)
        SystemConfigOper().set(SystemConfigKey.RssSites, selected_sites)
    # walk all subscriptions
    for subscribe in self.subscribeoper.list():
        if not subscribe.sites:
            continue
        sites = json.loads(subscribe.sites) or []
        if site_id not in sites:
            continue
        sites.remove(site_id)
        self.subscribeoper.update(subscribe.id, {
            "sites": json.dumps(sites)
        })
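`remove_site` round-trips each subscription's JSON-encoded site list. The core transformation, extracted as a standalone sketch (the helper name is illustrative):

```python
import json

def drop_site(sites_json: str, site_id: int) -> str:
    """Decode a JSON list of site ids, drop one id if present, re-encode."""
    sites = json.loads(sites_json) or []
    if site_id in sites:
        sites.remove(site_id)
    return json.dumps(sites)
```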
@staticmethod
def __get_default_subscribe_config(mtype: MediaType, default_config_key: str):
    """
    Get the default subscribe config value for a media type
    """
    default_subscribe_key = None
    if mtype == MediaType.TV:
        default_subscribe_key = "DefaultTvSubscribeConfig"
    if mtype == MediaType.MOVIE:
        default_subscribe_key = "DefaultMovieSubscribeConfig"
    # an env setting wins over the stored system config
    if hasattr(settings, default_subscribe_key):
        value = getattr(settings, default_subscribe_key)
    else:
        value = SystemConfigOper().get(default_subscribe_key)
    if not value:
        return None
    return value.get(default_config_key) or None
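`__get_default_subscribe_config` resolves the config container from the environment-backed settings first, then falls back to the system-config store. With plain dicts standing in for both sources (all names here are illustrative):

```python
def resolve_default(config_key: str, env: dict, sysconfig: dict,
                    store_key: str = "DefaultTvSubscribeConfig"):
    """Env settings win over stored system config; empty values collapse to None."""
    container = env.get(store_key) or sysconfig.get(store_key)
    if not container:
        return None
    return container.get(config_key) or None
```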

View File

@@ -1,5 +1,6 @@
import json
import re
from pathlib import Path
from typing import Union

from app.chain import ChainBase
@@ -40,19 +41,31 @@ class SystemChain(ChainBase, metaclass=Singleton):
}, self._restart_file)
SystemUtils.restart()
def __get_version_message(self) -> str:
"""
Build the version info text
"""
server_release_version = self.__get_server_release_version()
front_release_version = self.__get_front_release_version()
server_local_version = self.get_server_local_version()
front_local_version = self.get_frontend_version()
if server_release_version == server_local_version:
title = f"当前后端版本:{server_local_version},已是最新版本\n"
else:
title = f"当前后端版本:{server_local_version},远程版本:{server_release_version}\n"
if front_release_version == front_local_version:
title += f"当前前端版本:{front_local_version},已是最新版本"
else:
title += f"当前前端版本:{front_local_version},远程版本:{front_release_version}"
return title
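Each line of the message above reduces to the same local-vs-remote comparison; a pure-function sketch (the helper is illustrative, not part of SystemChain):

```python
def version_line(kind: str, local: str, remote: str) -> str:
    """One line of the version report: local version vs. latest release."""
    if remote == local:
        return f"当前{kind}版本:{local},已是最新版本"
    return f"当前{kind}版本:{local},远程版本:{remote}"
```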
def version(self, channel: MessageChannel, userid: Union[int, str]):
    """
    Show the current and remote versions
    """
    self.post_message(Notification(channel=channel,
                                   title=self.__get_version_message(),
                                   userid=userid))
def restart_finish(self):
    """
@@ -71,33 +84,50 @@ class SystemChain(ChainBase, metaclass=Singleton):
userid = restart_channel.get('userid')
# version message
title = self.__get_version_message()
self.post_message(Notification(channel=channel,
                               title=f"系统已重启完成!\n{title}",
                               userid=userid))
self.remove_cache(self._restart_file)
@staticmethod
def __get_server_release_version():
    """
    Get the latest backend release version
    """
    try:
        with RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
                "https://api.github.com/repos/jxxghp/MoviePilot/releases/latest") as version_res:
            if version_res:
                ver_json = version_res.json()
                version = f"{ver_json['tag_name']}"
                return version
            else:
                return None
    except Exception as err:
        logger.error(f"获取后端最新版本失败:{str(err)}")
        return None
@staticmethod
def __get_front_release_version():
    """
    Get the latest frontend release version
    """
    try:
        with RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
                "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest") as version_res:
            if version_res:
                ver_json = version_res.json()
                version = f"{ver_json['tag_name']}"
                return version
            else:
                return None
    except Exception as err:
        logger.error(f"获取前端最新版本失败:{str(err)}")
        return None
@staticmethod
def get_server_local_version():
    """
    Get the current backend version
    """
@@ -117,3 +147,20 @@ class SystemChain(ChainBase, metaclass=Singleton):
    return None
except Exception as err:
    logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
@staticmethod
def get_frontend_version():
"""
Get the local frontend version
"""
version_file = Path(settings.FRONTEND_PATH) / "version.txt"
if version_file.exists():
try:
with open(version_file, 'r') as f:
version = str(f.read()).strip()
return version
except Exception as err:
logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
else:
logger.warn("未找到前端版本文件,请正确设置 FRONTEND_PATH")
return None

View File

@@ -6,6 +6,7 @@ from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -16,7 +17,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""

def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str,
                  with_original_language: str, page: int = 1) -> Optional[List[MediaInfo]]:
    """
    :param mtype: media type
    :param sort_by: sort order
@@ -25,21 +26,17 @@ class TmdbChain(ChainBase, metaclass=Singleton):
    :param page: page number
    :return: list of media info
    """
    return self.run_module("tmdb_discover", mtype=mtype,
                           sort_by=sort_by, with_genres=with_genres,
                           with_original_language=with_original_language,
                           page=page)

def tmdb_trending(self, page: int = 1) -> Optional[List[MediaInfo]]:
    """
    TMDB trending
    :param page: page number
    :return: list of TMDB media info
    """
    return self.run_module("tmdb_trending", page=page)
def tmdb_seasons(self, tmdbid: int) -> List[schemas.TmdbSeason]:
@@ -57,35 +54,35 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_episodes", tmdbid=tmdbid, season=season)
def movie_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
    """
    Query similar movies by TMDB ID
    :param tmdbid: TMDBID
    """
    return self.run_module("tmdb_movie_similar", tmdbid=tmdbid)

def tv_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
    """
    Query similar TV series by TMDB ID
    :param tmdbid: TMDBID
    """
    return self.run_module("tmdb_tv_similar", tmdbid=tmdbid)

def movie_recommend(self, tmdbid: int) -> Optional[List[MediaInfo]]:
    """
    Query recommended movies by TMDB ID
    :param tmdbid: TMDBID
    """
    return self.run_module("tmdb_movie_recommend", tmdbid=tmdbid)

def tv_recommend(self, tmdbid: int) -> Optional[List[MediaInfo]]:
    """
    Query recommended TV series by TMDB ID
    :param tmdbid: TMDBID
    """
    return self.run_module("tmdb_tv_recommend", tmdbid=tmdbid)

def movie_credits(self, tmdbid: int, page: int = 1) -> Optional[List[schemas.MediaPerson]]:
    """
    Query movie cast and crew by TMDB ID
    :param tmdbid: TMDBID
@@ -93,7 +90,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
    """
    return self.run_module("tmdb_movie_credits", tmdbid=tmdbid, page=page)

def tv_credits(self, tmdbid: int, page: int = 1) -> Optional[List[schemas.MediaPerson]]:
    """
    Query TV series cast and crew by TMDB ID
    :param tmdbid: TMDBID
@@ -101,14 +98,14 @@ class TmdbChain(ChainBase, metaclass=Singleton):
    """
    return self.run_module("tmdb_tv_credits", tmdbid=tmdbid, page=page)

def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
    """
    Query person details by TMDB person ID
    :param person_id: person ID
    """
    return self.run_module("tmdb_person_detail", person_id=person_id)

def person_credits(self, person_id: int, page: int = 1) -> Optional[List[MediaInfo]]:
    """
    Query a person's credited works by person ID
    :param person_id: person ID
@@ -117,7 +114,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
return self.run_module("tmdb_person_credits", person_id=person_id, page=page)

@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_random_wallpager(self) -> Optional[str]:
    """
    Get a random wallpaper; cached for 1 hour
    """
@@ -126,6 +123,6 @@ class TmdbChain(ChainBase, metaclass=Singleton):
# pick a random movie that has a backdrop
while True:
    info = random.choice(infos)
    if info and info.backdrop_path:
        return f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{info.backdrop_path}"
return None
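`get_random_wallpager` draws randomly until it hits an entry with a backdrop. A bounded variant of the same selection, with plain dicts standing in for `MediaInfo` (the domain, helper name, and `tries` cap are illustrative assumptions):

```python
import random

def pick_backdrop(infos, domain="image.tmdb.org", tries=20):
    """Randomly draw an item that has a backdrop_path; give up after `tries` draws."""
    for _ in range(tries):
        info = random.choice(infos)
        if info and info.get("backdrop_path"):
            return f"https://{domain}/t/p/original{info['backdrop_path']}"
    return None
```

Bounding the loop avoids spinning forever when no entry in the page has a backdrop.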

View File

@@ -184,6 +184,8 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
logger.info(f'处理资源:{torrent.title} ...')
# recognize metadata
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != meta.org_string:
    logger.info(f'种子名称应用识别词后发生改变:{torrent.title} => {meta.org_string}')
# use the site torrent category to correct the recognized type
if meta.type != MediaType.TV \
        and torrent.category == MediaType.TV.value:
@@ -191,7 +193,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# recognize media info
mediainfo: MediaInfo = self.mediachain.recognize_by_meta(meta)
if not mediainfo:
    logger.warn(f'{torrent.title} 未识别到媒体信息')
    # store empty media info
    mediainfo = MediaInfo()
# clean up redundant data

View File

@@ -278,8 +278,13 @@ class TransferChain(ChainBase):
# fetch episode data
if file_mediainfo.type == MediaType.TV:
    if file_meta.begin_season is None:
        file_meta.begin_season = 1
    file_mediainfo.season = file_mediainfo.season or file_meta.begin_season
    episodes_info = self.tmdbchain.tmdb_episodes(
        tmdbid=file_mediainfo.tmdb_id,
        season=file_mediainfo.season
    )
else:
    episodes_info = None
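The season fallback above first defaults a missing parsed `begin_season` to 1, then lets an already-recognized `mediainfo.season` win. As a tiny pure function (the name is illustrative):

```python
def effective_season(begin_season, mediainfo_season):
    """Default a missing parsed season to 1, but prefer the recognized season."""
    if begin_season is None:
        begin_season = 1
    return mediainfo_season or begin_season
```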
@@ -355,7 +360,8 @@ class TransferChain(ChainBase):
if settings.SCRAP_METADATA:
    self.scrape_metadata(path=transferinfo.target_path,
                         mediainfo=file_mediainfo,
                         transfer_type=transfer_type,
                         metainfo=file_meta)
# update progress
processed_num += 1
self.progress.update(value=processed_num / total_num * 100,
@@ -549,6 +555,7 @@ class TransferChain(ChainBase):
def manual_transfer(self, in_path: Path,
                    target: Path = None,
                    tmdbid: int = None,
                    doubanid: str = None,
                    mtype: MediaType = None,
                    season: int = None,
                    transfer_type: str = None,
@@ -560,6 +567,7 @@ class TransferChain(ChainBase):
:param in_path: source file path
:param target: target path
:param tmdbid: TMDB ID
:param doubanid: Douban ID
:param mtype: media type
:param season: season number
:param transfer_type: transfer method
@@ -569,12 +577,12 @@ class TransferChain(ChainBase):
"""
logger.info(f"手动转移:{in_path} ...")
if tmdbid or doubanid:
    # a TMDB ID or Douban ID was supplied; recognize a single item
    mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
    if not mediainfo:
        return False, f"媒体信息识别失败,tmdbid:{tmdbid},doubanid:{doubanid},type:{mtype.value}"
    # start progress
    self.progress.start(ProgressKey.FileTransfer)
    self.progress.update(value=0,

View File

@@ -13,6 +13,8 @@ class Settings(BaseSettings):
PROJECT_NAME = "MoviePilot"
# API prefix
API_V1_STR: str = "/api/v1"
# frontend assets path
FRONTEND_PATH: str = "/public"
# secret key
SECRET_KEY: str = secrets.token_urlsafe(32)
# allowed hosts
@@ -41,6 +43,8 @@ class Settings(BaseSettings):
WALLPAPER: str = "tmdb"
# network proxy, IP:PORT
PROXY_HOST: Optional[str] = None
# media search sources: themoviedb/douban/bangumi, comma-separated
SEARCH_SOURCE: str = "themoviedb,douban,bangumi"
# media recognition source: themoviedb/douban
RECOGNIZE_SOURCE: str = "themoviedb"
# scraping source: themoviedb/douban
@@ -66,7 +70,7 @@ class Settings(BaseSettings):
'.rmvb', '.avi', '.mov', '.mpeg',
'.mpg', '.wmv', '.3gp', '.asf',
'.m4v', '.flv', '.m2ts', '.strm',
'.tp', '.f4v']
# supported subtitle file extensions
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa', '.sup']
# downloader temp file extensions
@@ -187,6 +191,8 @@ class Settings(BaseSettings):
PLEX_TOKEN: Optional[str] = None
# transfer method: link/copy/move/softlink
TRANSFER_TYPE: str = "copy"
# run CookieCloud as a local service
COOKIECLOUD_ENABLE_LOCAL: Optional[bool] = False
# CookieCloud server address
COOKIECLOUD_HOST: str = "https://movie-pilot.org/cookiecloud"
# CookieCloud user KEY
@@ -230,10 +236,21 @@ class Settings(BaseSettings):
GITHUB_TOKEN: Optional[str] = None
# auto-check and update site resource packages (site indexers, auth, etc.)
AUTO_UPDATE_RESOURCE: bool = True
# metadata recognition cache TTL (hours)
META_CACHE_EXPIRE: int = 0
# resolve domains via DNS-over-HTTPS
DOH_ENABLE: bool = True
# search with multiple names
SEARCH_MULTIPLE_NAME: bool = False
# share subscription statistics
SUBSCRIBE_STATISTIC_SHARE: bool = True
# share plugin installation statistics
PLUGIN_STATISTIC_SHARE: bool = True

@validator("SUBSCRIBE_RSS_INTERVAL",
           "COOKIECLOUD_INTERVAL",
           "MEDIASERVER_SYNC_INTERVAL",
           "META_CACHE_EXPIRE",
           pre=True, always=True)
def convert_int(cls, value):
    if not value:
@@ -272,6 +289,10 @@ class Settings(BaseSettings):
@property
def LOG_PATH(self):
    return self.CONFIG_PATH / "logs"

@property
def COOKIE_PATH(self):
    return self.CONFIG_PATH / "cookies"

@property
def CACHE_CONF(self):
@@ -282,7 +303,7 @@ class Settings(BaseSettings):
    "torrents": 100,
    "douban": 512,
    "fanart": 512,
    "meta": (self.META_CACHE_EXPIRE or 168) * 3600
}
return {
    "tmdb": 256,
@@ -290,7 +311,7 @@ class Settings(BaseSettings):
    "torrents": 50,
    "douban": 256,
    "fanart": 128,
    "meta": (self.META_CACHE_EXPIRE or 72) * 3600
}
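`META_CACHE_EXPIRE` is expressed in hours, with 0 meaning "use the built-in default" (168 h for the large cache profile, 72 h for the small one). The expression is easy to check in isolation (the helper name is illustrative):

```python
def meta_ttl_seconds(expire_hours: int, big_memory: bool) -> int:
    """Hours to seconds, falling back to the profile default when unset (0 or None)."""
    default_hours = 168 if big_memory else 72
    return (expire_hours or default_hours) * 3600
```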
@property
@@ -394,6 +415,9 @@ class Settings(BaseSettings):
with self.LOG_PATH as p: with self.LOG_PATH as p:
if not p.exists(): if not p.exists():
p.mkdir(parents=True, exist_ok=True) p.mkdir(parents=True, exist_ok=True)
with self.COOKIE_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
class Config: class Config:
case_sensitive = True case_sensitive = True

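The new `META_CACHE_EXPIRE` setting above overrides the hard-coded metadata cache lifetime: when it is 0 (the default), the `or` expression falls back to 168 hours on big-memory hosts and 72 hours otherwise. A minimal standalone sketch of that fallback (function name is illustrative, not from the codebase):

```python
def meta_cache_ttl(expire_hours: int, big_memory: bool) -> int:
    """Return the metadata cache TTL in seconds.

    When expire_hours is 0 or unset, fall back to 168 h (big-memory
    profile) or 72 h (small profile), mirroring the diff above.
    """
    default_hours = 168 if big_memory else 72
    # `or` treats 0 as "not configured", exactly like the CACHE_CONF change
    return (expire_hours or default_hours) * 3600
```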
View File

@@ -133,12 +133,16 @@ class TorrentInfo:

 @dataclass
 class MediaInfo:
+    # Source: themoviedb, douban, bangumi
+    source: str = None
     # Type: movie / TV series
     type: MediaType = None
     # Media title
     title: str = None
     # English title
     en_title: str = None
+    # Singapore title
+    sg_title: str = None
     # Year
     year: str = None
     # Season
@@ -151,6 +155,8 @@ class MediaInfo:
     tvdb_id: int = None
     # Douban ID
     douban_id: str = None
+    # Bangumi ID
+    bangumi_id: int = None
     # Original language of the media
     original_language: str = None
     # Original release title
@@ -183,6 +189,8 @@ class MediaInfo:
     tmdb_info: dict = field(default_factory=dict)
     # Douban INFO
     douban_info: dict = field(default_factory=dict)
+    # Bangumi INFO
+    bangumi_info: dict = field(default_factory=dict)
     # Directors
     directors: List[dict] = field(default_factory=list)
     # Actors
@@ -238,6 +246,8 @@ class MediaInfo:
             self.set_tmdb_info(self.tmdb_info)
         if self.douban_info:
             self.set_douban_info(self.douban_info)
+        if self.bangumi_info:
+            self.set_bangumi_info(self.bangumi_info)

     def __setattr__(self, name: str, value: Any):
         self.__dict__[name] = value
@@ -347,6 +357,8 @@ class MediaInfo:
         if not info:
             return
+        # Source
+        self.source = "themoviedb"
         # Raw info
         self.tmdb_info = info
         # Type
@@ -374,6 +386,8 @@ class MediaInfo:
         self.original_language = info.get('original_language')
         # English title
         self.en_title = info.get('en_title')
+        # Singapore title
+        self.sg_title = info.get('sg_title')
         if self.type == MediaType.MOVIE:
             # Title
             self.title = info.get('title')
@@ -430,6 +444,8 @@ class MediaInfo:
         """
         if not info:
             return
+        # Source
+        self.source = "douban"
         # Raw info
         self.douban_info = info
         # Douban ID
@@ -438,10 +454,16 @@ class MediaInfo:
         if not self.type:
             if isinstance(info.get('media_type'), MediaType):
                 self.type = info.get('media_type')
-            elif info.get("type"):
-                self.type = MediaType.MOVIE if info.get("type") == "movie" else MediaType.TV
+            elif info.get("subtype"):
+                self.type = MediaType.MOVIE if info.get("subtype") == "movie" else MediaType.TV
+            elif info.get("target_type"):
+                self.type = MediaType.MOVIE if info.get("target_type") == "movie" else MediaType.TV
             elif info.get("type_name"):
                 self.type = MediaType(info.get("type_name"))
+            elif info.get("uri"):
+                self.type = MediaType.MOVIE if "/movie/" in info.get("uri") else MediaType.TV
+            elif info.get("type") and info.get("type") in ["movie", "tv"]:
+                self.type = MediaType.MOVIE if info.get("type") == "movie" else MediaType.TV
         # Title
         if not self.title:
             self.title = info.get("title")
@@ -454,6 +476,8 @@ class MediaInfo:
         # Year
         if not self.year:
             self.year = info.get("year")[:4] if info.get("year") else None
+        if not self.year and info.get("extra"):
+            self.year = info.get("extra").get("year")
         # Recognize the season from the title
         meta = MetaInfo(info.get("title"))
         # Season
@@ -483,14 +507,24 @@ class MediaInfo:
                 self.release_date = match.group()
         # Poster
         if not self.poster_path:
-            self.poster_path = info.get("pic", {}).get("large")
+            if info.get("pic"):
+                self.poster_path = info.get("pic", {}).get("large")
         if not self.poster_path and info.get("cover_url"):
-            self.poster_path = info.get("cover_url")
+            # imageView2/0/q/80/w/9999/h/120/format/webp -> imageView2/1/w/500/h/750/format/webp
+            self.poster_path = re.sub(r'imageView2/\d/q/\d+/w/\d+/h/\d+/format/webp',
+                                      'imageView2/1/w/500/h/750/format/webp', info.get("cover_url"))
         if not self.poster_path and info.get("cover"):
-            self.poster_path = info.get("cover").get("url")
+            if info.get("cover").get("url"):
+                self.poster_path = info.get("cover").get("url")
+            else:
+                self.poster_path = info.get("cover").get("large", {}).get("url")
         # Overview
         if not self.overview:
             self.overview = info.get("intro") or info.get("card_subtitle") or ""
+        if not self.overview:
+            if info.get("extra", {}).get("info"):
+                extra_info = info.get("extra").get("info")
+                if extra_info:
+                    self.overview = "".join(["".join(item) for item in extra_info])
         # Extract the year from the overview
         if self.overview and not self.year:
             match = re.search(r'\d{4}', self.overview)
@@ -536,6 +570,74 @@ class MediaInfo:
             if not hasattr(self, key):
                 setattr(self, key, value)

+    def set_bangumi_info(self, info: dict):
+        """
+        Initialize Bangumi info
+        """
+        if not info:
+            return
+        # Source
+        self.source = "bangumi"
+        # Raw info
+        self.bangumi_info = info
+        # Bangumi ID
+        self.bangumi_id = info.get("id")
+        # Type
+        if not self.type:
+            self.type = MediaType.TV
+        # Title
+        if not self.title:
+            self.title = info.get("name_cn") or info.get("name")
+        # Original-language title
+        if not self.original_title:
+            self.original_title = info.get("name")
+        # Recognize the season from the title
+        meta = MetaInfo(self.title)
+        # Season
+        if not self.season:
+            self.season = meta.begin_season
+        # Rating
+        if not self.vote_average:
+            rating = info.get("rating")
+            if rating:
+                vote_average = float(rating.get("score"))
+            else:
+                vote_average = 0
+            self.vote_average = vote_average
+        # Release date
+        if not self.release_date:
+            self.release_date = info.get("date") or info.get("air_date")
+        # Year
+        if not self.year:
+            self.year = self.release_date[:4] if self.release_date else None
+        # Poster
+        if not self.poster_path:
+            if info.get("images"):
+                self.poster_path = info.get("images", {}).get("large")
+            if not self.poster_path and info.get("image"):
+                self.poster_path = info.get("image")
+        # Overview
+        if not self.overview:
+            self.overview = info.get("summary")
+        # Aliases
+        if not self.names:
+            infobox = info.get("infobox")
+            if infobox:
+                akas = [item.get("value") for item in infobox if item.get("key") == "别名"]
+                if akas:
+                    self.names = [aka.get("v") for aka in akas[0]]
+        # Episodes
+        if self.type == MediaType.TV and not self.seasons:
+            meta = MetaInfo(self.title)
+            season = meta.begin_season or 1
+            episodes_count = info.get("total_episodes")
+            if episodes_count:
+                self.seasons[season] = list(range(1, episodes_count + 1))
+        # Actors
+        if not self.actors:
+            self.actors = info.get("actors") or []

     @property
     def title_year(self):
         if self.title:
@@ -554,6 +656,8 @@ class MediaInfo:
             return "https://www.themoviedb.org/tv/%s" % self.tmdb_id
         elif self.douban_id:
             return "https://movie.douban.com/subject/%s" % self.douban_id
+        elif self.bangumi_id:
+            return "http://bgm.tv/subject/%s" % self.bangumi_id
         return ""

     @property
@@ -615,6 +719,9 @@ class MediaInfo:
         dicts["type"] = self.type.value if self.type else None
         dicts["detail_link"] = self.detail_link
         dicts["title_year"] = self.title_year
+        dicts["tmdb_info"] = None
+        dicts["douban_info"] = None
+        dicts["bangumi_info"] = None
         return dicts

     def clear(self):
@@ -623,6 +730,7 @@ class MediaInfo:
         """
         self.tmdb_info = {}
         self.douban_info = {}
+        self.bangumi_info = {}
         self.seasons = {}
         self.genres = []
         self.season_info = []

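The Douban poster change above no longer uses `cover_url` verbatim: it rewrites the thumbnail `imageView2` parameters into 500x750 poster parameters with `re.sub`, per the inline comment in the diff. A small standalone sketch of that rewrite (function name and example host are illustrative):

```python
import re

# Pattern and replacement copied from the diff above
THUMB_RE = r'imageView2/\d/q/\d+/w/\d+/h/\d+/format/webp'
POSTER_PARAMS = 'imageView2/1/w/500/h/750/format/webp'

def douban_poster(cover_url: str) -> str:
    """Rewrite Douban thumbnail parameters into full poster parameters."""
    return re.sub(THUMB_RE, POSTER_PARAMS, cover_url)
```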
View File

@@ -16,7 +16,7 @@ class MetaAnime(MetaBase):
     Recognize anime
     """
     _anime_no_words = ['CHS&CHT', 'MP4', 'GB MP4', 'WEB-DL']
-    _name_nostring_re = r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}"
+    _name_nostring_re = r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}|\s+GB"

     def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
         super().__init__(title, subtitle, isfile)
@@ -32,8 +32,6 @@ class MetaAnime(MetaBase):
         if anitopy_info:
             # Name
             name = anitopy_info.get("anime_title")
-            if name and name.find("/") != -1:
-                name = name.split("/")[-1].strip()
             if not name or name in self._anime_no_words or (len(name) < 5 and not StringUtils.is_chinese(name)):
                 anitopy_info = anitopy.parse("[ANIME]" + title)
                 if anitopy_info:
@@ -44,23 +42,41 @@ class MetaAnime(MetaBase):
                     name = name_match.group(1).strip()
             # Split the Chinese and English names
             if name:
-                lastword_type = ""
-                for word in name.split():
-                    if not word:
-                        continue
-                    if word.endswith(']'):
-                        word = word[:-1]
-                    if word.isdigit():
-                        if lastword_type == "cn":
-                            self.cn_name = "%s %s" % (self.cn_name or "", word)
-                        elif lastword_type == "en":
-                            self.en_name = "%s %s" % (self.en_name or "", word)
-                    elif StringUtils.is_chinese(word):
-                        self.cn_name = "%s %s" % (self.cn_name or "", word)
-                        lastword_type = "cn"
-                    else:
-                        self.en_name = "%s %s" % (self.en_name or "", word)
-                        lastword_type = "en"
+                _split_flag = True
+                # Split Chinese/English on /
+                if name.find("/") != -1:
+                    names = name.split("/")
+                    if StringUtils.is_chinese(names[0]):
+                        self.cn_name = names[0]
+                        if len(names) > 1:
+                            self.en_name = names[1]
+                        _split_flag = False
+                    elif StringUtils.is_chinese(names[-1]):
+                        self.cn_name = names[-1]
+                        if len(names) > 1:
+                            self.en_name = names[0]
+                        _split_flag = False
+                    else:
+                        name = names[-1]
+                # Split Chinese/English word by word
+                if _split_flag:
+                    lastword_type = ""
+                    for word in name.split():
+                        if not word:
+                            continue
+                        if word.endswith(']'):
+                            word = word[:-1]
+                        if word.isdigit():
+                            if lastword_type == "cn":
+                                self.cn_name = "%s %s" % (self.cn_name or "", word)
+                            elif lastword_type == "en":
+                                self.en_name = "%s %s" % (self.en_name or "", word)
+                        elif StringUtils.is_chinese(word):
+                            self.cn_name = "%s %s" % (self.cn_name or "", word)
+                            lastword_type = "cn"
+                        else:
+                            self.en_name = "%s %s" % (self.en_name or "", word)
+                            lastword_type = "en"
             if self.cn_name:
                 _, self.cn_name, _, _, _, _ = StringUtils.get_keyword(self.cn_name)
                 if self.cn_name:

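The rewritten anime-title logic above first tries to split a `中文名/English Name` (or the reverse order) on `/` before falling back to word-by-word classification. A self-contained sketch of the slash split, with a simplified CJK check standing in for `StringUtils.is_chinese` (an assumption; the real helper may differ):

```python
def is_chinese(text: str) -> bool:
    # Simplified stand-in for StringUtils.is_chinese: any CJK char counts
    return any('\u4e00' <= ch <= '\u9fff' for ch in text)

def split_cn_en(name: str):
    """Split a 'Chinese/English' or 'English/Chinese' title on '/'.

    Returns (cn_name, en_name), or (None, None) when neither side is Chinese.
    """
    if "/" not in name:
        return None, None
    names = name.split("/")
    if is_chinese(names[0]):
        return names[0], (names[1] if len(names) > 1 else None)
    if is_chinese(names[-1]):
        return names[-1], (names[0] if len(names) > 1 else None)
    return None, None
```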
View File

@@ -67,16 +67,18 @@ class MetaBase(object):
     # Subtitle parsing
     _subtitle_flag = False
+    _title_episodel_re = r"Episode\s+(\d{1,4})"
     _subtitle_season_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季(?!\s*[全共])"
     _subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季\s*全"
-    _subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期](?!\s*[全共])"
-    _subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
+    _subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP]+)\s*[集话話期](?!\s*[全共])"
+    _subtitle_episode_between_re = r"[第]*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]?\s*-\s*第*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
+    _subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"

     def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
         if not title:
             return
-        self.org_string = title
-        self.subtitle = subtitle
+        self.org_string = title.strip() if title else None
+        self.subtitle = subtitle.strip() if subtitle else None
         self.isfile = isfile

     @property
@@ -110,7 +112,39 @@ class MetaBase(object):
         if not title_text:
             return
         title_text = f" {title_text} "
-        if re.search(r'[全第季集话話期]', title_text, re.IGNORECASE):
+        if re.search(r"%s" % self._title_episodel_re, title_text, re.IGNORECASE):
+            episode_str = re.search(r'%s' % self._title_episodel_re, title_text, re.IGNORECASE)
+            if episode_str:
+                try:
+                    episode = int(episode_str.group(1))
+                except Exception as err:
+                    logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
+                    return
+                if episode >= 10000:
+                    return
+                if self.begin_episode is None:
+                    self.begin_episode = episode
+                    self.total_episode = 1
+                self.type = MediaType.TV
+                self._subtitle_flag = True
+        elif re.search(r'[全第季集话話期幕]', title_text, re.IGNORECASE):
+            # 全x季 / x季全 (all x seasons)
+            season_all_str = re.search(r"%s" % self._subtitle_season_all_re, title_text, re.IGNORECASE)
+            if season_all_str:
+                season_all = season_all_str.group(1)
+                if not season_all:
+                    season_all = season_all_str.group(2)
+                if season_all and self.begin_season is None and self.begin_episode is None:
+                    try:
+                        self.total_season = int(cn2an.cn2an(season_all.strip(), mode='smart'))
+                    except Exception as err:
+                        logger.debug(f'识别季失败:{str(err)} - {traceback.format_exc()}')
+                        return
+                    self.begin_season = 1
+                    self.end_season = self.total_season
+                    self.type = MediaType.TV
+                    self._subtitle_flag = True
+                return
             # 第x季 (season x)
             season_str = re.search(r'%s' % self._subtitle_season_re, title_text, re.IGNORECASE)
             if season_str:
@@ -146,6 +180,37 @@ class MetaBase(object):
                     self.total_season = (self.end_season - self.begin_season) + 1
                 self.type = MediaType.TV
                 self._subtitle_flag = True
+            # 第x-x集 / 第x集-x集 (episode range)
+            episode_between_str = re.search(r'%s' % self._subtitle_episode_between_re, title_text, re.IGNORECASE)
+            if episode_between_str:
+                episodes = episode_between_str.groups()
+                if episodes:
+                    begin_episode = episodes[0]
+                    end_episode = episodes[1]
+                else:
+                    return
+                try:
+                    begin_episode = int(cn2an.cn2an(begin_episode.strip(), mode='smart'))
+                    end_episode = int(cn2an.cn2an(end_episode.strip(), mode='smart'))
+                except Exception as err:
+                    logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
+                    return
+                if begin_episode and begin_episode >= 10000:
+                    return
+                if end_episode and end_episode >= 10000:
+                    return
+                if self.begin_episode is None and isinstance(begin_episode, int):
+                    self.begin_episode = begin_episode
+                    self.total_episode = 1
+                if self.begin_episode is not None \
+                        and self.end_episode is None \
+                        and isinstance(end_episode, int) \
+                        and end_episode != self.begin_episode:
+                    self.end_episode = end_episode
+                    self.total_episode = (self.end_episode - self.begin_episode) + 1
+                self.type = MediaType.TV
+                self._subtitle_flag = True
+                return
             # 第x集 (episode x)
             episode_str = re.search(r'%s' % self._subtitle_episode_re, title_text, re.IGNORECASE)
             if episode_str:
@@ -181,6 +246,7 @@ class MetaBase(object):
                     self.total_episode = (self.end_episode - self.begin_episode) + 1
                 self.type = MediaType.TV
                 self._subtitle_flag = True
+                return
             # x集全 (all x episodes)
             episode_all_str = re.search(r'%s' % self._subtitle_episode_all_re, title_text, re.IGNORECASE)
             if episode_all_str:
@@ -197,22 +263,7 @@ class MetaBase(object):
                     self.end_episode = None
                 self.type = MediaType.TV
                 self._subtitle_flag = True
-        # 全x季 / x季全 (moved above, now handled first)
-        season_all_str = re.search(r"%s" % self._subtitle_season_all_re, title_text, re.IGNORECASE)
-        if season_all_str:
-            season_all = season_all_str.group(1)
-            if not season_all:
-                season_all = season_all_str.group(2)
-            if season_all and self.begin_season is None and self.begin_episode is None:
-                try:
-                    self.total_season = int(cn2an.cn2an(season_all.strip(), mode='smart'))
-                except Exception as err:
-                    logger.debug(f'识别季失败:{str(err)} - {traceback.format_exc()}')
-                    return
-                self.begin_season = 1
-                self.end_season = self.total_season
-                self.type = MediaType.TV
-                self._subtitle_flag = True
+                return

     @property
     def season(self) -> str:
@@ -240,7 +291,7 @@ class MetaBase(object):
             return self.season
         else:
             return ""

     @property
     def season_seq(self) -> str:
         """
@@ -283,7 +334,7 @@ class MetaBase(object):
                            str(self.end_episode).rjust(2, "0"))
         else:
             return ""

     @property
     def episode_list(self) -> List[int]:
         """
@@ -481,7 +532,7 @@ class MetaBase(object):
             self.end_episode = end
         if self.begin_episode and self.end_episode:
             self.total_episode = (self.end_episode - self.begin_episode) + 1

     def merge(self, meta: Self):
         """
         Merge Meta info
         """
@@ -499,13 +550,13 @@ class MetaBase(object):
             self.year = meta.year
         # Season
         if (self.type == MediaType.TV
-                and not self.season):
+                and self.begin_season is None):
             self.begin_season = meta.begin_season
             self.end_season = meta.end_season
             self.total_season = meta.total_season
         # Begin episode
         if (self.type == MediaType.TV
-                and not self.episode):
+                and self.begin_episode is None):
             self.begin_episode = meta.begin_episode
             self.end_episode = meta.end_episode
             self.total_episode = meta.total_episode

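The new `_subtitle_episode_between_re` above matches episode ranges written as 第1-12集 or 第一集-第十二集 in subtitles. A quick sketch of the match itself, with the regex copied verbatim from the diff (the real code then converts Chinese numerals with `cn2an`, which is omitted here):

```python
import re

# Copied from _subtitle_episode_between_re in the diff above
EPISODE_BETWEEN_RE = (r"[第]*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]?"
                      r"\s*-\s*第*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]")

def parse_episode_range(subtitle: str):
    """Return the (begin, end) episode strings from phrases like 第1-12集."""
    m = re.search(EPISODE_BETWEEN_RE, subtitle)
    if not m:
        return None
    return m.group(1), m.group(2)
```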
View File

@@ -1,13 +1,16 @@
 import re
 from pathlib import Path
+from typing import Optional
+
+from Pinyin2Hanzi import is_pinyin

 from app.core.config import settings
 from app.core.meta.customization import CustomizationMatcher
 from app.core.meta.metabase import MetaBase
 from app.core.meta.releasegroup import ReleaseGroupsMatcher
+from app.schemas.types import MediaType
 from app.utils.string import StringUtils
 from app.utils.tokens import Tokens
-from app.schemas.types import MediaType

 class MetaVideo(MetaBase):
@@ -24,14 +27,14 @@ class MetaVideo(MetaBase):
     _source = ""
     _effect = []
     # Regex section
-    _season_re = r"S(\d{2})|^S(\d{1,2})$|S(\d{1,2})E"
+    _season_re = r"S(\d{3})|^S(\d{1,3})$|S(\d{1,3})E"
     _episode_re = r"EP?(\d{2,4})$|^EP?(\d{1,4})$|^S\d{1,2}EP?(\d{1,4})$|S\d{2}EP?(\d{2,4})"
     _part_re = r"(^PART[0-9ABI]{0,2}$|^CD[0-9]{0,2}$|^DVD[0-9]{0,2}$|^DISK[0-9]{0,2}$|^DISC[0-9]{0,2}$)"
     _roman_numerals = r"^(?=[MDCLXVI])M*(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
     _source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$"
     _effect_re = r"^REMUX$|^UHD$|^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$"
     _resources_type_re = r"%s|%s" % (_source_re, _effect_re)
-    _name_no_begin_re = r"^\[.+?]"
+    _name_no_begin_re = r"^[\[【].+?[\]】]"
     _name_no_chinese_re = r".*版|.*字幕"
     _name_se_words = ['', '', '', '', '', '', '']
     _name_movie_words = ['剧场版', '劇場版', '电影版', '電影版']
@@ -39,19 +42,25 @@ class MetaVideo(MetaBase):
                       r"|HBO$|\s+HBO|\d{1,2}th|\d{1,2}bit|NETFLIX|AMAZON|IMAX|^3D|\s+3D|^BBC\s+|\s+BBC|BBC$|DISNEY\+?|XXX|\s+DC$" \
                       r"|[第\s共]+[0-9一二三四五六七八九十\-\s]+季" \
                       r"|[第\s共]+[0-9一二三四五六七八九十百零\-\s]+[集话話]" \
-                      r"|连载|日剧|美剧|电视剧|动画片|动漫|欧美|西德|日韩|超高清|高清|蓝光|翡翠台|梦幻天堂·龙网|★?\d*月?新番" \
+                      r"|连载|日剧|美剧|电视剧|动画片|动漫|欧美|西德|日韩|超高清|高清|无水印|下载|蓝光|翡翠台|梦幻天堂·龙网|★?\d*月?新番" \
-                      r"|最终季|合集|[多中国英葡法俄日韩德意西印泰台港粤双文语简繁体特效内封官译外挂]+字幕|版本|出品|台版|港版|\w+字幕组" \
+                      r"|最终季|合集|[多中国英葡法俄日韩德意西印泰台港粤双文语简繁体特效内封官译外挂]+字幕|版本|出品|台版|港版|\w+字幕组|\w+字幕社" \
                       r"|未删减版|UNCUT$|UNRATE$|WITH EXTRAS$|RERIP$|SUBBED$|PROPER$|REPACK$|SEASON$|EPISODE$|Complete$|Extended$|Extended Version$" \
                       r"|S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}" \
                       r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]" \
                       r"|[248]K|\d{3,4}[PIX]+" \
-                      r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]"
+                      r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]|\s+GB"
     _resources_pix_re = r"^[SBUHD]*(\d{3,4}[PI]+)|\d{3,4}X(\d{3,4})"
     _resources_pix_re2 = r"(^[248]+K)"
     _video_encode_re = r"^[HX]26[45]$|^AVC$|^HEVC$|^VC\d?$|^MPEG\d?$|^Xvid$|^DivX$|^HDR\d*$"
     _audio_encode_re = r"^DTS\d?$|^DTSHD$|^DTSHDMA$|^Atmos$|^TrueHD\d?$|^AC3$|^\dAudios?$|^DDP\d?$|^DD\d?$|^LPCM\d?$|^AAC\d?$|^FLAC\d?$|^HD\d?$|^MA\d?$"

     def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
+        """
+        Initialize
+        :param title: title; for files, the extension has already been stripped
+        :param subtitle: subtitle
+        :param isfile: whether this is a file name
+        """
         super().__init__(title, subtitle, isfile)
         if not title:
             return
@@ -59,11 +68,10 @@ class MetaVideo(MetaBase):
         self._source = ""
         self._effect = []
         # Check for pure-numeric naming
-        title_path = Path(title)
-        if title_path.suffix.lower() in settings.RMT_MEDIAEXT \
-                and title_path.stem.isdigit() \
-                and len(title_path.stem) < 5:
-            self.begin_episode = int(title_path.stem)
+        if isfile \
+                and title.isdigit() \
+                and len(title) < 5:
+            self.begin_episode = int(title)
             self.type = MediaType.TV
             return
         # Strip the content of the first [] in the name
@@ -130,12 +138,47 @@ class MetaVideo(MetaBase):
         # Handle part
         if self.part and self.part.upper() == "PART":
             self.part = None
+        # No Chinese title: try to get one from the description
+        if not self.cn_name and self.en_name and self.subtitle:
+            if self.__is_pinyin(self.en_name):
+                # The English name is pinyin
+                cn_name = self.__get_title_from_description(self.subtitle)
+                if cn_name and len(cn_name) == len(self.en_name.split()):
+                    # Same number of characters as pinyin words: treat it as the Chinese name
+                    self.cn_name = cn_name
         # Release group / subtitle group
         self.resource_team = ReleaseGroupsMatcher().match(title=original_title) or None
         # Custom placeholders
         self.customization = CustomizationMatcher().match(title=original_title) or None

+    @staticmethod
+    def __get_title_from_description(description: str) -> Optional[str]:
+        """
+        Extract the title from the description
+        """
+        if not description:
+            return None
+        titles = re.split(r'[\s/|]+', description)
+        if StringUtils.is_chinese(titles[0]):
+            return titles[0]
+        return None
+
+    @staticmethod
+    def __is_pinyin(name_str: str) -> bool:
+        """
+        Check whether the string is pinyin
+        """
+        if not name_str:
+            return False
+        for n in name_str.lower().split():
+            if not is_pinyin(n):
+                return False
+        return True

     def __fix_name(self, name: str):
+        """
+        Strip unwanted noise characters from the name
+        """
         if not name:
             return name
         name = re.sub(r'%s' % self._name_nostring_re, '', name,
@@ -157,6 +200,9 @@ class MetaVideo(MetaBase):
         return name

     def __init_name(self, token: str):
+        """
+        Recognize the name
+        """
         if not token:
             return
         # Recycle the title
@@ -250,6 +296,9 @@ class MetaVideo(MetaBase):
             self._last_token_type = "enname"

     def __init_part(self, token: str):
+        """
+        Recognize Part
+        """
         if not self.name:
             return
         if not self.year \
@@ -273,6 +322,9 @@ class MetaVideo(MetaBase):
             # self._stop_name_flag = False

     def __init_year(self, token: str):
+        """
+        Recognize the year
+        """
         if not self.name:
             return
         if not token.isdigit():
@@ -295,6 +347,9 @@ class MetaVideo(MetaBase):
             self._stop_name_flag = True

     def __init_resource_pix(self, token: str):
+        """
+        Recognize the resolution
+        """
         if not self.name:
             return
         re_res = re.findall(r"%s" % self._resources_pix_re, token, re.IGNORECASE)
@@ -331,6 +386,9 @@ class MetaVideo(MetaBase):
                 self.resource_pix = re_res.group(1).lower()

     def __init_season(self, token: str):
+        """
+        Recognize the season
+        """
         re_res = re.findall(r"%s" % self._season_re, token, re.IGNORECASE)
         if re_res:
             self._last_token_type = "season"
@@ -380,6 +438,9 @@ class MetaVideo(MetaBase):
                 self.begin_season = 1

     def __init_episode(self, token: str):
+        """
+        Recognize the episode
+        """
         re_res = re.findall(r"%s" % self._episode_re, token, re.IGNORECASE)
         if re_res:
             self._last_token_type = "episode"
@@ -450,6 +511,9 @@ class MetaVideo(MetaBase):
             self._last_token_type = "EPISODE"

     def __init_resource_type(self, token):
+        """
+        Recognize the resource type
+        """
         if not self.name:
             return
         source_res = re.search(r"(%s)" % self._source_re, token, re.IGNORECASE)
@@ -488,6 +552,9 @@ class MetaVideo(MetaBase):
                 self._last_token = effect.upper()

     def __init_video_encode(self, token: str):
+        """
+        Recognize the video codec
+        """
         if not self.name:
             return
         if not self.year \
@@ -528,6 +595,9 @@ class MetaVideo(MetaBase):
             self.video_encode = f"{self.video_encode} 10bit"

     def __init_audio_encode(self, token: str):
+        """
+        Recognize the audio codec
+        """
         if not self.name:
             return
         if not self.year \

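The new `__get_title_from_description` helper above takes the first whitespace/slash/pipe-delimited chunk of the subtitle and accepts it only when it is Chinese. A standalone sketch (the CJK check below is a simplified stand-in for `StringUtils.is_chinese`, an assumption on my part):

```python
import re

def is_chinese(text: str) -> bool:
    # Simplified stand-in for StringUtils.is_chinese
    return any('\u4e00' <= ch <= '\u9fff' for ch in text)

def title_from_description(description: str):
    """Return the first chunk of the description when it is a Chinese title."""
    if not description:
        return None
    titles = re.split(r'[\s/|]+', description)
    if is_chinese(titles[0]):
        return titles[0]
    return None
```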
View File

@@ -1,4 +1,3 @@
-import traceback
 from typing import List, Tuple

 import cn2an
@@ -26,7 +25,7 @@ class WordsMatcher(metaclass=Singleton):
         # Read custom identifier words
         words: List[str] = self.systemconfig.get(SystemConfigKey.CustomIdentifiers) or []
         for word in words:
-            if not word or word.find('#') == 0:
+            if not word or word.startswith("#"):
                 continue
             try:
                 if word.count(" => ") and word.count(" && ") and word.count(" >> ") and word.count(" <> "):
@@ -54,17 +53,18 @@ class WordsMatcher(metaclass=Singleton):
                     strings = word.split(" <> ")
                     offsets = strings[1].split(" >> ")
                     strings[1] = offsets[0]
-                    title, message, state = self.__episode_offset(title, strings[0], strings[1],
-                                                                  offsets[1])
+                    title, message, state = self.__episode_offset(title, strings[0], strings[1], offsets[1])
                 else:
                     # Blocked word
+                    if not word.strip():
+                        continue
                     title, message, state = self.__replace_regex(title, word, "")
                 if state:
                     appley_words.append(word)
             except Exception as err:
-                logger.error(f"自定义识别词预处理标题失败:{str(err)} - {traceback.format_exc()}")
+                logger.warn(f"自定义识别词 {word} 预处理标题失败:{str(err)} - 标题:{title}")

         return title, appley_words
@@ -79,7 +79,7 @@ class WordsMatcher(metaclass=Singleton):
             else:
                 return re.sub(r'%s' % replaced, r'%s' % replace, title), "", True
         except Exception as err:
-            logger.error(f"自定义识别词正则替换失败:{str(err)} - {traceback.format_exc()}")
+            logger.warn(f"自定义识别词正则替换失败:{str(err)} - 标题:{title},被替换词:{replaced},替换词:{replace}")
             return title, str(err), False

     @staticmethod
@@ -131,5 +131,5 @@ class WordsMatcher(metaclass=Singleton):
             title = re.sub(episode_offset_re, r'%s' % episode_num[1], title)
             return title, "", True
         except Exception as err:
-            logger.error(f"自定义识别词集数偏移失败:{str(err)} - {traceback.format_exc()}")
+            logger.warn(f"自定义识别词集数偏移失败:{str(err)} - 标题:{title},前定位词:{front},后定位词:{back},偏移量:{offset}")
             return title, str(err), False

View File

@@ -1,3 +1,4 @@
import logging
from pathlib import Path from pathlib import Path
from typing import Tuple from typing import Tuple
@@ -6,6 +7,7 @@ import regex as re
from app.core.config import settings from app.core.config import settings
from app.core.meta import MetaAnime, MetaVideo, MetaBase from app.core.meta import MetaAnime, MetaVideo, MetaBase
from app.core.meta.words import WordsMatcher from app.core.meta.words import WordsMatcher
from app.log import logger
from app.schemas.types import MediaType from app.schemas.types import MediaType
@@ -25,6 +27,8 @@ def MetaInfo(title: str, subtitle: str = None) -> MetaBase:
# 判断是否处理文件 # 判断是否处理文件
if title and Path(title).suffix.lower() in settings.RMT_MEDIAEXT: if title and Path(title).suffix.lower() in settings.RMT_MEDIAEXT:
isfile = True isfile = True
# 去掉后缀
title = Path(title).stem
else: else:
isfile = False isfile = False
# 识别 # 识别
@@ -35,9 +39,12 @@ def MetaInfo(title: str, subtitle: str = None) -> MetaBase:
meta.apply_words = apply_words or [] meta.apply_words = apply_words or []
# 修正媒体信息 # 修正媒体信息
if metainfo.get('tmdbid'): if metainfo.get('tmdbid'):
meta.tmdbid = metainfo['tmdbid'] try:
meta.tmdbid = int(metainfo['tmdbid'])
except ValueError as _:
logger.warn("tmdbid 必须是数字")
if metainfo.get('doubanid'): if metainfo.get('doubanid'):
meta.tmdbid = metainfo['doubanid'] meta.doubanid = metainfo['doubanid']
if metainfo.get('type'): if metainfo.get('type'):
meta.type = metainfo['type'] meta.type = metainfo['type']
if metainfo.get('begin_season'): if metainfo.get('begin_season'):
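The change above hardens the `tmdbid` assignment: user-supplied values go through `int()` and invalid input is logged instead of raising. The same defensive coercion, as a standalone hypothetical helper:

```python
def coerce_tmdbid(value):
    """Coerce a user-supplied tmdbid to int, mirroring the defensive
    parse above; returns None (where the app logs a warning) on bad
    input. Hypothetical helper, not part of the project API."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return None
```

Non-numeric identifiers such as an IMDb-style `"tt0137523"`, or a missing value, fall through to `None` rather than crashing metadata recognition.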
@@ -61,7 +68,7 @@ def MetaInfoPath(path: Path) -> MetaBase:
:param path: 路径 :param path: 路径
""" """
# 文件元数据,不包含后缀 # 文件元数据,不包含后缀
file_meta = MetaInfo(title=path.stem) file_meta = MetaInfo(title=path.name)
# 上级目录元数据 # 上级目录元数据
dir_meta = MetaInfo(title=path.parent.name) dir_meta = MetaInfo(title=path.parent.name)
# 合并元数据 # 合并元数据


@@ -78,9 +78,12 @@ class ModuleManager(metaclass=Singleton):
if not setting: if not setting:
return True return True
switch, value = setting switch, value = setting
if getattr(settings, switch) and value is True: option = getattr(settings, switch)
if not option:
return False
if option and value is True:
return True return True
if value in getattr(settings, switch): if value in option:
return True return True
return False return False


@@ -1,6 +1,9 @@
import concurrent
import concurrent.futures
import traceback import traceback
from typing import List, Any, Dict, Tuple from typing import List, Any, Dict, Tuple, Optional
from app import schemas
from app.core.config import settings from app.core.config import settings
from app.core.event import eventmanager from app.core.event import eventmanager
from app.db.systemconfig_oper import SystemConfigOper from app.db.systemconfig_oper import SystemConfigOper
@@ -32,8 +35,6 @@ class PluginManager(metaclass=Singleton):
self.siteshelper = SitesHelper() self.siteshelper = SitesHelper()
self.pluginhelper = PluginHelper() self.pluginhelper = PluginHelper()
self.systemconfig = SystemConfigOper() self.systemconfig = SystemConfigOper()
self.install_online_plugin()
self.init_config()
def init_config(self): def init_config(self):
# 停止已有插件 # 停止已有插件
@@ -41,23 +42,24 @@ class PluginManager(metaclass=Singleton):
# 启动插件 # 启动插件
self.start() self.start()
def start(self): def start(self, pid: str = None):
""" """
启动加载插件 启动加载插件
:param pid: 插件ID,为空时加载所有插件
""" """
# 扫描插件目录 # 扫描插件目录
plugins = ModuleHelper.load( plugins = ModuleHelper.load(
"app.plugins", "app.plugins",
filter_func=lambda _, obj: hasattr(obj, 'init_plugin') filter_func=lambda _, obj: hasattr(obj, 'init_plugin') and hasattr(obj, "plugin_name")
) )
# 已安装插件 # 已安装插件
installed_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or [] installed_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 排序 # 排序
plugins.sort(key=lambda x: x.plugin_order if hasattr(x, "plugin_order") else 0) plugins.sort(key=lambda x: x.plugin_order if hasattr(x, "plugin_order") else 0)
self._running_plugins = {}
self._plugins = {}
for plugin in plugins: for plugin in plugins:
plugin_id = plugin.__name__ plugin_id = plugin.__name__
if pid and plugin_id != pid:
continue
try: try:
# 存储Class # 存储Class
self._plugins[plugin_id] = plugin self._plugins[plugin_id] = plugin
@@ -81,9 +83,11 @@ class PluginManager(metaclass=Singleton):
except Exception as err: except Exception as err:
logger.error(f"加载插件 {plugin_id} 出错:{str(err)} - {traceback.format_exc()}") logger.error(f"加载插件 {plugin_id} 出错:{str(err)} - {traceback.format_exc()}")
def reload_plugin(self, plugin_id: str, conf: dict): def init_plugin(self, plugin_id: str, conf: dict):
""" """
重新加载插件 初始化插件
:param plugin_id: 插件ID
:param conf: 插件配置
""" """
if not self._running_plugins.get(plugin_id): if not self._running_plugins.get(plugin_id):
return return
@@ -95,21 +99,57 @@ class PluginManager(metaclass=Singleton):
# 设置事件状态为不可用 # 设置事件状态为不可用
eventmanager.disable_events_hander(plugin_id) eventmanager.disable_events_hander(plugin_id)
def stop(self): def stop(self, pid: str = None):
""" """
停止 停止插件服务
:param pid: 插件ID,为空时停止所有插件
""" """
# 停止所有插件 # 停止插件
for plugin in self._running_plugins.values(): for plugin_id, plugin in self._running_plugins.items():
# 关闭数据库 if pid and plugin_id != pid:
if hasattr(plugin, "close"): continue
plugin.close() self.__stop_plugin(plugin)
# 关闭插件
if hasattr(plugin, "stop_service"):
plugin.stop_service()
# 清空对像 # 清空对像
self._plugins = {} if pid:
self._running_plugins = {} # 清空指定插件
if pid in self._running_plugins:
self._running_plugins.pop(pid)
if pid in self._plugins:
self._plugins.pop(pid)
else:
# 清空
self._plugins = {}
self._running_plugins = {}
@staticmethod
def __stop_plugin(plugin: Any):
"""
停止插件
:param plugin: 插件实例
"""
# 关闭数据库
if hasattr(plugin, "close"):
plugin.close()
# 关闭插件
if hasattr(plugin, "stop_service"):
plugin.stop_service()
def remove_plugin(self, plugin_id: str):
"""
从内存中移除一个插件
:param plugin_id: 插件ID
"""
self.stop(plugin_id)
def reload_plugin(self, plugin_id: str):
"""
将一个插件重新加载到内存
:param plugin_id: 插件ID
"""
# 先移除
self.stop(plugin_id)
# 重新加载
self.start(plugin_id)
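With `start`/`stop` now accepting an optional plugin ID, reload becomes stop-then-start for a single plugin, while `stop(None)` still clears everything. A minimal sketch of that registry pattern (a factory dict stands in for the real module scanning, so all names here are illustrative):

```python
class PluginRegistry:
    """Minimal sketch of the stop/start-by-id pattern adopted above:
    reload = stop(pid) + start(pid), with stop(None) clearing all."""

    def __init__(self):
        self._running = {}

    def start(self, pid=None, factory=None):
        # In the real manager, plugins are discovered from the package;
        # here a {plugin_id: constructor} dict stands in for scanning.
        for plugin_id, make in (factory or {}).items():
            if pid and plugin_id != pid:
                continue
            self._running[plugin_id] = make()

    def stop(self, pid=None):
        # Stop one plugin, or everything when no ID is given
        if pid:
            self._running.pop(pid, None)
        else:
            self._running.clear()

    def reload(self, pid, factory):
        self.stop(pid)
        self.start(pid, factory)
```

The key design point is that `pid` filtering happens inside the shared loops, so single-plugin and all-plugin paths cannot drift apart.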
def install_online_plugin(self): def install_online_plugin(self):
""" """
@@ -117,32 +157,33 @@ class PluginManager(metaclass=Singleton):
""" """
if SystemUtils.is_frozen(): if SystemUtils.is_frozen():
return return
logger.info("开始安装在线插件...") logger.info("开始安装第三方插件...")
# 已安装插件 # 已安装插件
install_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or [] install_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 在线插件 # 在线插件
online_plugins = self.get_online_plugins() online_plugins = self.get_online_plugins()
if not online_plugins: if not online_plugins:
logger.error("未获取到在线插件") logger.error("未获取到第三方插件")
return return
# 支持更新的插件自动更新 # 支持更新的插件自动更新
for plugin in online_plugins: for plugin in online_plugins:
# 只处理已安装的插件 # 只处理已安装的插件
if plugin.get("id") in install_plugins and not self.is_plugin_exists(plugin.get("id")): if plugin.id in install_plugins and not self.is_plugin_exists(plugin.id):
# 下载安装 # 下载安装
state, msg = self.pluginhelper.install(pid=plugin.get("id"), state, msg = self.pluginhelper.install(pid=plugin.id,
repo_url=plugin.get("repo_url")) repo_url=plugin.repo_url)
# 安装失败 # 安装失败
if not state: if not state:
logger.error( logger.error(
f"插件 {plugin.get('plugin_name')} v{plugin.get('plugin_version')} 安装失败:{msg}") f"插件 {plugin.plugin_name} v{plugin.plugin_version} 安装失败:{msg}")
continue continue
logger.info(f"插件 {plugin.get('plugin_name')} 安装成功,版本:{plugin.get('plugin_version')}") logger.info(f"插件 {plugin.plugin_name} 安装成功,版本:{plugin.plugin_version}")
logger.info("在线插件安装完成") logger.info("第三方插件安装完成")
def get_plugin_config(self, pid: str) -> dict: def get_plugin_config(self, pid: str) -> dict:
""" """
获取插件配置 获取插件配置
:param pid: 插件ID
""" """
if not self._plugins.get(pid): if not self._plugins.get(pid):
return {} return {}
@@ -155,6 +196,8 @@ class PluginManager(metaclass=Singleton):
def save_plugin_config(self, pid: str, conf: dict) -> bool: def save_plugin_config(self, pid: str, conf: dict) -> bool:
""" """
保存插件配置 保存插件配置
:param pid: 插件ID
:param conf: 配置
""" """
if not self._plugins.get(pid): if not self._plugins.get(pid):
return False return False
@@ -163,6 +206,7 @@ class PluginManager(metaclass=Singleton):
def delete_plugin_config(self, pid: str) -> bool: def delete_plugin_config(self, pid: str) -> bool:
""" """
删除插件配置 删除插件配置
:param pid: 插件ID
""" """
if not self._plugins.get(pid): if not self._plugins.get(pid):
return False return False
@@ -171,23 +215,48 @@ class PluginManager(metaclass=Singleton):
def get_plugin_form(self, pid: str) -> Tuple[List[dict], Dict[str, Any]]: def get_plugin_form(self, pid: str) -> Tuple[List[dict], Dict[str, Any]]:
""" """
获取插件表单 获取插件表单
:param pid: 插件ID
""" """
if not self._running_plugins.get(pid): plugin = self._running_plugins.get(pid)
if not plugin:
return [], {} return [], {}
if hasattr(self._running_plugins[pid], "get_form"): if hasattr(plugin, "get_form"):
return self._running_plugins[pid].get_form() or ([], {}) return plugin.get_form() or ([], {})
return [], {} return [], {}
def get_plugin_page(self, pid: str) -> List[dict]: def get_plugin_page(self, pid: str) -> List[dict]:
""" """
获取插件页面 获取插件页面
:param pid: 插件ID
""" """
if not self._running_plugins.get(pid): plugin = self._running_plugins.get(pid)
if not plugin:
return [] return []
if hasattr(self._running_plugins[pid], "get_page"): if hasattr(plugin, "get_page"):
return self._running_plugins[pid].get_page() or [] return plugin.get_page() or []
return [] return []
def get_plugin_dashboard(self, pid: str) -> Optional[schemas.PluginDashboard]:
"""
获取插件仪表盘
:param pid: 插件ID
"""
plugin = self._running_plugins.get(pid)
if not plugin:
return None
if hasattr(plugin, "get_dashboard"):
dashboard: Tuple = plugin.get_dashboard()
if dashboard:
cols, attrs, elements = dashboard
return schemas.PluginDashboard(
id=pid,
name=plugin.plugin_name,
cols=cols or {},
elements=elements,
attrs=attrs or {}
)
return None
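`get_plugin_dashboard` expects a plugin's `get_dashboard()` to return a `(cols, attrs, elements)` tuple, with `cols`/`attrs` defaulting to empty dicts. A sketch of how the manager consumes it, returning a plain dict instead of `schemas.PluginDashboard`:

```python
def build_dashboard(pid, plugin):
    """Sketch of the consumption path above: unpack the plugin's
    (cols, attrs, elements) tuple into a dashboard payload.
    Plain-dict stand-in for schemas.PluginDashboard."""
    if not hasattr(plugin, "get_dashboard"):
        return None
    dashboard = plugin.get_dashboard()
    if not dashboard:
        return None
    cols, attrs, elements = dashboard
    return {
        "id": pid,
        "name": getattr(plugin, "plugin_name", pid),
        "cols": cols or {},
        "attrs": attrs or {},
        "elements": elements,
    }
```

A plugin that returns `None` for `attrs` still yields a well-formed payload, which matches the `or {}` fallbacks in the diff.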
def get_plugin_commands(self) -> List[Dict[str, Any]]: def get_plugin_commands(self) -> List[Dict[str, Any]]:
""" """
获取插件命令 获取插件命令
@@ -202,7 +271,10 @@ class PluginManager(metaclass=Singleton):
for _, plugin in self._running_plugins.items(): for _, plugin in self._running_plugins.items():
if hasattr(plugin, "get_command") \ if hasattr(plugin, "get_command") \
and ObjectUtils.check_method(plugin.get_command): and ObjectUtils.check_method(plugin.get_command):
ret_commands += plugin.get_command() or [] try:
ret_commands += plugin.get_command() or []
except Exception as e:
logger.error(f"获取插件命令出错:{str(e)}")
return ret_commands return ret_commands
def get_plugin_apis(self) -> List[Dict[str, Any]]: def get_plugin_apis(self) -> List[Dict[str, Any]]:
@@ -220,10 +292,13 @@ class PluginManager(metaclass=Singleton):
for pid, plugin in self._running_plugins.items(): for pid, plugin in self._running_plugins.items():
if hasattr(plugin, "get_api") \ if hasattr(plugin, "get_api") \
and ObjectUtils.check_method(plugin.get_api): and ObjectUtils.check_method(plugin.get_api):
apis = plugin.get_api() or [] try:
for api in apis: apis = plugin.get_api() or []
api["path"] = f"/{pid}{api['path']}" for api in apis:
ret_apis.extend(apis) api["path"] = f"/{pid}{api['path']}"
ret_apis.extend(apis)
except Exception as e:
logger.error(f"获取插件 {pid} API出错:{str(e)}")
return ret_apis return ret_apis
def get_plugin_services(self) -> List[Dict[str, Any]]: def get_plugin_services(self) -> List[Dict[str, Any]]:
@@ -241,30 +316,60 @@ class PluginManager(metaclass=Singleton):
for pid, plugin in self._running_plugins.items(): for pid, plugin in self._running_plugins.items():
if hasattr(plugin, "get_service") \ if hasattr(plugin, "get_service") \
and ObjectUtils.check_method(plugin.get_service): and ObjectUtils.check_method(plugin.get_service):
services = plugin.get_service() try:
if services: services = plugin.get_service()
ret_services.extend(services) if services:
ret_services.extend(services)
except Exception as e:
logger.error(f"获取插件 {pid} 服务出错:{str(e)}")
return ret_services return ret_services
def get_dashboard_plugins(self) -> List[dict]:
"""
获取有仪表盘的插件列表
"""
dashboards = []
for pid, plugin in self._running_plugins.items():
if hasattr(plugin, "get_dashboard") \
and ObjectUtils.check_method(plugin.get_dashboard):
try:
if not plugin.get_state():
continue
dashboards.append({
"id": pid,
"name": plugin.plugin_name
})
except Exception as e:
logger.error(f"获取有仪表盘的插件出错:{str(e)}")
return dashboards
def get_plugin_attr(self, pid: str, attr: str) -> Any: def get_plugin_attr(self, pid: str, attr: str) -> Any:
""" """
获取插件属性 获取插件属性
:param pid: 插件ID
:param attr: 属性名
""" """
if not self._running_plugins.get(pid): plugin = self._running_plugins.get(pid)
if not plugin:
return None return None
if not hasattr(self._running_plugins[pid], attr): if not hasattr(plugin, attr):
return None return None
return getattr(self._running_plugins[pid], attr) return getattr(plugin, attr)
def run_plugin_method(self, pid: str, method: str, *args, **kwargs) -> Any: def run_plugin_method(self, pid: str, method: str, *args, **kwargs) -> Any:
""" """
运行插件方法 运行插件方法
:param pid: 插件ID
:param method: 方法名
:param args: 参数
:param kwargs: 关键字参数
""" """
if not self._running_plugins.get(pid): plugin = self._running_plugins.get(pid)
if not plugin:
return None return None
if not hasattr(self._running_plugins[pid], method): if not hasattr(plugin, method):
return None return None
return getattr(self._running_plugins[pid], method)(*args, **kwargs) return getattr(plugin, method)(*args, **kwargs)
def get_plugin_ids(self) -> List[str]: def get_plugin_ids(self) -> List[str]:
""" """
@@ -278,43 +383,42 @@ class PluginManager(metaclass=Singleton):
""" """
return list(self._running_plugins.keys()) return list(self._running_plugins.keys())
def get_online_plugins(self) -> List[dict]: def get_online_plugins(self) -> List[schemas.Plugin]:
""" """
获取所有在线插件信息 获取所有在线插件信息
""" """
# 返回值
all_confs = [] def __get_plugin_info(market: str) -> Optional[List[schemas.Plugin]]:
if not settings.PLUGIN_MARKET: """
return all_confs 获取插件信息
# 已安装插件 """
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 线上插件列表
markets = settings.PLUGIN_MARKET.split(",")
for market in markets:
online_plugins = self.pluginhelper.get_plugins(market) or {} online_plugins = self.pluginhelper.get_plugins(market) or {}
if not online_plugins: if not online_plugins:
logger.warn(f"获取插件库失败 {market}") logger.warn(f"获取插件库失败{market}")
for pid, plugin in online_plugins.items(): return
ret_plugins = []
add_time = len(online_plugins)
for pid, plugin_info in online_plugins.items():
# 运行态插件 # 运行态插件
plugin_obj = self._running_plugins.get(pid) plugin_obj = self._running_plugins.get(pid)
# 非运行态插件 # 非运行态插件
plugin_static = self._plugins.get(pid) plugin_static = self._plugins.get(pid)
# 基本属性 # 基本属性
conf = {} plugin = schemas.Plugin()
# ID # ID
conf.update({"id": pid}) plugin.id = pid
# 安装状态 # 安装状态
if pid in installed_apps and plugin_static: if pid in installed_apps and plugin_static:
conf.update({"installed": True}) plugin.installed = True
else: else:
conf.update({"installed": False}) plugin.installed = False
# 是否有新版本 # 是否有新版本
conf.update({"has_update": False}) plugin.has_update = False
if plugin_static: if plugin_static:
installed_version = getattr(plugin_static, "plugin_version") installed_version = getattr(plugin_static, "plugin_version")
if StringUtils.compare_version(installed_version, plugin.get("version")) < 0: if StringUtils.compare_version(installed_version, plugin_info.get("version")) < 0:
# 需要更新 # 需要更新
conf.update({"has_update": True}) plugin.has_update = True
# 运行状态 # 运行状态
if plugin_obj and hasattr(plugin_obj, "get_state"): if plugin_obj and hasattr(plugin_obj, "get_state"):
try: try:
@@ -322,65 +426,102 @@ class PluginManager(metaclass=Singleton):
except Exception as e: except Exception as e:
logger.error(f"获取插件 {pid} 状态出错:{str(e)}") logger.error(f"获取插件 {pid} 状态出错:{str(e)}")
state = False state = False
conf.update({"state": state}) plugin.state = state
else: else:
conf.update({"state": False}) plugin.state = False
# 是否有详情页面 # 是否有详情页面
conf.update({"has_page": False}) plugin.has_page = False
if plugin_obj and hasattr(plugin_obj, "get_page"): if plugin_obj and hasattr(plugin_obj, "get_page"):
if ObjectUtils.check_method(plugin_obj.get_page): if ObjectUtils.check_method(plugin_obj.get_page):
conf.update({"has_page": True}) plugin.has_page = True
# 权限 # 权限
if plugin.get("level"): if plugin_info.get("level"):
conf.update({"auth_level": plugin.get("level")}) plugin.auth_level = plugin_info.get("level")
if self.siteshelper.auth_level < plugin.get("level"): if self.siteshelper.auth_level < plugin.auth_level:
continue continue
# 名称 # 名称
if plugin.get("name"): if plugin_info.get("name"):
conf.update({"plugin_name": plugin.get("name")}) plugin.plugin_name = plugin_info.get("name")
# 描述 # 描述
if plugin.get("description"): if plugin_info.get("description"):
conf.update({"plugin_desc": plugin.get("description")}) plugin.plugin_desc = plugin_info.get("description")
# 版本 # 版本
if plugin.get("version"): if plugin_info.get("version"):
conf.update({"plugin_version": plugin.get("version")}) plugin.plugin_version = plugin_info.get("version")
# 图标 # 图标
if plugin.get("icon"): if plugin_info.get("icon"):
conf.update({"plugin_icon": plugin.get("icon")}) plugin.plugin_icon = plugin_info.get("icon")
# 标签
if plugin_info.get("labels"):
plugin.plugin_label = plugin_info.get("labels")
# 作者 # 作者
if plugin.get("author"): if plugin_info.get("author"):
conf.update({"plugin_author": plugin.get("author")}) plugin.plugin_author = plugin_info.get("author")
# 更新历史
if plugin_info.get("history"):
plugin.history = plugin_info.get("history")
# 仓库链接 # 仓库链接
conf.update({"repo_url": market}) plugin.repo_url = market
# 本地标志 # 本地标志
conf.update({"is_local": False}) plugin.is_local = False
# 添加顺序
plugin.add_time = add_time
# 汇总 # 汇总
all_confs.append(conf) ret_plugins.append(plugin)
# 按插件ID去重 add_time -= 1
if all_confs:
all_confs = list({v["id"]: v for v in all_confs}.values())
return all_confs
def get_local_plugins(self) -> List[dict]: return ret_plugins
if not settings.PLUGIN_MARKET:
return []
# 返回值
all_plugins = []
# 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 使用多线程获取线上插件
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for m in settings.PLUGIN_MARKET.split(","):
futures.append(executor.submit(__get_plugin_info, m))
for future in concurrent.futures.as_completed(futures):
plugins = future.result()
if plugins:
all_plugins.extend(plugins)
# 所有插件按repo在设置中的顺序排序
all_plugins.sort(
key=lambda x: settings.PLUGIN_MARKET.split(",").index(x.repo_url) if x.repo_url else 0
)
# 按插件ID和版本号去重,相同插件以前面的为准
result = []
_dup = []
for p in all_plugins:
key = f"{p.id}v{p.plugin_version}"
if key not in _dup:
_dup.append(key)
result.append(p)
logger.info(f"共获取到 {len(result)} 个第三方插件")
return result
def get_local_plugins(self) -> List[schemas.Plugin]:
""" """
获取所有本地已下载的插件信息 获取所有本地已下载的插件信息
""" """
# 返回值 # 返回值
all_confs = [] plugins = []
# 已安装插件 # 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or [] installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
for pid, plugin in self._plugins.items(): for pid, plugin_class in self._plugins.items():
# 运行态插件 # 运行态插件
plugin_obj = self._running_plugins.get(pid) plugin_obj = self._running_plugins.get(pid)
# 基本属性 # 基本属性
conf = {} plugin = schemas.Plugin()
# ID # ID
conf.update({"id": pid}) plugin.id = pid
# 安装状态 # 安装状态
if pid in installed_apps: if pid in installed_apps:
conf.update({"installed": True}) plugin.installed = True
else: else:
conf.update({"installed": False}) plugin.installed = False
# 运行状态 # 运行状态
if plugin_obj and hasattr(plugin_obj, "get_state"): if plugin_obj and hasattr(plugin_obj, "get_state"):
try: try:
@@ -388,50 +529,56 @@ class PluginManager(metaclass=Singleton):
except Exception as e: except Exception as e:
logger.error(f"获取插件 {pid} 状态出错:{str(e)}") logger.error(f"获取插件 {pid} 状态出错:{str(e)}")
state = False state = False
conf.update({"state": state}) plugin.state = state
else: else:
conf.update({"state": False}) plugin.state = False
# 是否有详情页面 # 是否有详情页面
if hasattr(plugin, "get_page"): if hasattr(plugin_class, "get_page"):
if ObjectUtils.check_method(plugin.get_page): if ObjectUtils.check_method(plugin_class.get_page):
conf.update({"has_page": True}) plugin.has_page = True
else: else:
conf.update({"has_page": False}) plugin.has_page = False
# 权限 # 权限
if hasattr(plugin, "auth_level"): if hasattr(plugin_class, "auth_level"):
conf.update({"auth_level": plugin.auth_level}) plugin.auth_level = plugin_class.auth_level
if self.siteshelper.auth_level < plugin.auth_level: if self.siteshelper.auth_level < plugin.auth_level:
continue continue
# 名称 # 名称
if hasattr(plugin, "plugin_name"): if hasattr(plugin_class, "plugin_name"):
conf.update({"plugin_name": plugin.plugin_name}) plugin.plugin_name = plugin_class.plugin_name
# 描述 # 描述
if hasattr(plugin, "plugin_desc"): if hasattr(plugin_class, "plugin_desc"):
conf.update({"plugin_desc": plugin.plugin_desc}) plugin.plugin_desc = plugin_class.plugin_desc
# 版本 # 版本
if hasattr(plugin, "plugin_version"): if hasattr(plugin_class, "plugin_version"):
conf.update({"plugin_version": plugin.plugin_version}) plugin.plugin_version = plugin_class.plugin_version
# 图标 # 图标
if hasattr(plugin, "plugin_icon"): if hasattr(plugin_class, "plugin_icon"):
conf.update({"plugin_icon": plugin.plugin_icon}) plugin.plugin_icon = plugin_class.plugin_icon
# 作者 # 作者
if hasattr(plugin, "plugin_author"): if hasattr(plugin_class, "plugin_author"):
conf.update({"plugin_author": plugin.plugin_author}) plugin.plugin_author = plugin_class.plugin_author
# 作者链接 # 作者链接
if hasattr(plugin, "author_url"): if hasattr(plugin_class, "author_url"):
conf.update({"author_url": plugin.author_url}) plugin.author_url = plugin_class.author_url
# 加载顺序
if hasattr(plugin_class, "plugin_order"):
plugin.plugin_order = plugin_class.plugin_order
# 是否需要更新 # 是否需要更新
conf.update({"has_update": False}) plugin.has_update = False
# 本地标志 # 本地标志
conf.update({"is_local": True}) plugin.is_local = True
# 汇总 # 汇总
all_confs.append(conf) plugins.append(plugin)
return all_confs # 根据加载排序重新排序
plugins.sort(key=lambda x: x.plugin_order if hasattr(x, "plugin_order") else 0)
return plugins
@staticmethod @staticmethod
def is_plugin_exists(pid: str) -> bool: def is_plugin_exists(pid: str) -> bool:
""" """
判断插件是否存在 判断插件是否在本地文件系统存在
:param pid: 插件ID
""" """
if not pid: if not pid:
return False return False


@@ -131,3 +131,11 @@ class DownloadHistoryOper(DbOper):
type=type, type=type,
tmdbid=tmdbid, tmdbid=tmdbid,
seasons=seasons) seasons=seasons)
def list_by_type(self, mtype: str, days: int = 7) -> List[DownloadHistory]:
"""
获取指定类型的下载历史
"""
return DownloadHistory.list_by_type(db=self._db,
mtype=mtype,
days=days)
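The new `list_by_type` filters by a date floor computed as "now minus N days", formatted the same way the history rows store their dates so string comparison works. The cutoff computation in isolation (hypothetical helper name):

```python
import time

def cutoff_date(days: int, now: float = None) -> str:
    """The date floor used by list_by_type-style queries: records with
    a 'YYYY-MM-DD HH:MM:SS' date string newer than this are kept."""
    now = time.time() if now is None else now
    return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now - 86400 * int(days)))
```

This relies on the fixed-width timestamp format sorting lexicographically in chronological order, which is why `DownloadHistory.date >= cutoff` can compare strings directly.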

app/db/message_oper.py Normal file

@@ -0,0 +1,61 @@
import json
import time
from typing import Optional, Union
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db.models.message import Message
from app.schemas import MessageChannel, NotificationType
class MessageOper(DbOper):
"""
消息数据管理
"""
def __init__(self, db: Session = None):
super().__init__(db)
def add(self,
channel: MessageChannel = None,
mtype: NotificationType = None,
title: str = None,
text: str = None,
image: str = None,
link: str = None,
userid: str = None,
action: int = 1,
note: Union[list, dict] = None,
**kwargs):
"""
新增消息
:param channel: 消息渠道
:param mtype: 消息类型
:param title: 标题
:param text: 文本内容
:param image: 图片
:param link: 链接
:param userid: 用户ID
:param action: 消息方向:0-接收消息,1-发送消息
:param note: 附件json
"""
kwargs.update({
"channel": channel.value if channel else '',
"mtype": mtype.value if mtype else '',
"title": title,
"text": text,
"image": image,
"link": link,
"userid": userid,
"action": action,
"reg_time": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
"note": json.dumps(note) if note else ''
})
Message(**kwargs).create(self._db)
def list_by_page(self, page: int = 1, count: int = 30) -> Optional[list]:
"""
分页获取消息列表
"""
return Message.list_by_page(self._db, page, count)


@@ -7,3 +7,4 @@ from .subscribe import Subscribe
from .systemconfig import SystemConfig from .systemconfig import SystemConfig
from .transferhistory import TransferHistory from .transferhistory import TransferHistory
from .user import User from .user import User
from .userconfig import UserConfig


@@ -1,3 +1,5 @@
import time
from sqlalchemy import Column, Integer, String, Sequence from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session from sqlalchemy.orm import Session
@@ -140,6 +142,16 @@ class DownloadHistory(Base):
DownloadHistory.tmdbid == tmdbid).order_by( DownloadHistory.tmdbid == tmdbid).order_by(
DownloadHistory.id.desc()).all() DownloadHistory.id.desc()).all()
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(DownloadHistory) \
.filter(DownloadHistory.type == mtype,
DownloadHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
class DownloadFiles(Base): class DownloadFiles(Base):
""" """
@@ -188,6 +200,7 @@ class DownloadFiles(Base):
result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all() result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
return list(result) return list(result)
@staticmethod
@db_update @db_update
def delete_by_fullpath(db: Session, fullpath: str): def delete_by_fullpath(db: Session, fullpath: str):
db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath, db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,

app/db/models/message.py Normal file

@@ -0,0 +1,39 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query, Base
class Message(Base):
"""
消息表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 消息渠道
channel = Column(String)
# 消息类型
mtype = Column(String)
# 标题
title = Column(String)
# 文本内容
text = Column(String)
# 图片
image = Column(String)
# 链接
link = Column(String)
# 用户ID
userid = Column(String)
# 登记时间
reg_time = Column(String, index=True)
# 消息方向:0-接收消息,1-发送消息
action = Column(Integer)
# 附件json
note = Column(String)
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
count).all()
result.sort(key=lambda x: x.reg_time, reverse=False)
return list(result)
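`Message.list_by_page` fetches a page ordered newest-first (so page 1 is the latest messages), then re-sorts that page ascending so it reads chronologically. The slice-then-resort step, sketched without a database:

```python
def page_desc_then_asc(rows, page, count):
    """Mirror of the pagination above: order newest-first, take one
    page, then flip the page so it reads oldest-to-newest."""
    newest_first = sorted(rows, reverse=True)
    start = (page - 1) * count
    page_rows = newest_first[start:start + count]
    return sorted(page_rows)
```

This is a common pattern for chat-style views: the database does the "latest N" selection, while the client-facing order within the page stays chronological.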


@@ -25,6 +25,10 @@ class Site(Base):
cookie = Column(String) cookie = Column(String)
# User-Agent # User-Agent
ua = Column(String) ua = Column(String)
# ApiKey
apikey = Column(String)
# Token
token = Column(String)
# 是否使用代理 0-否,1-是 # 是否使用代理 0-否,1-是
proxy = Column(Integer) proxy = Column(Integer)
# 过滤规则 # 过滤规则


@@ -0,0 +1,37 @@
from datetime import datetime
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
class SiteStatistic(Base):
"""
站点统计表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 域名Key
domain = Column(String, index=True)
# 成功次数
success = Column(Integer)
# 失败次数
fail = Column(Integer)
# 平均耗时 秒
seconds = Column(Integer)
# 最后一次访问状态 0-成功 1-失败
lst_state = Column(Integer)
# 最后访问时间
lst_mod_date = Column(String, default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# 耗时记录 Json
note = Column(String)
@staticmethod
@db_query
def get_by_domain(db: Session, domain: str):
return db.query(SiteStatistic).filter(SiteStatistic.domain == domain).first()
@staticmethod
@db_update
def reset(db: Session):
db.query(SiteStatistic).delete()


@@ -1,4 +1,6 @@
from sqlalchemy import Column, Integer, String, Sequence import time
from sqlalchemy import Column, Integer, String, Sequence, Float
from sqlalchemy.orm import Session from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base from app.db import db_query, db_update, Base
@@ -21,14 +23,15 @@ class Subscribe(Base):
imdbid = Column(String) imdbid = Column(String)
tvdbid = Column(Integer) tvdbid = Column(Integer)
doubanid = Column(String, index=True) doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
# 季号 # 季号
season = Column(Integer) season = Column(Integer)
# 海报 # 海报
poster = Column(String) poster = Column(String)
# 背景图 # 背景图
backdrop = Column(String) backdrop = Column(String)
# 评分 # 评分(float)
vote = Column(Integer) vote = Column(Float)
# 简介 # 简介
description = Column(String) description = Column(String)
# 过滤规则 # 过滤规则
@@ -113,6 +116,11 @@ class Subscribe(Base):
def get_by_doubanid(db: Session, doubanid: str): def get_by_doubanid(db: Session, doubanid: str):
return db.query(Subscribe).filter(Subscribe.doubanid == doubanid).first() return db.query(Subscribe).filter(Subscribe.doubanid == doubanid).first()
@staticmethod
@db_query
def get_by_bangumiid(db: Session, bangumiid: int):
return db.query(Subscribe).filter(Subscribe.bangumiid == bangumiid).first()
@db_update @db_update
def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int): def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int):
subscrbies = self.get_by_tmdbid(db, tmdbid, season) subscrbies = self.get_by_tmdbid(db, tmdbid, season)
@@ -126,3 +134,32 @@ class Subscribe(Base):
if subscribe: if subscribe:
subscribe.delete(db, subscribe.id) subscribe.delete(db, subscribe.id)
return True return True
@staticmethod
@db_query
def list_by_username(db: Session, username: str, state: str = None, mtype: str = None):
if mtype:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username,
Subscribe.type == mtype).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username,
Subscribe.type == mtype).all()
else:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username).all()
return list(result)
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(Subscribe) \
.filter(Subscribe.type == mtype,
Subscribe.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
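`list_by_username` branches four ways over the two optional filters. The same logic can be flattened by collecting optional predicates; a dict-based sketch of that idea (not the project's SQLAlchemy code, where the equivalent is appending column conditions to a filter list):

```python
def build_filters(username, state=None, mtype=None):
    """Flattened version of the four-way branch above, over plain
    dicts: always match username, optionally match state and type."""
    preds = [lambda s: s["username"] == username]
    if state is not None:
        preds.append(lambda s: s["state"] == state)
    if mtype is not None:
        preds.append(lambda s: s["type"] == mtype)
    # Return one predicate combining all active conditions
    return lambda s: all(p(s) for p in preds)
```

Adding a third optional filter then costs one `if` rather than doubling the number of branches.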


@@ -0,0 +1,72 @@
from sqlalchemy import Column, Integer, String, Sequence, Float
from sqlalchemy.orm import Session
from app.db import db_query, Base
class SubscribeHistory(Base):
"""
订阅历史表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 标题
name = Column(String, nullable=False, index=True)
# 年份
year = Column(String)
# 类型
type = Column(String)
# 搜索关键字
keyword = Column(String)
tmdbid = Column(Integer, index=True)
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分(float)
vote = Column(Float)
# 简介
description = Column(String)
# 过滤规则
filter = Column(String)
# 包含
include = Column(String)
# 排除
exclude = Column(String)
# 质量
quality = Column(String)
# 分辨率
resolution = Column(String)
# 特效
effect = Column(String)
# 总集数
total_episode = Column(Integer)
# 开始集数
start_episode = Column(Integer)
# 订阅完成时间
date = Column(String)
# 订阅用户
username = Column(String)
# 订阅站点
sites = Column(String)
# 是否洗版
best_version = Column(Integer, default=0)
# 保存路径
save_path = Column(String)
# 是否使用 imdbid 搜索
search_imdbid = Column(Integer, default=0)
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, page: int = 1, count: int = 30):
result = db.query(SubscribeHistory).filter(
SubscribeHistory.type == mtype
).order_by(
SubscribeHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)


@@ -1,6 +1,6 @@
import time import time
from sqlalchemy import Column, Integer, String, Sequence, Boolean, func from sqlalchemy import Column, Integer, String, Sequence, Boolean, func, or_
from sqlalchemy.orm import Session from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base from app.db import db_query, db_update, Base
@@ -50,26 +50,33 @@ class TransferHistory(Base):
     @db_query
     def list_by_title(db: Session, title: str, page: int = 1, count: int = 30, status: bool = None):
         if status is not None:
-            result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%'),
-                                                      TransferHistory.status == status).order_by(
-                TransferHistory.date.desc()).offset((page - 1) * count).limit(
-                count).all()
+            result = db.query(TransferHistory).filter(
+                TransferHistory.status == status
+            ).order_by(
+                TransferHistory.date.desc()
+            ).offset((page - 1) * count).limit(count).all()
         else:
-            result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%')).order_by(
-                TransferHistory.date.desc()).offset((page - 1) * count).limit(
-                count).all()
+            result = db.query(TransferHistory).filter(or_(
+                TransferHistory.src.like(f'%{title}%'),
+                TransferHistory.dest.like(f'%{title}%'),
+            )).order_by(
+                TransferHistory.date.desc()
+            ).offset((page - 1) * count).limit(count).all()
         return list(result)

     @staticmethod
     @db_query
     def list_by_page(db: Session, page: int = 1, count: int = 30, status: bool = None):
         if status is not None:
-            result = db.query(TransferHistory).filter(TransferHistory.status == status).order_by(
-                TransferHistory.date.desc()).offset((page - 1) * count).limit(
-                count).all()
+            result = db.query(TransferHistory).filter(
+                TransferHistory.status == status
+            ).order_by(
+                TransferHistory.date.desc()
+            ).offset((page - 1) * count).limit(count).all()
         else:
-            result = db.query(TransferHistory).order_by(TransferHistory.date.desc()).offset((page - 1) * count).limit(
-                count).all()
+            result = db.query(TransferHistory).order_by(
+                TransferHistory.date.desc()
+            ).offset((page - 1) * count).limit(count).all()
         return list(result)

     @staticmethod
@@ -113,10 +120,12 @@ class TransferHistory(Base):
     @db_query
     def count_by_title(db: Session, title: str, status: bool = None):
         if status is not None:
-            return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%'),
-                                                                   TransferHistory.status == status).first()[0]
+            return db.query(func.count(TransferHistory.id)).filter(TransferHistory.status == status).first()[0]
         else:
-            return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
+            return db.query(func.count(TransferHistory.id)).filter(or_(
+                TransferHistory.src.like(f'%{title}%'),
+                TransferHistory.dest.like(f'%{title}%')
+            )).first()[0]

     @staticmethod
     @db_query
@@ -203,3 +212,11 @@ class TransferHistory(Base):
                     "download_hash": download_hash
                 }
             )
+
+    @staticmethod
+    @db_query
+    def list_by_date(db: Session, date: str):
+        """
+        查询某时间之后的转移历史
+        """
+        return db.query(TransferHistory).filter(TransferHistory.date > date).order_by(TransferHistory.id.desc()).all()

View File

@@ -1,8 +1,12 @@
+from typing import Tuple, Optional
+
 from sqlalchemy import Boolean, Column, Integer, String, Sequence
 from sqlalchemy.orm import Session

 from app.core.security import verify_password
 from app.db import db_query, db_update, Base
+from app.schemas import User
+from app.utils.otp import OtpUtils


 class User(Base):
@@ -23,16 +27,23 @@ class User(Base):
     is_superuser = Column(Boolean(), default=False)
     # 头像
     avatar = Column(String)
+    # 是否启用otp二次验证
+    is_otp = Column(Boolean(), default=False)
+    # otp秘钥
+    otp_secret = Column(String, default=None)

     @staticmethod
     @db_query
-    def authenticate(db: Session, name: str, password: str):
+    def authenticate(db: Session, name: str, password: str, otp_password: str) -> Tuple[bool, Optional[User]]:
         user = db.query(User).filter(User.name == name).first()
         if not user:
-            return None
+            return False, None
         if not verify_password(password, str(user.hashed_password)):
-            return None
-        return user
+            return False, user
+        if user.is_otp:
+            if not otp_password or not OtpUtils.check(user.otp_secret, otp_password):
+                return False, user
+        return True, user

     @staticmethod
     @db_query
@@ -45,3 +56,14 @@ class User(Base):
         if user:
             user.delete(db, user.id)
         return True
+
+    @db_update
+    def update_otp_by_name(self, db: Session, name: str, otp: bool, secret: str):
+        user = self.get_by_name(db, name)
+        if user:
+            user.update(db, {
+                'is_otp': otp,
+                'otp_secret': secret
+            })
+            return True
+        return False

View File

@@ -0,0 +1,38 @@
from sqlalchemy import Column, Integer, String, Sequence, UniqueConstraint, Index
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
class UserConfig(Base):
"""
用户配置表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 用户名
username = Column(String, index=True)
# 配置键
key = Column(String)
# 值
value = Column(String, nullable=True)
__table_args__ = (
# 用户名和配置键联合唯一
UniqueConstraint('username', 'key'),
Index('ix_userconfig_username_key', 'username', 'key'),
)
@staticmethod
@db_query
def get_by_key(db: Session, username: str, key: str):
return db.query(UserConfig) \
.filter(UserConfig.username == username) \
.filter(UserConfig.key == key) \
.first()
@db_update
def delete_by_key(self, db: Session, username: str, key: str):
userconfig = self.get_by_key(db=db, username=username, key=key)
if userconfig:
userconfig.delete(db=db, rid=userconfig.id)
return True

View File

@@ -0,0 +1,70 @@
import json
from datetime import datetime
from app.db import DbOper
from app.db.models.sitestatistic import SiteStatistic
class SiteStatisticOper(DbOper):
"""
站点统计管理
"""
def success(self, domain: str, seconds: int = None):
"""
站点访问成功
"""
lst_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
sta = SiteStatistic.get_by_domain(self._db, domain)
if sta:
avg_seconds, note = None, {}
if seconds is not None:
note: dict = json.loads(sta.note or "{}")
note[lst_date] = seconds or 1
avg_times = len(note.keys())
if avg_times > 10:
note = dict(sorted(note.items(), key=lambda x: x[0], reverse=True)[:10])
avg_seconds = sum([v for v in note.values()]) // avg_times
sta.update(self._db, {
"success": sta.success + 1,
"seconds": avg_seconds or sta.seconds,
"lst_state": 0,
"lst_mod_date": lst_date,
"note": json.dumps(note) if note else sta.note
})
else:
note = {}
if seconds is not None:
note = {
lst_date: seconds or 1
}
SiteStatistic(
domain=domain,
success=1,
fail=0,
seconds=seconds or 1,
lst_state=0,
lst_mod_date=lst_date,
note=json.dumps(note)
).create(self._db)
def fail(self, domain: str):
"""
站点访问失败
"""
lst_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
sta = SiteStatistic.get_by_domain(self._db, domain)
if sta:
sta.update(self._db, {
"fail": sta.fail + 1,
"lst_state": 1,
"lst_mod_date": lst_date
})
else:
SiteStatistic(
domain=domain,
success=0,
fail=1,
lst_state=1,
lst_mod_date=lst_date
).create(self._db)
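The `success` path above keeps a rolling window of recent response times in `note` (keyed by timestamp string) and derives an average from it. A standalone sketch of that windowing idea, averaging over the samples actually kept; the timestamps and `rolling_avg` name are illustrative, not part of the class:

```python
def rolling_avg(note: dict, window: int = 10):
    """Keep the `window` most recent samples (keys sort chronologically) and average them."""
    if len(note) > window:
        note = dict(sorted(note.items(), key=lambda x: x[0], reverse=True)[:window])
    return note, sum(note.values()) // len(note)

# 12 hypothetical samples, one per second
samples = {f"2024-05-06 12:00:{i:02d}": i + 1 for i in range(12)}
kept, avg = rolling_avg(samples)
assert len(kept) == 10           # only the 10 newest survive
assert avg == 7                  # mean of the kept values 3..12
```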

View File

@@ -1,3 +1,4 @@
+import json
 import time
 from typing import Tuple, List
@@ -20,6 +21,9 @@ class SubscribeOper(DbOper):
                                       doubanid=mediainfo.douban_id,
                                       season=kwargs.get('season'))
         if not subscribe:
+            if kwargs.get("sites") and not isinstance(kwargs.get("sites"), str):
+                kwargs["sites"] = json.dumps(kwargs.get("sites"))
             subscribe = Subscribe(name=mediainfo.title,
                                   year=mediainfo.year,
                                   type=mediainfo.type.value,
@@ -27,6 +31,7 @@ class SubscribeOper(DbOper):
                                   imdbid=mediainfo.imdb_id,
                                   tvdbid=mediainfo.tvdb_id,
                                   doubanid=mediainfo.douban_id,
+                                  bangumiid=mediainfo.bangumi_id,
                                   poster=mediainfo.get_poster_image(),
                                   backdrop=mediainfo.get_backdrop_image(),
                                   vote=mediainfo.vote_average,
@@ -83,3 +88,21 @@ class SubscribeOper(DbOper):
         subscribe = self.get(sid)
         subscribe.update(self._db, payload)
         return subscribe
+
+    def list_by_tmdbid(self, tmdbid: int, season: int = None) -> List[Subscribe]:
+        """
+        获取指定tmdb_id的订阅
+        """
+        return Subscribe.get_by_tmdbid(self._db, tmdbid=tmdbid, season=season)
+
+    def list_by_username(self, username: str, state: str = None, mtype: str = None) -> List[Subscribe]:
+        """
+        获取指定用户的订阅
+        """
+        return Subscribe.list_by_username(self._db, username=username, state=state, mtype=mtype)
+
+    def list_by_type(self, mtype: str, days: int = 7) -> Subscribe:
+        """
+        获取指定类型的订阅
+        """
+        return Subscribe.list_by_type(self._db, mtype=mtype, days=days)

View File

@@ -0,0 +1,30 @@
import time
from app.db import DbOper
from app.db.models.subscribehistory import SubscribeHistory
class SubscribeHistoryOper(DbOper):
"""
订阅历史管理
"""
def add(self, **kwargs):
"""
新增订阅
"""
# 去除kwargs中 SubscribeHistory 没有的字段
kwargs = {k: v for k, v in kwargs.items() if hasattr(SubscribeHistory, k)}
# 更新完成订阅时间
kwargs.update({"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())})
# 去掉主键
if "id" in kwargs:
kwargs.pop("id")
subscribe = SubscribeHistory(**kwargs)
subscribe.create(self._db)
def list_by_type(self, mtype: str, page: int = 1, count: int = 30) -> SubscribeHistory:
"""
获取指定类型的订阅
"""
return SubscribeHistory.list_by_type(self._db, mtype=mtype, page=page, count=count)

View File

@@ -56,6 +56,12 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
             return self.__SYSTEMCONF
         return self.__SYSTEMCONF.get(key)

+    def all(self):
+        """
+        获取所有系统设置
+        """
+        return self.__SYSTEMCONF or {}
+
     def delete(self, key: Union[str, SystemConfigKey]):
         """
         删除系统设置

View File

@@ -177,3 +177,10 @@ class TransferHistoryOper(DbOper):
                 errmsg="未识别到媒体信息"
             )
         return his
+
+    def list_by_date(self, date: str) -> List[TransferHistory]:
+        """
+        查询某时间之后的转移历史
+        :param date: 日期
+        """
+        return TransferHistory.list_by_date(self._db, date)

app/db/userconfig_oper.py (new file, 96 lines)

@@ -0,0 +1,96 @@
import json
from typing import Any, Union, Dict, Optional
from app.db import DbOper
from app.db.models.userconfig import UserConfig
from app.schemas.types import UserConfigKey
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
class UserConfigOper(DbOper, metaclass=Singleton):
# 配置缓存
__USERCONF: Dict[str, Dict[str, Any]] = {}
def __init__(self):
"""
加载配置到内存
"""
super().__init__()
for item in UserConfig.list(self._db):
value = json.loads(item.value) if ObjectUtils.is_obj(item.value) else item.value
self.__set_config_cache(username=item.username, key=item.key, value=value)
def set(self, username: str, key: Union[str, UserConfigKey], value: Any):
"""
设置用户配置
"""
if isinstance(key, UserConfigKey):
key = key.value
# 更新内存
self.__set_config_cache(username=username, key=key, value=value)
# 写入数据库
if ObjectUtils.is_obj(value):
value = json.dumps(value)
elif value is None:
value = ''
conf = UserConfig.get_by_key(db=self._db, username=username, key=key)
if conf:
if value:
conf.update(self._db, {"value": value})
else:
conf.delete(self._db, conf.id)
else:
conf = UserConfig(username=username, key=key, value=value)
conf.create(self._db)
def get(self, username: str, key: Union[str, UserConfigKey] = None) -> Any:
"""
获取用户配置
"""
if not username:
return self.__USERCONF
if isinstance(key, UserConfigKey):
key = key.value
if not key:
return self.__get_config_caches(username=username)
return self.__get_config_cache(username=username, key=key)
def __del__(self):
if self._db:
self._db.close()
def __set_config_cache(self, username: str, key: str, value: Any):
"""
设置配置缓存
"""
if not username or not key:
return
cache = self.__USERCONF
if not cache:
cache = {}
user_cache = cache.get(username)
if not user_cache:
user_cache = {}
cache[username] = user_cache
user_cache[key] = value
self.__USERCONF = cache
def __get_config_caches(self, username: str) -> Optional[Dict[str, Any]]:
"""
获取配置缓存
"""
if not username or not self.__USERCONF:
return None
return self.__USERCONF.get(username)
def __get_config_cache(self, username: str, key: str) -> Any:
"""
获取配置缓存
"""
if not username or not key or not self.__USERCONF:
return None
user_cache = self.__get_config_caches(username)
if not user_cache:
return None
return user_cache.get(key)
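The private cache helpers above maintain a two-level `{username: {key: value}}` dict. The same behaviour can be sketched more compactly with `dict.setdefault`; the function names here are illustrative, not the class's API:

```python
from typing import Any, Dict

cache: Dict[str, Dict[str, Any]] = {}

def set_config(username: str, key: str, value: Any) -> None:
    # Create the per-user dict on first write, then set the key
    if not username or not key:
        return
    cache.setdefault(username, {})[key] = value

def get_config(username: str, key: str) -> Any:
    # Missing user or key both fall through to None
    return cache.get(username, {}).get(key)

set_config("alice", "theme", "dark")
assert get_config("alice", "theme") == "dark"
assert get_config("bob", "theme") is None
```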

View File

@@ -0,0 +1,2 @@
from .doh import doh_query_json
from .cloudflare import under_challenge

View File

@@ -61,7 +61,7 @@ class PlaywrightHelper:
                         ua: str = None,
                         proxies: dict = None,
                         headless: bool = False,
-                        timeout: int = 30) -> str:
+                        timeout: int = 20) -> str:
         """
         获取网页源码
         :param url: 网页地址

View File

@@ -1,68 +1,126 @@
-from typing import Tuple, Optional
+import json
+from hashlib import md5
+from typing import Any, Dict, Tuple, Optional

+from app.core.config import settings
+from app.utils.common import decrypt
 from app.utils.http import RequestUtils
 from app.utils.string import StringUtils


 class CookieCloudHelper:

     _ignore_cookies: list = ["CookieAutoDeleteBrowsingDataCleanup", "CookieAutoDeleteCleaningDiscarded"]

-    def __init__(self, server: str, key: str, password: str):
-        self._server = server
-        self._key = key
-        self._password = password
+    def __init__(self):
+        self._sync_setting()
         self._req = RequestUtils(content_type="application/json")

+    def _sync_setting(self):
+        self._server = settings.COOKIECLOUD_HOST
+        self._key = settings.COOKIECLOUD_KEY
+        self._password = settings.COOKIECLOUD_PASSWORD
+        self._enable_local = settings.COOKIECLOUD_ENABLE_LOCAL
+        self._local_path = settings.COOKIE_PATH
+
     def download(self) -> Tuple[Optional[dict], str]:
         """
         从CookieCloud下载数据
         :return: Cookie数据、错误信息
         """
-        if not self._server or not self._key or not self._password:
+        # 更新为最新设置
+        self._sync_setting()
+        if ((not self._server and not self._enable_local)
+                or not self._key
+                or not self._password):
             return None, "CookieCloud参数不正确"
-        req_url = "%s/get/%s" % (self._server, str(self._key).strip())
-        ret = self._req.post_res(url=req_url, json={"password": str(self._password).strip()})
-        if ret and ret.status_code == 200:
-            result = ret.json()
-            if not result:
-                return {}, "未下载到数据"
-            if result.get("cookie_data"):
-                contents = result.get("cookie_data")
-            else:
-                contents = result
-            # 整理数据,使用domain域名的最后两级作为分组依据
-            domain_groups = {}
-            for site, cookies in contents.items():
-                for cookie in cookies:
-                    domain_key = StringUtils.get_url_domain(cookie.get("domain"))
-                    if not domain_groups.get(domain_key):
-                        domain_groups[domain_key] = [cookie]
-                    else:
-                        domain_groups[domain_key].append(cookie)
-            # 返回错误
-            ret_cookies = {}
-            # 索引器
-            for domain, content_list in domain_groups.items():
-                if not content_list:
-                    continue
-                # 只有cf的cookie过滤掉
-                cloudflare_cookie = True
-                for content in content_list:
-                    if content["name"] != "cf_clearance":
-                        cloudflare_cookie = False
-                        break
-                if cloudflare_cookie:
-                    continue
-                # 站点Cookie
-                cookie_str = ";".join(
-                    [f"{content.get('name')}={content.get('value')}"
-                     for content in content_list
-                     if content.get("name") and content.get("name") not in self._ignore_cookies]
-                )
-                ret_cookies[domain] = cookie_str
-            return ret_cookies, ""
-        elif ret:
-            return None, f"同步CookieCloud失败,错误码:{ret.status_code}"
+        if self._enable_local:
+            # 开启本地服务时,从本地直接读取数据
+            result = self._load_local_encrypt_data(self._key)
+            if not result:
+                return {}, "未从本地CookieCloud服务加载到cookie数据,请检查服务器设置、用户KEY及加密密码是否正确"
         else:
-            return None, "CookieCloud请求失败,请检查服务器地址、用户KEY及加密密码是否正确"
+            req_url = "%s/get/%s" % (self._server, str(self._key).strip())
+            ret = self._req.get_res(url=req_url)
+            if ret and ret.status_code == 200:
+                try:
+                    result = ret.json()
+                    if not result:
+                        return {}, f"未从{self._server}下载到cookie数据"
+                except Exception as err:
+                    return {}, f"从{self._server}下载cookie数据错误:{str(err)}"
+            elif ret:
+                return None, f"远程同步CookieCloud失败,错误码:{ret.status_code}"
+            else:
+                return None, "CookieCloud请求失败,请检查服务器地址、用户KEY及加密密码是否正确"
+
+        encrypted = result.get("encrypted")
+        if not encrypted:
+            return {}, "未获取到cookie密文"
+        else:
+            crypt_key = self._get_crypt_key()
+            try:
+                decrypted_data = decrypt(encrypted, crypt_key).decode('utf-8')
+                result = json.loads(decrypted_data)
+            except Exception as e:
+                return {}, "cookie解密失败:" + str(e)
+            if not result:
+                return {}, "cookie解密为空"
+
+        if result.get("cookie_data"):
+            contents = result.get("cookie_data")
+        else:
+            contents = result
+        # 整理数据,使用domain域名的最后两级作为分组依据
+        domain_groups = {}
+        for site, cookies in contents.items():
+            for cookie in cookies:
+                domain_key = StringUtils.get_url_domain(cookie.get("domain"))
+                if not domain_groups.get(domain_key):
+                    domain_groups[domain_key] = [cookie]
+                else:
+                    domain_groups[domain_key].append(cookie)
+        # 返回错误
+        ret_cookies = {}
+        # 索引器
+        for domain, content_list in domain_groups.items():
+            if not content_list:
+                continue
+            # 只有cf的cookie过滤掉
+            cloudflare_cookie = True
+            for content in content_list:
+                if content["name"] != "cf_clearance":
+                    cloudflare_cookie = False
+                    break
+            if cloudflare_cookie:
+                continue
+            # 站点Cookie
+            cookie_str = ";".join(
+                [f"{content.get('name')}={content.get('value')}"
+                 for content in content_list
+                 if content.get("name") and content.get("name") not in self._ignore_cookies]
+            )
+            ret_cookies[domain] = cookie_str
+        return ret_cookies, ""
+
+    def _get_crypt_key(self) -> bytes:
+        """
+        使用UUID和密码生成CookieCloud的加解密密钥
+        """
+        md5_generator = md5()
+        md5_generator.update((str(self._key).strip() + '-' + str(self._password).strip()).encode('utf-8'))
+        return (md5_generator.hexdigest()[:16]).encode('utf-8')
+
+    def _load_local_encrypt_data(self, uuid: str) -> Dict[str, Any]:
+        file_path = self._local_path / f"{uuid}.json"
+        # 检查文件是否存在
+        if not file_path.exists():
+            return {}
+        # 读取文件
+        with open(file_path, encoding="utf-8", mode="r") as file:
+            read_content = file.read()
+        data = json.loads(read_content.encode("utf-8"))
+        return data
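`_get_crypt_key` above derives the decryption key as the first 16 hex characters of `md5(uuid + '-' + password)`. A standalone check of that derivation (the uuid/password values are made up):

```python
from hashlib import md5

def crypt_key(uuid: str, password: str) -> bytes:
    # Same recipe as _get_crypt_key: md5 of "uuid-password", first 16 hex chars
    digest = md5((uuid.strip() + "-" + password.strip()).encode("utf-8")).hexdigest()
    return digest[:16].encode("utf-8")

key = crypt_key("my-uuid", "my-password")
assert len(key) == 16  # 16-byte key (AES-128 sized)
assert key == md5(b"my-uuid-my-password").hexdigest()[:16].encode()
```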

app/helper/doh.py (new file, 156 lines)

@@ -0,0 +1,156 @@
"""
doh函数的实现。
author: https://github.com/C5H12O5/syno-videoinfo-plugin
"""
import base64
import concurrent
import concurrent.futures
import json
import socket
import struct
import urllib
import urllib.request
from typing import Dict, Optional
from app.core.config import settings
from app.log import logger
# 定义一个全局集合来存储注册的主机
_registered_hosts = {
'api.themoviedb.org',
'api.tmdb.org',
'webservice.fanart.tv',
'api.github.com',
'github.com',
'raw.githubusercontent.com',
'api.telegram.org'
}
# 定义一个全局线程池执行器
_executor = concurrent.futures.ThreadPoolExecutor()
# 定义默认的DoH配置
_doh_timeout = 5
_doh_cache: Dict[str, str] = {}
_doh_resolvers = [
# https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https
"1.0.0.1",
"1.1.1.1",
# https://support.quad9.net/hc/en-us
"9.9.9.9",
"149.112.112.112"
]
def _patched_getaddrinfo(host, *args, **kwargs):
"""
socket.getaddrinfo的补丁版本。
"""
if host not in _registered_hosts:
return _orig_getaddrinfo(host, *args, **kwargs)
# 检查主机是否已解析
if host in _doh_cache:
ip = _doh_cache[host]
logger.info("已解析 [%s] 为 [%s] (缓存)", host, ip)
return _orig_getaddrinfo(ip, *args, **kwargs)
# 使用DoH解析主机
futures = []
for resolver in _doh_resolvers:
futures.append(_executor.submit(_doh_query, resolver, host))
for future in concurrent.futures.as_completed(futures):
ip = future.result()
if ip is not None:
logger.info("已解析 [%s] 为 [%s]", host, ip)
_doh_cache[host] = ip
host = ip
break
return _orig_getaddrinfo(host, *args, **kwargs)
# 对 socket.getaddrinfo 进行补丁
if settings.DOH_ENABLE:
_orig_getaddrinfo = socket.getaddrinfo
socket.getaddrinfo = _patched_getaddrinfo
def _doh_query(resolver: str, host: str) -> Optional[str]:
"""
使用给定的DoH解析器查询给定主机的IP地址。
"""
# 构造DNS查询消息RFC 1035
header = b"".join(
[
b"\x00\x00", # ID: 0
b"\x01\x00", # FLAGS: 标准递归查询
b"\x00\x01", # QDCOUNT: 1
b"\x00\x00", # ANCOUNT: 0
b"\x00\x00", # NSCOUNT: 0
b"\x00\x00", # ARCOUNT: 0
]
)
question = b"".join(
[
b"".join(
[
struct.pack("B", len(item)) + item.encode("utf-8")
for item in host.split(".")
]
)
+ b"\x00", # QNAME: 域名序列
b"\x00\x01", # QTYPE: A
b"\x00\x01", # QCLASS: IN
]
)
message = header + question
try:
# 发送GET请求到DoH解析器RFC 8484
b64message = base64.b64encode(message).decode("utf-8").rstrip("=")
url = f"https://{resolver}/dns-query?dns={b64message}"
headers = {"Content-Type": "application/dns-message"}
logger.debug("DoH请求: %s", url)
request = urllib.request.Request(url, headers=headers, method="GET")
with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
logger.debug("解析器(%s)响应: %s", resolver, response.status)
if response.status != 200:
return None
resp_body = response.read()
# 解析DNS响应消息RFC 1035
# name压缩:2 + type:2 + class:2 + ttl:4 + rdlength:2 = 12字节
first_rdata_start = len(header) + len(question) + 12
# rdataA记录= 4字节
first_rdata_end = first_rdata_start + 4
# 将rdata转换为IP地址
return socket.inet_ntoa(resp_body[first_rdata_start:first_rdata_end])
except Exception as e:
logger.error("解析器(%s)请求错误: %s", resolver, e)
return None
def doh_query_json(resolver: str, host: str) -> Optional[str]:
"""
使用给定的DoH解析器查询给定主机的IP地址。
"""
url = f"https://{resolver}/dns-query?name={host}&type=A"
headers = {"Accept": "application/dns-json"}
logger.debug("DoH请求: %s", url)
try:
request = urllib.request.Request(url, headers=headers, method="GET")
with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
logger.debug("解析器(%s)响应: %s", resolver, response.status)
if response.status != 200:
return None
response_body = response.read().decode("utf-8")
logger.debug("<== body: %s", response_body)
answer = json.loads(response_body)["Answer"]
return answer[0]["data"]
except Exception as e:
logger.error("解析器(%s)请求错误: %s", resolver, e)
return None
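The question assembly in `_doh_query` follows RFC 1035: each label of the hostname is length-prefixed, the name is terminated by a zero byte, then QTYPE `A` and QCLASS `IN` follow. A minimal check of just the QNAME encoding (`qname` is an extracted helper, not a function in the file):

```python
import struct

def qname(host: str) -> bytes:
    """RFC 1035 QNAME: length-prefixed labels terminated by a zero byte."""
    return b"".join(
        struct.pack("B", len(label)) + label.encode("utf-8")
        for label in host.split(".")
    ) + b"\x00"

assert qname("api.github.com") == b"\x03api\x06github\x03com\x00"
```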

View File

@@ -1,19 +1,46 @@
+import json
 import queue
+import time
+from typing import Optional, Any, Union

 from app.utils.singleton import Singleton


 class MessageHelper(metaclass=Singleton):
     """
-    消息队列管理器
+    消息队列管理器,包括系统消息和用户消息
     """

     def __init__(self):
-        self.queue = queue.Queue()
+        self.sys_queue = queue.Queue()
+        self.user_queue = queue.Queue()

-    def put(self, message: str):
-        self.queue.put(message)
+    def put(self, message: Any, role: str = "sys", note: Union[list, dict] = None):
+        """
+        存消息
+        :param message: 消息
+        :param role: 消息通道 sys/user
+        :param note: 附件json
+        """
+        if role == "sys":
+            self.sys_queue.put(message)
+        else:
+            if isinstance(message, str):
+                self.user_queue.put(message)
+            elif hasattr(message, "to_dict"):
+                content = message.to_dict()
+                content['date'] = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
+                content['note'] = json.dumps(note) if note else None
+                self.user_queue.put(json.dumps(content))

-    def get(self):
-        if not self.queue.empty():
-            return self.queue.get(block=False)
+    def get(self, role: str = "sys") -> Optional[str]:
+        """
+        取消息
+        :param role: 消息通道 sys/user
+        """
+        if role == "sys":
+            if not self.sys_queue.empty():
+                return self.sys_queue.get(block=False)
+        else:
+            if not self.user_queue.empty():
+                return self.user_queue.get(block=False)
         return None
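Both queues above are drained non-blockingly, returning `None` when empty rather than raising. The core pattern in isolation (the helper function name is illustrative):

```python
import queue

def get_nowait_or_none(q: queue.Queue):
    # Mirror of the empty-check-then-get pattern used by MessageHelper.get
    if not q.empty():
        return q.get(block=False)
    return None

q = queue.Queue()
assert get_nowait_or_none(q) is None
q.put("hello")
assert get_nowait_or_none(q) == "hello"
assert get_nowait_or_none(q) is None
```

Note the `empty()`/`get()` pair is fine for a single consumer; with multiple consumers `get_nowait()` inside a `try/except queue.Empty` avoids the check-then-act race.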

View File

@@ -36,7 +36,7 @@ class ModuleHelper:
                 if isinstance(obj, type) and filter_func(name, obj):
                     submodules.append(obj)
         except Exception as err:
-            logger.error(f'加载模块 {package_name} 失败:{str(err)} - {traceback.format_exc()}')
+            logger.debug(f'加载模块 {package_name} 失败:{str(err)} - {traceback.format_exc()}')
         return submodules

View File

@@ -7,7 +7,9 @@ from typing import Dict, Tuple, Optional, List
 from cachetools import TTLCache, cached

 from app.core.config import settings
+from app.db.systemconfig_oper import SystemConfigOper
 from app.log import logger
+from app.schemas.types import SystemConfigKey
 from app.utils.http import RequestUtils
 from app.utils.singleton import Singleton
 from app.utils.system import SystemUtils
@@ -20,7 +22,20 @@ class PluginHelper(metaclass=Singleton):
     _base_url = "https://raw.githubusercontent.com/%s/%s/main/"

-    @cached(cache=TTLCache(maxsize=100, ttl=1800))
+    _install_reg = "https://movie-pilot.org/plugin/install/%s"
+    _install_report = "https://movie-pilot.org/plugin/install"
+    _install_statistic = "https://movie-pilot.org/plugin/statistic"
+
+    def __init__(self):
+        self.systemconfig = SystemConfigOper()
+        if settings.PLUGIN_STATISTIC_SHARE:
+            if not self.systemconfig.get(SystemConfigKey.PluginInstallReport):
+                if self.install_report():
+                    self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")
+
+    @cached(cache=TTLCache(maxsize=1000, ttl=1800))
     def get_plugins(self, repo_url: str) -> Dict[str, dict]:
         """
         获取Github所有最新插件列表
@@ -61,6 +76,51 @@ class PluginHelper(metaclass=Singleton):
             return None, None
         return user, repo

+    @cached(cache=TTLCache(maxsize=1, ttl=1800))
+    def get_statistic(self) -> Dict:
+        """
+        获取插件安装统计
+        """
+        if not settings.PLUGIN_STATISTIC_SHARE:
+            return {}
+        res = RequestUtils(timeout=10).get_res(self._install_statistic)
+        if res and res.status_code == 200:
+            return res.json()
+        return {}
+
+    def install_reg(self, pid: str) -> bool:
+        """
+        安装插件统计
+        """
+        if not settings.PLUGIN_STATISTIC_SHARE:
+            return False
+        if not pid:
+            return False
+        res = RequestUtils(timeout=5).get_res(self._install_reg % pid)
+        if res and res.status_code == 200:
+            return True
+        return False
+
+    def install_report(self) -> bool:
+        """
+        上报存量插件安装统计
+        """
+        if not settings.PLUGIN_STATISTIC_SHARE:
+            return False
+        plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins)
+        if not plugins:
+            return False
+        res = RequestUtils(content_type="application/json",
+                           timeout=5).post(self._install_report,
+                                           json={
+                                               "plugins": [
+                                                   {
+                                                       "plugin_id": plugin,
+                                                   } for plugin in plugins
+                                               ]
+                                           })
+        return True if res else False
+
     def install(self, pid: str, repo_url: str) -> Tuple[bool, str]:
         """
         安装插件
@@ -77,10 +137,13 @@ class PluginHelper(metaclass=Singleton):
""" """
获取插件的文件列表 获取插件的文件列表
""" """
file_api = f"https://api.github.com/repos/{user}/{repo}/contents/plugins/{_p.lower()}" file_api = f"https://api.github.com/repos/{user}/{repo}/contents/plugins/{_p}"
r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS, timeout=30).get_res(file_api) r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS, timeout=30).get_res(file_api)
if not r or r.status_code != 200: if r is None:
return None, f"连接仓库失败{r.status_code} - {r.reason}" return None, "连接仓库失败"
elif r.status_code != 200:
return None, f"连接仓库失败:{r.status_code} - " \
f"{'超出速率限制请配置GITHUB_TOKEN环境变量或稍后重试' if r.status_code == 403 else r.reason}"
ret = r.json() ret = r.json()
if ret and ret[0].get("message") == "Not Found": if ret and ret[0].get("message") == "Not Found":
return None, "插件在仓库中不存在" return None, "插件在仓库中不存在"
@@ -100,7 +163,8 @@ class PluginHelper(metaclass=Singleton):
             if not res:
                 return False, f"文件 {item.get('name')} 下载失败!"
             elif res.status_code != 200:
-                return False, f"下载文件 {item.get('name')} 失败:{res.status_code} - {res.reason}"
+                return False, f"下载文件 {item.get('name')} 失败:{res.status_code} - " \
+                              f"{'超出速率限制,请配置GITHUB_TOKEN环境变量或稍后重试' if res.status_code == 403 else res.reason}"
             # 创建插件文件夹
             file_path = Path(settings.ROOT_PATH) / "app" / item.get("path")
             if not file_path.parent.exists():
@@ -154,4 +218,7 @@ class PluginHelper(metaclass=Singleton):
         requirements_file = plugin_dir / "requirements.txt"
         if requirements_file.exists():
             SystemUtils.execute(f"pip install -r {requirements_file} > /dev/null 2>&1")
+        # 安装成功后统计
+        self.install_reg(pid)
+
         return True, ""

View File

@@ -55,11 +55,7 @@ class ResourceHelper(metaclass=Singleton):
             target = resource.get("target")
             version = resource.get("version")
             # 判断平台
-            if platform and platform != SystemUtils.platform:
-                continue
-            # 判断本地是否存在
-            local_path = self._base_dir / target / rname
-            if not local_path.exists():
+            if platform and platform != SystemUtils.platform():
                 continue
             # 判断版本号
             if rtype == "auth":
@@ -80,8 +76,10 @@ class ResourceHelper(metaclass=Singleton):
         # 下载文件信息列表
         r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
                          timeout=30).get_res(self._files_api)
-        if not r or r.status_code != 200:
-            return None, f"连接仓库失败:{r.status_code} - {r.reason}"
+        if r and not r.ok:
+            return None, f"连接仓库失败:{r.status_code} - {r.reason}"
+        elif not r:
+            return None, "连接仓库失败"
         files_info = r.json()
         for item in files_info:
             save_path = need_updates.get(item.get("name"))

View File

@@ -225,7 +225,7 @@ class RssHelper:
     }

     @staticmethod
-    def parse(url, proxy: bool = False, timeout: int = 30) -> Union[List[dict], None]:
+    def parse(url, proxy: bool = False, timeout: int = 15) -> Union[List[dict], None]:
         """
         解析RSS订阅URL,获取RSS中的种子信息
         :param url: RSS地址

app/helper/subscribe.py (new file, 122 lines)

@@ -0,0 +1,122 @@
from threading import Thread
from typing import List
from cachetools import TTLCache, cached
from app.core.config import settings
from app.db.subscribe_oper import SubscribeOper
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
class SubscribeHelper(metaclass=Singleton):
"""
订阅数据统计
"""
_sub_reg = "https://movie-pilot.org/subscribe/add"
_sub_done = "https://movie-pilot.org/subscribe/done"
_sub_report = "https://movie-pilot.org/subscribe/report"
_sub_statistic = "https://movie-pilot.org/subscribe/statistic"
def __init__(self):
self.systemconfig = SystemConfigOper()
if settings.SUBSCRIBE_STATISTIC_SHARE:
if not self.systemconfig.get(SystemConfigKey.SubscribeReport):
if self.sub_report():
self.systemconfig.set(SystemConfigKey.SubscribeReport, "1")
@cached(cache=TTLCache(maxsize=20, ttl=1800))
def get_statistic(self, stype: str, page: int = 1, count: int = 30) -> List[dict]:
"""
获取订阅统计数据
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return []
res = RequestUtils(timeout=15).get_res(self._sub_statistic, params={
"stype": stype,
"page": page,
"count": count
})
if res and res.status_code == 200:
return res.json()
return []
def sub_reg(self, sub: dict) -> bool:
"""
新增订阅统计
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return False
res = RequestUtils(timeout=5, headers={
"Content-Type": "application/json"
}).post_res(self._sub_reg, json=sub)
if res and res.status_code == 200:
return True
return False
def sub_done(self, sub: dict) -> bool:
"""
完成订阅统计
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return False
res = RequestUtils(timeout=5, headers={
"Content-Type": "application/json"
}).post_res(self._sub_done, json=sub)
if res and res.status_code == 200:
return True
return False
def sub_reg_async(self, sub: dict) -> bool:
"""
异步新增订阅统计
"""
# 开新线程处理
Thread(target=self.sub_reg, args=(sub,)).start()
return True
def sub_done_async(self, sub: dict) -> bool:
"""
异步完成订阅统计
"""
# 开新线程处理
Thread(target=self.sub_done, args=(sub,)).start()
return True
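The `sub_reg_async` / `sub_done_async` variants above fire the report on a worker thread so the subscription flow never blocks on network latency. A minimal standalone sketch of the same fire-and-forget pattern (the `report` / `report_async` names are illustrative, not MoviePilot APIs):

```python
from threading import Thread

results = []

def report(payload: dict) -> bool:
    """Stand-in for the blocking sub_reg/sub_done network call."""
    results.append(payload)
    return True

def report_async(payload: dict) -> Thread:
    """Start the report on its own thread and return immediately."""
    t = Thread(target=report, args=(payload,))
    t.start()
    return t
```

The caller gets control back right away; if delivery confirmation matters, keep the returned `Thread` and `join()` it later.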
def sub_report(self) -> bool:
"""
上报存量订阅统计
"""
if not settings.SUBSCRIBE_STATISTIC_SHARE:
return False
subscribes = SubscribeOper().list()
if not subscribes:
return True
res = RequestUtils(content_type="application/json",
timeout=10).post(self._sub_report,
json={
"subscribes": [
{
"name": sub.name,
"year": sub.year,
"type": sub.type,
"tmdbid": sub.tmdbid,
"imdbid": sub.imdbid,
"tvdbid": sub.tvdbid,
"doubanid": sub.doubanid,
"bangumiid": sub.bangumiid,
"season": sub.season,
"poster": sub.poster,
"backdrop": sub.backdrop,
"vote": sub.vote,
"description": sub.description
} for sub in subscribes
]
})
return True if res else False
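`get_statistic` above wraps the remote query in `@cached(cache=TTLCache(maxsize=20, ttl=1800))`, so identical statistic requests within 30 minutes are served from memory. A stdlib-only sketch of that memoize-with-expiry pattern (`ttl_cached` and `fetch_statistic` are illustrative stand-ins for the cachetools decorator and the real request):

```python
import time
from functools import wraps

def ttl_cached(ttl: float, maxsize: int = 20):
    """Minimal stand-in for cachetools' @cached(TTLCache(...)):
    memoize by positional args, expire entries after ttl seconds."""
    def decorator(func):
        cache = {}  # args tuple -> (expiry timestamp, value)
        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit and hit[0] > now:
                return hit[1]
            value = func(*args)
            if len(cache) >= maxsize:
                cache.pop(next(iter(cache)))  # evict oldest insertion
            cache[args] = (now + ttl, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cached(ttl=1800)
def fetch_statistic(stype: str, page: int, count: int) -> list:
    """Pretend remote statistics request; body runs once per cached key."""
    calls.append(stype)
    return [{"stype": stype, "page": page, "count": count}]
```

Per-argument caching means `("movie", 1, 30)` and `("tv", 1, 30)` are cached independently, matching how each statistic page is fetched at most once per TTL window.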


@@ -322,29 +322,56 @@ class TorrentHelper(metaclass=Singleton):
logger.error(f"解析大小范围失败:{str(e)} - {traceback.format_exc()}") logger.error(f"解析大小范围失败:{str(e)} - {traceback.format_exc()}")
return 0, 0 return 0, 0
def __get_pubminutes(pubdate: str) -> float:
"""
将字符串转换为时间,并计算与当前时间差(分钟)
"""
try:
if not pubdate:
return 0
pubdate = pubdate.replace("T", " ").replace("Z", "")
pubdate = datetime.datetime.strptime(pubdate, "%Y-%m-%d %H:%M:%S")
now = datetime.datetime.now()
return (now - pubdate).total_seconds() // 60
except Exception as e:
logger.error(str(e))
return 0
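`__get_pubminutes` normalizes both the `2024-05-09T18:00:00Z` and plain `2024-05-09 18:00:00` forms before parsing, and the new `min_seeders_time` rule uses the result as a grace window: the minimum-seeders check only applies once the torrent is older than that many minutes. A standalone sketch of both pieces (function names are illustrative, and a `now` parameter is added for testability):

```python
import datetime

def pub_minutes(pubdate: str, now: datetime.datetime = None) -> float:
    """Whole minutes elapsed since pubdate; accepts "T"/"Z" ISO variants, 0 on failure."""
    try:
        if not pubdate:
            return 0
        cleaned = pubdate.replace("T", " ").replace("Z", "")
        parsed = datetime.datetime.strptime(cleaned, "%Y-%m-%d %H:%M:%S")
        now = now or datetime.datetime.now()
        return (now - parsed).total_seconds() // 60
    except ValueError:
        return 0

def seeders_ok(seeders: int, min_seeders: int, min_seeders_time: int,
               pubdate: str, now: datetime.datetime = None) -> bool:
    """Enforce min_seeders only after the torrent leaves the grace window (minutes)."""
    if seeders >= min_seeders:
        return True
    if min_seeders_time and pub_minutes(pubdate, now) <= min_seeders_time:
        return True  # freshly published: give seeders time to appear
    return False
```

This is why a torrent published 10 minutes ago with zero seeders can still pass a `min_seeders=5, min_seeders_time=120` rule, while the same torrent fails after two hours.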
if not filter_rule: if not filter_rule:
return True return True
# 匹配内容 # 匹配内容
content = f"{torrent_info.title} {torrent_info.description} {' '.join(torrent_info.labels or [])}" content = (f"{torrent_info.title} "
f"{torrent_info.description} "
f"{' '.join(torrent_info.labels or [])} "
f"{torrent_info.volume_factor}")
# 最少做种人数 # 最少做种人数
min_seeders = filter_rule.get("min_seeders") min_seeders = filter_rule.get("min_seeders")
if min_seeders and torrent_info.seeders < int(min_seeders): if min_seeders and torrent_info.seeders < int(min_seeders):
logger.info(f"{torrent_info.title} 做种人数不足 {min_seeders}") # 最少做种人数生效发布时间(分钟)(在设置发布时间之外的最少做种人数生效)
return False min_seeders_time = filter_rule.get("min_seeders_time") or 0
if min_seeders_time:
# 发布时间与当前时间差(分钟)
pubdate_minutes = __get_pubminutes(torrent_info.pubdate)
if pubdate_minutes > int(min_seeders_time):
logger.info(f"{torrent_info.title} 发布时间大于 {min_seeders_time} 分钟,做种人数不足 {min_seeders}")
return False
else:
logger.info(f"{torrent_info.title} 做种人数不足 {min_seeders}")
return False
# 包含 # 包含
include = filter_rule.get("include") include = filter_rule.get("include")
if include: if include:
if not re.search(r"%s" % include, content, re.I): if not re.search(r"%s" % include, content, re.I):
logger.info(f"{torrent_info.title} 不匹配包含规则 {include}") logger.info(f"{content} 不匹配包含规则 {include}")
return False return False
# 排除 # 排除
exclude = filter_rule.get("exclude") exclude = filter_rule.get("exclude")
if exclude: if exclude:
if re.search(r"%s" % exclude, content, re.I): if re.search(r"%s" % exclude, content, re.I):
logger.info(f"{torrent_info.title} 匹配排除规则 {exclude}") logger.info(f"{content} 匹配排除规则 {exclude}")
return False return False
# 质量 # 质量
quality = filter_rule.get("quality") quality = filter_rule.get("quality")
@@ -399,3 +426,84 @@ class TorrentHelper(metaclass=Singleton):
f"{torrent_info.title} {StringUtils.str_filesize(torrent_info.size)} 不匹配大小规则 {size}") f"{torrent_info.title} {StringUtils.str_filesize(torrent_info.size)} 不匹配大小规则 {size}")
return False return False
return True return True
@staticmethod
def match_torrent(mediainfo: MediaInfo, torrent_meta: MetaInfo,
torrent: TorrentInfo, logerror: bool = True) -> bool:
"""
检查种子是否匹配媒体信息
:param mediainfo: 需要匹配的媒体信息
:param torrent_meta: 种子识别信息
:param torrent: 种子信息
:param logerror: 是否记录错误日志
"""
# 要匹配的媒体标题、原标题
media_titles = {
StringUtils.clear_upper(mediainfo.title),
StringUtils.clear_upper(mediainfo.original_title)
} - {""}
# 要匹配的媒体别名、译名
media_names = {StringUtils.clear_upper(name) for name in mediainfo.names if name}
# 识别的种子中英文名
meta_names = {
StringUtils.clear_upper(torrent_meta.cn_name),
StringUtils.clear_upper(torrent_meta.en_name)
} - {""}
# 比对种子识别类型
if torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 种子标题类型为 {torrent_meta.type.value}'
f'不匹配 {mediainfo.type.value}')
return False
# 比对种子在站点中的类型
if torrent.category == MediaType.TV.value and mediainfo.type != MediaType.TV:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 种子在站点中归类为 {torrent.category}'
f'不匹配 {mediainfo.type.value}')
return False
# 比对年份
if mediainfo.year:
if mediainfo.type == MediaType.TV:
# 剧集年份,每季的年份可能不同
if torrent_meta.year and torrent_meta.year not in [year for year in
mediainfo.season_years.values()]:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配 {mediainfo.season_years}')
return False
else:
# 电影年份上下浮动1年
if torrent_meta.year not in [str(int(mediainfo.year) - 1),
mediainfo.year,
str(int(mediainfo.year) + 1)]:
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配 {mediainfo.year}')
return False
# 比对标题和原语种标题
if meta_names.intersection(media_titles):
logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
return True
# 比对别名和译名
if media_names:
if meta_names.intersection(media_names):
logger.info(f'{mediainfo.title} 通过别名或译名匹配到资源:{torrent.site_name} - {torrent.title}')
return True
# 标题拆分
if torrent_meta.org_string:
titles = [StringUtils.clear_upper(t) for t in re.split(r'[\s/【】.\[\]\-]+',
torrent_meta.org_string) if t]
# 在标题中判断是否存在标题、原语种标题
if media_titles.intersection(titles):
logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
return True
# 在副标题中判断是否存在标题、原语种标题、别名、译名
if torrent.description:
subtitles = {StringUtils.clear_upper(t) for t in re.split(r'[\s/【】|]+',
torrent.description) if t}
if media_titles.intersection(subtitles) or media_names.intersection(subtitles):
logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
f'副标题:{torrent.description}')
return True
# 未匹配
if logerror:
logger.warn(f'{torrent.site_name} - {torrent.title} 标题不匹配,识别名称:{meta_names}')
return False
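`match_torrent` above reduces title comparison to set intersections over normalized names (`StringUtils.clear_upper` applied to titles, aliases, and the split `org_string`/description). A self-contained sketch of that idea — note `clear_upper` here is an illustrative stand-in that uppercases and strips non-alphanumerics, which may differ from the real `StringUtils.clear_upper`:

```python
import re

def clear_upper(text: str) -> str:
    """Illustrative normalizer: uppercase and drop punctuation/whitespace/underscores."""
    return re.sub(r"[\W_]+", "", text or "").upper()

def title_matches(media_titles: list, torrent_names: list) -> bool:
    """True if any normalized torrent name equals any normalized media title."""
    media = {clear_upper(t) for t in media_titles} - {""}
    names = {clear_upper(t) for t in torrent_names} - {""}
    return bool(media & names)
```

Normalizing first means `"Stranger.Things"` in a release name still matches the media title `"Stranger Things"`, while partial overlaps like `"Dune"` vs `"Dunes"` do not.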


@@ -53,10 +53,13 @@ def init_routers():
""" """
from app.api.apiv1 import api_router from app.api.apiv1 import api_router
from app.api.servarr import arr_router from app.api.servarr import arr_router
from app.api.servcookie import cookie_router
# API路由 # API路由
App.include_router(api_router, prefix=settings.API_V1_STR) App.include_router(api_router, prefix=settings.API_V1_STR)
# Radarr、Sonarr路由 # Radarr、Sonarr路由
App.include_router(arr_router, prefix="/api/v3") App.include_router(arr_router, prefix="/api/v3")
# CookieCloud路由
App.include_router(cookie_router, prefix="/cookiecloud")
def start_frontend(): def start_frontend():
@@ -192,8 +195,10 @@ def start_module():
ResourceHelper() ResourceHelper()
# 加载模块 # 加载模块
ModuleManager() ModuleManager()
# 安装在线插件
PluginManager().install_online_plugin()
# 加载插件 # 加载插件
PluginManager() PluginManager().start()
# 启动定时服务 # 启动定时服务
Scheduler() Scheduler()
# 启动事件消费 # 启动事件消费


@@ -0,0 +1,141 @@
from typing import List, Optional, Tuple, Union
from app import schemas
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.log import logger
from app.modules import _ModuleBase
from app.modules.bangumi.bangumi import BangumiApi
from app.utils.http import RequestUtils
class BangumiModule(_ModuleBase):
bangumiapi: BangumiApi = None
def init_module(self) -> None:
self.bangumiapi = BangumiApi()
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
with RequestUtils().get_res("https://api.bgm.tv/") as ret:
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接Bangumi错误码{ret.status_code}"
return False, "Bangumi网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def recognize_media(self, bangumiid: int = None,
**kwargs) -> Optional[MediaInfo]:
"""
识别媒体信息
:param bangumiid: 识别的Bangumi ID
:return: 识别的媒体信息,包括剧集信息
"""
if not bangumiid:
return None
# 直接查询详情
info = self.bangumi_info(bangumiid=bangumiid)
if info:
# 赋值TMDB信息并返回
mediainfo = MediaInfo(bangumi_info=info)
logger.info(f"{bangumiid} Bangumi识别结果{mediainfo.type.value} "
f"{mediainfo.title_year}")
return mediainfo
else:
logger.info(f"{bangumiid} 未匹配到Bangumi媒体信息")
return None
def search_medias(self, meta: MetaBase) -> Optional[List[MediaInfo]]:
"""
搜索媒体信息
:param meta: 识别的元数据
:return: 媒体信息
"""
if settings.SEARCH_SOURCE and "bangumi" not in settings.SEARCH_SOURCE:
return None
if not meta.name:
return []
infos = self.bangumiapi.search(meta.name)
if infos:
return [MediaInfo(bangumi_info=info) for info in infos
if meta.name.lower() in str(info.get("name")).lower()
or meta.name.lower() in str(info.get("name_cn")).lower()]
return []
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
:param bangumiid: BangumiID
:return: Bangumi信息
"""
if not bangumiid:
return None
logger.info(f"开始获取Bangumi信息{bangumiid} ...")
return self.bangumiapi.detail(bangumiid)
def bangumi_calendar(self) -> Optional[List[MediaInfo]]:
"""
获取Bangumi每日放送
"""
infos = self.bangumiapi.calendar()
if infos:
return [MediaInfo(bangumi_info=info) for info in infos]
return []
def bangumi_credits(self, bangumiid: int) -> List[schemas.MediaPerson]:
"""
根据BangumiID查询演职员表
:param bangumiid: BangumiID
"""
persons = self.bangumiapi.credits(bangumiid)
if persons:
return [schemas.MediaPerson(source='bangumi', **person) for person in persons]
return []
def bangumi_recommend(self, bangumiid: int) -> List[MediaInfo]:
"""
根据BangumiID查询推荐电影
:param bangumiid: BangumiID
"""
subjects = self.bangumiapi.subjects(bangumiid)
if subjects:
return [MediaInfo(bangumi_info=subject) for subject in subjects]
return []
def bangumi_person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
获取人物详细信息
:param person_id: Bangumi人物ID
"""
personinfo = self.bangumiapi.person_detail(person_id)
if personinfo:
return schemas.MediaPerson(source='bangumi', **{
"id": personinfo.get("id"),
"name": personinfo.get("name"),
"images": personinfo.get("images"),
"biography": personinfo.get("summary"),
"birthday": personinfo.get("birth_day"),
"gender": personinfo.get("gender")
})
return None
def bangumi_person_credits(self, person_id: int) -> List[MediaInfo]:
"""
根据人物ID查询参演作品
:param person_id: 人物ID
"""
credits_info = self.bangumiapi.person_credits(person_id=person_id)
if credits_info:
return [MediaInfo(bangumi_info=credit) for credit in credits_info]
return []


@@ -0,0 +1,194 @@
from datetime import datetime
from functools import lru_cache
import requests
from app.utils.http import RequestUtils
class BangumiApi(object):
"""
https://bangumi.github.io/api/
"""
_urls = {
"search": "search/subjects/%s?type=2",
"calendar": "calendar",
"detail": "v0/subjects/%s",
"credits": "v0/subjects/%s/persons",
"subjects": "v0/subjects/%s/subjects",
"characters": "v0/subjects/%s/characters",
"person_detail": "v0/persons/%s",
"person_credits": "v0/persons/%s/subjects",
}
_base_url = "https://api.bgm.tv/"
_req = RequestUtils(session=requests.Session())
def __init__(self):
pass
@classmethod
@lru_cache(maxsize=128)
def __invoke(cls, url, **kwargs):
req_url = cls._base_url + url
params = {}
if kwargs:
params.update(kwargs)
resp = cls._req.get_res(url=req_url, params=params)
try:
return resp.json() if resp else None
except Exception as e:
print(e)
return None
def search(self, name):
"""
搜索媒体信息
"""
result = self.__invoke("search/subject/%s" % name)
if result:
return result.get("list")
return []
def calendar(self):
"""
获取每日放送返回items
"""
"""
[
{
"weekday": {
"en": "Mon",
"cn": "星期一",
"ja": "月耀日",
"id": 1
},
"items": [
{
"id": 350235,
"url": "http://bgm.tv/subject/350235",
"type": 2,
"name": "月が導く異世界道中 第二幕",
"name_cn": "月光下的异世界之旅 第二幕",
"summary": "",
"air_date": "2024-01-08",
"air_weekday": 1,
"rating": {
"total": 257,
"count": {
"1": 1,
"2": 1,
"3": 4,
"4": 15,
"5": 51,
"6": 111,
"7": 49,
"8": 13,
"9": 5,
"10": 7
},
"score": 6.1
},
"rank": 6125,
"images": {
"large": "http://lain.bgm.tv/pic/cover/l/3c/a5/350235_A0USf.jpg",
"common": "http://lain.bgm.tv/pic/cover/c/3c/a5/350235_A0USf.jpg",
"medium": "http://lain.bgm.tv/pic/cover/m/3c/a5/350235_A0USf.jpg",
"small": "http://lain.bgm.tv/pic/cover/s/3c/a5/350235_A0USf.jpg",
"grid": "http://lain.bgm.tv/pic/cover/g/3c/a5/350235_A0USf.jpg"
},
"collection": {
"doing": 920
}
},
{
"id": 358561,
"url": "http://bgm.tv/subject/358561",
"type": 2,
"name": "大宇宙时代",
"name_cn": "大宇宙时代",
"summary": "",
"air_date": "2024-01-22",
"air_weekday": 1,
"rating": {
"total": 2,
"count": {
"1": 0,
"2": 0,
"3": 0,
"4": 0,
"5": 1,
"6": 1,
"7": 0,
"8": 0,
"9": 0,
"10": 0
},
"score": 5.5
},
"images": {
"large": "http://lain.bgm.tv/pic/cover/l/71/66/358561_UzsLu.jpg",
"common": "http://lain.bgm.tv/pic/cover/c/71/66/358561_UzsLu.jpg",
"medium": "http://lain.bgm.tv/pic/cover/m/71/66/358561_UzsLu.jpg",
"small": "http://lain.bgm.tv/pic/cover/s/71/66/358561_UzsLu.jpg",
"grid": "http://lain.bgm.tv/pic/cover/g/71/66/358561_UzsLu.jpg"
},
"collection": {
"doing": 9
}
}
]
}
]
"""
ret_list = []
result = self.__invoke(self._urls["calendar"], _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
if result:
for item in result:
ret_list.extend(item.get("items") or [])
return ret_list
def detail(self, bid: int):
"""
获取番剧详情
"""
return self.__invoke(self._urls["detail"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
def credits(self, bid: int):
"""
获取番剧人物
"""
ret_list = []
result = self.__invoke(self._urls["characters"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
if result:
for item in result:
character_id = item.get("id")
actors = item.get("actors")
if character_id and actors and actors[0]:
actor_info = actors[0]
actor_info.update({'career': [item.get('name')]})
ret_list.append(actor_info)
return ret_list
def subjects(self, bid: int):
"""
获取关联条目信息
"""
return self.__invoke(self._urls["subjects"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
def person_detail(self, person_id: int):
"""
获取人物详细信息
"""
return self.__invoke(self._urls["person_detail"] % person_id, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
def person_credits(self, person_id: int):
"""
获取人物参演作品
"""
ret_list = []
result = self.__invoke(self._urls["person_credits"] % person_id, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
if result:
for item in result:
ret_list.append(item)
return ret_list
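Every call into `__invoke` above appends `_ts=datetime.strftime(datetime.now(), '%Y%m%d')`. Because `__invoke` is wrapped in `@lru_cache`, the date string becomes part of the cache key, so the `lru_cache` effectively expires at midnight without a TTL cache. A minimal sketch of the trick (`invoke`/`detail` are illustrative, and `today` is a parameter added for testability):

```python
from datetime import datetime
from functools import lru_cache

hits = []

@lru_cache(maxsize=128)
def invoke(url: str, _ts: str = ""):
    """Pretend remote call; `_ts` only widens the cache key and is otherwise ignored."""
    hits.append(url)
    return {"url": url}

def detail(bid: int, today: str = None):
    # Same URL + same date stamp -> cache hit; a new day -> fresh request.
    stamp = today or datetime.now().strftime("%Y%m%d")
    return invoke("v0/subjects/%s" % bid, _ts=stamp)
```

The trade-off versus a real TTL cache is that stale entries for past days linger until `lru_cache` evicts them by size, which is acceptable at `maxsize=128`.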


@@ -2,15 +2,19 @@ import re
from pathlib import Path from pathlib import Path
from typing import List, Optional, Tuple, Union from typing import List, Optional, Tuple, Union
import cn2an
from app import schemas
from app.core.config import settings from app.core.config import settings
from app.core.context import MediaInfo from app.core.context import MediaInfo
from app.core.meta import MetaBase from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo from app.core.metainfo import MetaInfo, MetaInfoPath
from app.log import logger from app.log import logger
from app.modules import _ModuleBase from app.modules import _ModuleBase
from app.modules.douban.apiv2 import DoubanApi from app.modules.douban.apiv2 import DoubanApi
from app.modules.douban.douban_cache import DoubanCache from app.modules.douban.douban_cache import DoubanCache
from app.modules.douban.scraper import DoubanScraper from app.modules.douban.scraper import DoubanScraper
from app.schemas import MediaPerson
from app.schemas.types import MediaType from app.schemas.types import MediaType
from app.utils.common import retry from app.utils.common import retry
from app.utils.http import RequestUtils from app.utils.http import RequestUtils
@@ -28,17 +32,17 @@ class DoubanModule(_ModuleBase):
self.cache = DoubanCache() self.cache = DoubanCache()
def stop(self): def stop(self):
pass self.doubanapi.close()
def test(self) -> Tuple[bool, str]: def test(self) -> Tuple[bool, str]:
""" """
测试模块连接性 测试模块连接性
""" """
ret = RequestUtils().get_res("https://movie.douban.com/") with RequestUtils().get_res("https://movie.douban.com/") as ret:
if ret and ret.status_code == 200: if ret and ret.status_code == 200:
return True, "" return True, ""
elif ret: elif ret:
return False, f"无法连接豆瓣,错误码:{ret.status_code}" return False, f"无法连接豆瓣,错误码:{ret.status_code}"
return False, "豆瓣网络连接失败" return False, "豆瓣网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]: def init_setting(self) -> Tuple[str, Union[str, bool]]:
@@ -57,48 +61,59 @@ class DoubanModule(_ModuleBase):
:param cache: 是否使用缓存 :param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息 :return: 识别的媒体信息,包括剧集信息
""" """
if settings.RECOGNIZE_SOURCE != "douban": if not doubanid and not meta:
return None
if meta and not doubanid \
and settings.RECOGNIZE_SOURCE != "douban":
return None return None
if not meta: if not meta:
# 未提供元数据时,直接查询豆瓣信息,不使用缓存
cache_info = {} cache_info = {}
elif not meta.name: elif not meta.name:
logger.error("识别媒体信息时未提供元数据名称") logger.error("识别媒体信息时未提供元数据名称")
return None return None
else: else:
# 读取缓存
if mtype: if mtype:
meta.type = mtype meta.type = mtype
if doubanid: if doubanid:
meta.doubanid = doubanid meta.doubanid = doubanid
# 读取缓存
cache_info = self.cache.get(meta) cache_info = self.cache.get(meta)
# 识别豆瓣信息
if not cache_info or not cache: if not cache_info or not cache:
# 缓存没有或者强制不使用缓存 # 缓存没有或者强制不使用缓存
if doubanid: if doubanid:
# 直接查询详情 # 直接查询详情
info = self.douban_info(doubanid=doubanid, mtype=mtype or meta.type) info = self.douban_info(doubanid=doubanid, mtype=mtype or meta.type)
elif meta: elif meta:
if meta.begin_season: info = {}
logger.info(f"正在识别 {meta.name}{meta.begin_season}季 ...") # 使用中英文名分别识别,去重去空,但要保持顺序
else: names = list(dict.fromkeys([k for k in [meta.cn_name, meta.en_name] if k]))
logger.info(f"正在识别 {meta.name} ...") for name in names:
# 匹配豆瓣信息 if meta.begin_season:
match_info = self.match_doubaninfo(name=meta.name, logger.info(f"正在识别 {name}{meta.begin_season}季 ...")
mtype=mtype or meta.type, else:
year=meta.year, logger.info(f"正在识别 {name} ...")
season=meta.begin_season) # 匹配豆瓣信息
if match_info: match_info = self.match_doubaninfo(name=name,
# 匹配到豆瓣信息 mtype=mtype or meta.type,
info = self.douban_info( year=meta.year,
doubanid=match_info.get("id"), season=meta.begin_season)
mtype=mtype or meta.type if match_info:
) # 匹配到豆瓣信息
else: info = self.douban_info(
logger.info(f"{meta.name if meta else doubanid} 未匹配到豆瓣媒体信息") doubanid=match_info.get("id"),
return None mtype=mtype or meta.type
)
if info:
break
else: else:
logger.error("识别媒体信息时未提供元数据或豆瓣ID") logger.error("识别媒体信息时未提供元数据或豆瓣ID")
return None return None
# 保存到缓存 # 保存到缓存
if meta and cache: if meta and cache:
self.cache.update(meta, info) self.cache.update(meta, info)
@@ -436,7 +451,7 @@ class DoubanModule(_ModuleBase):
return __douban_movie() or __douban_tv() return __douban_movie() or __douban_tv()
def douban_discover(self, mtype: MediaType, sort: str, tags: str, def douban_discover(self, mtype: MediaType, sort: str, tags: str,
page: int = 1, count: int = 30) -> Optional[List[dict]]: page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
""" """
发现豆瓣电影、剧集 发现豆瓣电影、剧集
:param mtype: 媒体类型 :param mtype: 媒体类型
@@ -453,69 +468,75 @@ class DoubanModule(_ModuleBase):
else: else:
infos = self.doubanapi.tv_recommend(start=(page - 1) * count, count=count, infos = self.doubanapi.tv_recommend(start=(page - 1) * count, count=count,
sort=sort, tags=tags) sort=sort, tags=tags)
if not infos: if infos:
return [] medias = [MediaInfo(douban_info=info) for info in infos.get("items")]
return infos.get("items") or [] return [media for media in medias if media.poster_path
and "movie_large.jpg" not in media.poster_path
and "tv_normal.png" not in media.poster_path
and "tv_normal.jpg" not in media.poster_path
and "tv_large.jpg" not in media.poster_path]
return []
return []
def movie_showing(self, page: int = 1, count: int = 30) -> List[dict]: def movie_showing(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取正在上映的电影 获取正在上映的电影
""" """
infos = self.doubanapi.movie_showing(start=(page - 1) * count, infos = self.doubanapi.movie_showing(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> List[dict]: def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取豆瓣本周口碑国产剧 获取豆瓣本周口碑国产剧
""" """
infos = self.doubanapi.tv_chinese_best_weekly(start=(page - 1) * count, infos = self.doubanapi.tv_chinese_best_weekly(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def tv_weekly_global(self, page: int = 1, count: int = 30) -> List[dict]: def tv_weekly_global(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取豆瓣本周口碑外国剧 获取豆瓣本周口碑外国剧
""" """
infos = self.doubanapi.tv_global_best_weekly(start=(page - 1) * count, infos = self.doubanapi.tv_global_best_weekly(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def tv_animation(self, page: int = 1, count: int = 30) -> List[dict]: def tv_animation(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取豆瓣动画剧 获取豆瓣动画剧
""" """
infos = self.doubanapi.tv_animation(start=(page - 1) * count, infos = self.doubanapi.tv_animation(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def movie_hot(self, page: int = 1, count: int = 30) -> List[dict]: def movie_hot(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取豆瓣热门电影 获取豆瓣热门电影
""" """
infos = self.doubanapi.movie_hot_gaia(start=(page - 1) * count, infos = self.doubanapi.movie_hot_gaia(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def tv_hot(self, page: int = 1, count: int = 30) -> List[dict]: def tv_hot(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取豆瓣热门剧集 获取豆瓣热门剧集
""" """
infos = self.doubanapi.tv_hot(start=(page - 1) * count, infos = self.doubanapi.tv_hot(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def search_medias(self, meta: MetaBase) -> Optional[List[MediaInfo]]: def search_medias(self, meta: MetaBase) -> Optional[List[MediaInfo]]:
""" """
@@ -523,10 +544,8 @@ class DoubanModule(_ModuleBase):
:param meta: 识别的元数据 :param meta: 识别的元数据
:return: 媒体信息 :return: 媒体信息
""" """
# 未启用豆瓣搜索时返回None if settings.SEARCH_SOURCE and "douban" not in settings.SEARCH_SOURCE:
if settings.RECOGNIZE_SOURCE != "douban":
return None return None
if not meta.name: if not meta.name:
return [] return []
result = self.doubanapi.search(meta.name) result = self.doubanapi.search(meta.name)
@@ -539,10 +558,37 @@ class DoubanModule(_ModuleBase):
continue continue
if item_obj.get("type_name") not in (MediaType.TV.value, MediaType.MOVIE.value): if item_obj.get("type_name") not in (MediaType.TV.value, MediaType.MOVIE.value):
continue continue
if meta.name not in item_obj.get("target", {}).get("title"):
continue
ret_medias.append(MediaInfo(douban_info=item_obj.get("target"))) ret_medias.append(MediaInfo(douban_info=item_obj.get("target")))
# 将搜索词中的季写入标题中
if ret_medias and meta.begin_season:
# 数字转为中文小写
season_str = cn2an.an2cn(meta.begin_season, "low")
for media in ret_medias:
if media.type == MediaType.TV:
media.title = f"{media.title}{season_str}"
media.season = meta.begin_season
return ret_medias return ret_medias
def search_persons(self, name: str) -> Optional[List[MediaPerson]]:
"""
搜索人物信息
"""
if not name:
return []
result = self.doubanapi.person_search(keyword=name)
if result and result.get('items'):
return [MediaPerson(source='douban', **{
'id': item.get('target_id'),
'name': item.get('target', {}).get('title'),
'url': item.get('target', {}).get('url'),
'images': item.get('target', {}).get('cover', {}),
'avatar': (item.get('target', {}).get('cover_img', {}).get('url')
or '').replace("/l/public/", "/s/public/"),
}) for item in result.get('items') if name in item.get('target', {}).get('title')]
return []
@retry(Exception, 5, 3, 3, logger=logger) @retry(Exception, 5, 3, 3, logger=logger)
def match_doubaninfo(self, name: str, imdbid: str = None, def match_doubaninfo(self, name: str, imdbid: str = None,
mtype: MediaType = None, year: str = None, season: int = None) -> dict: mtype: MediaType = None, year: str = None, season: int = None) -> dict:
@@ -598,64 +644,94 @@ class DoubanModule(_ModuleBase):
return item return item
return {} return {}
def movie_top250(self, page: int = 1, count: int = 30) -> List[dict]: def movie_top250(self, page: int = 1, count: int = 30) -> List[MediaInfo]:
""" """
获取豆瓣电影TOP250 获取豆瓣电影TOP250
""" """
infos = self.doubanapi.movie_top250(start=(page - 1) * count, infos = self.doubanapi.movie_top250(start=(page - 1) * count,
count=count) count=count)
if not infos: if infos:
return [] return [MediaInfo(douban_info=info) for info in infos.get("subject_collection_items")]
return infos.get("subject_collection_items") return []
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str, def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
force_nfo: bool = False, force_img: bool = False) -> None: metainfo: MetaBase = None, force_nfo: bool = False, force_img: bool = False) -> None:
""" """
刮削元数据 刮削元数据
:param path: 媒体文件路径 :param path: 媒体文件路径
:param mediainfo: 识别的媒体信息 :param mediainfo: 识别的媒体信息
:param transfer_type: 传输类型 :param transfer_type: 传输类型
:param metainfo: 源文件的识别元数据
:param force_nfo: 是否强制刮削nfo :param force_nfo: 是否强制刮削nfo
:param force_img: 是否强制刮削图片 :param force_img: 是否强制刮削图片
:return: 成功或失败 :return: 成功或失败
""" """
def __get_mediainfo(_meta: MetaBase, _mediainfo: MediaInfo) -> Optional[MediaInfo]:
"""
获取豆瓣媒体信息
"""
if not _meta.name:
return None
# 查询豆瓣详情
if not _mediainfo.douban_id:
# 根据TMDB名称查询豆瓣数据
_doubaninfo = self.match_doubaninfo(name=_mediainfo.title,
imdbid=_mediainfo.imdb_id,
mtype=_mediainfo.type,
year=_mediainfo.year)
if not _doubaninfo:
logger.warn(f"未找到 {_mediainfo.title} 的豆瓣信息")
return None
_doubaninfo = self.douban_info(doubanid=_doubaninfo.get("id"), mtype=_mediainfo.type)
else:
_doubaninfo = self.douban_info(doubanid=_mediainfo.douban_id,
mtype=_mediainfo.type)
if not _doubaninfo:
logger.error(f"未获取到 {_mediainfo.douban_id} 的豆瓣媒体信息,无法刮削!")
return None
# 豆瓣媒体信息
_doubanmedia = MediaInfo(douban_info=_doubaninfo)
# 补充图片
self.obtain_images(_doubanmedia)
return _doubanmedia
if settings.SCRAP_SOURCE != "douban": if settings.SCRAP_SOURCE != "douban":
return None return None
if SystemUtils.is_bluray_dir(path): if SystemUtils.is_bluray_dir(path):
# 蓝光原盘 # 蓝光原盘
logger.info(f"开始刮削蓝光原盘:{path} ...") logger.info(f"开始刮削蓝光原盘:{path} ...")
meta = MetaInfo(path.stem) # 优先使用传入metainfo
if not meta.name: meta = metainfo or MetaInfo(path.name)
return
# 查询豆瓣详情
if not mediainfo.douban_id:
# 根据名称查询豆瓣数据
doubaninfo = self.match_doubaninfo(name=mediainfo.title,
imdbid=mediainfo.imdb_id,
mtype=mediainfo.type,
year=mediainfo.year)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
return
doubaninfo = self.douban_info(doubanid=doubaninfo.get("id"), mtype=mediainfo.type)
else:
doubaninfo = self.douban_info(doubanid=mediainfo.douban_id,
mtype=mediainfo.type)
if not doubaninfo:
logger(f"未获取到 {mediainfo.douban_id} 的豆瓣媒体信息,无法刮削!")
return
# 豆瓣媒体信息
mediainfo = MediaInfo(douban_info=doubaninfo)
# 补充图片
self.obtain_images(mediainfo)
# 刮削路径 # 刮削路径
scrape_path = path / path.name scrape_path = path / path.name
# 媒体信息
doubanmedia = __get_mediainfo(_meta=meta, _mediainfo=mediainfo)
if not doubanmedia:
return
# 刮削
self.scraper.gen_scraper_files(meta=meta, self.scraper.gen_scraper_files(meta=meta,
mediainfo=mediainfo, mediainfo=doubanmedia,
file_path=scrape_path, file_path=scrape_path,
transfer_type=transfer_type, transfer_type=transfer_type,
force_nfo=force_nfo, force_nfo=force_nfo,
force_img=force_img) force_img=force_img)
elif path.is_file():
# 刮削单个文件
logger.info(f"开始刮削媒体库文件:{path} ...")
# 优先使用传入metainfo
meta = metainfo or MetaInfoPath(path)
# 媒体信息
doubanmedia = __get_mediainfo(_meta=meta, _mediainfo=mediainfo)
if not doubanmedia:
return
# 刮削
self.scraper.gen_scraper_files(meta=meta,
mediainfo=doubanmedia,
file_path=path,
transfer_type=transfer_type,
force_nfo=force_nfo,
force_img=force_img)
else: else:
# 目录下的所有文件 # 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT): for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
@@ -663,34 +739,14 @@ class DoubanModule(_ModuleBase):
continue continue
logger.info(f"开始刮削媒体库文件:{file} ...") logger.info(f"开始刮削媒体库文件:{file} ...")
try: try:
meta = MetaInfo(file.stem) meta = MetaInfoPath(file)
if not meta.name:
continue
if not mediainfo.douban_id:
# 根据名称查询豆瓣数据
doubaninfo = self.match_doubaninfo(name=mediainfo.title,
imdbid=mediainfo.imdb_id,
mtype=mediainfo.type,
year=mediainfo.year,
season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
break
# 查询豆瓣详情
doubaninfo = self.douban_info(doubanid=doubaninfo.get("id"), mtype=mediainfo.type)
else:
doubaninfo = self.douban_info(doubanid=mediainfo.douban_id,
mtype=mediainfo.type)
if not doubaninfo:
logger(f"未获取到 {mediainfo.douban_id} 的豆瓣媒体信息,无法刮削!")
continue
# 豆瓣媒体信息 # 豆瓣媒体信息
mediainfo = MediaInfo(douban_info=doubaninfo) doubanmedia = __get_mediainfo(_meta=meta, _mediainfo=mediainfo)
# 补充图片 if not doubanmedia:
self.obtain_images(mediainfo) return
# 刮削 # 刮削
self.scraper.gen_scraper_files(meta=meta, self.scraper.gen_scraper_files(meta=meta,
mediainfo=mediainfo, mediainfo=doubanmedia,
file_path=file, file_path=file,
transfer_type=transfer_type, transfer_type=transfer_type,
force_nfo=force_nfo, force_nfo=force_nfo,
@@ -737,48 +793,99 @@ class DoubanModule(_ModuleBase):
self.cache.clear() self.cache.clear()
logger.info("豆瓣缓存清除完成") logger.info("豆瓣缓存清除完成")
def douban_movie_credits(self, doubanid: str, page: int = 1, count: int = 20) -> List[dict]: def douban_movie_credits(self, doubanid: str) -> List[schemas.MediaPerson]:
""" """
根据豆瓣ID查询电影演职员表 根据豆瓣ID查询电影演职员表
:param doubanid: 豆瓣ID :param doubanid: 豆瓣ID
:param page: 页码
:param count: 数量
""" """
result = self.doubanapi.movie_celebrities(subject_id=doubanid) result = self.doubanapi.movie_celebrities(subject_id=doubanid)
if not result: if not result:
return [] return []
ret_list = result.get("actors") or [] ret_list = result.get("actors") or []
if ret_list: if ret_list:
return ret_list[(page - 1) * count: page * count] # 更新豆瓣演员信息中的ID从URI中提取'douban://douban.com/celebrity/1316132?subject_id=27503705' subject_id
else: for doubaninfo in ret_list:
return [] doubaninfo['id'] = doubaninfo.get('uri', '').split('?subject_id=')[-1]
return [schemas.MediaPerson(source='douban', **doubaninfo) for doubaninfo in ret_list]
return []
def douban_tv_credits(self, doubanid: str, page: int = 1, count: int = 20) -> List[dict]: def douban_tv_credits(self, doubanid: str) -> List[schemas.MediaPerson]:
""" """
根据豆瓣ID查询电视剧演职员表 根据豆瓣ID查询电视剧演职员表
:param doubanid: 豆瓣ID :param doubanid: 豆瓣ID
:param page: 页码
:param count: 数量
""" """
result = self.doubanapi.tv_celebrities(subject_id=doubanid) result = self.doubanapi.tv_celebrities(subject_id=doubanid)
if not result: if not result:
return [] return []
ret_list = result.get("actors") or [] ret_list = result.get("actors") or []
if ret_list: if ret_list:
return ret_list[(page - 1) * count: page * count] # 更新豆瓣演员信息中的ID从URI中提取'douban://douban.com/celebrity/1316132?subject_id=27503705' subject_id
else: for doubaninfo in ret_list:
return [] doubaninfo['id'] = doubaninfo.get('uri', '').split('?subject_id=')[-1]
return [schemas.MediaPerson(source='douban', **doubaninfo) for doubaninfo in ret_list]
return []
    def douban_movie_recommend(self, doubanid: str) -> List[MediaInfo]:
        """
        Query recommended movies by Douban ID
        :param doubanid: Douban ID
        """
        recommend = self.doubanapi.movie_recommendations(subject_id=doubanid)
        if recommend:
            return [MediaInfo(douban_info=info) for info in recommend]
        return []

    def douban_tv_recommend(self, doubanid: str) -> List[MediaInfo]:
        """
        Query recommended TV series by Douban ID
        :param doubanid: Douban ID
        """
        recommend = self.doubanapi.tv_recommendations(subject_id=doubanid)
        if recommend:
            return [MediaInfo(douban_info=info) for info in recommend]
        return []
    def douban_person_detail(self, person_id: int) -> schemas.MediaPerson:
        """
        Get person details
        :param person_id: Douban person ID
        """
        detail = self.doubanapi.person_detail(person_id)
        if detail:
            also_known_as = []
            infos = detail.get("extra", {}).get("info")
            if infos:
                also_known_as = ["".join(info) for info in infos]
            image = detail.get("cover_img", {}).get("url")
            if image:
                image = image.replace("/l/public/", "/s/public/")
            return schemas.MediaPerson(source='douban', **{
                "id": detail.get("id"),
                "name": detail.get("title"),
                "avatar": image,
                "biography": detail.get("extra", {}).get("short_info"),
                "also_known_as": also_known_as,
            })
        return schemas.MediaPerson(source='douban')
    def douban_person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
        """
        Query a person's filmography by Douban person ID
        :param person_id: person ID
        :param page: page number
        """
        # Fetch the person's work collections
        personinfo = self.doubanapi.person_detail(person_id)
        if not personinfo:
            return []
        collection_id = None
        for module in personinfo.get("modules") or []:
            if module.get("type") == "work_collections":
                collection_id = module.get("payload", {}).get("id")
        # Query the collection contents
        if collection_id:
            collections = self.doubanapi.person_work(subject_id=collection_id, start=(page - 1) * 20, count=20)
            if collections:
                works = collections.get("works") or []
                return [MediaInfo(douban_info=work.get("subject")) for work in works]
        return []
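`douban_person_credits` walks the detail payload's `modules` list for the `work_collections` entry, then pages the collection 20 items at a time via `start=(page - 1) * 20`. A sketch with a fabricated payload (the dict shape follows the keys used above; the sample data and helper names are invented for illustration):

```python
def find_collection_id(personinfo: dict):
    """Return the payload id of the work_collections module, if any."""
    collection_id = None
    for module in personinfo.get("modules") or []:
        if module.get("type") == "work_collections":
            collection_id = module.get("payload", {}).get("id")
    return collection_id


def page_window(page: int, size: int = 20):
    """start/count arguments for a 1-based page number."""
    return (page - 1) * size, size


sample = {"modules": [{"type": "intro"},
                      {"type": "work_collections", "payload": {"id": "12345"}}]}
print(find_collection_id(sample))   # -> 12345
print(page_window(3))               # -> (40, 20)
```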


@@ -22,6 +22,7 @@ class DoubanApi(metaclass=Singleton):
        # Aggregated search
        "search": "/search/weixin",
        "search_agg": "/search",
        "search_subject": "/search/subjects",
        "imdbid": "/movie/imdb/%s",
        # Movie discovery
@@ -137,6 +138,10 @@ class DoubanApi(metaclass=Singleton):
        # doulist
        "doulist": "/doulist/",
        "doulist_items": "/doulist/%s/items",
        # person
        "person_detail": "/elessar/subject/",
        "person_work": "/elessar/work_collections/%s/works",
    }
    _user_agents = [
@@ -176,7 +181,7 @@ class DoubanApi(metaclass=Singleton):
""" """
req_url = self._base_url + url req_url = self._base_url + url
params = {'apiKey': self._api_key} params: dict = {'apiKey': self._api_key}
if kwargs: if kwargs:
params.update(kwargs) params.update(kwargs)
@@ -190,13 +195,13 @@ class DoubanApi(metaclass=Singleton):
                '_ts': ts,
                '_sig': self.__sign(url=req_url, ts=ts)
            })
        with RequestUtils(
                ua=choice(self._user_agents),
                session=self._session
        ).get_res(url=req_url, params=params) as resp:
            if resp is not None and resp.status_code == 400 and "rate_limit" in resp.text:
                return resp.json()
            return resp.json() if resp else {}
    @lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
    def __post(self, url: str, **kwargs) -> dict:
@@ -210,7 +215,7 @@ class DoubanApi(metaclass=Singleton):
            },
            data={
                "apikey": "0ab215a8b1977939201640fa14c66bab",
            }
        )
        """
        req_url = self._api_url + url
@@ -227,6 +232,13 @@ class DoubanApi(metaclass=Singleton):
            return resp.json()
        return resp.json() if resp else {}

    def imdbid(self, imdbid: str,
               ts=datetime.strftime(datetime.now(), '%Y%m%d')):
        """
        Search by IMDb ID
        """
        return self.__post(self._urls["imdbid"] % imdbid, _ts=ts)

    def search(self, keyword: str, start: int = 0, count: int = 20,
               ts=datetime.strftime(datetime.now(), '%Y%m%d')) -> dict:
        """
@@ -235,13 +247,6 @@ class DoubanApi(metaclass=Singleton):
        return self.__invoke(self._urls["search"], q=keyword,
                             start=start, count=count, _ts=ts)
    def movie_search(self, keyword: str, start: int = 0, count: int = 20,
                     ts=datetime.strftime(datetime.now(), '%Y%m%d')):
        """
@@ -274,6 +279,14 @@ class DoubanApi(metaclass=Singleton):
return self.__invoke(self._urls["group_search"], q=keyword, return self.__invoke(self._urls["group_search"], q=keyword,
start=start, count=count, _ts=ts) start=start, count=count, _ts=ts)
def person_search(self, keyword: str, start: int = 0, count: int = 20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
人物搜索
"""
return self.__invoke(self._urls["search_subject"], type="person", q=keyword,
start=start, count=count, _ts=ts)
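One subtlety worth noting: every `ts=datetime.strftime(datetime.now(), '%Y%m%d')` default above is evaluated once, when the method is defined, not on each call — standard Python default-argument behavior, which effectively pins the `_ts` stamp to the day the module was imported unless a caller passes `ts` explicitly. A quick demonstration of the mechanism:

```python
from datetime import datetime


def stamp(ts=datetime.strftime(datetime.now(), '%Y%m%d')):
    # The default was computed at definition time and is reused on every call.
    return ts


frozen = stamp()
# The default never changes between calls, no matter how much time passes.
assert stamp() == frozen
# An explicit argument still overrides it per call.
assert stamp('20240509') == '20240509'
```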
    def movie_showing(self, start: int = 0, count: int = 20,
                      ts=datetime.strftime(datetime.now(), '%Y%m%d')):
        """
@@ -475,12 +488,36 @@ class DoubanApi(metaclass=Singleton):
return self.__invoke(self._urls["tv_photos"] % subject_id, return self.__invoke(self._urls["tv_photos"] % subject_id,
start=start, count=count, _ts=ts) start=start, count=count, _ts=ts)
def person_detail(self, subject_id: int):
"""
用户详情
:param subject_id: 人物 id
:return:
"""
return self.__invoke(self._urls["person_detail"] + str(subject_id))
def person_work(self, subject_id: int, start: int = 0, count: int = 20, sort_by: str = "time",
collection_title: str = "影视",
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
用户作品集
:param subject_id: work_collection id
:param start: 开始页
:param count: 数量
:param sort_by: collection or time or vote
:param collection_title: 影视 or 图书 or 音乐
:param ts: 时间戳
:return:
"""
return self.__invoke(self._urls["person_work"] % subject_id, sortby=sort_by, collection_title=collection_title,
start=start, count=count, _ts=ts)
def clear_cache(self): def clear_cache(self):
""" """
清空LRU缓存 清空LRU缓存
""" """
self.__invoke.cache_clear() self.__invoke.cache_clear()
def __del__(self): def close(self):
if self._session: if self._session:
self._session.close() self._session.close()
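`clear_cache` works because `functools.lru_cache` exposes a `cache_clear()` attribute on the wrapped function — the same mechanism the `__invoke` decorator relies on. A self-contained sketch of the pattern (the `fetch` function and its payload are invented for illustration):

```python
from functools import lru_cache

calls = []


@lru_cache(maxsize=128)
def fetch(url: str) -> str:
    calls.append(url)          # record real invocations
    return f"payload for {url}"


fetch("/search")
fetch("/search")               # served from cache, no new invocation
assert calls == ["/search"]

fetch.cache_clear()            # what clear_cache() delegates to
fetch("/search")               # cache is cold again
assert calls == ["/search", "/search"]
```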


@@ -1,4 +1,3 @@
from pathlib import Path
from typing import Union
from xml.dom import minidom
@@ -31,6 +30,9 @@ class DoubanScraper:
        :param force_img: force regenerating images
        """
        if not mediainfo or not file_path:
            return
        self._transfer_type = transfer_type
        self._force_nfo = force_nfo
        self._force_img = force_img
@@ -83,10 +85,6 @@ class DoubanScraper:
    @staticmethod
    def __gen_common_nfo(mediainfo: MediaInfo, doc, root):
        # Overview
        xplot = DomUtils.add_node(doc, root, "plot")
        xplot.appendChild(doc.createCDATASection(mediainfo.overview or ""))
@@ -166,8 +164,6 @@ class DoubanScraper:
        logger.info(f"正在生成季NFO文件{season_path.name}")
        doc = minidom.Document()
        root = DomUtils.add_node(doc, doc, "season")
        # Overview
        xplot = DomUtils.add_node(doc, root, "plot")
        xplot.appendChild(doc.createCDATASection(mediainfo.overview or ""))
@@ -197,15 +193,15 @@ class DoubanScraper:
            url = url.replace("/format/webp", "/format/jpg")
            file_path = file_path.with_suffix(".jpg")
            logger.info(f"正在下载{file_path.stem}图片:{url} ...")
            with RequestUtils().get_res(url=url) as r:
                if r:
                    if self._transfer_type in ['rclone_move', 'rclone_copy']:
                        self.__save_remove_file(file_path, r.content)
                    else:
                        file_path.write_bytes(r.content)
                    logger.info(f"图片已保存:{file_path}")
                else:
                    logger.info(f"{file_path.stem}图片下载失败,请检查网络连通性")
        except Exception as err:
            logger.error(f"{file_path.stem}图片下载失败:{str(err)}")


@@ -56,12 +56,12 @@ class Emby:
            return []
        req_url = "%semby/Library/SelectableMediaFolders?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json()
                else:
                    logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
                    return []
        except Exception as e:
            logger.error(f"连接Library/SelectableMediaFolders 出错:" + str(e))
            return []
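The recurring `with RequestUtils().get_res(...) as res` rewrite in these hunks assumes `get_res` returns an object usable as a context manager, so the underlying connection is released even on early returns or exceptions (`requests.Response` supports this protocol). A minimal stand-in showing the guarantee, with a dummy response class — all names here are illustrative, not part of the Emby client:

```python
class DummyResponse:
    """Stands in for a requests.Response: usable with `with`, closes itself."""

    def __init__(self, payload):
        self.payload = payload
        self.closed = False

    def json(self):
        return self.payload

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False            # never swallow exceptions


resp = DummyResponse({"Items": []})
with resp as res:
    items = res.json().get("Items")
assert items == []
assert resp.closed              # released as soon as the block exits
```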
@@ -74,29 +74,29 @@ class Emby:
            return []
        req_url = "%semby/Library/VirtualFolders/Query?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    library_items = res.json().get("Items")
                    librarys = []
                    for library_item in library_items:
                        library_name = library_item.get('Name')
                        pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
                        library_paths = []
                        for path in pathInfos:
                            if path.get('NetworkPath'):
                                library_paths.append(path.get('NetworkPath'))
                            else:
                                library_paths.append(path.get('Path'))
                        if library_name and library_paths:
                            librarys.append({
                                'Name': library_name,
                                'Path': library_paths
                            })
                    return librarys
                else:
                    logger.error(f"Library/VirtualFolders/Query 未获取到返回数据")
                    return []
        except Exception as e:
            logger.error(f"连接Library/VirtualFolders/Query 出错:" + str(e))
            return []
@@ -113,12 +113,12 @@ class Emby:
        user = self.user
        req_url = f"{self._host}emby/Users/{user}/Views?api_key={self._apikey}"
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json().get("Items")
                else:
                    logger.error(f"User/Views 未获取到返回数据")
                    return []
        except Exception as e:
            logger.error(f"连接User/Views 出错:" + str(e))
            return []
@@ -164,20 +164,20 @@ class Emby:
            return None
        req_url = "%sUsers?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    users = res.json()
                    # First look for a user whose name matches the current one
                    if user_name:
                        for user in users:
                            if user.get("Name") == user_name:
                                return user.get("Id")
                    # Otherwise fall back to the administrator
                    for user in users:
                        if user.get("Policy", {}).get("IsAdministrator"):
                            return user.get("Id")
                else:
                    logger.error(f"Users 未获取到返回数据")
        except Exception as e:
            logger.error(f"连接Users出错" + str(e))
        return None
@@ -227,11 +227,11 @@ class Emby:
            return None
        req_url = "%sSystem/Info?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json().get("Id")
                else:
                    logger.error(f"System/Info 未获取到返回数据")
        except Exception as e:
            logger.error(f"连接System/Info出错" + str(e))
@@ -245,12 +245,12 @@ class Emby:
            return 0
        req_url = "%semby/Users/Query?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json().get("TotalRecordCount")
                else:
                    logger.error(f"Users/Query 未获取到返回数据")
                    return 0
        except Exception as e:
            logger.error(f"连接Users/Query出错" + str(e))
            return 0
@@ -264,17 +264,17 @@ class Emby:
            return schemas.Statistic()
        req_url = "%semby/Items/Counts?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res:
                    result = res.json()
                    return schemas.Statistic(
                        movie_count=result.get("MovieCount") or 0,
                        tv_count=result.get("SeriesCount") or 0,
                        episode_count=result.get("EpisodeCount") or 0
                    )
                else:
                    logger.error(f"Items/Counts 未获取到返回数据")
                    return schemas.Statistic()
        except Exception as e:
            logger.error(f"连接Items/Counts出错" + str(e))
            return schemas.Statistic()
@@ -299,14 +299,14 @@ class Emby:
"&api_key=%s") % ( "&api_key=%s") % (
self._host, name, self._apikey) self._host, name, self._apikey)
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res: if res:
res_items = res.json().get("Items") res_items = res.json().get("Items")
if res_items: if res_items:
for res_item in res_items: for res_item in res_items:
if res_item.get('Name') == name and ( if res_item.get('Name') == name and (
not year or str(res_item.get('ProductionYear')) == str(year)): not year or str(res_item.get('ProductionYear')) == str(year)):
return res_item.get('Id') return res_item.get('Id')
except Exception as e: except Exception as e:
logger.error(f"连接Items出错" + str(e)) logger.error(f"连接Items出错" + str(e))
return None return None
@@ -329,36 +329,36 @@ class Emby:
"&Recursive=true&SearchTerm=%s&Limit=10&IncludeSearchTypes=false&api_key=%s" % ( "&Recursive=true&SearchTerm=%s&Limit=10&IncludeSearchTypes=false&api_key=%s" % (
self._host, title, self._apikey) self._host, title, self._apikey)
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res: if res:
res_items = res.json().get("Items") res_items = res.json().get("Items")
if res_items: if res_items:
ret_movies = [] ret_movies = []
for res_item in res_items: for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb") item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem( mediaserver_item = schemas.MediaServerItem(
server="emby", server="emby",
library=res_item.get("ParentId"), library=res_item.get("ParentId"),
item_id=res_item.get("Id"), item_id=res_item.get("Id"),
item_type=res_item.get("Type"), item_type=res_item.get("Type"),
title=res_item.get("Name"), title=res_item.get("Name"),
original_title=res_item.get("OriginalTitle"), original_title=res_item.get("OriginalTitle"),
year=res_item.get("ProductionYear"), year=res_item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None, tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=res_item.get("ProviderIds", {}).get("Imdb"), imdbid=res_item.get("ProviderIds", {}).get("Imdb"),
tvdbid=res_item.get("ProviderIds", {}).get("Tvdb"), tvdbid=res_item.get("ProviderIds", {}).get("Tvdb"),
path=res_item.get("Path") path=res_item.get("Path")
) )
if tmdb_id and item_tmdbid: if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id): if str(item_tmdbid) != str(tmdb_id):
continue continue
else: else:
ret_movies.append(mediaserver_item)
continue
if (mediaserver_item.title == title
and (not year or str(mediaserver_item.year) == str(year))):
ret_movies.append(mediaserver_item) ret_movies.append(mediaserver_item)
continue return ret_movies
if (mediaserver_item.title == title
and (not year or str(mediaserver_item.year) == str(year))):
ret_movies.append(mediaserver_item)
return ret_movies
except Exception as e: except Exception as e:
logger.error(f"连接Items出错" + str(e)) logger.error(f"连接Items出错" + str(e))
return None return None
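The selection logic in that hunk keeps an item when its Tmdb id matches the requested one, and otherwise falls back to an exact title plus optional year comparison. Extracted into a small predicate (function name and the flat dict keys are illustrative, not the `MediaServerItem` schema):

```python
def matches(item: dict, title: str, year=None, tmdb_id=None) -> bool:
    """True if the library item should be returned for this query."""
    item_tmdbid = item.get("tmdbid")
    if tmdb_id and item_tmdbid:
        # Both sides carry a Tmdb id: it alone decides.
        return str(item_tmdbid) == str(tmdb_id)
    # Otherwise compare title, and year only when the caller supplied one.
    return item.get("title") == title and (not year or str(item.get("year")) == str(year))


movie = {"title": "Dune", "year": 2021, "tmdbid": 438631}
assert matches(movie, "Dune", tmdb_id=438631)
assert not matches(movie, "Dune", tmdb_id=999)
assert matches(movie, "Dune", year="2021")     # string/int years compare equal
assert not matches(movie, "Dune", year=1984)
```

Note the `str(...)` coercions: ids and years arrive as either strings or ints depending on the source, so both sides are normalized before comparing.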
@@ -401,25 +401,25 @@ class Emby:
        try:
            req_url = "%semby/Shows/%s/Episodes?Season=%s&IsMissing=false&api_key=%s" % (
                self._host, item_id, season, self._apikey)
            with RequestUtils().get_res(req_url) as res_json:
                if res_json:
                    tv_item = res_json.json()
                    res_items = tv_item.get("Items")
                    season_episodes = {}
                    for res_item in res_items:
                        season_index = res_item.get("ParentIndexNumber")
                        if not season_index:
                            continue
                        if season and season != season_index:
                            continue
                        episode_index = res_item.get("IndexNumber")
                        if not episode_index:
                            continue
                        if season_index not in season_episodes:
                            season_episodes[season_index] = []
                        season_episodes[season_index].append(episode_index)
                    # Return the result
                    return item_id, season_episodes
        except Exception as e:
            logger.error(f"连接Shows/Id/Episodes出错" + str(e))
        return None, None
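The loop above builds a `{season: [episodes]}` map while skipping items missing either index; `dict.setdefault` expresses the same grouping compactly. A sketch over fabricated items (only the two Emby field names used above are assumed):

```python
def group_episodes(items, season=None):
    """Group episode indices by season, mirroring the filtering above."""
    season_episodes = {}
    for item in items:
        season_index = item.get("ParentIndexNumber")
        if not season_index:
            continue
        if season and season != season_index:
            continue
        episode_index = item.get("IndexNumber")
        if not episode_index:
            continue
        season_episodes.setdefault(season_index, []).append(episode_index)
    return season_episodes


items = [{"ParentIndexNumber": 1, "IndexNumber": 1},
         {"ParentIndexNumber": 1, "IndexNumber": 2},
         {"ParentIndexNumber": 2, "IndexNumber": 1},
         {"ParentIndexNumber": None, "IndexNumber": 9}]
print(group_episodes(items))            # -> {1: [1, 2], 2: [1]}
print(group_episodes(items, season=2))  # -> {2: [1]}
```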
@@ -436,20 +436,44 @@ class Emby:
            return None
        req_url = "%semby/Items/%s/RemoteImages?api_key=%s" % (self._host, item_id, self._apikey)
        try:
            res = RequestUtils(timeout=10).get_res(req_url)
            if res:
                images = res.json().get("Images")
                if images:
                    for image in images:
                        if image.get("ProviderName") == "TheMovieDb" and image.get("Type") == image_type:
                            return image.get("Url")
            # Empty response, fall back to the local image
            logger.info(f"Items/RemoteImages 未获取到返回数据,采用本地图片")
            return self.generate_external_image_link(item_id, image_type)
        except Exception as e:
            logger.error(f"连接Items/Id/RemoteImages出错" + str(e))
            return None

    def generate_external_image_link(self, item_id: str, image_type: str) -> Optional[str]:
        """
        Look up the local image for the given ItemId and imageType
        :param item_id: item ID in Emby
        :param image_type: image type, e.g. Backdrop, Primary
        :return: URL of the image on the external player host
        """
        if not self._playhost:
            logger.error("Emby外网播放地址未能获取或为空")
            return None
        req_url = "%sItems/%s/Images/%s" % (self._playhost, item_id, image_type)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res and res.status_code != 404:
                    logger.info(f"影片图片链接:{res.url}")
                    return res.url
                else:
                    logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
                    return None
        except Exception as e:
            logger.error(f"连接Items/Id/Images出错" + str(e))
            return None
    def __refresh_emby_library_by_id(self, item_id: str) -> bool:
        """
        Notify Emby to refresh the library for a single item
@@ -458,11 +482,11 @@ class Emby:
            return False
        req_url = "%semby/Items/%s/Refresh?Recursive=true&api_key=%s" % (self._host, item_id, self._apikey)
        try:
            with RequestUtils().post_res(req_url) as res:
                if res:
                    return True
                else:
                    logger.info(f"刷新媒体库对象 {item_id} 失败无法连接Emby")
        except Exception as e:
            logger.error(f"连接Items/Id/Refresh出错" + str(e))
        return False
@@ -476,11 +500,11 @@ class Emby:
            return False
        req_url = "%semby/Library/Refresh?api_key=%s" % (self._host, self._apikey)
        try:
            with RequestUtils().post_res(req_url) as res:
                if res:
                    return True
                else:
                    logger.info(f"刷新媒体库失败无法连接Emby")
        except Exception as e:
            logger.error(f"连接Library/Refresh出错" + str(e))
        return False
@@ -555,23 +579,23 @@ class Emby:
            return None
        req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self.user, itemid, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res and res.status_code == 200:
                    item = res.json()
                    tmdbid = item.get("ProviderIds", {}).get("Tmdb")
                    return schemas.MediaServerItem(
                        server="emby",
                        library=item.get("ParentId"),
                        item_id=item.get("Id"),
                        item_type=item.get("Type"),
                        title=item.get("Name"),
                        original_title=item.get("OriginalTitle"),
                        year=item.get("ProductionYear"),
                        tmdbid=int(tmdbid) if tmdbid else None,
                        imdbid=item.get("ProviderIds", {}).get("Imdb"),
                        tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
                        path=item.get("Path")
                    )
        except Exception as e:
            logger.error(f"连接Items/Id出错" + str(e))
        return None
@@ -586,17 +610,17 @@ class Emby:
            yield None
        req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
        try:
            with RequestUtils().get_res(req_url) as res:
                if res and res.status_code == 200:
                    results = res.json().get("Items") or []
                    for result in results:
                        if not result:
                            continue
                        if result.get("Type") in ["Movie", "Series"]:
                            yield self.get_iteminfo(result.get("Id"))
                        elif "Folder" in result.get("Type"):
                            for item in self.get_items(parent=result.get('Id')):
                                yield item
        except Exception as e:
            logger.error(f"连接Users/Items出错" + str(e))
            yield None
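`get_items` recurses into folder-typed results by looping over the recursive call and re-yielding each item; `yield from` is the idiomatic equivalent. A sketch over an in-memory tree (the node shape and names are invented for illustration):

```python
def walk(node):
    """Yield every Movie/Series name in a nested folder tree."""
    for child in node.get("children", []):
        if child.get("Type") in ["Movie", "Series"]:
            yield child["Name"]
        elif "Folder" in child.get("Type", ""):
            # Equivalent to: for item in walk(child): yield item
            yield from walk(child)


tree = {"children": [
    {"Type": "Movie", "Name": "Dune"},
    {"Type": "CollectionFolder", "children": [
        {"Type": "Series", "Name": "Fargo"},
    ]},
]}
print(list(walk(tree)))  # -> ['Dune', 'Fargo']
```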
@@ -1008,52 +1032,52 @@ class Emby:
req_url = (f"{self._host}Users/{user}/Items/Resume?" req_url = (f"{self._host}Users/{user}/Items/Resume?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path") f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res: if res:
result = res.json().get("Items") or [] result = res.json().get("Items") or []
ret_resume = [] ret_resume = []
# 用户媒体库文件夹列表(排除黑名单) # 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders() library_folders = self.get_user_library_folders()
for item in result: for item in result:
if len(ret_resume) == num: if len(ret_resume) == num:
break break
if item.get("Type") not in ["Movie", "Episode"]: if item.get("Type") not in ["Movie", "Episode"]:
continue continue
item_path = item.get("Path") item_path = item.get("Path")
if item_path and library_folders and not any( if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders): str(item_path).startswith(folder) for folder in library_folders):
continue continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id")) link = self.get_play_url(item.get("Id"))
if item_type == MediaType.MOVIE.value: if item_type == MediaType.MOVIE.value:
title = item.get("Name") title = item.get("Name")
subtitle = item.get("ProductionYear") subtitle = item.get("ProductionYear")
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
if item_type == MediaType.MOVIE.value:
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else: else:
image = self.__get_local_image_by_id(item.get("Id")) title = f'{item.get("SeriesName")}'
else: subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
image = self.__get_backdrop_url(item_id=item.get("SeriesId"), if item_type == MediaType.MOVIE.value:
image_tag=item.get("SeriesPrimaryImageTag")) if item.get("BackdropImageTags"):
if not image: image = self.__get_backdrop_url(item_id=item.get("Id"),
image = self.__get_local_image_by_id(item.get("SeriesId")) image_tag=item.get("BackdropImageTags")[0])
ret_resume.append(schemas.MediaServerPlayItem( else:
id=item.get("Id"), image = self.__get_local_image_by_id(item.get("Id"))
title=title, else:
subtitle=subtitle, image = self.__get_backdrop_url(item_id=item.get("SeriesId"),
type=item_type, image_tag=item.get("SeriesPrimaryImageTag"))
image=image, if not image:
link=link, image = self.__get_local_image_by_id(item.get("SeriesId"))
percent=item.get("UserData", {}).get("PlayedPercentage") ret_resume.append(schemas.MediaServerPlayItem(
)) id=item.get("Id"),
return ret_resume title=title,
else: subtitle=subtitle,
logger.error(f"Users/Items/Resume 未获取到返回数据") type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
except Exception as e: except Exception as e:
logger.error(f"连接Users/Items/Resume出错" + str(e)) logger.error(f"连接Users/Items/Resume出错" + str(e))
return [] return []
@@ -1071,35 +1095,35 @@ class Emby:
        req_url = (f"{self._host}Users/{user}/Items/Latest?"
                   f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    result = res.json() or []
                    ret_latest = []
                    # 用户媒体库文件夹列表(排除黑名单)
                    library_folders = self.get_user_library_folders()
                    for item in result:
                        if len(ret_latest) == num:
                            break
                        if item.get("Type") not in ["Movie", "Series"]:
                            continue
                        item_path = item.get("Path")
                        if item_path and library_folders and not any(
                                str(item_path).startswith(folder) for folder in library_folders):
                            continue
                        item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
                        link = self.get_play_url(item.get("Id"))
                        image = self.__get_local_image_by_id(item_id=item.get("Id"))
                        ret_latest.append(schemas.MediaServerPlayItem(
                            id=item.get("Id"),
                            title=item.get("Name"),
                            subtitle=item.get("ProductionYear"),
                            type=item_type,
                            image=image,
                            link=link
                        ))
                    return ret_latest
                else:
                    logger.error(f"Users/Items/Latest 未获取到返回数据")
        except Exception as e:
            logger.error(f"连接Users/Items/Latest出错:" + str(e))
        return []

View File

@@ -321,11 +321,11 @@ class FanartModule(_ModuleBase):
        """
        测试模块连接性
        """
-       ret = RequestUtils().get_res("https://webservice.fanart.tv")
+       with RequestUtils().get_res("https://webservice.fanart.tv") as ret:
            if ret and ret.status_code == 200:
                return True, ""
            elif ret:
                return False, f"无法连接fanart,错误码:{ret.status_code}"
        return False, "fanart网络连接失败"

    def init_setting(self) -> Tuple[str, Union[str, bool]]:

View File

@@ -34,21 +34,31 @@ class FileTransferModule(_ModuleBase):
        if not settings.DOWNLOAD_PATH:
            return False, "下载目录未设置"
        # 检查下载目录
-       download_path = Path(settings.DOWNLOAD_PATH)
-       if not download_path.exists():
-           return False, "下载目录不存在"
+       download_paths: List[str] = []
+       for path in [settings.DOWNLOAD_PATH,
+                    settings.DOWNLOAD_MOVIE_PATH,
+                    settings.DOWNLOAD_TV_PATH,
+                    settings.DOWNLOAD_ANIME_PATH]:
+           if not path:
+               continue
+           download_path = Path(path)
+           if not download_path.exists():
+               return False, f"下载目录 {download_path} 不存在"
+           download_paths.append(path)
+       # 下载目录的设备ID
+       download_devids = [Path(path).stat().st_dev for path in download_paths]
+       # 检查媒体库目录
        if not settings.LIBRARY_PATH:
            return False, "媒体库目录未设置"
-       # 下载目录的设备ID
-       download_devid = download_path.stat().st_dev
        # 比较媒体库目录的设备ID
        for path in settings.LIBRARY_PATHS:
            library_path = Path(path)
            if not library_path.exists():
-               return False, f"目录不存在:{library_path}"
+               return False, f"媒体库目录不存在:{library_path}"
            if settings.DOWNLOADER_MONITOR and settings.TRANSFER_TYPE == "link":
-               if library_path.stat().st_dev != download_devid:
-                   return False, f"下载目录 {download_path} 与媒体库目录 {library_path} 不在同一设备,将无法硬链接"
+               if library_path.stat().st_dev not in download_devids:
+                   return False, f"媒体库目录 {library_path} " \
+                                 f"与下载目录 {','.join(download_paths)} 不在同一设备,将无法硬链接"
        return True, ""

    def init_setting(self) -> Tuple[str, Union[str, bool]]:
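The hunk above generalizes the hardlink pre-check from one download directory to several: hardlinks only work within a single filesystem, so the library path's `st_dev` must match at least one download path's. A self-contained sketch of that device-ID comparison:

```python
import tempfile
from pathlib import Path


def can_hardlink(download_paths, library_path) -> bool:
    """Hardlinks require source and target on the same filesystem,
    so compare st_dev device IDs against every download directory."""
    download_devids = [Path(p).stat().st_dev for p in download_paths]
    return Path(library_path).stat().st_dev in download_devids


# Two folders under one temp root sit on the same device, so linking works.
with tempfile.TemporaryDirectory() as tmp:
    downloads = Path(tmp, "downloads")
    downloads.mkdir()
    library = Path(tmp, "library")
    library.mkdir()
    same_device = can_hardlink([downloads], library)
```

A bind mount or network share can report a different `st_dev` even when paths look adjacent, which is exactly the misconfiguration this check surfaces before transfers fail.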
@@ -195,6 +205,8 @@ class FileTransferModule(_ModuleBase):
                    if (org_path.stem == Path(sub_file_name).stem) or \
                            (sub_metainfo.cn_name and sub_metainfo.cn_name == metainfo.cn_name) \
                            or (sub_metainfo.en_name and sub_metainfo.en_name == metainfo.en_name):
+                       if metainfo.part and metainfo.part != sub_metainfo.part:
+                           continue
                        if metainfo.season \
                                and metainfo.season != sub_metainfo.season:
                            continue
@@ -615,7 +627,7 @@ class FileTransferModule(_ModuleBase):
            # 原语种标题
            "original_title": __convert_invalid_characters(mediainfo.original_title),
            # 原文件名
-           "original_name": f"{meta.org_string}{file_ext}",
+           "original_name": meta.title,
            # 识别名称(优先使用中文)
            "name": meta.name,
            # 识别的英文名称(可能为空)
@@ -651,7 +663,7 @@ class FileTransferModule(_ModuleBase):
            # 段/节
            "part": meta.part,
            # 剧集标题
-           "episode_title": episode_title,
+           "episode_title": __convert_invalid_characters(episode_title),
            # 文件后缀
            "fileExt": file_ext,
            # 自定义占位符
@@ -714,7 +726,11 @@ class FileTransferModule(_ModuleBase):
            try:
                # 计算in_path和path的公共字符串长度
                relative = StringUtils.find_common_prefix(str(in_path), str(path))
+               if len(str(path)) == len(relative):
+                   # 目录完整匹配的,直接返回
+                   return path
                if len(relative) > max_length:
+                   # 更新最大长度
                    max_length = len(relative)
                    target_path = path
            except Exception as e:
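The hunk above adds an early return when a candidate path is fully contained in the input path. A stand-in sketch using the stdlib's `os.path.commonprefix` in place of the project's `StringUtils.find_common_prefix` (assumed to behave the same character-wise):

```python
import os


def pick_target_path(in_path: str, candidates: list) -> str:
    """Pick the candidate sharing the longest common prefix with in_path;
    a fully-matched candidate wins immediately (the shortcut added above)."""
    target, max_length = None, 0
    for path in candidates:
        relative = os.path.commonprefix([in_path, path])
        if len(path) == len(relative):
            # candidate is entirely a prefix of in_path: exact match
            return path
        if len(relative) > max_length:
            # update the running best
            max_length = len(relative)
            target = path
    return target
```

Note that character-wise prefixes can cross path-component boundaries (`/mnt/media/tvx` shares `/mnt/media/tv` with `/mnt/media/tv`), which is why an exact full-length match is a safer early exit than the longest partial one.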
@@ -802,8 +818,6 @@ class FileTransferModule(_ModuleBase):
        删除目录下的所有版本文件
        :param path: 目录路径
        """
-       if not path.exists():
-           return False
        # 识别文件中的季集信息
        meta = MetaInfoPath(path)
        season = meta.season
@@ -816,7 +830,7 @@ class FileTransferModule(_ModuleBase):
            return False
        # 删除文件
        for media_file in media_files:
-           if media_file == path:
+           if str(media_file) == str(path):
                continue
            # 识别文件中的季集信息
            filemeta = MetaInfoPath(media_file)
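The change from `media_file == path` to `str(media_file) == str(path)` matters because `pathlib` paths never compare equal to plain strings — `Path.__eq__` only accepts other paths of the same flavour — so a `str`/`Path` mix silently skips nothing. A small demonstration:

```python
from pathlib import PurePosixPath

p = PurePosixPath("/media/Movie (2024).mkv")
s = "/media/Movie (2024).mkv"

# Path == str is always False: PurePath.__eq__ returns NotImplemented for
# non-path operands, and str has no reflected handling for paths.
mixed_equal = (p == s)

# Casting both sides to str gives the comparison the loop actually intends.
str_equal = (str(p) == s)
```

Normalizing both sides with `str()` makes the self-exclusion check robust regardless of whether `media_files` yields strings or `Path` objects.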

View File

@@ -163,13 +163,13 @@ class FilterModule(_ModuleBase):
        # 返回种子列表
        ret_torrents = []
        for torrent in torrent_list:
-           # 能命中优先级的才返回
-           if not self.__get_order(torrent, rule_string):
-               continue
            # 季集数过滤
            if season_episodes \
                    and not self.__match_season_episodes(torrent, season_episodes):
                continue
+           # 能命中优先级的才返回
+           if not self.__get_order(torrent, rule_string):
+               continue
            ret_torrents.append(torrent)
        return ret_torrents
@@ -191,7 +191,7 @@ class FilterModule(_ModuleBase):
            torrent_episodes = meta.episode_list
            if not set(torrent_seasons).issubset(set(seasons)):
                # 种子季不在过滤季中
-               logger.info(f"种子 {torrent.site_name} - {torrent.title} 不是需要的季")
+               logger.info(f"种子 {torrent.site_name} - {torrent.title} 包含季 {torrent_seasons} 不是需要的季 {seasons}")
                return False
            if not torrent_episodes:
                # 整季按匹配处理
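The season filter above relies on `set.issubset`: a torrent passes only if every season it carries is wanted. A minimal sketch of that semantics:

```python
def match_seasons(torrent_seasons, wanted_seasons) -> bool:
    """A torrent passes only if all of its seasons are in the wanted set,
    mirroring the issubset check in __match_season_episodes."""
    return set(torrent_seasons).issubset(set(wanted_seasons))


# A torrent whose name yields no seasons produces an empty set, and the
# empty set is a subset of everything - so it passes this particular check.
no_season_passes = match_seasons([], [1])
```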

View File

@@ -3,7 +3,9 @@ from typing import List, Optional, Tuple, Union
 from ruamel.yaml import CommentedMap

+from app.core.config import settings
 from app.core.context import TorrentInfo
+from app.db.sitestatistic_oper import SiteStatisticOper
 from app.helper.sites import SitesHelper
 from app.log import logger
 from app.modules import _ModuleBase
@@ -50,6 +52,18 @@ class IndexerModule(_ModuleBase):
        :param page: 页码
        :return: 资源列表
        """

+       def __remove_duplicate(_torrents: List[TorrentInfo]) -> List[TorrentInfo]:
+           """
+           去除重复的种子
+           :param _torrents: 种子列表
+           :return: 去重后的种子列表
+           """
+           if not settings.SEARCH_MULTIPLE_NAME:
+               return _torrents
+           # 通过标题和描述去重
+           return list({f"{t.title}_{t.description}": t for t in _torrents}.values())
+
        # 确认搜索的名字
        if not keywords:
            # 浏览种子页
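The `__remove_duplicate` helper above de-duplicates via a dict comprehension keyed on title and description. Two properties of Python dicts make this work: keys keep first-insertion order, and a repeated key keeps the *last* value written. A standalone sketch using tuples in place of `TorrentInfo`:

```python
def remove_duplicate(torrents):
    """De-duplicate by a title+description key. Dicts preserve insertion
    order, and a repeated key keeps the last value, so later duplicates
    overwrite earlier ones while holding their original position."""
    return list({f"{t[0]}_{t[1]}": t for t in torrents}.values())


# (title, description, site) triples: the site2 copy of "A" wins, but
# stays in the slot where "A" first appeared.
items = [("A", "x", "site1"), ("B", "y", "site1"), ("A", "x", "site2")]
deduped = remove_duplicate(items)
```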
@@ -57,10 +71,12 @@ class IndexerModule(_ModuleBase):
        # 开始索引
        result_array = []

        # 开始计时
        start_time = datetime.now()

        # 搜索多个关键字
+       error_flag = False
        for search_word in keywords:
            # 可能为关键字或ttxxxx
            if search_word \
@@ -76,36 +92,51 @@ class IndexerModule(_ModuleBase):
            try:
                if site.get('parser') == "TNodeSpider":
-                   error_flag, result_array = TNodeSpider(site).search(
+                   error_flag, result = TNodeSpider(site).search(
                        keyword=search_word,
                        page=page
                    )
                elif site.get('parser') == "TorrentLeech":
-                   error_flag, result_array = TorrentLeech(site).search(
+                   error_flag, result = TorrentLeech(site).search(
                        keyword=search_word,
                        page=page
                    )
                elif site.get('parser') == "mTorrent":
-                   error_flag, result_array = MTorrentSpider(site).search(
+                   error_flag, result = MTorrentSpider(site).search(
                        keyword=search_word,
                        mtype=mtype,
                        page=page
                    )
                else:
-                   error_flag, result_array = self.__spider_search(
+                   error_flag, result = self.__spider_search(
                        search_word=search_word,
                        indexer=site,
                        mtype=mtype,
                        page=page
                    )
-               # 有结果后停止
-               if result_array:
+               if error_flag:
+                   break
+               if not result:
+                   continue
+               if settings.SEARCH_MULTIPLE_NAME:
+                   # 合并多个结果
+                   result_array.extend(result)
+               else:
+                   # 有结果就停止
+                   result_array = result
                    break
            except Exception as err:
                logger.error(f"{site.get('name')} 搜索出错:{str(err)}")

        # 索引花费的时间
-       seconds = round((datetime.now() - start_time).seconds, 1)
+       seconds = (datetime.now() - start_time).seconds
+       # 统计索引情况
+       domain = StringUtils.get_url_domain(site.get("domain"))
+       if error_flag:
+           SiteStatisticOper().fail(domain)
+       else:
+           SiteStatisticOper().success(domain=domain, seconds=seconds)

        # 返回结果
        if not result_array or len(result_array) == 0:
@@ -113,14 +144,16 @@ class IndexerModule(_ModuleBase):
            return []
        else:
            logger.info(f"{site.get('name')} 搜索完成,耗时 {seconds} 秒,返回数据:{len(result_array)}")
-           # 合并站点信息,以TorrentInfo返回
-           return [TorrentInfo(site=site.get("id"),
-                               site_name=site.get("name"),
-                               site_cookie=site.get("cookie"),
-                               site_ua=site.get("ua"),
-                               site_proxy=site.get("proxy"),
-                               site_order=site.get("pri"),
-                               **result) for result in result_array]
+           # 合并站点信息,以TorrentInfo返回
+           torrents = [TorrentInfo(site=site.get("id"),
+                                   site_name=site.get("name"),
+                                   site_cookie=site.get("cookie"),
+                                   site_ua=site.get("ua"),
+                                   site_proxy=site.get("proxy"),
+                                   site_order=site.get("pri"),
+                                   **result) for result in result_array]
+           # 去重
+           return __remove_duplicate(torrents)

    @staticmethod
    def __spider_search(indexer: CommentedMap,

View File

@@ -6,6 +6,7 @@ from typing import Tuple, List
 from ruamel.yaml import CommentedMap

 from app.core.config import settings
+from app.db.systemconfig_oper import SystemConfigOper
 from app.log import logger
 from app.schemas import MediaType
 from app.utils.http import RequestUtils
@@ -13,6 +14,9 @@ from app.utils.string import StringUtils

 class MTorrentSpider:
+    """
+    mTorrent API,需要缓存ApiKey
+    """
     _indexerid = None
     _domain = None
     _name = ""
@@ -28,14 +32,25 @@ class MTorrentSpider:
    _movie_category = ['401', '419', '420', '421', '439', '405', '404']
    _tv_category = ['403', '402', '435', '438', '404', '405']

+   # API KEY
+   _apikey = None
+   # JWT Token
+   _token = None
+
    # 标签
    _labels = {
-       0: "",
-       4: "中字",
-       6: "国配",
+       "0": "",
+       "1": "DIY",
+       "2": "国配",
+       "3": "DIY 国配",
+       "4": "中字",
+       "5": "DIY 中字",
+       "6": "国配 中字",
+       "7": "DIY 国配 中字"
    }

    def __init__(self, indexer: CommentedMap):
+       self.systemconfig = SystemConfigOper()
        if indexer:
            self._indexerid = indexer.get('id')
            self._domain = indexer.get('domain')
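The expanded `_labels` table above enumerates every combination of three tags, which reads like a 3-bit mask — an inference from the table (DIY=1, 国配=2, 中字=4), not a documented encoding of the site's API. Under that assumption the whole table can be generated:

```python
# Hypothetical reconstruction: the 8 entries of _labels look like the
# combinations of three bit flags. The flag values are assumptions.
FLAGS = [(1, "DIY"), (2, "国配"), (4, "中字")]


def label_text(value: int) -> str:
    """Concatenate the names of all flags set in value, space-separated."""
    return " ".join(name for bit, name in FLAGS if value & bit)


labels = {str(i): label_text(i) for i in range(8)}
```

If the inference holds, decoding with a mask would also survive the site adding a fourth tag, where the literal dict would need eight new entries.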
@@ -45,8 +60,17 @@ class MTorrentSpider:
            self._proxy = settings.PROXY
            self._cookie = indexer.get('cookie')
            self._ua = indexer.get('ua')
+           self._apikey = indexer.get('apikey')
+           self._token = indexer.get('token')

    def search(self, keyword: str, mtype: MediaType = None, page: int = 0) -> Tuple[bool, List[dict]]:
+       """
+       搜索
+       """
+       # 检查ApiKey
+       if not self._apikey:
+           return True, []
+
        if not mtype:
            categories = []
        elif mtype == MediaType.TV:
@@ -63,31 +87,45 @@ class MTorrentSpider:
        res = RequestUtils(
            headers={
                "Content-Type": "application/json",
-               "User-Agent": f"{self._ua}"
+               "User-Agent": f"{self._ua}",
+               "x-api-key": self._apikey
            },
-           cookies=self._cookie,
            proxies=self._proxy,
            referer=f"{self._domain}browse",
-           timeout=30
+           timeout=15
        ).post_res(url=self._searchurl, json=params)
        torrents = []
        if res and res.status_code == 200:
            results = res.json().get('data', {}).get("data") or []
            for result in results:
+               category_value = result.get('category')
+               if category_value in self._tv_category \
+                       and category_value not in self._movie_category:
+                   category = MediaType.TV.value
+               elif category_value in self._movie_category:
+                   category = MediaType.MOVIE.value
+               else:
+                   category = MediaType.UNKNOWN.value
+               labels_value = self._labels.get(result.get('labels') or "0") or ""
+               if labels_value:
+                   labels = labels_value.split()
+               else:
+                   labels = []
                torrent = {
                    'title': result.get('name'),
                    'description': result.get('smallDescr'),
                    'enclosure': self.__get_download_url(result.get('id')),
                    'pubdate': StringUtils.format_timestamp(result.get('createdDate')),
-                   'size': result.get('size'),
-                   'seeders': result.get('status', {}).get("seeders"),
-                   'peers': result.get('status', {}).get("leechers"),
-                   'grabs': result.get('status', {}).get("timesCompleted"),
+                   'size': int(result.get('size') or '0'),
+                   'seeders': int(result.get('status', {}).get("seeders") or '0'),
+                   'peers': int(result.get('status', {}).get("leechers") or '0'),
+                   'grabs': int(result.get('status', {}).get("timesCompleted") or '0'),
                    'downloadvolumefactor': self.__get_downloadvolumefactor(result.get('status', {}).get("discount")),
                    'uploadvolumefactor': self.__get_uploadvolumefactor(result.get('status', {}).get("discount")),
                    'page_url': self._pageurl % (self._domain, result.get('id')),
                    'imdbid': self.__find_imdbid(result.get('imdb')),
-                   'labels': [self._labels.get(result.get('labels') or 0)] if result.get('labels') else []
+                   'labels': labels,
+                   'category': category
                }
                torrents.append(torrent)
        elif res is not None:
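The numeric fields above switch from raw API values to the `int(x or '0')` pattern, which coerces `None`, `''`, and `0` alike to an integer while still raising on genuinely malformed input. A one-liner sketch of the idiom:

```python
def to_int(value) -> int:
    """Coerce an API field to int, treating None and '' as zero.
    int(value or '0') still raises ValueError on garbage like 'abc',
    which is usually preferable to silently storing a wrong count."""
    return int(value or '0')
```

This keeps downstream sorting and comparison of `size`/`seeders`/`peers` from mixing `None` with `int`, which raises `TypeError` in Python 3.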
@@ -100,6 +138,9 @@ class MTorrentSpider:
    @staticmethod
    def __find_imdbid(imdb: str) -> str:
+       """
+       从imdb链接中提取imdbid
+       """
        if imdb:
            m = re.search(r"tt\d+", imdb)
            if m:
@@ -108,6 +149,9 @@ class MTorrentSpider:
    @staticmethod
    def __get_downloadvolumefactor(discount: str) -> float:
+       """
+       获取下载系数
+       """
        discount_dict = {
            "FREE": 0,
            "PERCENT_50": 0.5,
@@ -121,6 +165,9 @@ class MTorrentSpider:
    @staticmethod
    def __get_uploadvolumefactor(discount: str) -> float:
+       """
+       获取上传系数
+       """
        uploadvolumefactor_dict = {
            "_2X": 2.0,
            "_2X_FREE": 2.0,
@@ -131,12 +178,22 @@ class MTorrentSpider:
        return 1

    def __get_download_url(self, torrent_id: str) -> str:
+       """
+       获取下载链接,返回base64编码的json字符串及URL
+       """
        url = self._downloadurl % self._domain
        params = {
            'method': 'post',
+           'cookie': False,
            'params': {
                'id': torrent_id
            },
+           'header': {
+               'Content-Type': 'application/json',
+               'User-Agent': f'{self._ua}',
+               'Accept': 'application/json, text/plain, */*',
+               'x-api-key': self._apikey
+           },
            'result': 'data'
        }
        # base64编码
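The `# base64编码` step that the hunk ends on serializes the request description to JSON and base64-encodes it. A self-contained sketch of that encoding, assuming the field names shown in the diff (the exact wire format is this project's own convention, not a public mTorrent API):

```python
import base64
import json


def encode_download_params(torrent_id: str, apikey: str) -> str:
    """Serialize the request description to JSON, then base64-encode it,
    mirroring the params dict built in __get_download_url above."""
    params = {
        'method': 'post',
        'cookie': False,
        'params': {'id': torrent_id},
        'header': {'x-api-key': apikey},
        'result': 'data',
    }
    return base64.b64encode(json.dumps(params).encode()).decode()


encoded = encode_download_params("12345", "my-api-key")
# Round-trip to confirm the payload survives encoding unchanged.
decoded = json.loads(base64.b64decode(encoded))
```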

View File

@@ -95,7 +95,7 @@ class TorrentSpider:
        self.render = indexer.get('render')
        self.domain = indexer.get('domain')
        self.result_num = int(indexer.get('result_num') or 100)
-       self._timeout = int(indexer.get('timeout') or 30)
+       self._timeout = int(indexer.get('timeout') or 15)
        self.page = page
        if self.domain and not str(self.domain).endswith("/"):
            self.domain = self.domain + "/"
@@ -383,9 +383,19 @@ class TorrentSpider:
            item = self.__index(items, selector)
            download_link = self.__filter_text(item, selector.get('filters'))
            if download_link:
-               if not download_link.startswith("http") and not download_link.startswith("magnet"):
-                   self.torrents_info['enclosure'] = self.domain + download_link[1:] if download_link.startswith(
-                       "/") else self.domain + download_link
+               if not download_link.startswith("http") \
+                       and not download_link.startswith("magnet"):
+                   _scheme, _domain = StringUtils.get_url_netloc(self.domain)
+                   if _domain in download_link:
+                       if download_link.startswith("/"):
+                           self.torrents_info['enclosure'] = f"{_scheme}:{download_link}"
+                       else:
+                           self.torrents_info['enclosure'] = f"{_scheme}://{download_link}"
+                   else:
+                       if download_link.startswith("/"):
+                           self.torrents_info['enclosure'] = f"{self.domain}{download_link[1:]}"
+                       else:
+                           self.torrents_info['enclosure'] = f"{self.domain}{download_link}"
                else:
                    self.torrents_info['enclosure'] = download_link
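The hunk above hand-rolls relative-link resolution, including the scheme-relative `//host/...` case the old code mishandled. A stdlib-based sketch of the same intent using `urllib.parse.urljoin` — not the project's actual helper (`StringUtils.get_url_netloc` is MoviePilot's own), and note `urljoin` resolves against the base's path rather than always appending, so behavior differs when the site domain carries a subpath:

```python
from urllib.parse import urljoin


def resolve_enclosure(domain: str, link: str) -> str:
    """Resolve a scraped download link against the site domain. urljoin
    covers root-relative '/x', bare-relative 'x', and scheme-relative
    '//host/x' forms in one call; absolute links pass through."""
    if link.startswith(("http", "magnet")):
        return link
    return urljoin(domain, link)
```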

View File

@@ -77,7 +77,7 @@ class TNodeSpider:
            },
            cookies=self._cookie,
            proxies=self._proxy,
-           timeout=30
+           timeout=15
        ).post_res(url=self._searchurl, json=params)
        torrents = []
        if res and res.status_code == 200:

View File

@@ -40,7 +40,7 @@ class TorrentLeech:
            },
            cookies=self._indexer.get('cookie'),
            proxies=self._proxy,
-           timeout=30
+           timeout=15
        ).get_res(url)
        torrents = []
        if res and res.status_code == 200:

View File

@@ -52,12 +52,12 @@ class Jellyfin:
            return []
        req_url = "%sLibrary/SelectableMediaFolders?api_key=%s" % (self._host, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json()
                else:
                    logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
                    return []
        except Exception as e:
            logger.error(f"连接Library/SelectableMediaFolders 出错:" + str(e))
            return []
@@ -70,29 +70,29 @@ class Jellyfin:
            return []
        req_url = "%sLibrary/VirtualFolders?api_key=%s" % (self._host, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    library_items = res.json()
                    librarys = []
                    for library_item in library_items:
                        library_name = library_item.get('Name')
                        pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
                        library_paths = []
                        for path in pathInfos:
                            if path.get('NetworkPath'):
                                library_paths.append(path.get('NetworkPath'))
                            else:
                                library_paths.append(path.get('Path'))
                        if library_name and library_paths:
                            librarys.append({
                                'Name': library_name,
                                'Path': library_paths
                            })
                    return librarys
                else:
                    logger.error(f"Library/VirtualFolders 未获取到返回数据")
                    return []
        except Exception as e:
            logger.error(f"连接Library/VirtualFolders 出错:" + str(e))
            return []
@@ -109,12 +109,12 @@ class Jellyfin:
            user = self.user
        req_url = f"{self._host}Users/{user}/Views?api_key={self._apikey}"
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json().get("Items")
                else:
                    logger.error(f"Users/Views 未获取到返回数据")
                    return []
        except Exception as e:
            logger.error(f"连接Users/Views 出错:" + str(e))
            return []
@@ -163,12 +163,12 @@ class Jellyfin:
            return 0
        req_url = "%sUsers?api_key=%s" % (self._host, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    return len(res.json())
                else:
                    logger.error(f"Users 未获取到返回数据")
                    return 0
        except Exception as e:
            logger.error(f"连接Users出错:" + str(e))
            return 0
@@ -181,20 +181,20 @@ class Jellyfin:
            return None
        req_url = "%sUsers?api_key=%s" % (self._host, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    users = res.json()
                    # 先查询是否有与当前用户名称匹配的
                    if user_name:
                        for user in users:
                            if user.get("Name") == user_name:
                                return user.get("Id")
                    # 查询管理员
                    for user in users:
                        if user.get("Policy", {}).get("IsAdministrator"):
                            return user.get("Id")
                else:
                    logger.error(f"Users 未获取到返回数据")
        except Exception as e:
            logger.error(f"连接Users出错:" + str(e))
        return None
@@ -244,11 +244,11 @@ class Jellyfin:
            return None
        req_url = "%sSystem/Info?api_key=%s" % (self._host, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    return res.json().get("Id")
                else:
                    logger.error(f"System/Info 未获取到返回数据")
        except Exception as e:
            logger.error(f"连接System/Info出错:" + str(e))
        return None
@@ -262,17 +262,17 @@ class Jellyfin:
            return schemas.Statistic()
        req_url = "%sItems/Counts?api_key=%s" % (self._host, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    result = res.json()
                    return schemas.Statistic(
                        movie_count=result.get("MovieCount") or 0,
                        tv_count=result.get("SeriesCount") or 0,
                        episode_count=result.get("EpisodeCount") or 0
                    )
                else:
                    logger.error(f"Items/Counts 未获取到返回数据")
                    return schemas.Statistic()
        except Exception as e:
            logger.error(f"连接Items/Counts出错:" + str(e))
            return schemas.Statistic()
@@ -287,14 +287,14 @@ class Jellyfin:
                   "api_key=%s&searchTerm=%s&IncludeItemTypes=Series&Limit=10&Recursive=true") % (
                      self._host, self.user, self._apikey, name)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    res_items = res.json().get("Items")
                    if res_items:
                        for res_item in res_items:
                            if res_item.get('Name') == name and (
                                    not year or str(res_item.get('ProductionYear')) == str(year)):
                                return res_item.get('Id')
        except Exception as e:
            logger.error(f"连接Items出错:" + str(e))
        return None
@@ -317,36 +317,36 @@ class Jellyfin:
                   "api_key=%s&searchTerm=%s&IncludeItemTypes=Movie&Limit=10&Recursive=true") % (
                      self._host, self.user, self._apikey, title)
        try:
-           res = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res:
                if res:
                    res_items = res.json().get("Items")
                    if res_items:
                        ret_movies = []
                        for item in res_items:
                            item_tmdbid = item.get("ProviderIds", {}).get("Tmdb")
                            mediaserver_item = schemas.MediaServerItem(
                                server="jellyfin",
                                library=item.get("ParentId"),
                                item_id=item.get("Id"),
                                item_type=item.get("Type"),
                                title=item.get("Name"),
                                original_title=item.get("OriginalTitle"),
                                year=item.get("ProductionYear"),
                                tmdbid=int(item_tmdbid) if item_tmdbid else None,
                                imdbid=item.get("ProviderIds", {}).get("Imdb"),
                                tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
                                path=item.get("Path")
                            )
                            if tmdb_id and item_tmdbid:
                                if str(item_tmdbid) != str(tmdb_id):
                                    continue
                                else:
                                    ret_movies.append(mediaserver_item)
                                    continue
                            if mediaserver_item.title == title and (
                                    not year or str(mediaserver_item.year) == str(year)):
                                ret_movies.append(mediaserver_item)
                        return ret_movies
        except Exception as e:
            logger.error(f"连接Items出错:" + str(e))
        return None
@@ -387,25 +387,25 @@ class Jellyfin:
        try:
            req_url = "%sShows/%s/Episodes?season=%s&&userId=%s&isMissing=false&api_key=%s" % (
                self._host, item_id, season, self.user, self._apikey)
-           res_json = RequestUtils().get_res(req_url)
+           with RequestUtils().get_res(req_url) as res_json:
                if res_json:
                    tv_info = res_json.json()
                    res_items = tv_info.get("Items")
                    # 返回的季集信息
                    season_episodes = {}
                    for res_item in res_items:
                        season_index = res_item.get("ParentIndexNumber")
                        if not season_index:
                            continue
                        if season and season != season_index:
                            continue
                        episode_index = res_item.get("IndexNumber")
                        if not episode_index:
                            continue
                        if not season_episodes.get(season_index):
                            season_episodes[season_index] = []
                        season_episodes[season_index].append(episode_index)
                    return item_id, season_episodes
        except Exception as e:
            logger.error(f"连接Shows/Id/Episodes出错:" + str(e))
        return None, None
@@ -422,20 +422,73 @@ class Jellyfin:
            return None
        req_url = "%sItems/%s/RemoteImages?api_key=%s" % (self._host, item_id, self._apikey)
        try:
-           res = RequestUtils().get_res(req_url)
+           res = RequestUtils(timeout=10).get_res(req_url)
            if res:
                images = res.json().get("Images")
                for image in images:
                    if image.get("ProviderName") == "TheMovieDb" and image.get("Type") == image_type:
                        return image.get("Url")
-               # return images[0].get("Url")
+               # 首选无则返回第一张
            else:
-               logger.error(f"Items/RemoteImages 未获取到返回数据")
-               return None
+               logger.info(f"Items/RemoteImages 未获取到返回数据,采用本地图片")
+               return self.generate_image_link(item_id, image_type, True)
        except Exception as e:
            logger.error(f"连接Items/Id/RemoteImages出错:" + str(e))
            return None
        return None
def generate_image_link(self, item_id: str, image_type: str, host_type: bool) -> Optional[str]:
"""
根据ItemId和imageType查询本地对应图片
:param item_id: 在Jellyfin中的ID
:param image_type: 图片类型如Backdrop、Primary
:param host_type: True为外网链接, False为内网链接
:return: 图片对应在host_type的播放器中的URL
"""
if not self._playhost:
logger.error("Jellyfin外网播放地址未能获取或为空")
return None
# 检测是否为TV
_parent_id = self.get_itemId_ancestors(item_id, 0, "ParentBackdropItemId")
if _parent_id:
item_id = _parent_id
_host = self._host
if host_type:
_host = self._playhost
req_url = "%sItems/%s/Images/%s" % (_host, item_id, image_type)
try:
with RequestUtils().get_res(req_url) as res:
if res and res.status_code != 404:
logger.info(f"影片图片链接:{res.url}")
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
except Exception as e:
logger.error(f"连接Items/Id/Images出错" + str(e))
return None
def get_itemId_ancestors(self, item_id: str, index: int, key: str) -> Optional[Union[str, list, int, dict, bool]]:
"""
获得itemId的父item
:param item_id: 在Jellyfin中剧集的ID (S01E02的E02的item_id)
:param index: 第几个json对象
:param key: 需要得到父item中的键值对
:return key对应类型的值
"""
req_url = "%sItems/%s/Ancestors?api_key=%s" % (self._host, item_id, self._apikey)
try:
with RequestUtils().get_res(req_url) as res:
if res:
return res.json()[index].get(key)
else:
logger.error(f"Items/Id/Ancestors 未获取到返回数据")
return None
except Exception as e:
logger.error(f"连接Items/Id/Ancestors出错" + str(e))
return None
def refresh_root_library(self) -> bool: def refresh_root_library(self) -> bool:
""" """
通知Jellyfin刷新整个媒体库 通知Jellyfin刷新整个媒体库
@@ -444,11 +497,11 @@ class Jellyfin:
return False return False
req_url = "%sLibrary/Refresh?api_key=%s" % (self._host, self._apikey) req_url = "%sLibrary/Refresh?api_key=%s" % (self._host, self._apikey)
try: try:
res = RequestUtils().post_res(req_url) with RequestUtils().post_res(req_url) as res:
if res: if res:
return True return True
else: else:
logger.info(f"刷新媒体库失败无法连接Jellyfin") logger.info(f"刷新媒体库失败无法连接Jellyfin")
except Exception as e: except Exception as e:
logger.error(f"连接Library/Refresh出错" + str(e)) logger.error(f"连接Library/Refresh出错" + str(e))
return False return False
@@ -579,23 +632,23 @@ class Jellyfin:
req_url = "%sUsers/%s/Items/%s?api_key=%s" % ( req_url = "%sUsers/%s/Items/%s?api_key=%s" % (
self._host, self.user, itemid, self._apikey) self._host, self.user, itemid, self._apikey)
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res and res.status_code == 200: if res and res.status_code == 200:
item = res.json() item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb") tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem( return schemas.MediaServerItem(
server="jellyfin", server="jellyfin",
library=item.get("ParentId"), library=item.get("ParentId"),
item_id=item.get("Id"), item_id=item.get("Id"),
item_type=item.get("Type"), item_type=item.get("Type"),
title=item.get("Name"), title=item.get("Name"),
original_title=item.get("OriginalTitle"), original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"), year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None, tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"), imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"), tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path") path=item.get("Path")
) )
except Exception as e: except Exception as e:
logger.error(f"连接Users/Items出错" + str(e)) logger.error(f"连接Users/Items出错" + str(e))
return None return None
@@ -610,17 +663,17 @@ class Jellyfin:
yield None yield None
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey) req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res and res.status_code == 200: if res and res.status_code == 200:
results = res.json().get("Items") or [] results = res.json().get("Items") or []
for result in results: for result in results:
if not result: if not result:
continue continue
if result.get("Type") in ["Movie", "Series"]: if result.get("Type") in ["Movie", "Series"]:
yield self.get_iteminfo(result.get("Id")) yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"): elif "Folder" in result.get("Type"):
for item in self.get_items(result.get("Id")): for item in self.get_items(result.get("Id")):
yield item yield item
except Exception as e: except Exception as e:
logger.error(f"连接Users/Items出错" + str(e)) logger.error(f"连接Users/Items出错" + str(e))
yield None yield None
@@ -708,46 +761,50 @@ class Jellyfin:
req_url = (f"{self._host}Users/{user}/Items/Resume?" req_url = (f"{self._host}Users/{user}/Items/Resume?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path") f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res: if res:
result = res.json().get("Items") or [] result = res.json().get("Items") or []
ret_resume = [] ret_resume = []
# 用户媒体库文件夹列表(排除黑名单) # 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders() library_folders = self.get_user_library_folders()
for item in result: for item in result:
if len(ret_resume) == num: if len(ret_resume) == num:
break break
if item.get("Type") not in ["Movie", "Episode"]: if item.get("Type") not in ["Movie", "Episode"]:
continue continue
item_path = item.get("Path") item_path = item.get("Path")
if item_path and library_folders and not any( if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders): str(item_path).startswith(folder) for folder in library_folders):
continue continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id")) link = self.get_play_url(item.get("Id"))
if item.get("BackdropImageTags"): if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"), image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0]) image_tag=item.get("BackdropImageTags")[0])
else: else:
image = self.__get_local_image_by_id(item.get("Id")) image = self.__get_local_image_by_id(item.get("Id"))
if item_type == MediaType.MOVIE.value: # 小部分剧集无[xxx-S01E01-thumb.jpg]图片
title = item.get("Name") with RequestUtils().get_res(image) as image_res:
subtitle = item.get("ProductionYear") if not image_res or image_res.status_code == 404:
else: image = self.generate_image_link(item.get("Id"), "Backdrop", False)
title = f'{item.get("SeriesName")}' if item_type == MediaType.MOVIE.value:
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}' title = item.get("Name")
ret_resume.append(schemas.MediaServerPlayItem( subtitle = item.get("ProductionYear")
id=item.get("Id"), else:
title=title, title = f'{item.get("SeriesName")}'
subtitle=subtitle, subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
type=item_type, ret_resume.append(schemas.MediaServerPlayItem(
image=image, id=item.get("Id"),
link=link, title=title,
percent=item.get("UserData", {}).get("PlayedPercentage") subtitle=subtitle,
)) type=item_type,
return ret_resume image=image,
else: link=link,
logger.error(f"Users/Items/Resume 未获取到返回数据") percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
except Exception as e: except Exception as e:
logger.error(f"连接Users/Items/Resume出错" + str(e)) logger.error(f"连接Users/Items/Resume出错" + str(e))
return [] return []
@@ -765,35 +822,35 @@ class Jellyfin:
req_url = (f"{self._host}Users/{user}/Items/Latest?" req_url = (f"{self._host}Users/{user}/Items/Latest?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path") f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try: try:
res = RequestUtils().get_res(req_url) with RequestUtils().get_res(req_url) as res:
if res: if res:
result = res.json() or [] result = res.json() or []
ret_latest = [] ret_latest = []
# 用户媒体库文件夹列表(排除黑名单) # 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders() library_folders = self.get_user_library_folders()
for item in result: for item in result:
if len(ret_latest) == num: if len(ret_latest) == num:
break break
if item.get("Type") not in ["Movie", "Series"]: if item.get("Type") not in ["Movie", "Series"]:
continue continue
item_path = item.get("Path") item_path = item.get("Path")
if item_path and library_folders and not any( if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders): str(item_path).startswith(folder) for folder in library_folders):
continue continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id")) link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id")) image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem( ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"), id=item.get("Id"),
title=item.get("Name"), title=item.get("Name"),
subtitle=item.get("ProductionYear"), subtitle=item.get("ProductionYear"),
type=item_type, type=item_type,
image=image, image=image,
link=link link=link
)) ))
return ret_latest return ret_latest
else: else:
logger.error(f"Users/Items/Latest 未获取到返回数据") logger.error(f"Users/Items/Latest 未获取到返回数据")
except Exception as e: except Exception as e:
logger.error(f"连接Users/Items/Latest出错" + str(e)) logger.error(f"连接Users/Items/Latest出错" + str(e))
return [] return []
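A recurring change in this file is converting bare `RequestUtils().get_res(...)` calls into `with ... as res:` blocks so the underlying HTTP connection is released even when the block exits via an early `return` or an exception. A minimal sketch of why the context-manager form guarantees cleanup, using a hypothetical stand-in response class instead of the real `RequestUtils`:

```python
class FakeResponse:
    """Stand-in for a requests-style response that supports the context-manager protocol."""

    def __init__(self, status_code: int = 200):
        self.status_code = status_code
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # runs on normal exit, early return, or exception: the connection is released
        self.closed = True
        return False


def fetch_status(res: FakeResponse) -> int:
    # mirrors the diff's pattern: early-return from inside the with-block is safe
    with res as r:
        if r.status_code == 404:
            return -1
        return r.status_code
```

Even though `fetch_status` returns from inside the block, `__exit__` still runs, so the response is always closed; `requests.Response` supports the same protocol.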


@@ -142,7 +142,11 @@ class Plex:
             return schemas.Statistic()
         sections = self._plex.library.sections()
         MovieCount = SeriesCount = EpisodeCount = 0
+        # 媒体库白名单
+        allow_library = [lib.id for lib in self.get_librarys()]
         for sec in sections:
+            if str(sec.key) not in allow_library:
+                continue
             if sec.type == "movie":
                 MovieCount += sec.totalSize
             if sec.type == "show":
@@ -610,7 +614,10 @@ class Plex:
         """
         if not self._plex:
             return []
-        items = self._plex.fetchItems('/hubs/continueWatching/items', container_start=0, container_size=num)
+        # 媒体库白名单
+        allow_library = ",".join([lib.id for lib in self.get_librarys()])
+        params = {'contentDirectoryID': allow_library}
+        items = self._plex.fetchItems("/hubs/continueWatching/items", container_start=0, container_size=num, params=params)
         ret_resume = []
         for item in items:
             item_type = MediaType.MOVIE.value if item.TYPE == "movie" else MediaType.TV.value
@@ -639,20 +646,63 @@ class Plex:
         """
         if not self._plex:
             return None
-        items = self._plex.fetchItems('/library/recentlyAdded', container_start=0, container_size=num)
+        # 请求参数(除黑名单)
+        allow_library = ",".join([lib.id for lib in self.get_librarys()])
+        params = {
+            "contentDirectoryID": allow_library,
+            "count": num,
+            "excludeContinueWatching": 1
+        }
         ret_resume = []
-        for item in items:
-            item_type = MediaType.MOVIE.value if item.TYPE == "movie" else MediaType.TV.value
-            link = self.get_play_url(item.key)
-            title = item.title if item_type == MediaType.MOVIE.value else \
-                "%s%s" % (item.parentTitle, item.index)
-            image = item.posterUrl
-            ret_resume.append(schemas.MediaServerPlayItem(
-                id=item.key,
-                title=title,
-                subtitle=item.year,
-                type=item_type,
-                image=image,
-                link=link
-            ))
+        sub_result = []
+        offset = 0
+        while True:
+            if len(ret_resume) >= num:
+                break
+            # 获取所有资料库
+            hubs = self._plex.fetchItems(
+                '/hubs/promoted',
+                container_start=offset,
+                container_size=num,
+                maxresults=num,
+                params=params
+            )
+            if len(hubs) == 0:
+                break
+            # 合并排序
+            for hub in hubs:
+                for item in hub.items:
+                    sub_result.append(item)
+            sub_result.sort(key=lambda x: x.addedAt, reverse=True)
+            for item in sub_result:
+                if len(ret_resume) >= num:
+                    break
+                item_type, title, image = "", "", ""
+                if item.TYPE == "movie":
+                    item_type = MediaType.MOVIE.value
+                    title = item.title
+                    image = item.posterUrl
+                elif item.TYPE == "season":
+                    item_type = MediaType.TV.value
+                    title = "%s%s" % (item.parentTitle, item.index)
+                    image = item.posterUrl
+                elif item.TYPE == "episode":
+                    item_type = MediaType.TV.value
+                    title = "%s%s季 第%s" % (item.grandparentTitle, item.parentIndex, item.index)
+                    thumb = (item.parentThumb or item.grandparentThumb or '').lstrip('/')
+                    image = (self._host + thumb + f"?X-Plex-Token={self._token}")
+                link = self.get_play_url(item.key)
+                ret_resume.append(schemas.MediaServerPlayItem(
+                    id=item.key,
+                    title=title,
+                    subtitle=item.year,
+                    type=item_type,
+                    image=image,
+                    link=link
+                ))
+            offset += num
         return ret_resume[:num]
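The new recently-added logic above gathers items from every promoted hub, merges them into one list, sorts by `addedAt` descending, and keeps the top `num`. The merge-and-sort step can be sketched with plain dataclasses instead of plexapi objects (names and data are illustrative):

```python
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    addedAt: int  # plexapi exposes a datetime; any comparable value sorts the same way


def merge_recent(hubs, num):
    # flatten all hub item lists, then newest-first, then truncate to `num`
    merged = [item for hub in hubs for item in hub]
    merged.sort(key=lambda x: x.addedAt, reverse=True)
    return merged[:num]


hubs = [
    [Item("A", 3), Item("B", 1)],
    [Item("C", 5), Item("D", 2)],
]
recent = merge_recent(hubs, 3)
# titles in order: C, A, D
```

Sorting the merged list (rather than each hub separately) is what keeps items from different libraries interleaved in true chronological order.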


@@ -133,7 +133,7 @@ class Qbittorrent:
         except Exception as err:
             logger.error(f"删除种子Tag出错:{str(err)}")
             return False

     def remove_torrents_tag(self, ids: Union[str, list], tag: Union[str, list]) -> bool:
         """
         移除种子Tag
@@ -148,7 +148,7 @@ class Qbittorrent:
         except Exception as err:
             logger.error(f"移除种子Tag出错:{str(err)}")
             return False

     def set_torrents_tag(self, ids: Union[str, list], tags: list):
         """
         设置种子状态为已整理,以及是否强制做种
@@ -372,6 +372,24 @@ class Qbittorrent:
             logger.error(f"设置速度限制出错:{str(err)}")
             return False

+    def get_speed_limit(self) -> Optional[Tuple[float, float]]:
+        """
+        获取QB速度
+        :return: 返回download_limit 和upload_limit 默认是0
+        """
+        if not self.qbc:
+            return None
+        download_limit = 0
+        upload_limit = 0
+        try:
+            download_limit = self.qbc.transfer.download_limit
+            upload_limit = self.qbc.transfer.upload_limit
+        except Exception as err:
+            logger.error(f"获取速度限制出错:{str(err)}")
+        return download_limit / 1024, upload_limit / 1024
+
     def recheck_torrents(self, ids: Union[str, list]) -> bool:
         """
         重新校验种子
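The new `get_speed_limit` divides the values reported by qBittorrent by 1024 before returning them: the client reports transfer limits in bytes per second, while the caller works in KiB/s, and 0 means "no limit" on both sides. The conversion in isolation (a sketch independent of the qbittorrent-api client):

```python
from typing import Tuple


def to_kib_per_s(download_bps: int, upload_bps: int) -> Tuple[float, float]:
    # qBittorrent reports limits in bytes/s; 0 stays 0 and still means "unlimited"
    return download_bps / 1024, upload_bps / 1024
```

For example, a 2048 B/s download limit with no upload limit converts to `(2.0, 0.0)`.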


@@ -41,32 +41,32 @@ class Slack:
         # 注册消息响应
         @slack_app.event("message")
         def slack_message(message):
-            local_res = requests.post(self._ds_url, json=message, timeout=10)
-            logger.debug("message: %s processed, response is: %s" % (message, local_res.text))
+            with requests.post(self._ds_url, json=message, timeout=10) as local_res:
+                logger.debug("message: %s processed, response is: %s" % (message, local_res.text))

         @slack_app.action(re.compile(r"actionId-\d+"))
         def slack_action(ack, body):
             ack()
-            local_res = requests.post(self._ds_url, json=body, timeout=60)
-            logger.debug("message: %s processed, response is: %s" % (body, local_res.text))
+            with requests.post(self._ds_url, json=body, timeout=60) as local_res:
+                logger.debug("message: %s processed, response is: %s" % (body, local_res.text))

         @slack_app.event("app_mention")
         def slack_mention(say, body):
             say(f"收到,请稍等... <@{body.get('event', {}).get('user')}>")
-            local_res = requests.post(self._ds_url, json=body, timeout=10)
-            logger.debug("message: %s processed, response is: %s" % (body, local_res.text))
+            with requests.post(self._ds_url, json=body, timeout=10) as local_res:
+                logger.debug("message: %s processed, response is: %s" % (body, local_res.text))

         @slack_app.shortcut(re.compile(r"/*"))
         def slack_shortcut(ack, body):
             ack()
-            local_res = requests.post(self._ds_url, json=body, timeout=10)
-            logger.debug("message: %s processed, response is: %s" % (body, local_res.text))
+            with requests.post(self._ds_url, json=body, timeout=10) as local_res:
+                logger.debug("message: %s processed, response is: %s" % (body, local_res.text))

         @slack_app.command(re.compile(r"/*"))
         def slack_command(ack, body):
             ack()
-            local_res = requests.post(self._ds_url, json=body, timeout=10)
-            logger.debug("message: %s processed, response is: %s" % (body, local_res.text))
+            with requests.post(self._ds_url, json=body, timeout=10) as local_res:
+                logger.debug("message: %s processed, response is: %s" % (body, local_res.text))

         # 启动服务
         try:


@@ -183,7 +183,7 @@ class SynologyChat:
             ret = self._req.get_res(url=req_url)
             if ret and ret.status_code == 200:
                 users = ret.json().get("data", {}).get("users", []) or []
-                return [user.get("user_id") for user in users]
+                return [user.get("user_id") for user in users if user.get("deleted", True) is False]
             else:
                 return []
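The one-line SynologyChat change filters deleted accounts out of the user list. Note the deliberate default: `user.get("deleted", True)` treats a record with no `deleted` field as deleted, so only users explicitly marked `deleted: False` survive. A sketch with hypothetical data:

```python
def active_user_ids(users):
    # keep only users explicitly marked deleted == False;
    # a missing "deleted" key defaults to True and the user is skipped
    return [u.get("user_id") for u in users if u.get("deleted", True) is False]


users = [
    {"user_id": 1, "deleted": False},
    {"user_id": 2, "deleted": True},
    {"user_id": 3},  # no "deleted" field -> treated as deleted
]
# active_user_ids(users) -> [1]
```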


@@ -1,6 +1,8 @@
from pathlib import Path from pathlib import Path
from typing import Optional, List, Tuple, Union from typing import Optional, List, Tuple, Union
import cn2an
from app import schemas from app import schemas
from app.core.config import settings from app.core.config import settings
from app.core.context import MediaInfo from app.core.context import MediaInfo
@@ -10,7 +12,8 @@ from app.modules import _ModuleBase
from app.modules.themoviedb.category import CategoryHelper from app.modules.themoviedb.category import CategoryHelper
from app.modules.themoviedb.scraper import TmdbScraper from app.modules.themoviedb.scraper import TmdbScraper
from app.modules.themoviedb.tmdb_cache import TmdbCache from app.modules.themoviedb.tmdb_cache import TmdbCache
from app.modules.themoviedb.tmdbapi import TmdbHelper from app.modules.themoviedb.tmdbapi import TmdbApi
from app.schemas import MediaPerson
from app.schemas.types import MediaType, MediaImageType from app.schemas.types import MediaType, MediaImageType
from app.utils.http import RequestUtils from app.utils.http import RequestUtils
from app.utils.system import SystemUtils from app.utils.system import SystemUtils
@@ -24,7 +27,7 @@ class TheMovieDbModule(_ModuleBase):
# 元数据缓存 # 元数据缓存
cache: TmdbCache = None cache: TmdbCache = None
# TMDB # TMDB
tmdb: TmdbHelper = None tmdb: TmdbApi = None
# 二级分类 # 二级分类
category: CategoryHelper = None category: CategoryHelper = None
# 刮削器 # 刮削器
@@ -32,12 +35,13 @@ class TheMovieDbModule(_ModuleBase):
def init_module(self) -> None: def init_module(self) -> None:
self.cache = TmdbCache() self.cache = TmdbCache()
self.tmdb = TmdbHelper() self.tmdb = TmdbApi()
self.category = CategoryHelper() self.category = CategoryHelper()
self.scraper = TmdbScraper(self.tmdb) self.scraper = TmdbScraper(self.tmdb)
def stop(self): def stop(self):
self.cache.save() self.cache.save()
self.tmdb.close()
def test(self) -> Tuple[bool, str]: def test(self) -> Tuple[bool, str]:
""" """
@@ -67,63 +71,77 @@ class TheMovieDbModule(_ModuleBase):
:param cache: 是否使用缓存 :param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息 :return: 识别的媒体信息,包括剧集信息
""" """
if settings.RECOGNIZE_SOURCE != "themoviedb": if not tmdbid and not meta:
return None
if meta and not tmdbid \
and settings.RECOGNIZE_SOURCE != "themoviedb":
return None return None
if not meta: if not meta:
# 未提供元数据时直接使用tmdbid查询不使用缓存
cache_info = {} cache_info = {}
elif not meta.name: elif not meta.name:
logger.warn("识别媒体信息时未提供元数据名称") logger.warn("识别媒体信息时未提供元数据名称")
return None return None
else: else:
# 读取缓存
if mtype: if mtype:
meta.type = mtype meta.type = mtype
if tmdbid: if tmdbid:
meta.tmdbid = tmdbid meta.tmdbid = tmdbid
# 读取缓存
cache_info = self.cache.get(meta) cache_info = self.cache.get(meta)
# 识别匹配
if not cache_info or not cache: if not cache_info or not cache:
# 缓存没有或者强制不使用缓存 # 缓存没有或者强制不使用缓存
if tmdbid: if tmdbid:
# 直接查询详情 # 直接查询详情
info = self.tmdb.get_info(mtype=mtype, tmdbid=tmdbid) info = self.tmdb.get_info(mtype=mtype, tmdbid=tmdbid)
elif meta: elif meta:
if meta.begin_season: info = {}
logger.info(f"正在识别 {meta.name}{meta.begin_season}季 ...") # 使用中英文名分别识别,去重去空,但要保持顺序
else: names = list(dict.fromkeys([k for k in [meta.cn_name, meta.en_name] if k]))
logger.info(f"正在识别 {meta.name} ...") for name in names:
if meta.type == MediaType.UNKNOWN and not meta.year: if meta.begin_season:
info = self.tmdb.match_multi(meta.name) logger.info(f"正在识别 {name}{meta.begin_season}季 ...")
else:
if meta.type == MediaType.TV:
# 确定是电视
info = self.tmdb.match(name=meta.name,
year=meta.year,
mtype=meta.type,
season_year=meta.year,
season_number=meta.begin_season)
if not info:
# 去掉年份再查一次
info = self.tmdb.match(name=meta.name,
mtype=meta.type)
else: else:
# 有年份先按电影查 logger.info(f"正在识别 {name} ...")
info = self.tmdb.match(name=meta.name, if meta.type == MediaType.UNKNOWN and not meta.year:
year=meta.year, info = self.tmdb.match_multi(name)
mtype=MediaType.MOVIE) else:
# 没有再按电视剧查 if meta.type == MediaType.TV:
if not info: # 确定是电视
info = self.tmdb.match(name=meta.name, info = self.tmdb.match(name=name,
year=meta.year, year=meta.year,
mtype=MediaType.TV) mtype=meta.type,
if not info: season_year=meta.year,
# 去掉年份和类型再查一次 season_number=meta.begin_season)
info = self.tmdb.match_multi(name=meta.name) if not info:
# 去掉年份再查一次
info = self.tmdb.match(name=name,
mtype=meta.type)
else:
# 有年份先按电影查
info = self.tmdb.match(name=name,
year=meta.year,
mtype=MediaType.MOVIE)
# 没有再按电视剧查
if not info:
info = self.tmdb.match(name=name,
year=meta.year,
mtype=MediaType.TV)
if not info:
# 去掉年份和类型再查一次
info = self.tmdb.match_multi(name=name)
if not info: if not info:
# 从网站查询 # 从网站查询
info = self.tmdb.match_web(name=meta.name, info = self.tmdb.match_web(name=name,
mtype=meta.type) mtype=meta.type)
if info:
# 查到就退出
break
# 补充全量信息 # 补充全量信息
if info and not info.get("genres"): if info and not info.get("genres"):
info = self.tmdb.get_info(mtype=info.get("media_type"), info = self.tmdb.get_info(mtype=info.get("media_type"),
@@ -131,8 +149,9 @@ class TheMovieDbModule(_ModuleBase):
else: else:
logger.error("识别媒体信息时未提供元数据或tmdbid") logger.error("识别媒体信息时未提供元数据或tmdbid")
return None return None
# 保存到缓存 # 保存到缓存
if meta and cache: if meta:
self.cache.update(meta, info) self.cache.update(meta, info)
else: else:
# 使用缓存信息 # 使用缓存信息
@@ -182,7 +201,7 @@ class TheMovieDbModule(_ModuleBase):
:param season: 季号 :param season: 季号
""" """
# 搜索 # 搜索
logger.info(f"开始使用 名称:{name}年份:{year} 匹配TMDB信息 ...") logger.info(f"开始使用 名称:{name} 年份:{year} 匹配TMDB信息 ...")
info = self.tmdb.match(name=name, info = self.tmdb.match(name=name,
year=year, year=year,
mtype=mtype, mtype=mtype,
@@ -208,10 +227,8 @@ class TheMovieDbModule(_ModuleBase):
:param meta: 识别的元数据 :param meta: 识别的元数据
:reutrn: 媒体信息列表 :reutrn: 媒体信息列表
""" """
# 未启用时返回None if settings.SEARCH_SOURCE and "themoviedb" not in settings.SEARCH_SOURCE:
if settings.RECOGNIZE_SOURCE != "themoviedb":
return None return None
if not meta.name: if not meta.name:
return [] return []
if meta.type == MediaType.UNKNOWN and not meta.year: if meta.type == MediaType.UNKNOWN and not meta.year:
@@ -230,15 +247,37 @@ class TheMovieDbModule(_ModuleBase):
results = self.tmdb.search_movies(meta.name, meta.year) results = self.tmdb.search_movies(meta.name, meta.year)
else: else:
results = self.tmdb.search_tvs(meta.name, meta.year) results = self.tmdb.search_tvs(meta.name, meta.year)
# 将搜索词中的季写入标题中
if results:
medias = [MediaInfo(tmdb_info=info) for info in results]
if meta.begin_season:
# 小写数据转大写
season_str = cn2an.an2cn(meta.begin_season, "low")
for media in medias:
if media.type == MediaType.TV:
media.title = f"{media.title}{season_str}"
media.season = meta.begin_season
return medias
return []
return [MediaInfo(tmdb_info=info) for info in results] def search_persons(self, name: str) -> Optional[List[MediaPerson]]:
"""
搜索人物信息
"""
if not name:
return []
results = self.tmdb.search_persons(name)
if results:
return [MediaPerson(source='themoviedb', **person) for person in results]
return []
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str, def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
force_nfo: bool = False, force_img: bool = False) -> None: metainfo: MetaBase = None, force_nfo: bool = False, force_img: bool = False) -> None:
""" """
刮削元数据 刮削元数据
:param path: 媒体文件路径 :param path: 媒体文件路径
:param mediainfo: 识别的媒体信息 :param mediainfo: 识别的媒体信息
:param metainfo: 源文件的识别元数据
:param transfer_type: 转移类型 :param transfer_type: 转移类型
:param force_nfo: 强制刮削nfo :param force_nfo: 强制刮削nfo
:param force_img: 强制刮削图片 :param force_img: 强制刮削图片
@@ -254,6 +293,7 @@ class TheMovieDbModule(_ModuleBase):
self.scraper.gen_scraper_files(mediainfo=mediainfo, self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=scrape_path, file_path=scrape_path,
transfer_type=transfer_type, transfer_type=transfer_type,
metainfo=metainfo,
force_nfo=force_nfo, force_nfo=force_nfo,
force_img=force_img) force_img=force_img)
elif path.is_file(): elif path.is_file():
@@ -262,6 +302,7 @@ class TheMovieDbModule(_ModuleBase):
self.scraper.gen_scraper_files(mediainfo=mediainfo, self.scraper.gen_scraper_files(mediainfo=mediainfo,
file_path=path, file_path=path,
transfer_type=transfer_type, transfer_type=transfer_type,
metainfo=metainfo,
force_nfo=force_nfo, force_nfo=force_nfo,
force_img=force_img) force_img=force_img)
else: else:
@@ -278,7 +319,7 @@ class TheMovieDbModule(_ModuleBase):
logger.info(f"{path} 刮削完成") logger.info(f"{path} 刮削完成")
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str, with_original_language: str, def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str, with_original_language: str,
page: int = 1) -> Optional[List[dict]]: page: int = 1) -> Optional[List[MediaInfo]]:
""" """
:param mtype: 媒体类型 :param mtype: 媒体类型
:param sort_by: 排序方式 :param sort_by: 排序方式
@@ -288,25 +329,31 @@ class TheMovieDbModule(_ModuleBase):
:return: 媒体信息列表 :return: 媒体信息列表
""" """
if mtype == MediaType.MOVIE: if mtype == MediaType.MOVIE:
return self.tmdb.discover_movies(sort_by=sort_by, infos = self.tmdb.discover_movies(sort_by=sort_by,
with_genres=with_genres, with_genres=with_genres,
with_original_language=with_original_language, with_original_language=with_original_language,
page=page) page=page)
elif mtype == MediaType.TV: elif mtype == MediaType.TV:
return self.tmdb.discover_tvs(sort_by=sort_by, infos = self.tmdb.discover_tvs(sort_by=sort_by,
with_genres=with_genres, with_genres=with_genres,
with_original_language=with_original_language, with_original_language=with_original_language,
page=page) page=page)
else: else:
return None return []
if infos:
return [MediaInfo(tmdb_info=info) for info in infos]
return []
def tmdb_trending(self, page: int = 1) -> List[dict]: def tmdb_trending(self, page: int = 1) -> List[MediaInfo]:
""" """
TMDB流行趋势 TMDB流行趋势
:param page: 第几页 :param page: 第几页
:return: TMDB信息列表 :return: TMDB信息列表
""" """
return self.tmdb.trending.all_week(page=page) trending = self.tmdb.trending.all_week(page=page)
if trending:
return [MediaInfo(tmdb_info=info) for info in trending]
return []
def tmdb_seasons(self, tmdbid: int) -> List[schemas.TmdbSeason]: def tmdb_seasons(self, tmdbid: int) -> List[schemas.TmdbSeason]:
""" """
@@ -397,15 +444,15 @@ class TheMovieDbModule(_ModuleBase):
# 图片相对路径 # 图片相对路径
image_path = None image_path = None
image_prefix = image_prefix or "w500" image_prefix = image_prefix or "w500"
if not season and not episode: if season is None and not episode:
tmdbinfo = self.tmdb.get_info(mtype=mtype, tmdbid=int(mediaid)) tmdbinfo = self.tmdb.get_info(mtype=mtype, tmdbid=int(mediaid))
if tmdbinfo: if tmdbinfo:
image_path = tmdbinfo.get(image_type.value) image_path = tmdbinfo.get(image_type.value)
elif season and episode: elif season is not None and episode:
episodeinfo = self.tmdb.get_tv_episode_detail(tmdbid=int(mediaid), season=season, episode=episode) episodeinfo = self.tmdb.get_tv_episode_detail(tmdbid=int(mediaid), season=season, episode=episode)
if episodeinfo: if episodeinfo:
image_path = episodeinfo.get("still_path") image_path = episodeinfo.get("still_path")
elif season: elif season is not None:
seasoninfo = self.tmdb.get_tv_season_detail(tmdbid=int(mediaid), season=season) seasoninfo = self.tmdb.get_tv_season_detail(tmdbid=int(mediaid), season=season)
if seasoninfo: if seasoninfo:
image_path = seasoninfo.get(image_type.value) image_path = seasoninfo.get(image_type.value)
@@ -414,64 +461,88 @@ class TheMovieDbModule(_ModuleBase):
return f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/{image_prefix}{image_path}" return f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/{image_prefix}{image_path}"
return None return None
def tmdb_movie_similar(self, tmdbid: int) -> List[dict]: def tmdb_movie_similar(self, tmdbid: int) -> List[MediaInfo]:
""" """
根据TMDBID查询类似电影 根据TMDBID查询类似电影
:param tmdbid: TMDBID :param tmdbid: TMDBID
""" """
return self.tmdb.get_movie_similar(tmdbid=tmdbid) similar = self.tmdb.get_movie_similar(tmdbid=tmdbid)
if similar:
return [MediaInfo(tmdb_info=info) for info in similar]
return []
def tmdb_tv_similar(self, tmdbid: int) -> List[dict]: def tmdb_tv_similar(self, tmdbid: int) -> List[MediaInfo]:
""" """
根据TMDBID查询类似电视剧 根据TMDBID查询类似电视剧
:param tmdbid: TMDBID :param tmdbid: TMDBID
""" """
return self.tmdb.get_tv_similar(tmdbid=tmdbid) similar = self.tmdb.get_tv_similar(tmdbid=tmdbid)
if similar:
return [MediaInfo(tmdb_info=info) for info in similar]
return []
def tmdb_movie_recommend(self, tmdbid: int) -> List[dict]: def tmdb_movie_recommend(self, tmdbid: int) -> List[MediaInfo]:
""" """
根据TMDBID查询推荐电影 根据TMDBID查询推荐电影
:param tmdbid: TMDBID :param tmdbid: TMDBID
""" """
return self.tmdb.get_movie_recommend(tmdbid=tmdbid) recommend = self.tmdb.get_movie_recommend(tmdbid=tmdbid)
if recommend:
return [MediaInfo(tmdb_info=info) for info in recommend]
return []
def tmdb_tv_recommend(self, tmdbid: int) -> List[dict]: def tmdb_tv_recommend(self, tmdbid: int) -> List[MediaInfo]:
""" """
根据TMDBID查询推荐电视剧 根据TMDBID查询推荐电视剧
:param tmdbid: TMDBID :param tmdbid: TMDBID
""" """
return self.tmdb.get_tv_recommend(tmdbid=tmdbid) recommend = self.tmdb.get_tv_recommend(tmdbid=tmdbid)
if recommend:
return [MediaInfo(tmdb_info=info) for info in recommend]
return []
def tmdb_movie_credits(self, tmdbid: int, page: int = 1) -> List[dict]: def tmdb_movie_credits(self, tmdbid: int, page: int = 1) -> List[schemas.MediaPerson]:
""" """
根据TMDBID查询电影演职员表 根据TMDBID查询电影演职员表
:param tmdbid: TMDBID :param tmdbid: TMDBID
:param page: 页码 :param page: 页码
""" """
return self.tmdb.get_movie_credits(tmdbid=tmdbid, page=page) credit_infos = self.tmdb.get_movie_credits(tmdbid=tmdbid, page=page)
if credit_infos:
return [schemas.MediaPerson(source="themoviedb", **info) for info in credit_infos]
return []
def tmdb_tv_credits(self, tmdbid: int, page: int = 1) -> List[dict]: def tmdb_tv_credits(self, tmdbid: int, page: int = 1) -> List[schemas.MediaPerson]:
""" """
根据TMDBID查询电视剧演职员表 根据TMDBID查询电视剧演职员表
:param tmdbid: TMDBID :param tmdbid: TMDBID
:param page: 页码 :param page: 页码
""" """
return self.tmdb.get_tv_credits(tmdbid=tmdbid, page=page) credit_infos = self.tmdb.get_tv_credits(tmdbid=tmdbid, page=page)
if credit_infos:
return [schemas.MediaPerson(source="themoviedb", **info) for info in credit_infos]
return []
def tmdb_person_detail(self, person_id: int) -> dict: def tmdb_person_detail(self, person_id: int) -> schemas.MediaPerson:
""" """
根据TMDBID查询人物详情 根据TMDBID查询人物详情
:param person_id: 人物ID :param person_id: 人物ID
""" """
return self.tmdb.get_person_detail(person_id=person_id) detail = self.tmdb.get_person_detail(person_id=person_id)
if detail:
return schemas.MediaPerson(source="themoviedb", **detail)
return schemas.MediaPerson
-def tmdb_person_credits(self, person_id: int, page: int = 1) -> List[dict]:
+def tmdb_person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
     """
     Query works a person appeared in by TMDBID
     :param person_id: person ID
     :param page: page number
     """
-    return self.tmdb.get_person_credits(person_id=person_id, page=page)
+    infos = self.tmdb.get_person_credits(person_id=person_id, page=page)
+    if infos:
+        return [MediaInfo(tmdb_info=tmdbinfo) for tmdbinfo in infos]
+    return []
 def clear_cache(self):
     """

View File

@@ -107,7 +107,7 @@ class CategoryHelper(metaclass=Singleton):
     :return: name of the secondary category
     """
     genre_ids = tmdb_info.get("genre_ids") or []
-    if genre_ids \
+    if self._anime_categorys and genre_ids \
             and set(genre_ids).intersection(set(settings.ANIME_GENREIDS)):
         return self.get_category(self._anime_categorys, tmdb_info)
     return self.get_category(self._tv_categorys, tmdb_info)
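The added guard avoids routing titles into anime categories when none are configured. A standalone sketch of the check (genre id 16 = Animation is an assumption standing in for the real `settings.ANIME_GENREIDS`):

```python
# Assumed anime genre ids; the real list comes from settings.ANIME_GENREIDS
ANIME_GENREIDS = {16}

def is_anime(tmdb_info: dict, anime_categorys: dict) -> bool:
    # Route to anime categories only when some are configured AND the
    # title's genre ids overlap the configured anime genre ids.
    genre_ids = tmdb_info.get("genre_ids") or []
    return bool(anime_categorys) and bool(set(genre_ids) & ANIME_GENREIDS)

a = is_anime({"genre_ids": [16, 10765]}, {"anime": {}})  # overlap + configured
b = is_anime({"genre_ids": [16]}, {})                    # nothing configured
c = is_anime({"genre_ids": [18]}, {"anime": {}})         # no overlap
```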

View File

@@ -1,4 +1,3 @@
-import time
 import traceback
 from pathlib import Path
 from typing import Union
@@ -8,6 +7,7 @@ from requests import RequestException
 from app.core.config import settings
 from app.core.context import MediaInfo
+from app.core.meta import MetaBase
 from app.core.metainfo import MetaInfo
 from app.log import logger
 from app.schemas.types import MediaType
@@ -27,16 +27,20 @@ class TmdbScraper:
     self.tmdb = tmdb

 def gen_scraper_files(self, mediainfo: MediaInfo, file_path: Path, transfer_type: str,
-                      force_nfo: bool = False, force_img: bool = False):
+                      metainfo: MetaBase = None, force_nfo: bool = False, force_img: bool = False):
     """
     Generate scraper files (NFO and images); the path passed in is a file path
     :param mediainfo: media info
+    :param metainfo: metadata recognized from the source file
     :param file_path: file path or directory path
     :param transfer_type: transfer type
     :param force_nfo: whether to force generating the NFO
     :param force_img: whether to force generating images
     """
+    if not mediainfo or not file_path:
+        return
     self._transfer_type = transfer_type
     self._force_nfo = force_nfo
     self._force_img = force_img
@@ -73,8 +77,10 @@ class TmdbScraper:
                 file_path=image_path)
         # TV series: the path is each season's file name, e.g. Name/Season xx/Name SxxExx.xxx
         else:
-            # Recognize from the file name
-            meta = MetaInfo(file_path.stem)
+            # Use metadata passed from upstream if available, otherwise recognize from the file name
+            meta = metainfo or MetaInfo(file_path.name)
+            if meta.begin_season is None:
+                meta.begin_season = mediainfo.season if mediainfo.season is not None else 1
             # Only process when the root directory does not exist
             if self._force_nfo or not file_path.parent.with_name("tvshow.nfo").exists():
                 # Root directory description file
@@ -151,10 +157,6 @@ class TmdbScraper:
     """
     Generate the common NFO nodes
     """
-    # Date added
-    DomUtils.add_node(doc, root, "dateadded",
-                      time.strftime('%Y-%m-%d %H:%M:%S',
-                                    time.localtime(time.time())))
     # TMDB
     DomUtils.add_node(doc, root, "tmdbid", mediainfo.tmdb_id or "")
     uniqueid_tmdb = DomUtils.add_node(doc, root, "uniqueid", mediainfo.tmdb_id or "")
@@ -267,9 +269,6 @@ class TmdbScraper:
     logger.info(f"正在生成季NFO文件{season_path.name}")
     doc = minidom.Document()
     root = DomUtils.add_node(doc, doc, "season")
-    # Date added
-    DomUtils.add_node(doc, root, "dateadded",
-                      time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
     # Overview
     xplot = DomUtils.add_node(doc, root, "plot")
     xplot.appendChild(doc.createCDATASection(seasoninfo.get("overview") or ""))
@@ -306,10 +305,8 @@ class TmdbScraper:
     logger.info(f"正在生成剧集NFO文件{file_path.name}")
     doc = minidom.Document()
     root = DomUtils.add_node(doc, doc, "episodedetails")
-    # Date added
-    DomUtils.add_node(doc, root, "dateadded", time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
     # TMDBID
-    uniqueid = DomUtils.add_node(doc, root, "uniqueid", str(tmdbid))
+    uniqueid = DomUtils.add_node(doc, root, "uniqueid", str(episodeinfo.get("id")))
     uniqueid.setAttribute("type", "tmdb")
     uniqueid.setAttribute("default", "true")
     # tmdbid

View File

@@ -15,7 +15,7 @@ from .tmdbv3api import TMDb, Search, Movie, TV, Season, Episode, Discover, Trend
 from .tmdbv3api.exceptions import TMDbException

-class TmdbHelper:
+class TmdbApi:
     """
     TMDB recognition and matching
     """
@@ -95,6 +95,14 @@ class TmdbHelper:
             ret_infos.append(tv)
         return ret_infos

+    def search_persons(self, name: str) -> List[dict]:
+        """
+        Query TMDB info for all fuzzily matched persons
+        """
+        if not name:
+            return []
+        return self.search.people(term=name) or []
+
     @staticmethod
     def __compare_names(file_name: str, tmdb_names: list) -> bool:
         """
@@ -168,7 +176,7 @@ class TmdbHelper:
         return None
     # TMDB search
     info = {}
-    if mtype == MediaType.MOVIE:
+    if mtype != MediaType.TV:
         year_range = [year]
         if year:
             year_range.append(str(int(year) + 1))
@@ -189,9 +197,16 @@ class TmdbHelper:
                                                 season_year,
                                                 season_number)
         if not info:
-            logger.debug(
-                f"正在识别{mtype.value}{name}, 年份={year} ...")
-            info = self.__search_tv_by_name(name, year)
+            year_range = [year]
+            if year:
+                year_range.append(str(int(year) + 1))
+                year_range.append(str(int(year) - 1))
+            for year in year_range:
+                logger.debug(
+                    f"正在识别{mtype.value}{name}, 年份={year} ...")
+                info = self.__search_tv_by_name(name, year)
+                if info:
+                    break
         if info:
             info['media_type'] = MediaType.TV
     # Return
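The new fallback widens the TV search to the adjacent years when the exact year misses, stopping at the first hit. A standalone sketch of that retry loop (`search_fn` is a stand-in for the private `__search_tv_by_name`):

```python
def build_year_range(year):
    # Search order used by the fallback: exact year, then year+1, then year-1.
    year_range = [year]
    if year:
        year_range.append(str(int(year) + 1))
        year_range.append(str(int(year) - 1))
    return year_range

def search_tv_with_fallback(name, year, search_fn):
    # Try each candidate year in order and stop at the first hit.
    for y in build_year_range(year):
        info = search_fn(name, y)
        if info:
            return info
    return None

# A fake search that only knows the 2023 entry exercises the retry:
hit = search_tv_with_fallback("Foo", "2024",
                              lambda n, y: {"name": n, "year": y} if y == "2023" else None)
```

This matters for shows whose TMDB first-air year is off by one from the year parsed out of the file name.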
@@ -553,10 +568,10 @@ class TmdbHelper:
     tmdb_info['genre_ids'] = __get_genre_ids(tmdb_info.get('genres'))
     # Aliases and translated names
     tmdb_info['names'] = self.__get_names(tmdb_info)
+    # Convert multi-language titles
+    self.__update_tmdbinfo_extra_title(tmdb_info)
     # Convert the Chinese title
     self.__update_tmdbinfo_cn_title(tmdb_info)
-    # Convert the English title
-    self.__update_tmdbinfo_en_title(tmdb_info)
     return tmdb_info
@@ -585,49 +600,61 @@ class TmdbHelper:
                 return title
         return tmdbinfo.get("title") if tmdbinfo.get("media_type") == MediaType.MOVIE else tmdbinfo.get("name")

-    # Look up the Chinese name
+    # Original title
     org_title = tmdb_info.get("title") \
         if tmdb_info.get("media_type") == MediaType.MOVIE \
         else tmdb_info.get("name")
+    # Look up the Chinese name
     if not StringUtils.is_chinese(org_title):
         cn_title = __get_tmdb_chinese_title(tmdb_info)
         if cn_title and cn_title != org_title:
+            # Use the Chinese alias
             if tmdb_info.get("media_type") == MediaType.MOVIE:
                 tmdb_info['title'] = cn_title
             else:
                 tmdb_info['name'] = cn_title
+        else:
+            # Use the Singapore name
+            sg_title = tmdb_info.get("sg_title")
+            if sg_title and sg_title != org_title and StringUtils.is_chinese(sg_title):
+                if tmdb_info.get("media_type") == MediaType.MOVIE:
+                    tmdb_info['title'] = sg_title
+                else:
+                    tmdb_info['name'] = sg_title

 @staticmethod
-def __update_tmdbinfo_en_title(tmdb_info: dict):
+def __update_tmdbinfo_extra_title(tmdb_info: dict):
     """
-    Update the English title in the TMDB info
+    Update other-language titles in the TMDB info
     """
-    def __get_tmdb_english_title(tmdbinfo):
+    def __get_tmdb_lang_title(tmdbinfo: dict, lang: str = "US"):
         """
-        Get the English title from the translations
+        Get the title for the given language from the translations
         """
         if not tmdbinfo:
             return None
         translations = tmdb_info.get("translations", {}).get("translations", [])
         for translation in translations:
-            if translation.get("iso_3166_1") == "US":
+            if translation.get("iso_3166_1") == lang:
                 return translation.get("data", {}).get("title") if tmdbinfo.get("media_type") == MediaType.MOVIE \
                     else translation.get("data", {}).get("name")
         return None

-    # Look up the English name
+    # Original title
     org_title = (
         tmdb_info.get("original_title")
         if tmdb_info.get("media_type") == MediaType.MOVIE
         else tmdb_info.get("original_name")
     )
+    # Look up the English name
     if tmdb_info.get("original_language") == "en":
         tmdb_info['en_title'] = org_title
+        # TODO: For Japanese titles, would romaji be a better English title?
     else:
-        en_title = __get_tmdb_english_title(tmdb_info)
+        en_title = __get_tmdb_lang_title(tmdb_info, "US")
         tmdb_info['en_title'] = en_title or org_title
+    # Look up the Singapore name (used as a fallback for the Chinese title)
+    tmdb_info['sg_title'] = __get_tmdb_lang_title(tmdb_info, "SG") or org_title

 def __get_movie_detail(self,
                        tmdbid: int,
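The refactor generalizes the English-only lookup into a per-country helper over TMDB's `translations` payload (`US` for English, `SG` as a Chinese-script fallback). A simplified standalone version, with the movie/TV distinction collapsed into a title-or-name lookup:

```python
def get_lang_title(tmdb_info: dict, lang: str = "US"):
    # Scan the TMDB translations payload for the given iso_3166_1 country code.
    translations = tmdb_info.get("translations", {}).get("translations", [])
    for translation in translations:
        if translation.get("iso_3166_1") == lang:
            data = translation.get("data", {})
            # Movies carry "title", TV series carry "name"; take whichever exists.
            return data.get("title") or data.get("name")
    return None

info = {"translations": {"translations": [
    {"iso_3166_1": "US", "data": {"title": "The Wandering Earth"}},
    {"iso_3166_1": "SG", "data": {"title": "流浪地球"}},
]}}
```

The Singapore entry is useful because TMDB's `SG` translation is frequently in Chinese script, so it can fill in when no `zh` alias matched.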
@@ -1008,7 +1035,7 @@ class TmdbHelper:
     if not self.episode:
         return {}
     try:
-        logger.info("正在查询TMDB集图片%s,季:%s,集:%s ..." % (tmdbid, season, episode))
+        logger.info("正在查询TMDB集详情%s,季:%s,集:%s ..." % (tmdbid, season, episode))
         tmdbinfo = self.episode.details(tv_id=tmdbid, season_num=season, episode_num=episode)
         return tmdbinfo or {}
     except Exception as e:
@@ -1206,9 +1233,13 @@ class TmdbHelper:
         return []
     try:
         logger.info(f"正在获取人物参演作品:{person_id}...")
-        info = self.person.movie_credits(person_id=person_id) or {}
-        cast = info.get('cast') or []
+        movies = self.person.movie_credits(person_id=person_id) or {}
+        tvs = self.person.tv_credits(person_id=person_id) or {}
+        cast = (movies.get('cast') or []) + (tvs.get('cast') or [])
         if cast:
+            # Sort by date in descending order
+            cast = sorted(cast, key=lambda x: x.get('release_date') or x.get('first_air_date') or '1900-01-01',
+                          reverse=True)
             return cast[(page - 1) * count: page * count]
         return []
     except Exception as e:
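The change above merges movie and TV credits, sorts them newest-first by ISO date string (lexicographic order matches chronological order for `YYYY-MM-DD`), and then pages the merged list. A standalone sketch; the page size of 24 is an assumption, the real default comes from the method's `count` parameter:

```python
def page_credits(movie_cast, tv_cast, page=1, count=24):
    # Merge movie and TV credits, sort newest-first by ISO date string,
    # then slice out the requested page.
    cast = (movie_cast or []) + (tv_cast or [])
    cast = sorted(cast,
                  key=lambda x: x.get('release_date') or x.get('first_air_date') or '1900-01-01',
                  reverse=True)
    return cast[(page - 1) * count: page * count]

movies = [{"title": "A", "release_date": "2001-01-01"},
          {"title": "B", "release_date": "2020-05-01"}]
tvs = [{"name": "C", "first_air_date": "2010-03-01"},
       {"name": "D"}]  # no date at all: sorts last via the 1900 fallback
page1 = page_credits(movies, tvs, page=1, count=3)
```

Note that the pagination happens client-side over the full merged list, since the two TMDB credit endpoints are not themselves paged consistently.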
@@ -1252,3 +1283,9 @@ class TmdbHelper:
         except Exception as e:
             print(str(e))
             return {}
+
+    def close(self):
+        """
+        Close the underlying connection
+        """
+        self.tmdb.close()
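Several hunks in this diff replace destructor-based cleanup with an explicit `close()` that module owners call from `stop()`. The motivation: `__del__` runs at an undefined point (possibly during interpreter shutdown, when globals may already be torn down), so explicit, idempotent teardown is safer. A minimal sketch using a fake session so it needs no third-party imports (`FakeSession` and `TinyClient` are illustrative names, not part of the codebase):

```python
class FakeSession:
    # Stand-in for requests.Session so the sketch is self-contained.
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class TinyClient:
    def __init__(self):
        self._session = FakeSession()
    def close(self):
        # Explicit, idempotent teardown: __del__ runs at an undefined point
        # during interpreter shutdown, so owners call close() deterministically.
        if self._session:
            self._session.close()
            self._session = None

client = TinyClient()
session = client._session
client.close()
client.close()  # second call is a no-op
```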

View File

@@ -14,7 +14,6 @@ class Person(TMDb):
         "translations": "/person/%s/translations",
         "latest": "/person/latest",
         "popular": "/person/popular",
-        "search_people": "/search/person",
     }

     def details(self, person_id, append_to_response="videos,images"):

View File

@@ -222,6 +222,6 @@ class TMDb(object):
             return json.get(key)
         return json

-    def __del__(self):
+    def close(self):
         if self._session:
             self._session.close()

View File

@@ -17,7 +17,7 @@ class TheTvDbModule(_ModuleBase):
                         proxies=settings.PROXY)

     def stop(self):
-        pass
+        self.tvdb.close()

     def test(self) -> Tuple[bool, str]:
         """

View File

@@ -733,6 +733,10 @@ class Tvdb:
         }
         self.proxies = proxies

+    def close(self):
+        if self.session:
+            self.session.close()
+
     @staticmethod
     def _getTempDir():
         """Returns the [system temp dir]/tvdb_api-u501 (or
@@ -764,17 +768,17 @@ class Tvdb:
         if not self.__authorized:
             # only authorize if we haven't before and we
             # don't have the url in the cache
-            fake_session_for_key = requests.Session()
-            fake_session_for_key.headers['Accept-Language'] = language
             cache_key = None
-            try:
-                # in case the session class has no cache object, fail gracefully
-                cache_key = self.session.cache.create_key(
-                    fake_session_for_key.prepare_request(requests.Request('GET', url))
-                )
-            except Exception:
-                # FIXME: Can this just check for hasattr(self.session, "cache") instead?
-                pass
+            with requests.Session() as fake_session_for_key:
+                fake_session_for_key.headers['Accept-Language'] = language
+                try:
+                    # in case the session class has no cache object, fail gracefully
+                    cache_key = self.session.cache.create_key(
+                        fake_session_for_key.prepare_request(requests.Request('GET', url))
+                    )
+                except Exception:
+                    # FIXME: Can this just check for hasattr(self.session, "cache") instead?
+                    pass

 # fmt: off
 # No fmt because mangles noqa comment - https://github.com/psf/black/issues/195
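`requests.Session` supports the context-manager protocol, so the rewrite guarantees the throwaway session is closed even if building the cache key raises. A generic illustration of why the `with`-block matters (`Resource` is an illustrative stand-in, not part of the codebase):

```python
class Resource:
    # Minimal context manager standing in for requests.Session.
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.closed = True
        return False  # never swallow the exception

leaked = Resource()
try:
    raise RuntimeError("boom")  # simulating a failure before manual cleanup
except RuntimeError:
    pass
# leaked.closed is still False: nothing closed it

managed = Resource()
try:
    with managed:
        raise RuntimeError("boom")
except RuntimeError:
    pass
# managed.closed is True: __exit__ ran despite the exception
```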

Some files were not shown because too many files have changed in this diff.